Mesosphere has closed new funding to help it bring the open-source Mesos software to a wider audience
Apache Mesos, a software package for managing large compute clusters that’s been credited with helping Twitter to kill its Fail Whale, is being primed for use in the enterprise.
One of its main backers, a startup called Mesosphere, announced Monday it has closed an additional $10.5 million round of funding and will use the money to develop new tools and support offerings to make Mesos more appealing to large businesses.
Mesos is open-source software originally developed at the University of California at Berkeley. It sits between the application layer and the operating system and makes it easier to deploy and manage applications in large-scale clustered environments.
Twitter adopted Mesos several years ago and contributes to the open-source project. The software helped Twitter overcome its scaling problems and make the Fail Whale — the cartoon symbol of its frequent outages — a thing of the past.
Mesosphere’s CEO, Florian Leibert, was an engineer at Twitter who pushed for its use there. He left Twitter a few years ago to implement Mesos at AirBnB, and last year he left AirBnB to cofound Mesosphere, which distributes Mesos along with documentation and tools.
On Monday Mesosphere said it had secured a new round of funding led by Silicon Valley investment firm Andreessen Horowitz. It will use the money to expand its commercial support team and develop Mesos plug-ins that it plans to license to businesses.
Mesos has several advantages in a clustered environment, according to Leibert. Much as a PC operating system manages access to the resources of a desktop computer, he said, Mesos ensures applications have access to the resources they need in a cluster. It also reduces a lot of the manual steps in deploying applications and can shift workloads around automatically to provide fault tolerance and keep utilization rates high.
A lot of modern workloads and frameworks can run on Mesos, including Hadoop, Memcached, Ruby on Rails and Node.js, as well as various Web servers, databases and application servers.
For developers, Mesos takes care of the base layer of “plumbing” required to build distributed applications, and it makes applications portable so they can run in different types of cluster environments, including on both virtualized hardware and bare metal.
It improves utilization by allowing operations staff to move beyond “static partitioning,” where workloads are assigned to a fixed set of resources, and build more elastic cluster environments.
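To make the deployment story concrete, here is a minimal sketch of how a long-running service might be described for Marathon, the scheduler Mesosphere distributes alongside Mesos. The field names follow Marathon's documented JSON app definition (submitted to its /v2/apps REST endpoint); the service itself and the values shown are hypothetical.

```python
import json

def marathon_app(app_id, cmd, cpus=0.5, mem=128, instances=2):
    """Build a minimal Marathon app definition.

    Marathon accepts JSON app definitions via POST /v2/apps; Mesos then
    matches the requested cpus/mem against resource offers from cluster
    nodes and launches the requested number of instances, restarting
    them elsewhere if a node fails.
    """
    return {
        "id": app_id,
        "cmd": cmd,
        "cpus": cpus,            # fraction of a CPU per instance
        "mem": mem,              # MB of RAM per instance
        "instances": instances,  # Marathon keeps this many running
    }

# Hypothetical service: a trivial web server, two instances.
spec = marathon_app("/web", "python3 -m http.server $PORT")
print(json.dumps(spec, indent=2))
```

The point of the declarative form is that the operator states *what* should run and in what quantity; placement, restarts and bin-packing against the cluster's free resources are left to the scheduler.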
The software is used today mostly by online firms like Netflix, Groupon, HubSpot and Vimeo, but Mesosphere will target large enterprises — “the Global 2,000,” Leibert said — that are wrestling with large volumes of data and struggling to manage it all at scale.
That includes customer data collected at busy websites and operational data gathered in the field. “A lot of organizations are under pressure to do things at scale, they’re running a lot of diverse applications and the wheels are coming off,” said Matt Trifiro, a senior vice president at Mesosphere.
Mesos can manage clusters in both private and public clouds, and in December Mesosphere released a tool for deploying Mesos to Amazon Web Services. Both AirBnB and HubSpot manage their Amazon infrastructures with Mesos.
Mesosphere will continue to provide its Mesos distribution for free, including tools it developed such as Marathon. On Monday it released an update to the core Mesos distribution, version 0.19, along with new documentation.
It plans to make money by developing and licensing plug-ins for Mesos for tasks like dashboard management, debugging, monitoring and security, and by selling professional services.
It has 25 full-time employees today, spread between Germany and San Francisco. “We’re building out our services operation as we speak,” Trifiro said.
The cloud-based backup and recovery service was introduced last year
Symantec plans to close down its Backup Exec.cloud service, saying it lacks mobile and content-sharing features and wouldn’t be the right platform for delivering them.
Backup Exec.cloud is a pure cloud-based offering designed to make it easy for small businesses and remote branch offices to back up their data. It was announced in February 2012.
Symantec disclosed its plans to shut down the service in an email to channel partners that was seen by IDG News Service. An FAQ about the shutdown was on Symantec’s site earlier on Tuesday but appears to have been removed. In its email to channel partners, Symantec said it would start informing end users by email in early December. The company will stop selling subscriptions or renewals for Backup Exec.cloud on Jan. 6, 2014.
Cloud-based options for both backing up data and working with files have proliferated in recent years and taken on growing importance as more employees work at home and on the road. Backup Exec.cloud, introduced along with Symantec’s Backup Exec 2012 product suite, is a standalone system focused on off-premises data protection for sites with no IT staff.
“Customers want features such as synch & share and mobile access. Backup Exec.cloud was not designed with these features in mind,” Symantec’s FAQ said. “As a result, Symantec has decided to discontinue Backup Exec.cloud in order to focus on more productive and feature-rich cloud-based applications which include this type of functionality.”
Symantec offers a range of other data protection products, including on-premise and hybrid backup and recovery, as well as Norton Zone for file synchronization and sharing. It will continue to invest in backup and recovery products, including its on-premise Backup Exec product and its NetBackup software, and will expand the functionality of Norton Zone, the company said.
Asked for comment on the change, Symantec said it was simplifying its product lineup.
“As we align with our new offering strategy and efforts to streamline our product range to provide fewer, more integrated solutions for our customers, Symantec has made the decision to retire Backup Exec.cloud,” the company said in a statement. “We are firmly committed to doing everything we can to help our partners and customers successfully navigate this process.”
In January, Symantec announced a reorganization of its software business under President and CEO Steve Bennett, appointed in 2012, that is focused on more integrated products.
Cloud-based backup has both grown in popularity and become more complex over the past several years, said Eran Farajun, executive vice president of Asigra, which supplies software for cloud backup services from companies including Hewlett-Packard and Terremark.
Among other things, enterprises need to back up data from more locations, including virtual machines, mobile devices and cloud-based services such as Salesforce.com, he said. Meanwhile, the per-gigabyte price of cloud backup has fallen as the volume of data involved continues to grow. However, there’s still room for standalone cloud-based backup, Farajun believes.
The service and support for it will cease on Jan. 6, 2015. Existing customers will be able to keep using Backup Exec.cloud until the end of their annual subscriptions. For customers with multiyear subscriptions that go beyond that date, Symantec will provide information about refunds at a later date, according to the FAQ. Users are entitled to Backup Exec on-premises backup software for 35 percent off list price, Symantec said. The company also suggested Norton Zone as an alternative for some customers.
Users will have to migrate their own data to any alternative service. “We are here to help you navigate this process, but we are not able to provide any data migration services as part of this announcement,” Symantec said in the FAQ.
Customers’ data in the cloud will be deleted after their subscriptions expire. However, customers shouldn’t have to download all the data after expiration because it will already exist on their own PCs and servers, the company said. The service is a near-term backup that normally maintains data for just 90 days.
“Your privacy is very important to us,” Microsoft is fond of saying. But if a former Microsoft Privacy Chief no longer trusts Microsoft, should you?
Caspar Bowden’s statements were made during a conference about privacy and surveillance held in Lausanne, Switzerland, and reported on by the Guardian. At one point, Bowden’s presentation slide showed an “NSA surveillance octopus” to help illustrate the evils of surveillance in the U.S. cloud. But this was not a PowerPoint presentation: he was using LibreOffice 3.6 because he no longer trusts Microsoft software at all. In fact, he said he only uses open source software so he can examine the underlying code.
An attendee pointed out that free software has been subverted too, but Bowden called open source software “the least worst” option and the best one to use if you are trying to avoid surveillance. Another privacy tip: the privacy pro does not carry a personal tracker on him, either; Bowden gave up carrying a mobile phone two years ago.
No privacy in the cloud: zero, zip, none
According to Bowden, “In about 2009 the whole industry turned on a dime and turned to cloud computing – massively parallel computation sold as a commodity at a distance.” He said, “Cloud computing leaves you no privacy protection.” However, “cloud computing is too useful to be disinvented. Unlike Echelon, though, which was only interception, potentially all EU data is at risk. FISA (Foreign Intelligence Surveillance Act) can grab data after it’s stored, and decrypted.”
Bowden authored a paper about “the U.S. National Security Agency (NSA) surveillance programs (PRISM) and Foreign Intelligence Surveillance Act (FISA) activities and their impact on EU citizens’ fundamental rights.” While it mostly dissects how “surveillance activities by the U.S. authorities are conducted without taking into account the rights of non-U.S. citizens and residents,” it also looks at some “serious limitations to the Fourth Amendment for U.S. citizens.”
“The thoughts prompted in the mind of the public by the revelations of Edward Snowden cannot be unthought. We are already living in a different society in consequence,” Bowden wrote [pdf]. He again pointed out the dangers to privacy in cloud computing. “The scope of FAA creates a power of mass-surveillance specifically targeted at the data of non-U.S. persons located outside the U.S., including data processed by ‘Cloud computing’, which eludes EU Data Protection regulation.”
Data can only be processed whilst decrypted, and thus any Cloud processor can be secretly ordered under FISA 702 to hand over a key, or the information itself in its decrypted state. Encryption is futile to defend against NSA accessing data processed by US Clouds (but still useful against external adversaries such as criminal hackers). Using the Cloud as a remote disk-drive does not provide the competitiveness and scalability benefits of Cloud as a computation engine. There is no technical solution to the problem.
He concluded that there is an “absence of any cognizable privacy rights for ‘non-U.S. persons’ under FISA.”
Microsoft’s strategy: Grind down people’s privacy expectations
It is Bowden’s former role overseeing privacy policy at Microsoft that makes his point of view important. This man, a privacy expert, no longer trusts Microsoft as a company, nor its software. Yet Microsoft (and nearly every other company) loves to publicize the line “Your privacy is very important to us.” But does Microsoft really care about your privacy?
During an interview with Bowden, the London School of Economics and Political Science (LSE) asked, “Do you think the general public understands how much privacy they have in the digital world?”
Bowden replied, “There’s been a grinding down of people’s privacy expectations in a systematic way as part of the corporate strategy, which I saw in Microsoft.”
Regarding the Guardian’s report that Bowden does not trust the Redmond giant, Microsoft sent this damage-control statement to CNET:
“We believe greater transparency on the part of governments – including the U.S. government – would help the community understand the facts and better debate these important issues. That’s why we’ve taken a number of steps to try and secure permission, including filing legal action with the U.S. government.”
About that transparency: LSE asked Bowden, “What’s your view on the transparency policies of tech companies?”
Bowden replied, “It is purely public relations strategy – corporate propaganda aimed at the public sphere – and due to the existence of secret mass-surveillance laws will never be truly transparent.”
Which SaaS vendor just passed the billion-dollar mark? Microsoft
Office 365 seems to be catching on.
Despite a lot of confusion around how it works, it seems Microsoft’s SaaS version of the flagship Office suite has pretty quickly grown into a billion-dollar business. According to the most recent financials from Redmond, Office 365 is now on a billion-dollar run rate and continuing to grow at a brisk pace.
For those who have been quick to throw dirt on Microsoft’s still warm body, Q3 showed the company exceeding $20 billion in revenue and $6 billion in profits. This at a time when everyone laments the drop in PC sales. Most companies would give away their CEO’s children to have those kinds of numbers.
Truth be told, measured against total revenue, $1 billion suggests that Office 365 is not yet a major piece of the pie. The division that includes Office did more than $6 billion this quarter alone, for instance. That being said, the billion-dollar mark is a watershed for this new way to consume Office, and shows Microsoft’s muscle in competing with other online productivity suites like Google Drive. From the briefings, it seems all of Microsoft’s cloud-based businesses, including the Azure Cloud, Xbox Live and Office 365, are doing pretty well.
Another factor that was discussed around the earnings is that many of the Office 365 seats are coming from large enterprise accounts. About 25% of enterprise customers are using at least some Office 365 seats. Also, many of the Office 365 seats are the higher-cost, premium versions, which translates to higher revenue and profit for Microsoft. This bodes well for Microsoft as more and more attention and revenue shifts to the cloud.
All in all, Office 365 has grown about 500% in just one year. Of course, maintaining that sort of growth rate over the course of the next couple of years will be difficult, if not impossible. But it is clear that Microsoft has used its cash cow productivity suite to give itself an anchor in the cloud/SaaS business landscape.
Microsoft has also made Office 365 more channel-friendly, allowing VARs and MSPs to bill customers directly via the Office 365 portal. Putting Office 365 into the hands of Microsoft’s sizeable and powerful channel is a surefire way to increase its sales.
As I have written before, I use Office 365 for Home, which allows me to put it on five computers in the house. The only thing missing for me is the ability to run it on Android tablets. But at $9.95 a month with 25GB of SkyDrive and Skype minutes included, I think it is an excellent value.
Some of the initial confusion that held back earlier adoption of Office 365 was that many people didn’t realize the applications are installed on the machine. You can access web-based versions of the apps on guest computers, but on your own computers there is little difference between the SaaS-based and traditional versions.
So maybe the old dog can learn new tricks. Good for Microsoft, if it has been able to adopt the new SaaS-based methods. Now, for its next trick, let’s see if it can sell more Windows 8 phones and tablets.
2013: Year of the hybrid cloud
Hybrid clouds, cloud brokers, big data and software-defined networking (SDN) predicted to be the major trends in cloud computing in 2013.
The time for dabbling in cloud computing is over, say industry analysts. 2013 is the year that companies need to implement a hybrid cloud strategy that puts select workloads in the public cloud and keeps others in-house.
“Next year has to be the year that enterprises get serious about having real cloud operations as part and parcel of their IT operations,” says John Treadway, vice president at Cloud Technology Partners, a consultancy.
Treadway says that in the last year, he and his colleagues have worked with many large enterprise clients who have implemented half-baked, haphazard cloud infrastructure schemes – most of them private and developed in-house.
They have some virtualization, explained Treadway. And they may even have some automation. “But when you peel back the onion you can’t find the type of cloud infrastructure where you can request a resource and have it provisioned automatically on the fly. There is still a lot of human labor involved in those processes,” Treadway says.
He expects most of these in-house private clouds to be abandoned in favor of more strategic hybrid mixes of public cloud services and more commercially packaged private cloud services like those based on the OpenStack standard or VMware’s vCloud.
Prediction 1: Hybrid clouds will take off
“I’m convinced 2013 is going to be the year of the hybrid cloud infrastructure,” says Tracy Corbo, principal research analyst at Enterprise Management Associates.
“Cloud infrastructure outages happen. That’s a fact that is not going to change. So it only makes sense for an enterprise to look at which workloads it can put in the public cloud, where the risk of outage and data loss is greater, and which should be placed in a more controlled private cloud,” Corbo says.
That hybrid infrastructure as a service (IaaS) split is likely to be divided between the systems of engagement (customer service systems, for example) and the systems of record (like back-end financials), explains Chandar Pattabhiram, former marketing executive at Cast Iron, a cloud integration company purchased by IBM, and currently vice president of marketing at Badgeville, a company specializing in gamifying cloud applications.
Hybrid cloud deployment is not a new concept. Research published by Gartner shows that the hype surrounding hybrid cloud reached its peak last summer. According to Gartner’s research scheme, early adopters take on a technology at the peak of the hype cycle, then there’s a period of disillusionment when stories of early adoption failures come out. That’s followed by a slow adoption phase, when vendors begin delivering on second- and third-generation services. Finally, there’s the phase where adoption becomes mainstream.
Amazon is still the uncontested leader in the public IaaS cloud space, with expectations that it will continue to pull down more than $2 billion annually in that market. But vendors with long and strong ties to the enterprise are all rolling out public offerings alongside their private cloud services.
For example, HP jumped into the public cloud market last summer when it rolled out its OpenStack-based HP Cloud Services. According to Dan Baigent, senior director of business development for HP Cloud Services, there is certainly a “pent-up” need for public IaaS. “We expect to see the most interesting growth patterns in that space,” he says, arguing that HP’s long-standing relationship with enterprise customers will help it make inroads there. It’s difficult for enterprises to support multiple clouds from different vendors, Baigent says, and getting both parts of the hybrid cloud from the same provider can simplify that prospect.
Treadway argues that many public cloud vendors will go under. “It’s very hard to play in the Amazon game. The margins are small and if you don’t offer a differentiating value, you are very likely going to fail,” Treadway says.
Lydia Leong, research vice president at Gartner, agrees that 2013 will see some corrections to the public cloud market, pointing to Web hosting vendor GoDaddy quietly closing the doors on its public cloud operation in October as a prime example. “These closures certainly don’t give any kind of signal that cloud computing is a failure. They simply demonstrate that it doesn’t make sense for every vendor to compete in that market,” she says.
Prediction 2: Hybrid-cloud management becomes key
If hybrid clouds are the deployment of choice, EMA’s Corbo says the IT industry has to make significant inroads on how to manage that type of environment in terms of resource provisioning, scalability and performance.
“It’s unfortunate that the IT industry seems to build infrastructure, and managing it is always an afterthought,” Corbo says.
IDC issued a report in August that said the worldwide cloud systems management software market grew dramatically, totaling an estimated $754 million in 2011, an increase of 84.4% over 2010. The top two vendors, CA Technologies and VMware, benefited from market demand for a range of capabilities beyond self-service provisioning.
These include automated infrastructure orchestration and virtualization management used to enable dynamic infrastructure resource pooling and sharing across multiple workloads and user groups, and the ability to track cloud resource consumption to support life-cycle management, capacity planning and chargeback.
IDC listed the other top players in that market – determined by revenue – as HP, IBM and BMC. That said, more than 63% of the revenue these companies raked in came from sales to companies managing private clouds only.
IDC expects most successful cloud systems management software vendors will offer customers a wide range of capabilities beyond self-service portals and automation and will be architected to support heterogeneous hypervisor and hardware platforms, as well as a range of hybrid cloud scenarios.
RightScale, a company that bucks Corbo’s assertion that management is an afterthought, has offered a service since 2006 that integrates with multiple clouds and allows users to view federated cloud deployments from a single dashboard. The company boasts a 4.7 million customer base supporting a variety of public and private cloud platforms, including Amazon Web Services, Windows Azure, Google Compute Engine, Datapipe, HP, Logicworks, SoftLayer and Tata. On the private cloud side, RightScale can be used to manage workloads on the OpenStack, CloudStack and Eucalyptus platforms, all of which are open source.
Other startups that have jumped into this space include Cohuman, Okta, Scalr and Tier 3.
Brian Donaghy, CEO of Appcore, a cloud services company in Des Moines, Iowa, that offers a portfolio of private, public and hybrid services, says that developing the skills to manage a multi-cloud environment will make IT professionals a hot commodity in the next year as well.
Prediction 3: Cloud brokerages and integration hubs will explode
Early adopters of the cloud tended to take on the technology when they were building singularly focused greenfield applications. “So the issues associated with integrating either legacy systems or other cloud-based applications were not so urgent,” says Martin Capurro, a product manager at Savvis Direct, a public cloud service offered by national telco CenturyLink. “They are now.”
IDC predicts that by 2015, nearly $1 of every $6 spent on packaged software, and $1 of every $5 spent on applications, will be consumed via the software as a service (SaaS) model. As enterprises buy more and more of their applications as SaaS, issues of integrating the applications themselves, developing security and auditing processes across them, and figuring out how to create B2B links with partners using the same applications will all need to be addressed.
Cloud service brokerage (CSB) schemes set up by cloud providers themselves seek to address the first problem while systems integration services and integration hubs seek to address the latter two.
Gartner coined the term “CSB” for cloud arbitrage in 2009. More recently, NIST has defined this category of service provider as “an entity that manages the use, performance and delivery of cloud services and negotiates relationships between cloud providers and cloud consumers.”
Practically speaking, CSBs are the middlemen that aggregate SaaS applications in the cloud and supply a portal through which their customers can buy, access and, to a degree, control the use of multiple multi-tenant cloud applications within their own companies. The broker negotiates a good price that is passed on to the customer, provides a single point for end users to sign on to these applications and presents the IT department with one monthly bill.
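A rough sketch of the billing side of that brokerage role, assuming a hypothetical catalog of negotiated per-seat discounts (the app names, prices and discount rates are invented for illustration):

```python
# Hypothetical catalog: list price per seat per month and the discount
# a broker has negotiated with each SaaS vendor.
CATALOG = {
    "crm":     {"list_price": 30.0, "broker_discount": 0.15},
    "storage": {"list_price": 10.0, "broker_discount": 0.10},
}

def monthly_bill(seats_by_app):
    """Roll per-app subscriptions into the single bill a CSB presents to IT."""
    lines, total = [], 0.0
    for app, seats in seats_by_app.items():
        entry = CATALOG[app]
        price = entry["list_price"] * (1 - entry["broker_discount"])
        cost = round(price * seats, 2)
        lines.append((app, seats, cost))
        total += cost
    return lines, round(total, 2)

lines, total = monthly_bill({"crm": 100, "storage": 250})
print(lines)   # per-app line items at the negotiated price
print(total)   # one number for the IT department
```

The aggregation itself is trivial; the broker's real value is in the negotiated discounts, the single sign-on portal and the consolidated billing relationship.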
According to Treadway, integration hubs – defined as single integration points between multiple cloud applications – are much needed today, but are more difficult to pull off than CSBs. That’s because many custom-built cloud applications are not built using standard APIs, which means that linking them to any other application requires a spaghetti network of connections that is nearly impossible to maintain, he says. The problem is further exacerbated by the proliferation of devices most cloud applications are now required to support.
Almost every major player in the IT space that bases a big chunk of its business on integration has a play in this market, as well. Other entrants include Cordys and Informatica.
“The advice I give clients is to make sure they have a comprehensive integration strategy developed upfront, and only build or buy applications that have standard APIs and were built within a service-oriented architecture,” Treadway says.
Prediction 4: Big data analytic tools will get better
Big data — the voluminous amount of unstructured or semi-structured data a company creates for which it is cost prohibitive to load into a relational database for analysis — just gets bigger and bigger in the cloud and businesses are realizing they can’t afford to ignore that fact.
Using very geeky predictive modeling and data mining principles, big data analytics tools let users digest volumes of transactional data and other streams like those collected from Web server logs, social media reports and mobile-phone call records that have not previously been tapped by business intelligence tools.
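As a toy illustration of the stream-oriented processing involved, the sketch below tallies status codes and hot paths from web server log lines in a single pass, with no relational load step. The log lines are invented, and the fixed token positions assume this simplified format:

```python
from collections import Counter

# Hypothetical sample of simplified web server log lines
# (timestamp shown without a timezone so each line splits cleanly).
LOG_LINES = [
    '10.0.0.1 - - [12/Dec/2012:10:00:01] "GET /index.html HTTP/1.1" 200 512',
    '10.0.0.2 - - [12/Dec/2012:10:00:02] "GET /checkout HTTP/1.1" 500 128',
    '10.0.0.1 - - [12/Dec/2012:10:00:03] "GET /checkout HTTP/1.1" 200 256',
]

def summarize(lines):
    """One-pass streaming summary: no database load required."""
    status_counts, path_counts = Counter(), Counter()
    for line in lines:
        parts = line.split()
        path, status = parts[5], parts[7]   # positions in this toy format
        status_counts[status] += 1
        path_counts[path] += 1
    return status_counts, path_counts

statuses, paths = summarize(LOG_LINES)
print(statuses.most_common())   # error rates at a glance
print(paths.most_common(1))     # hottest path
```

Production-scale tools apply the same idea, counting and aggregating as data streams past, across thousands of nodes and far messier inputs.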
“What they want is actionable analytic big data tools that give them the right information to make business decisions in real time,” Treadway says. But innovation in this space, he says, is random at best.
According to CB Insights, a consultancy that tracks venture capital activities, analytics companies have taken the majority of $1.1 billion in big data venture capital funding deals on record since the second quarter of 2011. These analytics companies include those offering real-time data, such as Metamarkets, and others offering analytics solutions, such as Datameer.
But established companies are also investing in this area. Take HP’s acquisitions of both Vertica (a data analytics firm bought in February 2011 for an undisclosed amount) and Autonomy (a U.K.-based information management software firm bought in August 2011 for $10.3 billion). Prior to HP’s spree, IBM and EMC had already bought big data analytics databases, scooping up Netezza and Greenplum, respectively.
“It’s invaluable to our customers to be able to have the ability to put a wrapper of knowledge around the hordes of data coming into a company through its cloud deployments,” HP’s Baigent says.
Prediction 5: “SDN” will become just “networking”
The idea of software-defined networking rocked the networking world in 2012.
Inside the SDN scheme, the control plane gets decoupled from the data plane in network switches and routers. The control plane runs as software on servers and the data plane is implemented in commodity network equipment.
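A minimal sketch of that split, with invented class and switch names: the controller holds the global view and computes rules as ordinary server software, while the "switches" only apply the match-to-action tables pushed to them.

```python
class Switch:
    """Data plane: a dumb, fast match -> action table."""
    def __init__(self, name):
        self.name, self.flow_table = name, {}

    def install_flow(self, match, action):
        self.flow_table[match] = action

    def forward(self, dst):
        # Table miss: flood (or, in a real switch, punt to the controller).
        return self.flow_table.get(dst, "flood")

class Controller:
    """Control plane: global view, decides where traffic should go."""
    def __init__(self, switches):
        self.switches = switches

    def host_learned(self, host, port, at_switch):
        # Push a rule to every switch. A real controller would compute
        # shortest paths over the learned topology instead.
        for sw in self.switches:
            action = port if sw is at_switch else "toward:" + sw_name(at_switch)
            sw.install_flow(host, action)

def sw_name(sw):
    return sw.name

s1, s2 = Switch("s1"), Switch("s2")
ctrl = Controller([s1, s2])
ctrl.host_learned("10.0.0.5", "port3", s1)
print(s1.forward("10.0.0.5"))   # local delivery rule
print(s2.forward("10.0.0.5"))   # rule pointing toward s1
```

The decoupling is the whole point: forwarding policy lives in replaceable software with a network-wide view, while the hardware is reduced to executing simple tables.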
In July, cloud server software giant VMware plunked down $1.05 billion in cash and another $210 million in assumed unvested equity for Nicira, an SDN startup that had lured high-profile talent away from both Juniper and Cisco.
Cisco’s initial reaction to the prospect of SDN was called Cisco Open Network Environment (Cisco ONE), which is an architectural scheme designed to enable Cisco networks to be flexible and customizable to meet the needs of newer networking and IT trends such as cloud, mobility, social networking and video. Cisco then made some announcements with the OpenStack community in support of its open source SDN projects, and in November the company agreed to pay $1.2 billion in cash to acquire Meraki, a San Francisco-based provider of networking systems that can be managed from the cloud.
“All of these moves just point to the eventual realization that software-defined networking is just going to collapse back into a new definition of networking that is going to evolve in 2013,” says Terremark CTO John Considine.
Prediction 6: Gamification will drive sales and customer service
Gartner predicts that by 2014, 70% of Global 2000 companies will have at least one cloud-based application that employs game mechanics to influence employee or customer behavior. According to Badgeville’s Pattabhiram, many of them already do, and next year will simply be the year that people sit up and take notice of how effective those applications are in driving business opportunity.
Gamification is the concept of applying the psychology of game-design thinking to non-game applications to make them more fun, engaging and addictive. The psychological carrots include the need for public recognition and the thrill of competition. The applications in the business world include boosting sales, encouraging collaboration and information sharing among employees and partners, and increasing customer service satisfaction.
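A toy sketch of the mechanics, assuming hypothetical point values and badge thresholds (commercial platforms such as Badgeville expose these as configurable behavior rules rather than hard-coded tables):

```python
from collections import defaultdict

# Hypothetical point values per behavior event and badge thresholds.
POINTS = {"answer_posted": 10, "answer_accepted": 25, "sale_closed": 50}
BADGES = [(100, "Bronze"), (250, "Silver"), (500, "Gold")]

class Gamifier:
    def __init__(self):
        self.scores = defaultdict(int)
        self.badges = defaultdict(list)

    def record(self, user, event):
        """Translate a behavior event into points; award badges as thresholds pass."""
        self.scores[user] += POINTS.get(event, 0)
        for threshold, badge in BADGES:
            if self.scores[user] >= threshold and badge not in self.badges[user]:
                self.badges[user].append(badge)
        return self.scores[user]

g = Gamifier()
for _ in range(3):
    g.record("ana", "sale_closed")
print(g.scores["ana"], g.badges["ana"])
```

The carrot is the visible score and badge; the analytics value is the event stream itself, which records exactly which behaviors each user performed and when.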
There are more than 50 gamification products, platforms and services available on the market including Badgeville, BunchBall, Crowd Factory, Gamify.it, Hoopla, Kudos, Objective Logistics and Rypple (a Salesforce.com company) to name only a very few.
Pattabhiram contends that the potential benefits of gamification to the enterprise, should it be implemented next year, are behavior management, rewarding participation, controlled social mechanics and behavior analytics.
Prediction 7: Hybrid security options will bloom
IDC security analyst Phil Hochmuth has no doubt that there will be security breaches in the cloud next year, whether we get wind of them or not, mainly because hard-to-control mobile devices are the dominant means by which employees access the cloud.
“That is one of the biggest reasons we are seeing most vendors take on a hybrid delivery model for their security products,” Hochmuth says. Under this scheme, security vendors are offering – and enterprises are deploying – traditional appliance-based security products for on-premise access, then enlisting a SaaS product (most of the time from the same vendor, to help facilitate unified security management policies) to shore up secure access from mobile clients. IDC predicts that over the next three years, hybrid deployments will comprise 60% of all deployments, a market the firm says will balloon to $3.3 billion by 2016.
Prediction 8: Data sovereignty issues will multiply
Controversy about the jurisdiction and legality of data stored in the cloud and outside of a customer’s home country will erupt as cloud adoption grows in 2013, says Jim Reavis, executive director of the Cloud Security Alliance.
But don’t expect government policy changes to help mitigate the problem, Reavis says. Greater customer awareness of data residency options, such as format-preserving encryption, will ease these concerns, and technological innovation will have a greater impact on solving this global policy question than government action.
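Format-preserving encryption is attractive for data residency because ciphertext keeps the shape of the plaintext (digits stay digits, length unchanged), so encrypted values still fit existing schemas and validation rules even when stored abroad. The sketch below demonstrates only that property; it is emphatically not a secure FPE scheme (real deployments use constructions such as NIST's FF1):

```python
import hmac
import hashlib

def toy_fpe(digits, key, decrypt=False):
    """Toy digit-string cipher that preserves length and character class.

    NOT cryptographically secure: each digit is shifted by a keyed,
    position-dependent offset. It exists only to show that the output
    has the same format as the input.
    """
    out = []
    for i, ch in enumerate(digits):
        mac = hmac.new(key, str(i).encode(), hashlib.sha256).digest()
        shift = mac[0] % 10
        d = int(ch)
        out.append(str((d - shift) % 10 if decrypt else (d + shift) % 10))
    return "".join(out)

ct = toy_fpe("4111111111111111", b"demo-key")
print(ct)                                         # still 16 digits
print(toy_fpe(ct, b"demo-key", decrypt=True))     # round-trips
```

Because the ciphertext is itself a valid 16-digit string, it can sit in a column validated as a card-number-shaped field, which is exactly what makes the technique useful when data must cross borders in encrypted form.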
Prediction 9: IaaS-based services will expand
EMA’s Corbo predicts there will be an increase in services delivered as part of standard IaaS offerings.
“You will see IT folks thinking hard and long about what other infrastructure services can be off-loaded into the cloud,” Corbo says. Specifically, she expects growth in WAN optimization (a service already offered by a startup called Aryaka; mainstays such as Cisco and Akamai have also moved in this direction) and in load balancing as a service in the cloud (Amazon and Rackspace both offer it).
“It’s not a question of being able to do this stuff in-house. It’s a matter of figuring out if it’s cheaper and more efficient to do it in the cloud,” Corbo says.
Prediction 10: Prepare for more outages and shakeouts
Corbo and Gartner’s Leong were in sync on this prediction: if customers ask public cloud infrastructures to take on more and more responsibility, they should be prepared to accept more downtime as well.
Outages are a bigger risk than breaches
“Outages will happen as a matter of course,” Leong says. “It can’t really be helped when you take into consideration all of the permutations of all the services riding on these infrastructures. There is no way every contingency can be tested for.”
CSA’s Reavis warns that customers should be prepared for other kinds of failures in the cloud in 2013 as well: business failures.
Because cloud is at a natural point in the entrepreneurial business cycle, we can expect several cloud startups to be acquired, change their business focus or go out of business entirely, Reavis says.
“These shakeouts will have differing consequences impacting the availability of customer data and information systems. Customers need to make sure they are mitigating these risks through a combination of building redundancy in cloud security architectures and performing due diligence in cloud business relationships,” he says.
Moves are meant to encourage users to explore cloud options
Two of the bigger names in cloud computing, VMware and Rackspace, each released low-cost or free trial versions of their cloud offerings today.
The news follows an announcement by Red Hat earlier this week that it too would offer a free version of its cloud computing platform. Red Hat’s and Rackspace’s offerings help users build private clouds based on the OpenStack software code, while VMware is offering a free trial version of its vCloud software, which allows access to public cloud resources.
The moves signal an effort by cloud service providers to entice businesses that may have virtualized environments to expand to a public or private cloud, one analyst says.
VMware officials say they hope to lower the adoption barrier for customers interested in expanding from an on-premises virtualized environment powered by VMware to a public cloud service offered by one of the more than 150 VMware-certified public cloud vendors. With the swipe of a credit card on the vCloud portal at VMware.com, customers can launch a vCloud Service Evaluation: a public cloud instance of Linux virtual machines hosted by a vCloud service provider the company does not disclose. VMware announced the service today, and it will be available in the coming weeks, says Joe Andrews, director of product marketing for vCloud Services. The VMs cost $0.04 per hour per gigabyte of memory.
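At that rate, a rough back-of-the-envelope estimate is easy to sketch. This assumes the quoted price scales linearly with memory and that no other charges apply — both are assumptions, not published pricing details:

```python
# Back-of-the-envelope cost sketch for the vCloud Service Evaluation pricing
# quoted above ($0.04 per hour per gigabyte of memory). Assumes the rate
# scales linearly with memory and no other charges apply -- assumptions for
# illustration, not published pricing details.

RATE_PER_GB_HOUR = 0.04  # USD, as quoted

def monthly_cost(memory_gb: float, hours: float = 730.0) -> float:
    """Estimated cost of running one VM for a month (~730 hours)."""
    return RATE_PER_GB_HOUR * memory_gb * hours

if __name__ == "__main__":
    for gb in (1, 4, 16):
        print(f"{gb:>3} GB VM: ~${monthly_cost(gb):,.2f}/month")
```

By this arithmetic, a small 1GB evaluation VM runs about $29 a month, and a 4GB VM about $117 — cheap enough to make the "instant gratification" experiment Leong describes plausible.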
VMware has not had this “instant gratification,” says Gartner cloud analyst Lydia Leong, which has led some VMware customers who want to experiment with public cloud services to do so at competitors such as Amazon Web Services and Rackspace. “This is a basic offering that doesn’t really have any bells and whistles, but is a reasonable way to get the ‘feel’ of a VMware cloud,” she says, noting that it’s not meant to be a formal proof-of-concept evaluation tool.
Rumors have circulated in recent weeks that VMware will release its own infrastructure-as-a-service offering, as opposed to working through the vCloud network of service providers. Andrews says he can’t comment on speculation but notes that VMware will “continue to invest in our service provider ecosystem” and is a “partner-led company.” VMware chief Paul Maritz is slated to step down as president and CEO next month; he will be replaced by EMC President Pat Gelsinger.
Rackspace, meanwhile, is doubling down on its investment in OpenStack, having recently released a public cloud fully powered by OpenStack, the largest such deployment, Leong says. Rackspace’s free private cloud software, code-named “Alamo,” includes an Ubuntu operating system running the KVM hypervisor. The software can be deployed in a customer’s own data center, in a Rackspace data center, or in a colocation facility, and Rackspace private cloud guru Jim Curry says the announcement is meant to encourage hybrid deployments in which a customer’s private cloud connects to the company’s OpenStack-powered public cloud.
“We believe that the majority of our customers and cloud users will be running hybrid cloud environments for a long time,” Curry says. This move could be a first step for organizations in that direction.
When operating systems break their hardware bonds and deepen their ties with hypervisors, virtual machines will seem almost quaint
I find it puzzling whenever I come across any reasonably sized IT infrastructure that has little or no virtualization in place, and my puzzlement turns to amazement if there’s no plan to embrace virtualization in the near future. Whether it’s due to the “get off my lawn” attitude that eventually killed off many AS/400 admins or simply a budget issue, clinging to a traditional physical infrastructure today is madness.
For one thing, what are these companies buying for servers? If they’re replacing old single- and dual-core servers with new quad-core boxes (at minimum) and simply moving the services over, then they’ve got a whole lot more hardware than they need. Each of their server workloads is running on hardware that could easily handle a half-dozen virtual servers, even with free hypervisors. Are these companies only going to embrace server virtualization when the rest of us have already moved past it?
If we look forward a few years, we can expect fundamental changes in the way we manage virtual servers. As if the advent of stem-to-stern enterprise virtualization weren’t already revolutionary, I think there’s plenty more to come. The next leap will occur when we start seeing operating systems designed and tuned explicitly to run as VMs — systems that won’t even function on physical hardware because they lack meaningful driver support and the other trappings of the physical server world. What’s the point in carrying a few thousand drivers and hardware-specific functions around in a VM anyway? They’re useless.
Eventually we’ll see widely available OS builds that dispense with the vast majority of today’s underpinnings, excising years of fat and bloat in favor of kernels highly specific to the hypervisor in use, with different ideas about how memory, CPU, and I/O resources are allocated and managed. We’re backing into that now by adding CPU and RAM to a running VM through hot-plug extensions originally conceived for physical RAM and CPU additions.
But this is the long way around — it’s a kludge. When an OS kernel can instantly request, adapt, and use additional compute resources without going through these back alleys, we’ll be closer to the concept of true cloud computing: server instances with no concern for underlying hardware, no concept of fixed CPUs, no concept of fixed or static RAM. These will be operating systems that tightly integrate with the hypervisor at the scheduling level, that care not one whit about processor and core affinity, about optimizing for NUMA, or about running out of RAM.
(I should note that Microsoft has already done something like this in the forthcoming Windows Server 8 Hyper-V, enabling the hypervisor to hot-add RAM to a Microsoft OS even as old as Windows Server 2003 R2. It just says, “Oh, more RAM,” and goes with it — good show.)
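On a Linux guest, that same “Oh, more RAM” behavior is visible through the kernel’s standard memory-hotplug sysfs interface, where hypervisor-added RAM appears as new memory blocks to be brought online. A minimal sketch of reading that interface (the sysfs paths are the standard Linux ones; the listing is simply empty on systems without hotplug support):

```python
# Sketch: how a Linux guest exposes hot-pluggable memory via sysfs.
# Each /sys/devices/system/memory/memoryN directory is a block of RAM the
# kernel can online or offline. When the hypervisor hot-adds RAM, a new
# block appears and is brought into use with:
#   echo online > /sys/devices/system/memory/memoryN/state
from pathlib import Path

def memory_block_states(sysfs: str = "/sys/devices/system/memory") -> dict:
    """Map each memory block (e.g. 'memory42') to its state ('online'/'offline')."""
    root = Path(sysfs)
    if not root.is_dir():  # non-Linux host, or kernel without memory hotplug
        return {}
    return {p.parent.name: p.read_text().strip()
            for p in sorted(root.glob("memory[0-9]*/state"))}

if __name__ == "__main__":
    for block, state in memory_block_states().items():
        print(f"{block}: {state}")
```

The point of the column stands: the guest kernel already has the plumbing to absorb resources appearing out of nowhere; what’s missing is the tight, automatic negotiation with the hypervisor.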
This will naturally take years to be fully realized, but I’m sure it’s coming. The hypervisor will become strong and smart enough to take over these functions for any compliant guest OS, essentially turning each VM into an application silo. As the load on the application increases, the kernel concerns itself with communicating the growing resource requirements to the hypervisor, which then makes the lower-level decisions about what physical resources the VM may consume, transitioning other instances to different physical servers to make room when necessary.
In turn, this will dispense with the notion of assigning fixed emulated CPU and RAM allotments to VMs; instead we’ll set minimum, maximum, and burst limits: like paravirtualization, but completely OS-agnostic, without relying on a host OS to play middleman or on a fixed kernel. Each VM may know it’s running as a VM, but it will run its own kernel and address hardware directly, with the hypervisor managing the transaction.
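The minimum/maximum/burst model described here could be outlined as follows. This is purely a conceptual sketch: the names and the clamping policy are invented for illustration and imply no real hypervisor API:

```python
# Conceptual sketch of the minimum/maximum/burst resource model described
# above. All names and the admission policy are invented for illustration;
# no real hypervisor API is implied.
from dataclasses import dataclass

@dataclass
class ResourcePolicy:
    minimum: float  # always guaranteed to the VM
    maximum: float  # sustained ceiling
    burst: float    # short-term ceiling above the sustained maximum

    def grant(self, requested: float, bursting: bool = False) -> float:
        """Clamp a VM's resource request to its policy."""
        ceiling = self.burst if bursting else self.maximum
        return max(self.minimum, min(requested, ceiling))

# A VM guaranteed 2 GB of RAM, capped at 8 GB, allowed to burst to 16 GB:
mem = ResourcePolicy(minimum=2.0, maximum=8.0, burst=16.0)
```

Under such a policy the VM never declares a fixed size at all; the hypervisor grants whatever the moment’s request warrants, within the envelope.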
We’re talking about fixed-purpose servers that have embedded hypervisors, probably redundant, with the ability to upgrade on the fly by upgrading a passive hypervisor, transitioning the load nearly instantly, then upgrading the other. While we’re at it, we’re also talking about small-footprint physical servers that are essentially CPU and RAM sockets with backplane-level network interfaces completely managed in software.
Are these advanced virtual servers virtual servers at all? Or are they services? If they evolve to become software services like databases and application servers that understand their environment, there may be no need to run more than one or two instances — period.
Rather than dozens of Web server VMs, you might have just two — but two that can grow from, say, 500MHz to 32GHz in an instant as the load jumps, from consuming 6GB RAM to 512GB in the same timeframe, but without the need to assign any fixed value. With a suitably high-speed backplane, it’s conceivable that a single server instance could span two or more physical servers, with the hypervisors divvying up prioritized tasks, keeping RAM local to the individual process, wherever that happens to be. Naturally, we’re talking about massively multithreaded apps, but isn’t that the whole idea behind virtualization and multi-core CPUs?
Perhaps, at some point in the future, we’ll look at a blade chassis or even several racks of them, and instead of seeing a few dozen physical servers running our hypervisor du jour, we instead view them as a big pile of resources to be consumed as needed by any number of services, running on what might be described as an OS shim. If there’s a hardware failure, that piece is easily hot-swapped out for replacement, and the only loss might be a few hundred processes disappearing from a table containing tens of thousands, and those few hundred would be instantly restarted elsewhere with no significant loss of service.
Maybe, if we ever reach a reality like this, those stalwarts still nursing a decrepit physical data center will finally warm up to the idea of running more than one server per physical box. Heck, at that point, they won’t have a choice.