Mesosphere has closed new funding to help it bring the open-source Mesos software to a wider audience
Apache Mesos, a software package for managing large compute clusters that’s been credited with helping Twitter to kill its Fail Whale, is being primed for use in the enterprise.
One of its main backers, a startup called Mesosphere, announced Monday it has closed an additional $10.5 million round of funding and will use the money to develop new tools and support offerings to make Mesos more appealing to large businesses.
Mesos is open-source software originally developed at the University of California at Berkeley. It sits between the application layer and the operating system and makes it easier to deploy and manage applications in large-scale clustered environments.
Twitter adopted Mesos several years ago and contributes to the open-source project. The software helped Twitter overcome its scaling problems and make the Fail Whale — the cartoon symbol of its frequent outages — a thing of the past.
Mesosphere’s CEO, Florian Leibert, was an engineer at Twitter who pushed for its use there. He left Twitter a few years ago to implement Mesos at AirBnB, and last year he left AirBnB to cofound Mesosphere, which distributes Mesos along with documentation and tools.
On Monday Mesosphere said it had secured a new round of funding led by Silicon Valley investment firm Andreessen Horowitz. It will use the money to expand its commercial support team and develop Mesos plug-ins that it plans to license to businesses.
Mesos has several advantages in a clustered environment, according to Leibert. In a similar way that a PC OS manages access to the resources on a desktop computer, he said, Mesos ensures applications have access to the resources they need in a cluster. It also reduces a lot of the manual steps in deploying applications and can shift workloads around automatically to provide fault tolerance and keep utilization rates high.
A lot of modern workloads and frameworks can run on Mesos, including Hadoop, Memcached, Ruby on Rails and Node.js, as well as various Web servers, databases and application servers.
For developers, Mesos takes care of the base layer of “plumbing” required to build distributed applications, and it makes applications portable so they can run in different types of cluster environments, including on both virtualized hardware and bare metal.
It improves utilization by allowing operations staff to move beyond “static partitioning,” where workloads are assigned to a fixed set of resources, and build more elastic cluster environments.
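The utilization argument is easy to see with a toy calculation. The numbers below are hypothetical, chosen only to illustrate the point, and are not from Mesosphere:

```python
# Toy model: two workloads whose demand peaks at different times of day.
# Hypothetical hourly CPU-core demand over a six-slot day for a web tier
# (daytime peak) and a batch tier (nighttime peak).
web_demand   = [10, 40, 80, 80, 40, 10]
batch_demand = [70, 30,  5,  5, 30, 70]

# Static partitioning: each workload gets a fixed pool sized for its own peak.
static_capacity = max(web_demand) + max(batch_demand)   # 80 + 70 = 150 cores

# Elastic sharing (what Mesos enables): one pool sized for the combined peak.
elastic_capacity = max(w + b for w, b in zip(web_demand, batch_demand))  # 85

total_used = sum(web_demand) + sum(batch_demand)
slots = len(web_demand)

static_util  = total_used / (static_capacity * slots)
elastic_util = total_used / (elastic_capacity * slots)

print(f"static pools: {static_capacity} cores, {static_util:.0%} utilized")
print(f"elastic pool: {elastic_capacity} cores, {elastic_util:.0%} utilized")
```

Because the two peaks don't coincide, the shared pool needs far less hardware for the same work, which is why utilization rates climb once the static partitions come down.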
The software is used today mostly by online firms like Netflix, Groupon, HubSpot and Vimeo, but Mesosphere will target large enterprises — “the Global 2,000,” Leibert said — that are wrestling with large volumes of data and struggling to manage it all at scale.
That includes customer data collected at busy websites and operational data gathered in the field. “A lot of organizations are under pressure to do things at scale, they’re running a lot of diverse applications and the wheels are coming off,” said Matt Trifiro, a senior vice president at Mesosphere.
Mesos can manage clusters in both private and public clouds, and in December Mesosphere released a tool for deploying Mesos to Amazon Web Services. Both AirBnB and HubSpot manage their Amazon infrastructures with Mesos.
Mesosphere will continue to provide its Mesos distribution for free, including tools it developed such as Marathon. On Monday it released an update to the core Mesos distribution, version 0.19, along with new documentation.
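As a sketch of how Marathon is used in practice: it exposes a REST API that accepts JSON app definitions via its /v2/apps endpoint. The host name and app definition below are made-up examples, not details from the announcement:

```python
import json
from urllib import request

# A minimal Marathon app definition: run two instances of a command, each
# reserving 0.25 CPUs and 64MB of memory. Marathon injects $PORT for each
# task. The id and cmd here are hypothetical examples.
app = {
    "id": "/hello-world",
    "cmd": "python -m http.server $PORT",
    "cpus": 0.25,
    "mem": 64,
    "instances": 2,
}

def submit(app_def, marathon_url="http://marathon.example.com:8080"):
    """POST the app definition to Marathon's /v2/apps endpoint."""
    req = request.Request(
        marathon_url + "/v2/apps",
        data=json.dumps(app_def).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)
```

Calling `submit(app)` asks Marathon to keep two instances of the command running somewhere on the cluster; Marathon, as a long-running Mesos framework, restarts tasks that die, which is the "reduces manual deployment steps" point in practice.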
It plans to make money by developing and licensing plug-ins for Mesos for tasks like dashboard management, debugging, monitoring and security, and by selling professional services.
It has 25 full-time employees today, spread between Germany and San Francisco. “We’re building out our services operation as we speak,” Trifiro said.
Another day, another pretty infographic. This one breaks down the demographic differences between Facebook and Twitter.
Facebook and Twitter are the big boys in the social networking space. So big, in fact, that we’ve probably written about them a bit too much in 2010. But hey, why stop in December? This breakdown was put together by Digital Surgeons and shows demographic statistics (and a few fun facts) for both sites. You may know that Facebook is much larger with 500 million users compared to Twitter’s 106 million, but did you know that 52 percent of Tweeters update their status every day while only 12 percent of Facebook users do the same? How about the fact that half of Twitter’s users are in college, compared to only 28 percent of Facebook users? It shows just how much Facebook has changed since its days as a university-only social network. Enjoy.
And should developers build websites or applications?
Is HTML5 the Holy Grail for building next-generation Web applications?
And should developers ditch the browser for client applications that run on specific devices, like the iPhone and Android?
Those are the questions an all-star lineup of Web and application designers from Microsoft, Google, Twitter and other companies debated Thursday during a panel discussion at the annual USENIX technical conference in Portland, Ore.
Moderator Michael Maximilien, a software researcher, architect and engineer at IBM Research, asked panel members whether HTML5 is the answer for building browser-based applications that act like native applications and can be “written once and run everywhere.”
“We have always tried to come up with this universal GUI thing and I don’t think it has ever worked,” said Erik Meijer, a programming language designer who runs the cloud programmability team at Microsoft. “HTML5 in a sense is another attempt.”
But while HTML5 – which is prominent in the Google Chrome and Internet Explorer browsers – allows new kinds of interactive Web applications, even ones with offline storage, it has its limits, Meijer said. “It’s not really native. You still see rough edges. There is no silver bullet.”
Google’s Patrick Chanezon, developer relations manager for cloud and tools, argued that whether to use the HTML5 language comes down to how widely you want your application to be deployed. “If you’re doing iOS only, sure, just do everything native,” he said. But if you want something that works across Android and desktop browsers, HTML5 is the way to go, he said.
“So, build a sucky version in HTML5 but it works everywhere?” Maximilien asked with a smile.
Chanezon countered that HTML5 allowed Google to build some pretty good Gmail clients.
But Raffi Krikorian, infrastructure engineer at Twitter, also called out the limitations of HTML5, saying it’s “really nice to look at,” but can’t do things such as send notifications to users.
“A mobile app to me is more than just a UI,” Krikorian said.
The other member of the panel was Charles Ying, an engineer at Flipboard, which builds a personalized magazine for the iPad that pulls in a user’s Facebook and Twitter streams along with customized views of media sites.
Ying said HTML5 applications running at 60 frames a second, which Google has demonstrated in Chrome on desktops with WebGL-generated 3D graphics, are fast enough. But that speed is harder to achieve on mobile devices.
“HTML5 is successful because it’s the new moniker for the modern Web browser, the modern Web platform. But it’s still got a ways to go,” Ying said. “We try to build great experiences with it but we find that frame rates just aren’t cutting it when we try to do new animation.”
Most panelists seemed to agree that HTML5 is a big step forward for desktop Web browsers, but is still lacking on the mobile side.
That leads to the question of whether mobile developers should build Web applications or applications downloaded from an app store.
Google may have become heavy-handed in pressuring its Android device manufacturers to follow certain guidelines, recently released internal documents show. The documents have been released as part of a continuing lawsuit between Google and Skyhook Wireless over Google’s insistence that Motorola use Google’s own location services.
Skyhook had originally won a contract to replace Google’s location services with its own in all Motorola phones. The move apparently bothered the Mountain View, Calif.-based company, which allegedly pressured Motorola into dropping the agreement. Skyhook then sued Google, alleging anti-competitive behavior.
In one of the emails, from May 2010, Android group manager Dan Morrill makes reference to a “compatibility standard.” While such a set of guidelines shouldn’t be all that surprising, the way he described it is: Morrill wrote that it was obvious that “we are using compatibility as a club to make them do things we want,” according to the New York Times.
Such terminology seems to suggest that Google’s oft-repeated boast about Android being “open” may not be true. Indeed, carriers have increasingly clamped down on what they will allow phones to do, and now it appears Google is ready to make sure phone manufacturers do what it wants as well.
There could be a valid reason for this, however: unlike Apple, Google must deal with a multitude of devices and ensure that Android works properly on all of them. It’s the same type of problem Microsoft faces with Windows, one that likewise required the Redmond company to set standards for what it would support.
In any case, Google seems to be treading a fine line between acting in the best interest of the entire ecosystem and outright anticompetitive behavior: Morrill’s off-color comments certainly give critics fodder that Google is practicing the latter.
Betanews is looking for its readers’ opinions on Google and Android. Do you feel that the Mountain View company is heading down the same monopolistic path as Microsoft did more than a decade ago? Sound off in the comments.
We’ll run your opinions in a future story.
It’s hard to envision what Microsoft intends to do with Skype for corporate IT. So what can users expect to get out of Microsoft’s $8.5 billion investment?
The people who paid $2.5 billion for Skype two years ago are starting to look very astute.
A week or two ago that wasn’t the case. Skype’s IPO had been delayed. Google and Facebook were sniffing around Skype, but a buyout didn’t seem likely — too many simoleons for Facebook to muster, and all sorts of potential problems for Google, including an antitrust hurdle of Brobdingnagian proportions.
But this morning comes the announcement that Microsoft will purchase Skype for $8.5 billion. As a defensive move, Microsoft buying Skype has some merit: a Google Voice-and-Skype combination would prove a formidable challenge to Windows Live Messenger and Lync, both in the consumer market and in the enterprise. The Skype international telephone number inventory — and Skype’s long experience with local telcos all over the world — would provide an instant presence that Google Voice is still struggling to establish.
But $8.5 billion?
It’s hard to envision what Microsoft intends to do with Skype for corporate IT. Skype is widely regarded by network admins as anathema. Five years ago, at the Black Hat conference in Europe, Philippe Biondi and Fabrice Desclaux described Skype’s obfuscated code, saying it “looks like /dev/random,” and it hasn’t gotten any better since. Like any P2P program, Skype basically runs a backdoor, with random pings and relays going out even when nobody is using the phone. Security people love software like that.
So if the software’s no good in the corporate environment, what can enterprise IT expect to get out of Microsoft’s $8.5 billion investment?
Not much, as far as I can tell. Speculation that Skype will integrate into the Lync or Exchange environment seems completely far-fetched: The architectures are completely different, and the software isn’t reusable. Nor should Microsoft expect to keep many key Skype developers around, even with fat paychecks. Those international phone numbers and telco connections could help extend Lync, at least in theory. Skype has a good-sized user base, with 120 million active users every month, but that’s small potatoes compared to Live Messenger.
Some analysts speculate that Microsoft will meld the Kinect (currently Xbox-only, but coming soon to a Windows 8 near you) with Skype, but that doesn’t make a very compelling argument. Offering a $100 Kinect as a replacement for a $2.95 webcam makes about as much sense as … as … as buying Skype for $8.5 billion, eh?
The only positive note for IT, as best I can tell, is possible integration of P2P VoIP technology with Windows Phone. Microsoft may be playing a long game, with a Skype client for Windows Phone 8. Presumably the client wouldn’t send chills down the spine of Exchange and Lync admins. Having Skype available for free corporate calls worldwide certainly has a nice ring to it.
But $8.5 billion?
Dirty job No. 3: The human server rack
The panicked call at 3 a.m. is a sad fact of life for many system administrators. But not as many admins are woken in the dead of night and asked to part the floodwaters, perform acts of impromptu structural engineering, or serve as a piece of inanimate equipment.
Brian Saunier got such a call six years ago when he was a sys admin for a small Internet service provider in Georgia. An unusually large summer storm had clogged the drain outside the ISP’s building, causing a foot of rainwater to flood the first floor, where the server closet was housed.
Fortunately, the servers were protected by an airtight glass door, says Saunier, who’s now a network administrator for Cobb Energy Management. Unfortunately, the storm also knocked out the power, causing the cooling system to shut down and putting the servers in danger of overheating.
The door had to be opened. To complicate matters, the machine containing the ISP’s customer database was sitting on the floor of the server room, directly in the flood path.
First, Saunier and two fellow sys admins constructed a dam out of cardboard, towels, and anything else they could get their hands on to keep the water out. Then Saunier was elected to run in and grab the server before the waters reached it.
“Our plan was to open the door and run in and pick up the server, which I managed to do without incident,” he recalls. “But on the way in my foot clipped the dam and the water started pouring in. I was standing in a flooded server room in two feet of water holding a powered-on server and power cords. That was disconcerting.”
After about 10 minutes, Saunier’s colleagues located a table that fit inside the closet, so he could put the machine down and commence with mop-up operations, which lasted well into the following evening.
As with all storms, there was a silver lining. Saunier submitted his story via Facebook to Ipswitch Network Management Division, which named him a “SysAdmin All-Star” for going above and beyond the call of duty. His prize: an Apple iPad, which should prove easy to hold no matter how much water is swirling around his knees.
“It was actually an entertaining experience and a great story for getting a laugh now,” he says. “Besides, there’s really no way to avoid things like this, unless you want to be in the unemployment line.”
Dirty jobs survival tip: Always pack hip waders. And make sure your server room has a raised floor — before the floodwaters start to rise.
Microsoft Office 2010 takes on all comers: IBM Lotus Symphony 3.0
Don’t let the name fool you. IBM’s Lotus Symphony suite has almost nothing to do with the earlier incarnations of the Lotus Symphony suite — it’s now a rebranded spin-off of OpenOffice.org, with a heavily reworked interface courtesy of IBM’s programmers. It also features only three applications from the OpenOffice.org suite, but they’re the ones that matter: word processor, spreadsheet, and presentations.
Launch Symphony and you’d scarcely know you were dealing with anything derived from OpenOffice.org at all. The look of the program is markedly different and, in my opinion, substantially more attractive. Open a word processing document, for instance, and you’ll see a familiar toolbar along the top, but also a set of slide-out panels to the right of the text area: text properties, a document explorer/organizer, clip art, text styles, and a Widgets window. Also, multiple documents opened within Symphony are now organized as tabs within a single window by default, although you can undock them into their own window by right-clicking the relevant tab and selecting “Open in new window.”
The Widgets panel lets you add various Internet-based services — Google Gadgets or other Web pages — into that window for reference or access to online applications. The usefulness of this feature is a little unclear, but it seems like it’s being positioned as an open-ended version of the reference panel that’s used in Word for translations, word definitions, and more.
One omission that would have been handy to fill is support for the enhanced right-click context menu available through the Windows 7 Taskbar and Start menu, which typically provides access to recently used documents or common program functions. OpenOffice.org and LibreOffice don’t have this either, but IBM could easily have added this extra bit of system integration while it was redesigning the rest of the program’s look.
While Symphony may look different, most of its features (apart from obviously new things like the Widgets panel) and their behavior are almost identical to the OpenOffice.org counterparts. Anyone who has cut his or her teeth on the former program shouldn’t have trouble figuring out how Symphony works. Most of the menus sport the same option sets, and utilities like the Template Organizer behave the same way.
Many of the new features that have come to Symphony 3.0 are courtesy of the new OpenOffice.org code base — such as support for Microsoft Office VBA macros, or the Detective (dependency and debug tracer) for spreadsheet equations. Symphony can open most Office 2007 documents — although you get a warning that some documents may not render with total fidelity. A number of files I tried, like the mortgage calculator spreadsheet I tested with OpenOffice.org and LibreOffice, opened but had the same issues as with those two programs. Password-protected Word and Excel files can also be opened, but only if they’re saved in the Office 97-2003 binary format; password-protected Office 2007 XML-format files can’t be opened.
The relatively stripped-down focus of Symphony means some features found in OpenOffice.org proper aren’t found here. WordPerfect users looking to open their documents in Symphony are likely to be let down; support for WordPerfect documents is not included and is not available through the plug-in directory either. Format conversion also doesn’t seem as well-supported in Symphony as in OpenOffice.org. When I couldn’t open an .html document, I looked for a plug-in to allow that. The closest I could find was an output filter that saves ODF as a .html document and a plug-in that converts .html files to ODF spreadsheets (not text documents), but no import filter. In short, those already using ODF as their standard document format will find Symphony a lot more accommodating.
The future of computing, so we’re told, lies in the cloud. It’s already possible to spend practically your entire computing life in your browser window, using web-based apps like Google Docs and Photoshop Express Editor in place of more traditional desktop apps like OpenOffice and Paint.NET.
The only downside of this approach is keeping all your online apps and accounts together in one neat place. This is where Jolicloud comes in, offering a desktop-like experience in your browser window. But why stop there? Jolicloud has gone one step further and developed an entire operating system, Joli OS, that takes your browser-based desktop and places it right in front of you.
There are two ways of installing Joli OS. The first is the traditional standalone approach: download the ISO file, burn it to disc or a flash drive, and install it on a computer with no operating system. Jolicloud is keen to promote Joli OS in this way; its relatively small demands make it suitable for running on any computer built since the turn of the millennium, allowing you to press older machines back into service.
There’s also a Windows installer available — this installs Joli OS into its own folder (you’ll need 18GB spare) on your existing system drive, then updates your boot menu to give you the choice of booting into Windows or Joli OS. Annoyingly, Joli OS is made the default, but you can remedy the situation from the Advanced tab of the System Properties Control Panel (click “Settings” under Startup and Recovery).
Once installed, Joli OS replicates the desktop view seen in your browser window; the key difference between the OS version and your browser is that you can install traditional desktop apps alongside web-based shortcuts and apps. The best way to do this is to search the Jolicloud App Store; if your program of choice is there, select it and it’ll download and install silently in the background.
If that’s not enough, you can also install most applications manually too: Linux, Adobe AIR and Windows are all supported to varying degrees. You may need to install an add-on from the App Store first (for example, Wine for Windows).
Your Joli OS desktop is part of your Jolicloud account, which means it’s synchronized and accessible from web browsers on other computers (minus any native apps). If you install Joli OS on two or more computers, only those apps you install through the App Store will be synced between devices.
While Joli OS doesn’t provide an instant-on experience, it’s still much quicker to load than most flavors of Windows, which makes it a real option should you want to fire up your laptop, netbook or PC for a quick spot of browsing, social networking or word processing. And as time goes by, who’s to say you won’t find yourself spending more time in Joli OS and less in Windows?
I’ve always thought that one of the keys to Microsoft’s success in business computing is its support lifecycle policy. When you buy a Microsoft product for your business you can count on a long period of support and bug fixes and an even longer period of security updates. Now Microsoft is adapting its support lifecycle policy to the cloud.
Click here to read Microsoft’s main page on its support lifecycle. I’m running Windows 7 64-bit on a ThinkPad. The OS shipped October 22, 2009, and “mainstream support” ends January 15, 2015. After that, business products get five years of “extended support,” in which free (or rather, included in the software price) Microsoft support ends, other than security updates, and you can’t request feature changes anymore. But you can at least buy all the other support options. After 10 years, the “in the wilderness” phase of support usually starts, but at least Microsoft keeps support info on its website. This is the phase that Windows 2000, for example, recently entered.
In fact, if you ask me, in some cases the support lifecycle has gone too far. From the standpoint of wanting to improve the security of “the Windows ecosystem,” Microsoft’s decision to extend support for Windows XP to 2014 was counterproductive. As I said, typically this level of support runs out after about 10 years. Companies should be moving away from XP with all due speed. Not to digress too far on this; the real point I’m trying to make is that Microsoft has always been liberal about support periods, and this has helped it.
Imagine, by contrast, that you’re considering buying Macs for your business. Apple provides support for the current and previous OS X versions. Support for 10.5 (Leopard), which shipped October 26, 2007, will end with the release of 10.7 (Lion), which will ship later this year. It’s probably fair to say that generational upgrades of OS X aren’t the life-changing event that moving from XP to Vista or Windows 7 is, but it’s not nothing. IT has to test apps and configurations and develop a plan for rollout. You can’t take your time upgrading the way Windows shops do.
But as Microsoft says in its ads, “To The Cloud!”
I’m a Google Apps customer myself and I’ve experienced the scary/exciting moment of cloud computing from the customer standpoint: You start up your apps one morning and things are different. Wasn’t that button over there before? Where’d the View menu go? And the cosmetic changes are the small stuff. Who knows what’s changed in the internal behavior?
The big new concept in the online support lifecycle is “disruptive change.” Certain changes in software will be labeled as disruptive changes and trigger a set of rules, including a minimum of 12 months of prior notice before implementation.
What is a disruptive change? “Disruptive change broadly refers to changes that require significant action whether in the form of administrator intervention, substantial changes to the user experience, data migration or required updates to client software.” An example Microsoft provides is a required update to Outlook in order for it to work properly with Exchange Hosted Services.
Speaking of Outlook and other non-cloud apps, such applications sometimes communicate — one might actually say integrate — with cloud apps. The pairing of Outlook with hosted Exchange is the most common and obvious example, but the number of potential pairings is large and the potential complexity great. Microsoft is clear that even if PC software changes are mandated by cloud changes, they don’t affect the standard mainstream/extended support scheme for “on-premises software.”
And the new policies don’t apply to security updates. Such updates need to be implemented quickly and, since Microsoft owns the implementation, will be.
There are two other cloud lifecycle policies Microsoft announced: The company will provide a minimum of 12 months’ prior notification before ending an online service for business and developer customers. Also, Microsoft will retain customer data for a minimum of 30 days to facilitate customer migrations, renewal activities or the deprovisioning of the online service.
I imagine that private clouds are a different matter. I’ll have to check into that.
Meru on Wednesday announced high-performance Wi-Fi access points and software designed to let enterprise IT groups replace wired Ethernet switches at the network edge.
Dubbed Teton, the new 802.11n platform includes software to optimize usage by increasingly diverse Wi-Fi clients, including iPads and other tablets as well as smartphones. Teton introduces what Meru calls “WLAN 500 mode,” a network-wide service with features that let one access point serve up to 500 Wi-Fi clients in a 500-square-foot area.
(Earlier this week, in a separate announcement addressing big wireless networks, Motorola Solutions said it was jacking up its Wi-Fi controller capacity tenfold, to handle up to 10,000 access points.)
As for Meru, its Teton-based AP400 indoor and outdoor models will have three 802.11n radios, with an option for a fourth via a USB port, and each radio supporting three data streams. The total throughput per radio is 450Mbps (compared to 300Mbps for the prior two-radio, two-stream Meru models). Adding the optional fourth radio boosts the total per access point to 1.8Gbps.
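Those figures follow from standard 802.11n arithmetic, assuming the quoted 450Mbps is per radio, which is what makes the four-radio 1.8Gbps total add up:

```python
# 802.11n link-rate arithmetic behind Meru's quoted numbers.
# 150Mbps per spatial stream is the standard 802.11n maximum rate
# (40MHz channel, short guard interval).
MBPS_PER_STREAM = 150
STREAMS_PER_RADIO = 3

per_radio = MBPS_PER_STREAM * STREAMS_PER_RADIO   # 450 Mbps per radio
three_radio_ap = 3 * per_radio                    # 1,350 Mbps
four_radio_ap = 4 * per_radio                     # 1,800 Mbps = 1.8 Gbps

print(per_radio, three_radio_ap, four_radio_ap)
```

The same math explains the backhaul range quoted for Distribution Mode: two radios dedicated to backhaul give 900Mbps, and four give 1.8Gbps.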
Meru is not yet announcing prices for the new hardware, due out later this year. The company’s pitch is that the new product line will enable enterprises to phase out Ethernet edge switches, which are increasingly left idle as laptops and other clients connect via Wi-Fi. But even when idle, those switches carry costs that enterprises keep paying: support contracts, electricity, operational overhead and the traditional switch replacement cycle. It seems likely there will be some premium for the powerful new radios and the software features, but Meru’s pricing calculations may take into account the capital and operational costs of edge switches to spur adoption.
The idea of eliminating wired Ethernet as the primary network access has been controversial for the past two or three years. But even in 2009, a range of enterprises (many of them colleges and universities) were discovering that a majority of their wired Ethernet ports (90% at one university) were completely idle, because users were relying on Wi-Fi.
The new products make use of Meru’s distinctive “WLAN virtualization” software, which, among other things, lets you assign one channel to all access points, simplifying access and management. Additional channel assignments, for specific groups or types of clients or applications, can in effect be stacked across the access points, in what Meru calls channel layering.
But Meru is adding several capabilities that give the AP400 series the power, flexibility and intelligence to replace edge Ethernet switches (see the AP400 data sheet). The WLAN 500 “mode” or service already mentioned is one: a set of Meru algorithms let each radio coordinate with others, load balance, and steer radio signals to optimize throughput.
A second is called Distribution Mode, which Meru proposes as a way to replace, or at least reduce, the racks of wiring-closet switches. In Distribution Mode, an AP400 becomes an aggregation point (or “root AP” in Meru lingo) for other access points in the network, via Meru’s Wi-Fi meshing software. Those access points pass their traffic wirelessly back to the aggregator, which can offer 900Mbps to 1.8Gbps of backhaul capacity depending on the number of radios. The aggregator has a Gigabit Ethernet port uplinking to a higher-end, aggregation-level Ethernet switch.