Want proof? Industry-leading vendors are snatching up OpenStack-based companies
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
IT is headed toward becoming something akin to a utility service, transformed by OpenStack’s open, standardized cloud architecture, which will improve interoperability and render vendor lock-in a thing of the past.
Initially a solution adopted by smaller ISVs lacking the capital to build private clouds, OpenStack-based cloud solutions are shaping up to be the logical choice for large enterprise as industry leaders, including IBM, Cisco, EMC, HP and Oracle, bet on its value for defining the next-generation model for business computing.
These industry giants have been snatching up OpenStack-based companies over the past couple of years, building up their capabilities around the architecture. IBM and Cisco are some of the latest to close deals, with their respective acquisitions of Blue Box and Piston Cloud Computing. Other relevant acquisitions include EMC’s purchase of Cloudscaling, Oracle’s Nimbula acquisition, and Cisco’s MetaCloud acquisition.
OpenStack’s value for business lies in its capacity for facilitating seamless private-to-public scalability and extensive workload portability, while removing the need to lay out capital to acquire and maintain depreciating commodity hardware.
These companies see that innovations in open clouds will inevitably win out as the premier solution for business data management. The days of commodity hardware and internally managed datacenters are rapidly fading. With cloud services available on a pay-as-you-go basis and infrastructure as a service (IaaS) removing the need to invest in commodity hardware, customers will look at performance, pricing and quality of service as the most important factors in choosing a cloud provider, while maintaining the freedom to easily switch if a better option comes along.
OpenStack’s core strength is interoperability, allowing for seamless scaling across private and public environments, as well as easier transition and connectivity across vendors and networks.
Companies like IBM and Cisco buying up OpenStack-based providers to bolster their own hybrid cloud solutions does not mean the architecture will lose touch with its open-source roots. Open standards and interoperability go hand-in-hand and are at the heart of OpenStack’s unique capabilities.
What we are seeing is the maturation of OpenStack, with major names in business computing positioned to mainstream its adoption by leveraging their financial, IP and R&D resources and brand trust to meet complex demands and ensure confidence from large enterprise organizations transitioning to the cloud.
Cisco listed OpenStack’s capabilities for enhancing automation, availability and scale for hybrid clouds as playing a major role in its new Intercloud Network, while HP is utilizing OpenStack to facilitate its vendor-neutral Helion Network, which will pool the services of Helion partners to offer global workload portability for customers of vendors within their network.
Adoption of OpenStack by these providers signals a major shift for the industry, moving away from dependence on hardware sales and heavy contractual service agreements to a scalable IaaS utilities model, where customers pay for what they need when they need it and expect it to just work. Providers may need to shoulder the burden of maintaining datacenters but will reap the reward of pulling the maximum value from their commodity investments.
Interoperability may seem like a double-edged sword for companies that were built on their own software running exclusively on their own hardware. But the tide is shifting and they realize that closed platforms are losing relevance, while open architecture offers new opportunities to expand their business segments, better serve customers, and thrive with a broader customer base.
Cisco recently added new functionalities for its Intercloud offering, extending virtual machine on-boarding to support Amazon Virtual Private Cloud and extending its zone-based firewall services to include Microsoft Azure. Last year, IBM partnered with software and cloud competitor Microsoft, each offering their respective enterprise software across both Microsoft Azure and the IBM Cloud to help reduce costs and spur development across their platforms for their customers. OpenStack furthers these capabilities across the quickly expanding list of providers adopting the cloud architecture, enabling a vendor-agnostic market for software solutions.
Open standardized cloud architecture is the future of business IT, and OpenStack currently stands as the best and only true solution to make it happen. Its development was spurred by demand from small ISVs who will continue to require its capabilities and promote its development, regardless of whether large enterprise service providers are on board.
However, its inevitable development and obvious potential for enterprise application is forcing the hand of IT heavyweights to conform. Regardless of whether they’d prefer to maintain the status quo for their customers, the progress we’ve seen won’t be undone and the path toward vendor neutrality has been set.
Patrick Heim is the (relatively) new head of Trust & Security at Dropbox. Formerly Chief Trust Officer at Salesforce, he has served as CISO at Kaiser Permanente and McKesson Corporation. Heim has worked more than 20 years in the information security field. Heim discusses security and privacy in the arena of consumerized cloud-based tools like those that employees select for business use.
What security and privacy concerns do you still hear from those doing due diligence prior to placing their trust in the cloud?
A lot of them are just trying to figure out what to do with the cloud in general. Companies right now have really three choices, especially with respect to the consumer cloud (i.e., cloud tools like Dropbox). One of them is to kind of ignore it, which is always a horrible strategy because when they look at it, they see that their users are adopting it en masse. Strategy two is to build IT walls up higher and pretend it’s not happening. Strategy three is adoption, which is to identify what people like to use and convert it from the uncontrolled mass of consumerized applications into something security feels comfortable with, something that is compliant with the company’s rules with a degree of manageability and cost control.
Are there one or two security concerns you can name? Because if the cloud was always entirely safe in and of itself, the enterprise wouldn’t have these concerns.
If you look at the track record of cloud computing, it’s significantly better from a security perspective than the track record of keeping stuff on premise. The big challenge organizations have, when you look at some of these breaches, is they’re not able to scale up to secure the really complicated in-house infrastructures they have.
We’re [as a cloud company] able to attract some of the best and brightest talent in the world around security because we’re able to get folks that quite frankly want to solve really big problems on a massive scale. Some of these opportunities aren’t available if they’re not in a cloud company.
How do you suggest that enterprises take that third approach, which is to adopt consumerized cloud applications?
The first step is through discovery. Understand how employees use cloud computing. There are a number of tools and vendors that help with that process. With that, IT has to be willing to rethink their role. Employees should really be the scouts for innovation. They’re at the forefront of adopting new apps and cloud technology. The role of IT will shift to custodian or curator of those technologies. IT will provide integration services to make sure that there is a reasonable architecture for piecing these technologies together to add value and to provide security and governance to make sure those kinds of cloud services align with the overall risk objectives of the organization.
“If you look at the track record of cloud computing, it’s significantly better from a security perspective than the track record of keeping stuff on premise.”
Patrick Heim, Head of Trust & Security, Dropbox
How can the enterprise use the cloud to boost security and minimize company overhead?
If you think about boosting security, there is this competition for talent and the lack of resources for the enterprise to do it in-house. If you look at the net risk concept, where you evaluate your security and risk posture prior to and after you invest in the cloud, and you understand what changes, one of those changes is: what do I not have to manage anymore? If you look at the complexity of the tech stack, there are security accountabilities, and the enterprise shifts the vast majority of security accountabilities on the infrastructure side to the cloud computing provider; that leaves your existing resources free to perform more value-added functions.
What are the security concerns in cloud collaboration scenarios?
When I think about collaboration especially outside of the boundaries of an individual organization, there is always the question of how do you maintain reasonable control over that information once it’s in the hands of somebody else? There is that underlying tension that the recipient of that shared information may not continue to protect it.
In response to that, there is ERM, which provides a document-level control that’s cryptographically enforced. We’re looking at ways of minimizing the usability tradeoff that can come with adding in some of these kinds of security advancements. We’re working with some vendors in this space to identify what do we have to do from an interface and API perspective to integrate this so that the impact on the end user for adopting some of these advanced encryption capabilities is absolutely minimized, meaning that when you encrypt a document using some of these technologies that you can still, for example, preview it and search for it.
How do enterprises need to power their security solutions in the current IT landscape?
When they look at security solutions, I think more and more they have to think beyond the old model of the network perimeter. When they send data to the cloud, they have to adopt a security strategy that also involves cloud security, where the cloud actually provides the security as one of its functions.
There are a number of cloud-access security brokers, and the smart ones aren’t necessarily sitting on the network and monitoring, but the smart ones are interacting, using access and APIs, and looking at the data people are placing into cloud environments, analyzing them for policy violations, and providing for archiving and backup and similar capabilities.
The security tools that companies focus on should be oriented toward how these capabilities are going to scale across multiple cloud vendors, and toward how to get away from inserting them directly into the network and focus more on API integration with multiple cloud vendors.
VMI offers an effective, efficient way to provide access to sensitive mobile apps and data without compromising security or user experience
This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.
Corporate use of smartphones and tablets, both enterprise- and employee-owned (BYOD), has introduced significant risk and legal challenges for many organizations.
Other mobile security solutions such as MDM (mobile device management) and MAM (mobile app management) have attempted to address this problem by either locking down or creating “workspaces” on users’ personal devices. For BYOD, this approach has failed to adequately secure enterprise data, and created liability issues in terms of ownership of the device – since it is now BOTH a personal and enterprise (corporate)-owned device.
MAM “wrap” solutions in particular require app modification in exchange for ‘paper thin’ security. You cannot secure an app running on a potentially hostile (unmanaged) operating system platform, and critically you can’t wrap commercial mobile applications.
By contrast, Virtual Mobile Infrastructure (VMI) offers an effective, efficient way to provide access to sensitive mobile apps and data without compromising enterprise security or user experience.
Like VDI for desktops, VMI offers a secure approach to mobility without heavy-handed management policies that impact user experience and functionality.
From IT’s perspective, VMI is a mobile-first platform that provides remote access to an Android virtual mobile device running on a secure server in a private, public or hybrid cloud. The operating system, the data, and the applications all reside on a back-end server — not on the local device.
From a user’s perspective, VMI is simply another app on their iOS, Android or Windows device that provides the same natural, undiluted mobile experience, with all the accustomed bells and whistles. Written as native applications, these client apps can be downloaded from the commercial app stores, or installed on devices using MAM or app wrapping technologies.
As Ovum states, “Put more simply, this [VMI] means in effect that your mobile device is acting only as a very thin client interface with all the functionality and data being streamed to it from a virtual phone running in the cloud.”
Getting started with VMI
After downloading and installing the VMI client, users go through an easy setup process, inputting server names, port numbers, account names and access credentials. When users connect to the VMI device they see a list of available applications, all running on a secure server that communicates with the client through encrypted protocols.
The client accesses apps as if they were running on a local device, yet because they are hosted in a data center, no data is ever stored on the device. Enterprises can secure and manage the entire stack from a central location, neutralizing many of the risks that mobile devices often introduce to a network.
Two-factor authentication is supported via PKI certificates in the physical phone’s key store. The physical device forces the user to have a PIN (or biometric) to unlock the phone when there is a certificate in the hardware-backed key store. Additionally, the client supports variable session lengths with authentication tokens.
The server infrastructure that supports VMI clients can be implemented as multiple server clusters across geographic regions. As users travel, the client synchronizes with the server cluster closest to its physical location to access the applications on its virtual mobile device. The client continues to communicate with one server at a time, choosing the server location that provides the best performance.
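To make the cluster-selection idea concrete, here is a minimal sketch, in Python, of how a client might probe a set of candidate clusters and connect to the one with the lowest latency. The host names are placeholders, and a real VMI client would use its vendor’s own discovery and measurement logic rather than this simplified approach.

```python
import socket
import time

# Hypothetical cluster endpoints; real deployments would discover these.
CLUSTERS = {
    "us-east": ("vmi-us-east.example.com", 443),
    "eu-west": ("vmi-eu-west.example.com", 443),
    "ap-south": ("vmi-ap-south.example.com", 443),
}

def measure_latency(host, port, timeout=2.0):
    """Return the TCP connect time in seconds, or None if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

def pick_best_cluster(clusters):
    """Choose the reachable cluster with the lowest connect latency."""
    timings = {}
    for name, (host, port) in clusters.items():
        latency = measure_latency(host, port)
        if latency is not None:
            timings[name] = latency
    if not timings:
        raise RuntimeError("No VMI cluster is reachable")
    return min(timings, key=timings.get)

if __name__ == "__main__":
    print("Connecting to cluster:", pick_best_cluster(CLUSTERS))
```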
In a typical deployment, there are compute nodes that host the virtual mobile devices, a storage service that holds user settings and data, and controller nodes that orchestrate the system.
The controller node(s) can be connected to an Enterprise Directory service, such as Active Directory, for user authentication and provisioning, and systems management tools such as Nagios and Monit can be used to monitor all parts of the system to ensure they are up and behaving properly (e.g. are not overloaded). The server hosting the devices creates detailed audit logs, which can be imported into a third party auditing tool such as Splunk or ArcSight.
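As a rough illustration of that last point, the sketch below forwards lines from a hypothetical controller audit log to Splunk’s HTTP Event Collector. The log path, endpoint URL and token are assumptions; a production deployment would more likely rely on a Splunk forwarder or the auditing tool’s own collection agent.

```python
import json
import requests  # third-party: pip install requests

# Assumed values; replace with your own Splunk HEC endpoint, token and log path.
SPLUNK_HEC_URL = "https://splunk.example.com:8088/services/collector/event"
SPLUNK_TOKEN = "00000000-0000-0000-0000-000000000000"
AUDIT_LOG = "/var/log/vmi/audit.log"  # hypothetical audit log location

def forward_audit_log(path):
    """Send each line of the audit log to Splunk's HTTP Event Collector."""
    headers = {"Authorization": "Splunk " + SPLUNK_TOKEN}
    with open(path) as log:
        for line in log:
            line = line.strip()
            if not line:
                continue
            payload = {"event": line, "sourcetype": "vmi:audit"}
            response = requests.post(SPLUNK_HEC_URL, headers=headers,
                                     data=json.dumps(payload), timeout=5)
            response.raise_for_status()

if __name__ == "__main__":
    forward_audit_log(AUDIT_LOG)
```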
VMI is platform-neutral, which means organizations can write, test, run and enhance a single instance of an app on a ‘gold disk’ OS image, rather than building separate apps for each supported end-user platform. This represents significant time and cost savings for resource-constrained IT organizations.
And while VMI takes a different approach to securing mobile endpoints than MDM, it does not aim to replace those solutions. Instead, VMI can integrate with MDM, MAM and other container solutions allowing organizations to use MDM to configure and manage an enterprise-owned virtual mobile device running in a data center, and MAM to support version management and control upgrade scheduling of VMI thin clients.
Mobile by design
Because VMI is optimized for smartphones and tablets with small touch screens and many sensors, users enjoy native apps and a full mobile experience. VMI supports unmodified commercial apps, allowing for greater workflow and productivity, and complements sandbox container solutions that provide limited offline access to apps such as corporate email by providing a richer user experience when the user is online (the vast majority of the time).
Users can also access separate work and personal environments from a single device, enjoying Facebook and Instagram and sending personal emails without worrying that corporate IT teams will seize or wipe their personal data. When an employee leaves an organization, IT simply revokes their access privileges to the virtual mobile device.
Similar to VDI, there are many different business scenarios in which organizations should evaluate VMI. The most common include:
Healthcare – Enables access to electronic health records and other sensitive apps and data from mobile devices, in compliance with HIPAA privacy requirements.
Financial Services – Facilitates access to more sensitive client transaction data and business processes, from both personally owned and enterprise owned devices.
Retail – Supports secure Point of Sale as a Service for credit card transactions, protecting the confidentiality of customer data accessed both on and off premises.
Enterprise BYOD – Provides secure access to native apps from employee-owned mobile devices; keeping all data secure in the data center while at the same time not infringing on personal privacy.
Commercial Services – Extends the mobile enterprise to contractors, partners and customers.
Classified mobility – Allows government and security services to access data and applications from classified mobile devices, ensuring compliance with the thin client requirements of NSA’s Mobility Capability Package.
With 1.9 billion devices expected to hit the market by 2018, IT professionals are on the hunt for a more effective way to secure the enterprise. VMI provides the access they need without compromising security or user experience.
Despite insourcing efforts, the expansion of nearshore centers is not necessarily taking work away from offshore locations. Eric Simonson of the Everest Group discusses the five main drivers responsible for the rise in domestic outsourcing, why Indian providers dominate the domestic landscape and more.
IT service providers placed significant focus on staffing up their offshore delivery centers during the previous decade. However, over the past five years, outsourcing providers have revved up their U.S. domestic delivery center activity, according to recent research by outsourcing consultancy and research firm Everest Group.
The American outsourcing market currently employs around 350,000 full-time professionals and is growing between 3 and 20 percent a year depending on function, according to Everest Group’s research.
Yet the expansion of nearshore centers is not necessarily taking work away from offshore locations in India and elsewhere. Big insourcing efforts, like the one announced by GM, remain the exception. Companies are largely sticking with their offshore locations for existing non-voice work and considering domestic options for new tasks, according to Eric Simonson, Everest Group’s managing partner for research.
We spoke to Simonson about the five main drivers for domestic outsourcing growth, the types of IT services growing stateside, why Indian providers dominate the domestic landscape, and how providers plan to meet the growing demand for U.S. IT services skills.
Interest in domestic IT outsourcing is on the rise, but you say that that does not indicate any dissatisfaction with the offshore outsourcing model.
Simonson: This isn’t about offshore not working and companies deciding to bring the work back. That’s happening a bit with some call center and help desk functions. But, by and large, these delivery center setups are more about bringing the wisdom of global delivery into the domestic market. The fundamental goal is industrializing the onshore model vs. fixing what’s broken offshore.
Can you talk about the five main drivers behind their increased interest in locating stateside?
Simonson: The first is diversification of buyer needs. As buyers have to support new types of services, certain types of tasks may be better delivered nearshore rather than offshore.
Secondly, there may be a desire to leverage the soft skills of onshore talent. This occurs when you need someone with a certain type of domestic business knowledge or dialect or cultural affinity.
Thirdly, domestic sourcing can be a way to overcome the structural challenges associated with offshore delivery, such as high attrition and burnout in graveyard shifts.
Fourth, companies may be seeking to manage certain externalities, like regulatory requirements or fears about visa availability. To some extent, these reasons are not necessarily based on true requirements, but are a convenient justification for choosing to outsource domestically rather than take on the potential risks of offshore.
Finally, there may be client-specific needs that demand domestic solutions—a local bank that wants to keep jobs in the community or a company with no experience offshore looking to start the learning curve.
Within IT services, what types of work currently dominate the domestic landscape?
Simonson: Application development is most prominent, with 123 domestic delivery centers in tier-one and -two cities serving financial services, public sector, manufacturing, retail and consumer packaged goods clients. Just behind that is IT infrastructure in similar geographies focused on those verticals as well. There are 80 consulting and systems integration centers and 68 testing centers as well.
It’s interesting to note that while U.S.-based providers tend to operate larger IT service centers domestically, it’s actually the Indian providers that dominate the landscape.
Simonson: Traditional U.S.-based multinationals have captured more scale in individual centers and have been able to grow them, in some ways, more strategically. They’ve been able to set up shop in smaller tier-4 cities like Ann Arbor or Des Moines and have more proven local talent models.
But the majority of domestic centers are operated by India-centric providers. Part of that is driven by their desire to get closer to their customers. With application and systems integration work, the ability to work more closely with the client is increasingly valuable. And with infrastructure work, concerns about data and systems access have encouraged Indian companies to offer more onshore options.
In addition, some of the bad press they’ve received related to visa issues is encouraging them to balance out their delivery center portfolios.
But Indian providers are not necessarily staffing up their centers with American workers.
Simonson: Indian providers are more likely to use visas to bring citizens of other countries (predominantly India) into the country to work on a temporary or permanent basis in a delivery center. About 32 percent of their domestic workforce working in delivery centers is comprised of these ‘landed resources.’ Across all providers, landed resources account for six percent of domestic service delivery employees. However, tightening visa norms and higher visa rejection rates are making it more difficult for providers to rely on foreign workers.
You found that approximately 43 percent of the delivery centers are located in the South, with almost half of those concentrated in the South Atlantic. And Texas has more than fifty. Is that simply due to the fact that it’s cheaper to operate there?
Simonson: Cheap helps. But equally important are overall population trends. The South is growing, while regions like the Northeast or Midwest are either stable or on the decline. If you look at where people are going to school or moving and where corporations are relocating their headquarters, it’s taking place from the Carolinas down through Florida and over through Arkansas, Oklahoma and Texas. Those states are also more progressive about attracting services businesses (although there are some exceptions outside of the South, like North Dakota and Missouri).
Do you expect the domestic IT outsourcing market to continue to grow?
Simonson: Yes, service providers expect an increase in demand for domestic outsourcing services by new and existing customers, and plan to increase their domestic delivery capabilities by adding more full time employees to their existing centers and establishing new delivery centers. In fact, 60 percent of delivery centers are planning to add headcount over the next three years with India-centric service providers expected to lead the expansion.
Tier-2 and tier-3 cities, like Orlando, Atlanta and Rochester, are poised for the greatest growth, with tier-1 and rural centers expecting the least amount of growth.
Will the supply of domestic IT talent keep up with this increased demand?
Simonson: The pressure to find IT talent has led service providers to adopt a range of approaches to extend their reach and develop ecosystems of talent. Many have developed educational partnerships, creating formal and informal relationships with colleges and technical institutes. They’re also basing themselves in cities known for their quality of life and recruiting entry-level and experienced talent from elsewhere. It all impacts what communities they decide to work in.
All service providers will have to expand their talent pools, particularly in IT. Automation of some tasks could increase capacity, but doesn’t provide the higher-complexity skills that are most valued onshore.
The products we reviewed show good signs that encryption has finally come of age.
Recipients of encrypted email once had to share the same system as the sender. Today, products have a “zero knowledge encryption” feature, which means you can send an encrypted message to someone who isn’t on your chosen encryption service. Today’s products make sending and receiving messages easier, with advances like an Outlook or browser plug-in that gives you nearly one-button encryption. And the products we reviewed have features like setting expiration dates, being able to revoke unread messages or prevent them from being forwarded. (Read the full review.)
AppRiver CipherPost Pro
Basically, you layer CipherPost Pro on top of your existing email infrastructure via a plug-in. It has mobile apps for iOS, Android, Windows phones and BlackBerry 10s that offer the ability to send and receive encrypted messages, but not attachments. To correspond with people outside your email domain, send a message with a Web link, which recipients click on and register with the system. The heart of the product is a special “Delivery Slip” sidebar that appears on the page as you are composing your message. This is where controls are located to enable message-tracking options, and to add an extra security layer. These are all nice features. If you have to send large attachments, then CipherPost should be on your short list.
DataMotion
DataMotion has a very mature offering that makes use of a gateway to process mail. Getting it set up will require a couple of hours, and most of that is in understanding the many mail processing rules. Users need to append a [SECURE] tag in the subject line to trigger the encryption process. You can also set up rules that will encrypt messages containing sensitive information. DataMotion doesn’t have any limits on the size of the user’s inbox. However, it does place a limit of up to 500MB worth of messages that can be sent in a user’s Track Sent Folder. Features include the ability to see exactly when your recipient opened the message and the attachment.
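A gateway rule of this kind amounts to simple pattern matching on the subject line and message body. The sketch below, in Python, illustrates the general idea only; it is not DataMotion’s actual rule engine, and the sensitive-data patterns are placeholders you would tune to your own policies.

```python
import re

# Placeholder patterns for content a mail-processing rule might treat as sensitive.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # U.S. Social Security number format
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # possible payment card number
]

def should_encrypt(subject, body):
    """Return True if the message matches an encryption rule."""
    if "[SECURE]" in subject.upper():        # explicit tag added by the sender
        return True
    return any(p.search(body) for p in SENSITIVE_PATTERNS)

print(should_encrypt("[secure] Q3 payroll", "See attachment"))   # True
print(should_encrypt("Lunch?", "Card: 4111 1111 1111 1111"))     # True
print(should_encrypt("Lunch?", "Noon works for me"))             # False
```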
HP Voltage
Voltage was recently purchased by HP and rebranded. The technology is an email gateway: software that sits on either a Linux or Windows server or in the cloud and inserts the encryption process between mail client and server. There are numerous add-on modules that come as part of this ecosystem. You administer the gateway via a Web browser, and there are dozens of options to set, similar to the DataMotion product. Voltage has what it calls a “zero download” client, software that can be used to exchange messages with someone not on its system. While parts of Voltage are showing their age, the overall experience is quite capable, and the add-ons for mobile and Outlook/Office are quite nifty.
Hushmail for Business
Hushmail is the easiest of the products we tested to set up and use. There is no software to install on the client side; all mail is accessed in one of two ways. The first is via a secure webmail client that connects to the Hush servers. This is the only way you can send encrypted email to someone who isn’t part of the Hush network. The second method is for users fond of their existing email clients who are communicating with other Hush users. In this situation there is literally nothing for them to do: they use their existing client to send an encrypted message. Between the client and the Hush server, mail is encrypted using either SSL or TLS. Once it arrives on the server, it is then encrypted via PGP. Hush has a 20MB limit on attachment size, and this could be a deal breaker for some businesses.
ProtonMail
Proton is one of the newer encrypted email services that have come along post-Snowden, with an emphasis on keeping your emails private. It makes a point of this by being based in Switzerland. However, the company is still building its product out, and as a result it has a very simple Web UI for its client and admin tool. Proton uses double password protection. The first password is used to authenticate the user. After that, encrypted data is sent to the user. The second password is a decryption key used to decrypt data on your device. Proton never sees that latter key, so it does not have access to the decrypted data. On top of all this encryption, the service also employs SSL connections so your data is encrypted across the Internet to and from its servers. There is no option for on-premises servers. While Proton is not really suitable for an enterprise deployment, it shows what the latest encryption products can deliver.
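The general shape of such a two-password scheme can be sketched as follows: one password produces a verifier used only for login, while the other derives a key that decrypts data locally and never leaves the device. This is an illustrative sketch, not Proton’s actual implementation, and it assumes the third-party cryptography package is installed.

```python
import base64
import hashlib
from cryptography.fernet import Fernet  # third-party: pip install cryptography

def derive_key(password, salt, purpose):
    """Derive a 32-byte key from a password, bound to a specific purpose."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt + purpose, 100_000)

salt = b"per-user-random-salt"  # in practice, random and stored per user

# Password 1: only a derived verifier is ever sent to the server.
login_verifier = derive_key("login-password", salt, b"auth").hex()
print("verifier sent to server:", login_verifier[:16], "...")

# Password 2: derives the mailbox key, which never leaves the device.
mailbox_key = base64.urlsafe_b64encode(derive_key("mailbox-password", salt, b"decrypt"))
box = Fernet(mailbox_key)

ciphertext = box.encrypt(b"The server only ever stores this ciphertext.")
print(box.decrypt(ciphertext))
```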
Tutanota
Of the products tested, Tutanota is the least reliable and least feature-laden. Tutanota uses a variety of clients to set up encrypted mail connections across your existing email infrastructure. There are no changes to your servers and you can continue using Outlook for sending unencrypted communications. We had some trouble with the installation, mainly because the software comes with German instructions and installs the German version of the .Net Framework. Once installed, though, the menus and commands are in English. Tutanota is based in Germany, which could be important for customers concerned about American email privacy. One of its distinguishing features is that its zero knowledge encryption process hides the message subject; most of its competitors still send this information in the clear.
Virtru
Virtru has a nice balance of plug-ins and mobile apps that support its easy-to-use encryption operations across a variety of email circumstances. If you have installed the necessary plug-in, when you want to send something there is a small toggle switch at the top of the compose screen. Turning that on brings up a “send secure” button to encrypt your message. Tool tips appear as you hover over the various options with your mouse, a nice touch. These include the ability to add an unencrypted introductory message that gives your recipient the context of the message you are sending and explains why you want to encrypt the remainder. You can also set when your message will expire or disable any forwarding for additional security.
Virtru also supports zero knowledge encryption, although it adds a separate activation step when a new user receives the first encrypted message.
Designed for developing economies, the Endless computer (which runs Linux) aims to deliver affordable and useful computing
Rural Mexico, the backstreets of Guatemala City, the outskirts of Mumbai. These aren’t places you find a lot of computers, for one simple reason: most computers are far too expensive. What you do find are lots of TVs, so why not build a cheap, flexible computer without a display? And ship it without a keyboard and mouse, because those are items that can usually be sourced locally at low cost.
What would computers do for people in these places? They would deliver information, education, and opportunity. Record keeping for farmers, reading lessons for children, tools for creating and communicating … the potential for computers to improve the lot of millions of people is just waiting on the right gear and I think the right gear is what a new company, Endless, is about to launch.
The result of three years of development, the company’s eponymous machine is a slightly eccentric design which, I’m told, was very successfully tested in its target markets. The device uses an Intel® Celeron® N2807 1.7 GHz Dual-Core processor (burst speed 2.1 GHz) with 2 GB of RAM. It has an RJ-45 Gigabit Ethernet port, two USB 2.0 ports (front, lower rear), a USB 3.0 port (upper rear), stereo line out, and HDMI and VGA outputs.
There are two Endless models: The $169 version with 32 GB eMMC (embedded MultiMedia storage) and SD Storage, and the $229 version with a 500GB hard drive. They are both powered by 12V input (the included adapter handles 100V to 240V at 50Hz or 60Hz) and the versions draw 24W and 30W respectively. The 500GB hard drive version (the version I tested) also includes an integrated speaker, 802.11 b/g/n WiFi, and Bluetooth 4.0.
What sets the Endless apart from other low cost machines is Endless OS, a highly customized version of Ubuntu Linux with Gnome (and lots of other interesting technology such as Xapian and OStree) that not only handles TVs as output devices (it scales and formats video output for readability), but also includes a huge library of applications and educational content. This is important because in emerging markets the Endless system will be useful and well-featured even if you don’t have any kind of networking services available.
While it’s based on open source projects, the Endless OS is not completely open source because it contains proprietary commercial code. The company’s open source philosophy is:
We embrace the principles of free and open-source software and acknowledge a great debt to it in creating Endless OS. Whenever we can, we work upstream and contribute back to open source. Although not everything we create can be open source, we release most components of our system under free software licenses. Many members of our core team have a long history with open source projects, and continue to be an active part of those communities. / You might notice that we maintain forks of many upstream packages. In most cases, this is because we submit our patches upstream and backport them to the stable versions that we ship.
Endless OS has been localized for a remarkable number of languages, and installation is polished and simple. It was in the installation process that I found the only issue I could identify in the whole system: I used a Vizio VP50 50-inch 720P HD Plasma TV via HDMI, and when the setup asked me if I could see the menu bars at the top and bottom of the screen, I clicked on “no” and the system adjusted the overscan. The result was that I could see a little of the menu bars, but I had to go into the TV setup to fix the display. It’s a minor problem, but Endless OS could do with a more comprehensive overscan adjustment system.
In operation, the system is smooth, fast, stable, and easy to understand and navigate. The applications (which include both productivity software as well as games) and content on the 500GB version I tested are extensive and the system includes a huge amount of Wikipedia and the Khan Academy (if an Internet connection is available, the system will automatically download software and content updates). You choose what content and software you want from what is essentially a built-in app store.
Endless also makes information available for developers, and while the operating system is only available on Endless’ own hardware, all open source modifications are available on GitHub (the company notes that it may make the disk images available in the future, which would likely spawn a wave of similar hardware products).
My only concern with the Endless system is that it doesn’t have a reset button or a recovery option at startup, so if you forget your password there’s no obvious way to wipe and start again (I tried the usual way of entering Linux recovery mode – holding down shift at boot – but that didn’t work). A similar concern applies to easily wiping the system, for example, if you were going to give your Endless computer to someone else.
So, who’s the Endless computer aimed at? Endless plans to sell its machines initially into markets such as Mexico and Guatemala, where it should be a good fit for schools and colleges as well as the emerging middle class. What I think is really powerful about the Endless concept is the operating system and its focus on being useful even when there’s no Internet connectivity. If we can add to that mesh networking and good old sneakernet for updates and enhancements, the potential for business and education in developing economies to get a computing boost is huge.
You can’t buy an Endless computer just yet (it’s due to ship in the near future) but you can register to be notified when it will be available.
The Endless computer gets a Gearhead rating of 5 out of 5.
Can a business-grade cloud storage service that doesn’t come from Google, Microsoft or Apple make it big in the enterprise? Here’s why Dropbox for Business makes a strong case.
Apple iCloud. Google Drive. Microsoft OneDrive. Box. Dropbox. Hightail (formerly YouSendIt). Online storage services have been a mainstream option for consumers for some time now. But as the business world wrestles with adopting cloud-based collaboration services, can a so-called independent company offer a competitive product to the business-centric offerings by Google (Apps/Drive), Apple (iCloud for Work) and Microsoft (Office 365)?
To answer this question, we take a closer look at Dropbox, arguably one of the most popular online storage services today, with more than 400 million registered users as of July 2015. Though it went through some security missteps in its early days, Dropbox successfully leveraged its popularity and success with consumers to develop a credible business-grade service – Dropbox for Business – that was launched in April 2013.
Despite being priced at $15 per user per month – compared to $10 per month for Dropbox Pro – Dropbox says the service now has 100,000 customers around the globe. (Unfortunately for power users looking to make the switch to Dropbox for Business, the plan starts at a minimum of five users. This means that small companies with fewer than five users will have to pay the equivalent of $150 per user, or $750 per year.) So what does the more expensive Dropbox for Business offer over the nonbusiness version of the product?
Administrators will see an additional “Admin Console” option added to their minimalistic Dropbox Web interface. Note also the additional Dropbox for “CIO.com.”
What you get is more than what you see
To be clear, Dropbox for Business builds off the basic Dropbox offering, which includes strong encryption, support for two-step authentication and the trademark simplicity of Dropbox. In addition, both “personal” Dropbox and Dropbox for Business accounts are supported by the official software clients – albeit separately; both can also be accessed from the Dropbox home page.
How the Dropbox app looks on Android after signing in to Dropbox for Business.
This is where the similarity ends. Unlike Dropbox Pro, Dropbox for Business comes with a long list of capabilities that include unlimited storage (available upon request; users are initially allocated 1GB each), centralized billing, phone support and an Admin Console for administrators. The Admin Console is used to access a range of other capabilities and controls exclusive to Dropbox for Business.
Depending on industry vertical, some businesses may be more concerned about the possibility of data leakage due to “over-sharing” or accidental leaks. On that front, Dropbox for Business offers various ways that organizations can tighten the lid with such controls as the ability to limit the sharing of links to external parties, or the joining of shared folders outside of your organization.
In addition, administrators can also mandate that only one Dropbox account can be linked to each computer – though users would still be able to access their private Dropbox accounts from the Web. Ultimately, while the controls won’t stop a determined insider from leaking confidential data to competitors, they should go a long way towards preventing any unintended sharing of files.
Finally, organizations will be interested in such Dropbox for Business features as its comprehensive audit log, creation of groups, unlimited file recovery and integration with third-party services, each of which is outlined below.
You can also specify a date range to download the entire Activity feed as a CSV file.
Dropbox for Business maintains a comprehensive feed of various activities under the “Activity” tab, ranging from the sharing and un-sharing of folders to the creation and sharing of links. Similarly, activities related to passwords, groups, membership, logins, admin actions, apps and devices are also logged.
Audit logs bring increased visibility and control over the sharing and access of company data, and could be enormously useful for tracing data leaks, as well as for narrowing down misconfigured devices. By being able to track permissions and apps that are linked to the Dropbox for Business account, administrators could also potentially spot successful phishing attacks, and even identify data that’s been compromised.
It’s important to note that individual file edits, deletions and additions are not currently shown in the Activity feed reports, though a running history of edits, deletions and additions of all files can be viewed from the main Dropbox Events page.
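Once the Activity feed has been exported as a CSV file, it is easy to mine with a short script. The sketch below filters an export for sharing-related events; the column names are assumptions, so check the header row of your own export before relying on them.

```python
import csv

# Column names here are assumptions; check the header row of your own export.
def sharing_events(csv_path):
    """Yield rows from an exported Activity feed that look like sharing actions."""
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            event = (row.get("Event type") or row.get("event") or "").lower()
            if "shar" in event:  # matches "shared", "unshared", "shared link", ...
                yield row

for row in sharing_events("dropbox_activity.csv"):
    print(row)
```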
Creating a group
Larger organizations will appreciate the Group feature in Dropbox for Business, and how it allows them to create departmental or project-level groups for easier collaboration. This feature makes it possible to share new information directly with an entire group instead of having to add each person individually – and likely missing some team members. Moreover, any new members that are added to a group will be automatically granted access to all shared folders to which the group has previously been invited.
You can also manage the permissions of a Group as a single entity when granting editing or view-only access, and the ability to create Groups can be restricted by the Dropbox administrator or left open to everyone. When individual and group permission settings differ, Dropbox will always apply the setting that grants users the highest level of file or folder access.
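In other words, the most permissive setting wins. A toy sketch of that resolution logic, using an assumed two-level access ranking, looks like this:

```python
# The two-level ranking is an assumption for illustration.
ACCESS_RANK = {"none": 0, "view-only": 1, "edit": 2}

def effective_access(individual_access, group_accesses):
    """Return the most permissive of a user's individual and group permissions."""
    candidates = [individual_access] + list(group_accesses)
    return max(candidates, key=lambda a: ACCESS_RANK.get(a, 0))

print(effective_access("view-only", ["edit"]))   # "edit": the group setting wins
print(effective_access("edit", ["view-only"]))   # "edit": the individual setting wins
```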
The many versions saved of this feature as it was being written. In this case, you can see that cloudHQ is used to cloud sync from a different online storage service to Dropbox.
One of the most powerful capabilities reserved for Dropbox for Business is undoubtedly its automatic storing of all versions of a file, as well as the ability to recover deleted files. In fact, it’s this author’s opinion that Dropbox for Business currently offers the best versioning support among the top cloud services.
Specifically, there is no limit to the number of versions that are saved, and versioning does not count toward your account’s total storage cap – which is unlimited anyway. Similarly, there are no time limits on when deleted data can be recovered.
While this feature certainly shouldn’t supplant a proper offline backup and disaster recovery strategy, storing multiple versions of a single file can help users, groups and companies quickly recover from editing mistakes, whether the mistake is noticed hours, days or even weeks later.
Third-party enterprise integration
Dropbox for Business also stands out due to the many third-party apps and services that are built on top of the Dropbox for Business API. The API essentially gives developers access to the members, groups and audit log data for a particular Dropbox for Business deployment.
While there are too many for an in-depth evaluation in this space, a few categories stand out:
Data loss prevention (DLP). For organizations that require better tools to manage sensitive data stored on Dropbox for Business, services like CloudLock and Elastica promise enterprise-class DLP with auditing and compliance functionality.
Identity management. Larger organizations or those using Active Directory can rely on cloud services such as Microsoft Azure AD or third-party offerings such as Centrify and Meldium to keep their Dropbox for Business managed and authenticated in a seamless fashion.
eDiscovery. Integration with industry leading tools (Nuix, Splunk) makes it possible for administrators to respond to litigation, arbitration and regulatory investigations involving files stored on Dropbox for Business. The comprehensive Activity feed data is automatically collected and visualized to help businesses better understand activities related to sharing, devices and security.
Of course, there are also the many third-party apps and services that work perfectly fine with the Dropbox platform without relying on the Dropbox for Business API. For organizations that are already on Dropbox for Business, this translates into usability and flexibility that is not matched by other cloud storage services.
If you’re planning to buy a new smartphone this year but haven’t bought one yet, it might be better to wait a bit longer: Apple, Samsung Electronics and OnePlus are all expected to launch new models in the next couple of months.
Here are some of the models you should see during the second half of the year:
While most of the products on this list (and their specs) are just rumors, Chinese smartphone maker OnePlus has been busy detailing its 2 model, which will be launched on July 27.
So far, OnePlus has revealed the phone will have a fingerprint sensor and be powered by Qualcomm’s Snapdragon 810. The company is using an upgraded version of the processor, v2.1, that isn’t susceptible to the overheating issues that the first version reportedly suffered from, it said.
OnePlus has also said the 2 will be the first high-end smartphone with a USB-C port, which is meant to be an all-in-one solution for power, video, and data delivery using a single cable with a reversible connector. There are already laptops that use the technology.
OnePlus is still keeping some things under wraps, though, including what the 2 will look like and cost.
Just like OnePlus, Dutch company Fairphone has started to build some hype for its second product. The goal is to build a smartphone that won’t easily break and can be easily repaired.
Hardware specs include a Qualcomm Snapdragon 801 processor and a 5-inch, Full HD screen. The camera has an 8-megapixel resolution and there is 32GB of storage that can be expanded using a microSD card. The LTE smartphone also has 2GB of RAM and two SIM slots. The operating system will be Android 5.1.
The Fairphone 2 will be available for pre-order before the end of August, and then ship during the following couple of months.
Samsung Galaxy Note 5
A new Galaxy Note model arriving during the second half of the year has become a bit of a tradition. A launch at the IFA trade show at the beginning of September looks likely. With the fifth version, Samsung needs to step up its game if it wants to compete more successfully with Apple’s iPhone 6 Plus, an upgrade of which before the end of the year is also a foregone conclusion.
Anticipated improvements include a new design that follows in the footsteps of the Galaxy S6. The Note 4, with its metal frame and plastic back, was a step in the right direction, but the metal frame and glass back on the S6 look classier. Another reported upgrade is a screen that’s slightly larger than the Note 4’s 5.7-inch display, with a 2K or 4K resolution.
LG G4 Pro
Launching a high-end smartphone during the second half of the year would be a departure for LG. That strategy has worked well for Samsung with the Galaxy Note family, so LG might want to emulate that to boost sales instead of just relying on dropping the price tag of the G4.
The G4 Pro is rumored to have some really impressive specs, including a 5.8-inch, 1440 by 2560 pixel screen, a 27-megapixel main camera, 4GB of RAM and Qualcomm’s Snapdragon 820 processor.
Sourcing most of the parts needed to build a phone with those specs shouldn’t cause LG much of a problem. The big question mark is whether the Snapdragon 820 will be ready for use in a smartphone before the end of the year. LG was the first to announce smartphones powered by the Snapdragon 808 and the 810, so the company is a likely candidate to be among the first to get its hands on the new model.
Apple iPhone 6s and 6s Plus
The iPhone 6 and 6 Plus, with their bigger screens, have been unmitigated successes. The challenge for the company this year will be to come up with upgrades that continue to build on that success.
Cameras are one aspect the company is expected to focus on with the iPhone 6s and 6s Plus. Upgrading the current 1.2-megapixel front camera makes a lot of sense since competing products launched this year have at least 5-megapixel cameras. To what extent an upgrade of the main camera to a reported 12-megapixel resolution will result in better image quality remains to be seen. The new models are anticipated to have a faster processor, more RAM and a speedier LTE connection.
It has been a few years since we last looked at single sign-on products, and the field has gotten more crowded and more capable.
Since we last looked at single sign-on products in 2012, the field has gotten more crowded and more capable. For this round of evaluations, we looked at seven SSO services: Centrify’s Identity Service, Microsoft’s Azure AD Premium, Okta’s Identity and Mobility Management, OneLogin, Ping Identity’s PingOne, SecureAuth’s IdP, and SmartSignin. Our Clear Choice test winner is Centrify, which slightly outperformed Okta and OneLogin. (Read the full review.)
Centrify Identity Service
Centrify has put together a solid single sign-on tool that also has some terrific mobile device management features. If you are in the market for both kinds of products, this should be on your short list. The admin user interface is well thought-out. Setup was quickly accomplished. Multi-factor authentication settings are located in the policy tab for users and in the apps tab for individual apps. The MFA choices are numerous, including email, SMS texts, phone calls and security questions. Centrify comes with dozens of canned reports, plus the ability to create your own using custom SQL queries.
Microsoft Azure Active Directory Access Control
Earlier this year Microsoft added Azure Active Directory to its collection of cloud-based offerings. It is difficult to set up because you tend to get lost in the hall of mirrors that is the Azure setup process. It is still very much a work in progress and mainly a developer’s toolkit rather than a polished service. But clearly Microsoft has big plans for Azure AD, as its new Windows App Store is going to rely on it for authentication. If you are already using Azure, then it makes sense to take a closer look at Azure AD. If you are looking for a general-purpose SSO portal, then you should probably look elsewhere.
Okta Identity and Mobility Management
Okta tied for first place in our 2012 review and it remains a very capable product. Okta’s user interface is very simple to navigate. Okta has beefed up its multi-factor authentication functionality: it now offers a mobile app, Okta Verify, as a one-time password generator, and it also supports other MFA methods. Okta has its own mobile app that can provide a secure browsing session and allow you to sign in to your apps from your phone. It contains some MDM functionality, although it is not a full MDM tool. Reports have been strengthened as well, but they only show the last 30 days.
OneLogin
OneLogin was the other co-winner of our 2012 review and while it is still strong, its user interface has become a bit unwieldy. OneLogin has numerous SAML toolkits in a variety of languages to make it easier to integrate your apps into its SSO routines. It also has specific configuration screens to set up a VPN login and take you to specific apps. OneLogin’s AD Connector requires all of the various components of the .Net Framework v3.5 to be installed. Once that was done, it was a simple process to install their agent and synchronize our AD with their service. OneLogin has 11 canned reports and you can easily create additional custom ones.
Ping Identity PingOne
Ping began as an on-premises solution with PingFederate, but now offers the cloud-based PingOne, the web access tool PingAccess and the OTP soft token generator PingID. Multi-factor authentication support is somewhat limited in PingOne; you can use PingID or SafeNet’s OTP tokens, and if you want more factors, you have to purchase the on-premises PingFederate. Reports are not this product’s strong suit: the dashboard gives you an attractive summary, but there isn’t much else. Ping would be a stronger product if it consolidated its various features and focused on the cloud as a primary delivery vehicle. If that isn’t important to you, or if you have complex federation needs, then you should give the company more consideration and look at PingFederate.
SecureAuth IdP
Of the products we tested, SecureAuth has the most flexibility and the worst user interface, a combination that can be vexing at times. SecureAuth is the only product tested that has to run on a Windows Server. The interface is supposed to get a refresh later this year, but the current version makes it easy to get lost in a series of cascading menus. The real strength of SecureAuth has always been its post-authentication workflow activities. SecureAuth’s MFA support is strong, featuring a wide selection of factors and tokens to choose from. This is a testimonial to its flexibility.
SmartSignin
SmartSignin has been acquired by PerfectCloud and integrated into its other cloud-based security offerings. It now supports seven identity providers (including Amazon, Netsuite and AD), with more on the horizon, and more than 7,000 app integrations. The identity providers make use of SAML or other federated means, and come with extensive installation instructions. This is a little more complex than some of its competitors. When it comes to MFA support, SmartSignin is the weakest of the products we reviewed. The company is working on other MFA methods, including SMS and voice, but didn’t have them when we tested. Also, MFA is just for protecting your entire user account; there is no mechanism for protecting individual apps.
There’s a lot more to it than just how many apps you can put in a box
Name a tech company, any tech company, and they’re investing in containers. Google, of course. IBM, yes. Microsoft, check. But just because containers are extremely popular doesn’t mean virtual machines are out of date. They’re not.
Yes, containers can enable your company to pack a lot more applications into a single physical server than a virtual machine (VM) can. Container technologies, such as Docker, beat VMs at this part of the cloud or data-center game.
VMs take up a lot of system resources. Each VM runs not just a full copy of an operating system, but a virtual copy of all the hardware that the operating system needs to run. This quickly adds up to a lot of RAM and CPU cycles. In contrast, all that a container requires is enough of an operating system, supporting programs and libraries, and system resources to run a specific program.
What this means in practice is you can put two to three times as many applications on a single server with containers as you can with a VM.
In addition, with containers you can create a portable, consistent operating environment for development, testing, and deployment. That’s a winning trifecta.
If that’s all there was to containers vs. virtual machines then I’d be writing an obituary for VMs. But, there’s a lot more to it than just how many apps you can put in a box.
Container problem #1: Security
The top problem, which often gets overlooked in today’s excitement about containers, is security. As Daniel Walsh, a security engineer at Red Hat who works mainly on Docker and containers, puts it: Containers do not contain. Take Docker, for example, which uses libcontainers as its container technology. Libcontainers accesses five namespaces — Process, Network, Mount, Hostname, and Shared Memory — to work with Linux. That’s great as far as it goes, but there are a lot of important Linux kernel subsystems outside the container.
These include all devices, SELinux, Cgroups and all file systems under /sys. This means if a user or application has superuser privileges within the container, the underlying operating system could, in theory, be cracked.
That’s a bad thing.
Now, there are many ways to secure Docker and other container technologies. For example, you can mount a /sys file system as read-only, force container processes to write only to container-specific file systems, and set up the network namespace so it only connects with a specified private intranet and so on. But, none of this is built in by default. It takes sweat to secure containers.
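For illustration, here is a hedged sketch of what a hardened docker run invocation might look like when driven from Python. The image name and network are placeholders, the flags are examples of the kinds of restrictions discussed above rather than a complete security recipe, and you should verify each option against the Docker version you actually run.

```python
import subprocess

image = "example/webapp:1.0"   # placeholder image and tag
network = "internal-only"      # a user-defined Docker network created beforehand

cmd = [
    "docker", "run", "--detach",
    "--read-only",             # mount the container's root filesystem read-only
    "--cap-drop", "ALL",       # drop all Linux capabilities the app doesn't need
    "--user", "1000:1000",     # don't run the application as root in the container
    "--net", network,          # attach only to a private, user-defined network
    image,
]

subprocess.run(cmd, check=True)
```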
The basic rule, as Walsh spells out, is that you’ll need to treat containers the same way you would treat any server application.
Another security issue is that many people are releasing containerized applications. Now, some of those are worse than others. If, for example, you or your staff are inclined to be, shall we say, a little bit lazy, and install the first container that comes to hand, you may have brought a Trojan Horse into your server. You need to make your people understand they cannot simply download apps from the Internet like they do games for their smartphone.
OK, so if we can lick the security problem, containers will rule all, right? Well, no. You need to consider other container aspects.
Rob Hirschfeld, CEO of RackN and OpenStack Foundation board member, observed that: “Packaging is still tricky: Creating a locked box helps solve part of [the] downstream problem (you know what you have) but not the upstream problem (you don’t know what you depend on).”
To this, I would add that while this is a security problem, it’s also a quality assurance problem. Sure, X container can run the NGINX web server, but is it the version you want? Does it include the TCP Load Balancing update? It’s easy to deploy an app in a container, but if you’re installing the wrong one, you’ve still ended up wasting time.
Hirschfeld also pointed out that container sprawl can be a real problem. By this he means you should be aware that “Breaking deployments into more functional discrete parts is smart, but that means we have MORE PARTS to manage. There’s an inflection point between separation of concerns and sprawl.”
Remember, the whole point of a container is to run a single application. The more functionality you stick into a container, the more likely it is that you should have been using a virtual machine in the first place.
So how do you go about deciding between VMs and containers anyway? Scott S. Lowe, a VMware engineering architect, suggests that you look at the “scope” of your work. In other words, if you want to run multiple copies of a single app, say MySQL, you use a container. If you want the flexibility of running multiple applications, you use a virtual machine.
In addition, containers tend to lock you into a particular operating system version. That can be a good thing: You don’t have to worry about dependencies once you have the application running properly in a container. But it also limits you. With VMs, no matter what hypervisor you’re using — KVM, Hyper-V, vSphere, Xen, whatever — you can pretty much run any operating system. Do you need to run an obscure app that only runs on QNX? That’s easy with a VM; it’s not so simple with the current generation of containers.
So let me spell it out for you.
Do you need to run the maximum number of applications on a minimum of servers? If that’s you, then you want to use containers — keeping in mind that you’re going to need to keep a close eye on the systems running your containers until container security is locked down.
If you need to run multiple applications on servers and/or have a wide variety of operating systems you’ll want to use VMs. And if security is close to job number one for your company, then you’re also going to want to stay with VMs for now.
In the real world, I expect most of us are going to be running both containers and VMs on our clouds and data-centers. The economy of containers at scale makes too much financial sense for anyone to ignore. At the same time, VMs still have their virtues.
As container technology matures, what I really expect to happen, as Thorsten von Eicken, CTO of enterprise cloud management company RightScale, put it, is that VMs and containers will come together to form a cloud portability nirvana. We’re not there yet, but we will get there.