OpenStack is redefining the business model for data solutions

Want proof? Industry-leading vendors are snatching up OpenStack-based companies

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

IT is headed toward being something more akin to a utility service, transformed by OpenStack’s open standardized cloud architecture, which will improve interoperability and render vendor lock-in a thing of the past.

Initially adopted by smaller ISVs that lacked the capital to build private clouds, OpenStack-based cloud solutions are shaping up to be the logical choice for large enterprises as well, as industry leaders, including IBM, Cisco, EMC, HP and Oracle, bet on OpenStack’s value in defining the next-generation model for business computing.

These industry giants have been snatching up OpenStack-based companies over the past couple of years, building up their capabilities around the architecture. IBM and Cisco are some of the latest to close deals, with their respective acquisitions of Blue Box and Piston Cloud Computing. Other relevant acquisitions include EMC’s purchase of Cloudscaling, Oracle’s Nimbula acquisition, and Cisco’s MetaCloud acquisition.

OpenStack’s value for business lies in its capacity for facilitating seamless private-to-public scalability and extensive workload portability, while removing the need to lay out capital to acquire and maintain depreciating commodity hardware.

These companies see that innovations in open clouds will inevitably win out as the premier solution for business data management. The days of commodity hardware and internally managed datacenters are rapidly fading. With cloud services available on a pay-as-you-go basis and infrastructure as a service (IaaS) removing the need to invest in commodity hardware, customers will look at performance, pricing and quality of service as the most important factors in choosing a cloud provider, while maintaining the freedom to easily switch if a better option comes along.

OpenStack’s core strength is interoperability, allowing for seamless scaling across private and public environments, as well as easier transition and connectivity across vendors and networks.

Companies like IBM and Cisco buying up OpenStack-based providers to bolster their own hybrid cloud solutions does not mean the architecture will lose touch with its open-source roots. Open standards and interoperability go hand-in-hand and are at the heart of OpenStack’s unique capabilities.

What we are seeing is the maturation of OpenStack, with major names in business computing positioned to mainstream its adoption by leveraging their financial, IP, R&D resources and brand trust to meet complex demands and ensure confidence from large enterprise organizations transitioning to the cloud.

Cisco has cited OpenStack’s capabilities for enhancing automation, availability and scale in hybrid clouds as playing a major role in its new Intercloud Network, while HP is using OpenStack to underpin its vendor-neutral Helion Network, which will pool the services of Helion partners to offer global workload portability to customers of vendors within the network.

Adoption of OpenStack by these providers signals a major shift for the industry, moving away from dependence on hardware sales and heavy contractual service agreements to a scalable IaaS utilities model, where customers pay for what they need when they need it and expect it to just work. Providers may need to shoulder the burden of maintaining datacenters but will reap the reward of pulling the maximum value from their commodity investments.

Interoperability may seem like a double-edged sword for companies that were built on their own software running exclusively on their own hardware. But the tide is shifting and they realize that closed platforms are losing relevance, while open architecture offers new opportunities to expand their business segments, better serve customers, and thrive with a broader customer base.

Cisco recently added new functionality to its Intercloud offering, extending virtual machine on-boarding to support Amazon Virtual Private Cloud and extending its zone-based firewall services to include Microsoft Azure. Last year, IBM partnered with software and cloud competitor Microsoft, each offering their respective enterprise software across both Microsoft Azure and the IBM Cloud to help reduce costs and spur development across their platforms for their customers. OpenStack furthers these capabilities across the quickly expanding list of providers adopting the cloud architecture, enabling a vendor-agnostic market for software solutions.

Open standardized cloud architecture is the future of business IT, and OpenStack currently stands as the best and only true solution to make it happen. Its development was spurred by demand from small ISVs who will continue to require its capabilities and promote its development, regardless of whether large enterprise service providers are on board.

However, its continued development and obvious potential for enterprise use are forcing IT heavyweights to conform. Whether or not they would prefer to maintain the status quo for their customers, the progress we’ve seen won’t be undone and the path toward vendor neutrality has been set.

 


Microsoft fires back at Google with Bing contextual search on Android

“Snapshots on Tap” echoes a feature coming with the next version of Android

Microsoft has pre-empted a new feature Google plans to include in the next version of Android with an update released Thursday for the Bing Search app that lets users get information about what they’re looking at by pressing and holding their device’s home button.

Called Bing Snapshots, the feature is incredibly similar to the Now on Tap functionality Google announced for Android Marshmallow at its I/O developer conference earlier this year. Bing will look over a user’s screen when they call up a Snapshot and then provide them with relevant information along with links they can use to take action like finding hotels at a travel destination.

For example, someone watching a movie trailer can press and hold on their device’s home button and pull up a Bing Snapshot that will give them easy access to reviews of the film in question, along with a link that lets them buy tickets through Fandango.

Google’s Now on Tap, which is slated for release with Android Marshmallow later this year, will offer similar features with a user interface that appears to take up less screen real estate, at least in the early incarnations Google showed off at I/O.

The new functionality highlights one of the major differences between Android and iOS: Microsoft can replace system functionality originally controlled by Google Now and use that to push its own search engine and virtual assistant. Microsoft is currently beta testing a version of its virtual assistant Cortana on Android for release later this year as well.

A Cortana app is also in the cards for iOS, but Apple almost certainly won’t allow Cortana to take over system capabilities from Siri, especially since Google Now remains quarantined inside the Google app on that mobile platform.

All of this comes as the three companies remain locked in a tight battle to out-innovate one another in the virtual assistant market as a means of controlling how users pull up information across their computers and mobile devices. For Microsoft and Google, there’s an additional incentive behind the improvements: driving users to their respective assistants has the potential to boost use of the connected search engines.



Google pushes back Project Ara testing to 2016

The company plans to test the device in the U.S.

Google is delaying initial testing for its modular smartphone, known as Project Ara, to 2016.

The company plans to test the device in the U.S., according to several tweets posted Monday by the Project Ara team. Neither the exact location nor precise timing of the tests was given.

The Project Ara smartphone is designed to let users easily swap out its components.

The idea is that users purchase the hardware modules, like processors and sensors, themselves and snap them together to create a customized smartphone. In so doing, users could improve their device on their own terms, rather than buying a new phone outright.

Google had planned to commence initial testing in Puerto Rico this year, though those plans were scrapped as part of a “recalculation,” announced last week.

The hashtag #Yeswearelate was affixed to one of the tweets on Monday.

Google did not immediately respond to a request for further comment.

 



Dropbox security chief defends security and privacy in the cloud

Patrick Heim is the (relatively) new head of Trust & Security at Dropbox. Formerly Chief Trust Officer at Salesforce, he has also served as CISO at Kaiser Permanente and McKesson Corporation, and has worked in information security for more than 20 years. Here, Heim discusses security and privacy around consumerized cloud-based tools, the kind employees pick up on their own for business use.

What security and privacy concerns do you still hear from those doing due diligence prior to placing their trust in the cloud?
A lot of them are just trying to figure out what to do with the cloud in general. Companies right now have really three choices, especially with respect to the consumer cloud (i.e., cloud tools like Dropbox). One of them is to kind of ignore it, which is always a horrible strategy because when they look at it, they see that their users are adopting it en masse. Strategy two is to build IT walls up higher and pretend it’s not happening. Strategy three is adoption, which is to identify what people like to use and convert it from the uncontrolled mass of consumerized applications into something security feels comfortable with, something that is compliant with the company’s rules with a degree of manageability and cost control.

Are there one or two security concerns you can name? Because if the cloud were entirely safe in and of itself, the enterprise wouldn’t have these concerns.

If you look at the track record of cloud computing, it’s significantly better from a security perspective than the track record of keeping stuff on premise. The big challenge organizations have, when you look at some of these breaches, is they’re not able to scale up to secure the really complicated in-house infrastructures they have.

We’re [as a cloud company] able to attract some of the best and brightest talent in the world around security because we’re able to get folks that quite frankly want to solve really big problems on a massive scale. Some of these opportunities aren’t available if they’re not in a cloud company.

How do you suggest that enterprises take that third approach, which is to adopt consumerized cloud applications?
The first step is through discovery. Understand how employees use cloud computing. There are a number of tools and vendors that help with that process. With that, IT has to be willing to rethink their role. Employees should really be the scouts for innovation. They’re at the forefront of adopting new apps and cloud technology. The role of IT will shift to custodian or curator of those technologies. IT will provide integration services to make sure that there is a reasonable architecture for piecing these technologies together to add value and to provide security and governance to make sure those kinds of cloud services align with the overall risk objectives of the organization.

“If you look at the track record of cloud computing, it’s significantly better from a security perspective than the track record of keeping stuff on premise.”

Patrick Heim, Head of Trust & Security, Dropbox

How can the enterprise use the cloud to boost security and minimize company overhead?
If you think about boosting security, there is this competition for talent and the lack of resources for the enterprise to do it in-house. If you look at the net risk concept, where you evaluate your security and risk posture prior to and after you invest in the cloud, and you understand what changes, one of those changes is: what do I not have to manage anymore? If you look at the complexity of the tech stack, there are security accountabilities, and the enterprise shifts the vast majority of security accountabilities on the infrastructure side to the cloud computing provider; that leaves your existing resources free to perform more value-added functions.

What are the security concerns in cloud collaboration scenarios?
When I think about collaboration especially outside of the boundaries of an individual organization, there is always the question of how do you maintain reasonable control over that information once it’s in the hands of somebody else? There is that underlying tension that the recipient of that shared information may not continue to protect it.

In response to that, there is enterprise rights management (ERM), which provides document-level control that’s cryptographically enforced. We’re looking at ways of minimizing the usability tradeoff that can come with adding some of these kinds of security advancements. We’re working with some vendors in this space to identify what we have to do from an interface and API perspective to integrate this, so that the impact on the end user of adopting these advanced encryption capabilities is absolutely minimized, meaning that when you encrypt a document using some of these technologies you can still, for example, preview it and search for it.

How do enterprises need to power their security solutions in the current IT landscape?
When they look at security solutions, I think more and more they have to think beyond the old model of the network perimeter. When they send data to the cloud, they have to adopt a security strategy that also involves cloud security, where the cloud itself provides security as one of its functions.

There are a number of cloud-access security brokers, and the smart ones aren’t necessarily sitting on the network and monitoring, but the smart ones are interacting, using access and APIs, and looking at the data people are placing into cloud environments, analyzing them for policy violations, and providing for archiving and backup and similar capabilities.

The security tools companies focus on should be oriented toward how these capabilities will scale across multiple cloud vendors, and toward moving away from inserting tools directly into the network in favor of API integration with multiple cloud providers.
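
To make that API-driven approach concrete, here is a minimal, hypothetical sketch (not any vendor’s actual product or API) of the kind of content inspection a cloud-access security broker might run over documents pulled from a cloud service:

```python
# Hypothetical illustration of broker-style content inspection; the patterns
# and the document source are placeholders, not a real vendor integration.
import re

# Simple data-loss-prevention patterns; real brokers use far richer detectors.
POLICIES = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_document(text):
    """Return the names of the policies this document appears to violate."""
    return [name for name, pattern in POLICIES.items() if pattern.search(text)]

if __name__ == "__main__":
    # In practice the broker would fetch documents through the cloud
    # provider's API; here we simply scan an example string.
    doc = "Customer card 4111 1111 1111 1111, SSN 123-45-6789."
    print("Policy violations:", scan_document(doc))
```

The point is the architectural shift Heim describes: inspection happens through provider APIs rather than by sitting inline on the corporate network.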


Virtual Mobile Infrastructure: Secure the data and apps, in lieu of the device

VMI offers an effective, efficient way to provide access to sensitive mobile apps and data without compromising security or user experience

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

Corporate use of smartphones and tablets, both enterprise- and employee-owned (BYOD), has introduced significant risk and legal challenges for many organizations.

Other mobile security solutions, such as MDM (mobile device management) and MAM (mobile app management), have attempted to address this problem by either locking down users’ personal devices or creating “workspaces” on them. For BYOD, this approach has failed to adequately secure enterprise data and has created liability issues around ownership of the device, since it is now both a personal and an enterprise (corporate)-owned device.

MAM “wrap” solutions in particular require app modification in exchange for paper-thin security. You cannot secure an app running on a potentially hostile (unmanaged) operating system, and, critically, you can’t wrap commercial mobile applications.

By contrast, Virtual Mobile Infrastructure (VMI) offers an effective, efficient way to provide access to sensitive mobile apps and data without compromising enterprise security or user experience.

Like VDI for desktops, VMI offers a secure approach to mobility without heavy-handed management policies that impact user experience and functionality.

From IT’s perspective, VMI is a mobile-first platform that provides remote access to an Android virtual mobile device running on a secure server in a private, public or hybrid cloud. The operating system, the data, and the applications all reside on a back-end server — not on the local device.

From a user’s perspective, VMI is simply another app on their iOS, Android or Windows device that provides the same natural, undiluted mobile experience, with all the accustomed bells and whistles. Written as native applications, these client apps can be downloaded from the commercial app stores, or installed on devices using MAM or app wrapping technologies.

As Ovum states, “Put more simply, this [VMI] means in effect that your mobile device is acting only as a very thin client interface with all the functionality and data being streamed to it from a virtual phone running in the cloud.”

Getting started with VMI

After downloading and installing the VMI client, users go through an easy setup process, inputting server names, port numbers, account names and access credentials. When users connect to the VMI device they see a list of available applications, all running on a secure server that communicates with the client through encrypted protocols.

The client accesses apps as if they were running on a local device, yet because they are hosted in a data center, no data is ever stored on the device. Enterprises can secure and manage the entire stack from a central location, neutralizing many of the risks that mobile devices often introduce to a network.

Two-factor authentication is supported via PKI certificates in the physical phone’s key store. When a certificate is present in the hardware-backed key store, the physical device requires the user to unlock the phone with a PIN (or a biometric). Additionally, the client supports variable session lengths with authentication tokens.
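
As an illustration of the token-based session idea, the sketch below shows how a server might issue signed, expiring tokens that a client presents until they lapse. This is a generic sketch only, not the product’s actual token format or code:

```python
# Generic sketch of signed, expiring session tokens (illustrative only).
import base64
import hashlib
import hmac
import json
import time

SERVER_SECRET = b"replace-with-a-real-random-secret"

def issue_token(user, lifetime_s):
    """Create a signed token that expires lifetime_s seconds from now."""
    payload = json.dumps({"user": user, "exp": int(time.time()) + lifetime_s}).encode()
    sig = hmac.new(SERVER_SECRET, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(payload).decode() + "." + base64.urlsafe_b64encode(sig).decode()

def check_token(token):
    """Verify the signature and make sure the token has not expired."""
    try:
        payload_b64, sig_b64 = token.split(".")
        payload = base64.urlsafe_b64decode(payload_b64)
        sig = base64.urlsafe_b64decode(sig_b64)
    except ValueError:
        return False
    expected = hmac.new(SERVER_SECRET, payload, hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected) and json.loads(payload)["exp"] > time.time()

token = issue_token("alice", lifetime_s=15 * 60)  # a 15-minute session
assert check_token(token)
```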

The server infrastructure that supports VMI clients can be implemented as multiple server clusters across geographic regions. As users travel, the client synchronizes with the server cluster closest to its physical location to access the applications on its virtual mobile device. The client continues to communicate with one server at a time, choosing the server location that provides the best performance.
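
The selection logic can be as simple as probing each candidate cluster and keeping the fastest responder. The sketch below is purely illustrative; the hostnames and port are placeholders rather than part of any actual VMI product:

```python
# Illustrative client-side cluster selection: time a TCP connection to each
# candidate and keep the fastest responder. Hostnames below are placeholders.
import socket
import time

CANDIDATE_CLUSTERS = ["vmi-us-east.example.com", "vmi-eu-west.example.com"]
PORT = 443  # assumed TLS port for the encrypted client protocol

def connect_latency(host, port, timeout=2.0):
    """Return seconds to establish a TCP connection, or infinity on failure."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.perf_counter() - start
    except OSError:
        return float("inf")

def pick_cluster(hosts):
    """Pick the host with the lowest connection latency, or None if all fail."""
    timed = {host: connect_latency(host, PORT) for host in hosts}
    best = min(timed, key=timed.get)
    return best if timed[best] != float("inf") else None

if __name__ == "__main__":
    print("Best cluster:", pick_cluster(CANDIDATE_CLUSTERS))
```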

In a typical deployment, there are compute nodes that host the virtual mobile devices, a storage service that holds user settings and data, and controller nodes that orchestrate the system.

The controller node(s) can be connected to an Enterprise Directory service, such as Active Directory, for user authentication and provisioning, and systems management tools such as Nagios and Monit can be used to monitor all parts of the system to ensure they are up and behaving properly (e.g. are not overloaded). The server hosting the devices creates detailed audit logs, which can be imported into a third party auditing tool such as Splunk or ArcSight.

VMI is platform-neutral, which means organizations can write, test, run and enhance a single instance of an app on a ‘gold disk’ OS image, rather than building separate apps for each supported end-user platform. This represents significant time and cost savings for resource-constrained IT organizations.

And while VMI takes a different approach to securing mobile endpoints than MDM, it does not aim to replace those solutions. Instead, VMI can integrate with MDM, MAM and other container solutions allowing organizations to use MDM to configure and manage an enterprise-owned virtual mobile device running in a data center, and MAM to support version management and control upgrade scheduling of VMI thin clients.

Mobile by design
Because VMI is optimized for smartphones and tablets with small touch screens and many sensors, users enjoy native apps and a full mobile experience. VMI supports unmodified commercial apps, allowing for greater workflow and productivity, and complements sandbox container solutions that provide limited offline access to apps such as corporate email by providing a richer user experience when the user is online (the vast majority of the time).

Users can also access separate work and personal environments from a single device, enjoying Facebook and Instagram and sending personal emails without worrying that corporate IT teams will seize or wipe their personal data. When an employee leaves an organization, IT simply revokes their access privileges to the virtual mobile device.

Similar to VDI, there are many different business scenarios in which organizations should evaluate VMI. The most common include:

Healthcare – Enables access to electronic health records and other sensitive apps and data from mobile devices, in compliance with HIPAA privacy requirements.
Financial Services – Facilitates access to more sensitive client transaction data and business processes, from both personally owned and enterprise-owned devices.
Retail – Supports secure Point of Sale as a Service for credit card transactions, protecting the confidentiality of customer data accessed both on and off premises.
Enterprise BYOD – Provides secure access to native apps from employee-owned mobile devices, keeping all data secure in the data center while not infringing on personal privacy.
Commercial Services – Extends the mobile enterprise to contractors, partners and customers.
Classified mobility – Allows government and security services to access data and applications from classified mobile devices, ensuring compliance with the thin-client requirements of the NSA’s Mobility Capability Package.

With 1.9 billion mobile devices expected to hit the market by 2018, IT professionals are on the hunt for a more effective way to secure the enterprise. VMI provides the access they need without compromising security or user experience.


So long (Vista), it’s been good to know yah

Windows 8’s predecessor in Microsoft’s every-other-OS-flops series now has a user share of just 2%

Windows Vista, the perception-plagued operating system Microsoft debuted to the general public in early 2007, has sunk to near insignificance, powering just two out of every 100 Windows personal computers, new data shows.

According to analytics provider Net Applications, Windows Vista’s user share, an estimate based on counting unique visitors to tens of thousands of websites, stood at 2% at the end of July.

Vista has been in decline since October 2009, when it peaked at 20% of all in-use Windows editions. Not coincidentally, that month also saw the launch of Vista’s replacement — and Microsoft’s savior — Windows 7. Within a year, Vista’s user share had slumped to less than 15%, and in less than two years fell below 10%.

Since then, however, Vista users have dragged their feet: The OS took another four years to shed another eight percentage points of user share. Projections based on the average monthly decline over the past year suggest that Vista won’t drop under the 1% mark until April 2016.

Vista’s problems have been well chronicled. It was two-and-a-half years late, for one. Then there were the device driver issues and the ballyhoo over User Account Control (UAC). It was even the focus of an unsuccessful class-action lawsuit alleging that Microsoft duped consumers into buying “Vista Capable”-labeled PCs, a case that revealed embarrassing admissions by senior executives who had trouble making sense of the labeling program themselves.

Even former CEO Steve Ballmer admitted it was a blunder. In a pseudo-exit interview in 2013 with long-time Microsoft watcher Mary Jo Foley of ZDNet, Ballmer cited Vista as “the thing I regret most,” tacitly laying most of Microsoft’s problems of the time at the OS’s doorstep, from its failure in mobile to the slump in PC shipments.

Those still running Vista — using Microsoft’s claim that 1.5 billion devices run Windows, Vista’s share comes to around 30 million — have been left out in the cold by Microsoft and its Windows 10 upgrade: Vista PCs are not eligible for the free deal.

It’s actually good, at least for Microsoft, that Vista is on so few systems. The company will ship the last security updates for the aged OS on April 11, 2017, 20 months from now.

And there is a silver lining for Vista owners: At least their OS is more popular than Linux.


Top 5 factors driving domestic IT outsourcing growth

Despite insourcing efforts, the expansion of nearshore centers is not necessarily taking work away from offshore locations. Eric Simonson of the Everest Group discusses the five main drivers responsible for the rise in domestic outsourcing, why Indian providers dominate the domestic landscape and more.

IT service providers placed significant focus on staffing up their offshore delivery centers during the previous decade. Over the past five years, however, outsourcing providers have revved up their U.S. domestic delivery center activity, according to recent research by outsourcing consultancy and research firm Everest Group.

The American outsourcing market currently employs around 350,000 full-time professionals and is growing between three and 20 percent a year depending on function, according to Everest Group’s research.

Yet the expansion of nearshore centers is not necessarily taking work away from offshore locations in India and elsewhere. Big insourcing efforts, like the one announced by GM, remain the exception. Companies are largely sticking with their offshore locations for existing non-voice work and considering domestic options for new tasks, according to Eric Simonson, Everest Group’s managing partner for research.

We spoke to Simonson about the five main drivers of domestic outsourcing growth, the types of IT services growing stateside, why Indian providers dominate the domestic landscape, and how providers plan to meet the growing demand for U.S. IT services skills.

Interest in domestic IT outsourcing is on the rise, but you say that does not indicate any dissatisfaction with the offshore outsourcing model.

Simonson: This isn’t about offshore not working and companies deciding to bring the work back. That’s happening a bit with some call center and help desk functions. But, by and large, these delivery center setups are more about bringing the wisdom of global delivery into the domestic market. The fundamental goal is industrializing the onshore model vs. fixing what’s broken offshore.

Can you talk about the five main drivers behind their increased interest in locating stateside?
Simonson: The first is diversification of buyer needs. As buyers have to support new types of services, certain types of tasks may be better delivered nearshore rather than offshore.

Secondly, there may be a desire to leverage the soft skills of onshore talent. This occurs when you need someone with a certain type of domestic business knowledge or dialect or cultural affinity.

Thirdly, domestic sourcing can be a way to overcome the structural challenges associated with offshore delivery, such as high attrition and burn out in graveyard shifts.

Fourth, companies may be seeking to manage certain externalities, like regulatory requirements or fears about visa availability. To some extent, these reasons are often not based on true requirements, but are a convenient justification for choosing to outsource domestically rather than accept the potential risks of offshore delivery.

Finally, there may be client-specific needs that demand domestic solutions—a local bank that wants to keep jobs in the community or a company with no experience offshore looking to start the learning curve.

Within IT services, what types of work currently dominate the domestic landscape?
Simonson: Application development is most prominent, with 123 domestic delivery centers in tier-one and -two cities serving financial services, public sector, manufacturing, retail and consumer packaged goods clients. Just behind that is IT infrastructure in similar geographies focused on those verticals as well. There are 80 consulting and systems integration centers and 68 testing centers as well.

It’s interesting to note that while U.S.-based providers tend to operate larger IT service centers domestically, it’s actually the Indian providers that dominate the landscape.

Simonson: Traditional U.S.-based multinationals have captured more scale in individual centers and have been able to grow them, in some ways, more strategically. They’ve been able to set up shop in smaller tier-4 cities like Ann Arbor or Des Moines and have more proven local talent models.

But the majority of domestic centers are operated by India-centric providers. Part of that is driven by their desire to get closer to their customers. With application and systems integration work, the ability to work more closely with the client is increasingly valuable. And with infrastructure work, concerns about data and systems access have encouraged Indian companies to offer more onshore options.

In addition, some of the bad press they’ve received related to visa issues is encouraging them to balance out their delivery center portfolios.

But Indian providers are not necessarily staffing up their centers with American workers.
Simonson: Indian providers are more likely to use visas to bring citizens of other countries (predominantly India) into the country to work on a temporary or permanent basis in a delivery center. About 32 percent of their domestic delivery center workforce consists of these ‘landed resources.’ Across all providers, landed resources account for six percent of domestic service delivery employees. However, tightening visa norms and higher visa rejection rates are making it more difficult for providers to rely on foreign workers.

You found that approximately 43 percent of the delivery centers are located in the South, with almost half of those concentrated in the South Atlantic. And Texas has more than fifty. Is that simply due to the fact that it’s cheaper to operate there?

Simonson: Cheap helps. But equally important are overall population trends. The South is growing, while regions like the Northeast or Midwest are either stable or on the decline. If you look at where people are going to school or moving and where corporations are relocating their headquarters, it’s taking place from the Carolinas down through Florida and over through Arkansas, Oklahoma and Texas. Those states are also more progressive about attracting services businesses (although there are some exceptions outside of the south like North Dakota and Missouri).

Do you expect the domestic IT outsourcing market to continue to grow?
Simonson: Yes, service providers expect an increase in demand for domestic outsourcing services by new and existing customers, and plan to increase their domestic delivery capabilities by adding more full time employees to their existing centers and establishing new delivery centers. In fact, 60 percent of delivery centers are planning to add headcount over the next three years with India-centric service providers expected to lead the expansion.

Tier-2 and tier-3 cities, like Orlando, Atlanta and Rochester, are poised for the greatest growth, with tier-1 and rural centers expecting the least amount of growth.

Will the supply of domestic IT talent keep up with this increased demand?
Simonson: The pressure to find IT talent has led service providers to adopt a range of approaches to extend their reach and develop ecosystems of talent. Many have developed educational partnerships, creating formal and informal relationships with colleges and technical institutes. They’re also basing themselves in cities known for their quality of life and recruiting entry-level and experienced talent from elsewhere. It all impacts what communities they decide to work in.

All service providers will have to expand their talent pools, particularly in IT. Automation of some tasks could increase capacity, but doesn’t provide the higher-complexity skills that are most valued onshore.


Windows 10: Fact vs. fiction

With Win10 slated to drop July 29, we give you the straight dope on support, upgrades, and the state of the bits

It’s a few days before Windows 10 is officially slated to drop, and still, confusion abounds. Worse, many fallacies regarding Microsoft’s plans around upgrades and support for Win10 remain in circulation, despite efforts to dispel them.

Here at InfoWorld, we’ve been tracking Windows 10’s progress very closely, reporting the evolving technical details with each successive build in our popular “Where Windows 10 stands right now” report. We’ve also kept a close eye on the details beyond the bits, reporting on the common misconceptions around Windows 10 licensing, upgrade paths, and updates. If you haven’t already read that article, you may want to give it a gander. Many of the fallacies we pointed out six weeks ago are still as fallacious today — and you’ll hear them repeated as fact by people who should know better.

Here, with Windows 10 nearing the finish line, we once again cut through the fictions to give you the true dirt — and one juicy conjecture — about Windows 10, in hopes of helping you make the right decisions regarding Microsoft’s latest Windows release when it officially lands July 29.

Conjecture: Windows Insiders already have the “final” version of Windows 10

Give or take a few last-minute patches, members of the Windows Insider program may already have what will be the final version of Win10. Build 10240, with applied patches, has all the hallmarks of the final “general availability” version.

If you’re in the Insider program, either Fast or Slow ring, and your computer’s been connected to the Internet recently, you’ve already upgraded, automatically, to the Windows 10 that’s likely headed out on July 29. No, I can’t prove it. But all the tea leaves point in that direction. Don’t be surprised if Terry Myerson announces on July 29 that Insiders are already running the “real” Windows 10 — and have been running it for a couple of weeks. Everyone else can get a feel for the likely “final” Windows 10, build 10240, by checking out our ongoing Windows 10 beta coverage at “Where Windows 10 stands right now.”

Fact: Windows 10 has a 10-year support cycle

Like Windows Vista, Win7, and Win8 before it, Windows 10 has a 10-year support cycle. In fact, we’re getting a few extra months for free: According to the Windows Lifecycle fact sheet, mainstream support ends Oct. 13, 2020, and extended support ends Oct. 14, 2025. Of course, if your sound card manufacturer, say, stops supporting Windows 10, you’re out of luck.


I have no idea where Microsoft’s statement about covering Windows 10 “for the supported lifetime of the device” came from. It sounds like legalese that was used to waffle around the topic for seven frustrating months. Microsoft’s publication of the Lifecycle fact sheet shows that Windows 10 will be supported like any other version of Windows. (XP’s dates were a little different because of SP2.)

Fiction: The 10 years of support start from the day you buy or install Windows 10

There’s been absolutely nothing from Microsoft to support the claim that the Win10 support clock starts when you buy or install Windows 10, a claim that has been attributed to an industry analyst.

The new Windows 10 lifecycle and updating requirements look a lot like the old ones, except they’re accelerated a bit. In the past we had Service Packs, and people had a few months to get the Service Packs installed before they became a prerequisite for new patches. With Windows 8.1, we had the ill-fated Update 1: You had to install Update 1 before you could get new patches, and you only had a month (later extended) to get Update 1 working. The new Windows 10 method — requiring customers to install upgrades/fixes/patches sequentially, in set intervals — looks a whole lot like the old Win 8.1 Update 1 approach, although corporate customers in the Long Term Servicing Branch can delay indefinitely.

Fact: You can clean install the (pirate) Windows 10 build 10240 ISO right now and use it without entering a product key

Although it isn’t clear how long you’ll be able to continue to use it, the Windows 10 build 10240 ISO can be installed and used without a product key. Presumably, at some point in the future you’ll be able to feed it a new key (from, say, MSDN), or buy one and use it retroactively.
Fiction: You can get a free upgrade to Windows 10 Pro from Win7 Home Basic/Premium, Win8.1 (“Home” or “Core”), or Win8.1 with Bing

A common misconception is that you can upgrade, for free, from Windows 7 Home Basic or Home Premium, Windows 8.1 (commonly called “Home” or “Core”), or Windows 8.1 with Bing, to Windows 10 Pro. Nope, sorry — all of those will upgrade to Windows 10 Home. To get to Windows 10 Pro, you would then have to pay for an upgrade, from Win10 Home to Pro.

Fact: No product key is required to upgrade a “genuine” copy of Win7 SP1 or Win8.1 Update
According to Microsoft, if you upgrade a “genuine” copy of Windows 7 SP1 or Windows 8.1 Update on or after July 29, Windows 10 won’t require a product key. Note that editions must stay matched: Home upgrades to Home, and Pro upgrades to Pro. If you upgrade and then perform a Reset (Start, Settings, Update & Security, Recovery, Reset this PC), you get a clean install of Windows 10, again per Microsoft. It’ll take a few months to be absolutely certain that a Reset performs a completely clean install, but at this point it certainly looks that way.

Fiction: Windows 10 requires a Microsoft account to install, use, or manage

Another common misconception is that Microsoft requires users have a Microsoft account to install, use, or manage Windows 10. In fact, local accounts will work for any normal Windows 10 activity, although you need to provide a Microsoft account in the obvious places (for example, to get mail), with Cortana, and to sync Edge.

Fact: If your tablet runs Windows RT, you’re screwed

Microsoft has announced it will release a new update for Windows RT, Windows RT 8.1 Update 3, in September. If anybody’s expecting it to look anything like Windows 10, you’re sorely mistaken. If you bought the original Surface or Surface RT, you’re out of luck. Microsoft sold folks an obsolete bucket of bolts that, sad to say, deserves to die. Compare that with the Chromebook, which is still chugging along.

Fiction: Microsoft pulled Windows Media Player from Windows 10

One word here seems to be tripping up folks. What Microsoft has pulled is Windows Media Center, which is a horse of a completely different color. If you’re thinking of upgrading your Windows Media Center machine to Windows 10, you’re better off retiring it and buying something that actually works like a media center. WMP is still there, although I wonder why anybody would use it, with great free alternatives like VLC readily available.

Fiction: Windows 10 is a buggy mess
In my experience, Windows 10 build 10240 (and thus, presumably, the final version) is quite stable and reasonably fast, and it works very well. There are anomalies — taskbar icons disappear, some characters don’t show up, you can’t change the picture for the Lock Screen, lots of settings are undocumented — and entire waves of features aren’t built yet. But for day-to-day operation, Win10 works fine.

Fact: The current crop of “universal” apps is an electronic wasteland
Microsoft has built some outstanding universal apps on the WinRT foundation, including the Office trilogy, Edge, Cortana, and several lesser apps, such as the Mail/Calendar twins, Solitaire, OneNote, and the Store. But other software developers have, by and large, ignored the WinRT/universal shtick. You have to wonder why Microsoft itself wasn’t able to get a universal OneDrive or Skype app going in time for July 29. Even Rovio has taken a pass on Angry Birds 2 for the universal platform. Some games are coming (such as Rise of the Tomb Raider), but don’t expect a big crop of apps for the universal side of Windows 10 (and, presumably, Windows 10 Mobile) any time soon.

Fiction: Microsoft wants to control us by forcing us to go to Windows 10
I hear variations on this theme all the time, and it’s tinfoil-hat hooey. Microsoft is shifting to a different way of making money with Windows. Along the way, it’s trying out a lot of moves to reinvigorate the aging cash cow. Total world domination isn’t one of the options. And, no, the company isn’t going to charge you rent for Windows 10, though it took seven months to say so, in writing.

Fiction: Windows 7 and Windows 8 machines will upgrade directly to Windows 10

Win7 and Win8 machines won’t quite upgrade directly to Win10. You need Windows 7 Service Pack 1, or Windows 8.1 Update 1, in order to perform the upgrade. If you don’t have Windows 7 SP1, Microsoft has official instructions that’ll get you there from Windows 7. If you’re still using Windows 8, follow these official instructions to get to Windows 8.1 Update. Technically, there’s a middle step on your way to Win10.

Fact: We have no idea what will happen when Microsoft releases a really bad patch for Windows 10

If there’s an Achilles’ heel in the grand Windows 10 scheme, it’s forced updates for Windows 10 Home users and Pro users not attached to update servers. As long as Microsoft rolls out good-enough-quality patches — as it’s done for the past three months — there’s little to fear. But if a real stinker ever gets pushed out, heaven only knows how, and how well, Microsoft will handle it.

Fact: You’d have to be stone-cold crazy to install Windows 10 on a production machine on July 29
There isn’t one, single killer app that you desperately need on July 29. Those in the know have mountains of questions, some of which won’t be answered until we see how Win10 really works and what Microsoft does to support it. If you want to play with Windows 10 on a test machine, knock yourself out. I will, too. But only a certified masochist would entrust a working PC to Windows 10, until it’s been pushed and shoved and taken round several blocks, multiple times.

You have until July 29, 2016, to take advantage of the free upgrade. There’s no rush. Microsoft won’t run out of bits.


Slide show: Best tools for email encryption

The products we reviewed show good signs that encryption has finally come of age.

Email encryption

Recipients of encrypted email once had to be on the same system as the sender. Today’s products offer a “zero knowledge encryption” feature, which means you can send an encrypted message to someone who isn’t on your chosen encryption service. They also make sending and receiving messages easier, with advances like Outlook and browser plug-ins that give you nearly one-button encryption. And the products we reviewed have features like setting expiration dates, revoking unread messages and preventing messages from being forwarded. (Read the full review.)

AppRiver CipherPost Pro
Basically, you layer CipherPost Pro on top of your existing email infrastructure via a plug-in. It has mobile apps for iOS, Android, Windows phones and BlackBerry 10s that offer the ability to send and receive encrypted messages, but not attachments. To correspond with people outside your email domain, send a message with a Web link, which recipients click on and register with the system. The heart of the product is a special “Delivery Slip” sidebar that appears on the page as you are composing your message. This is where controls are located to enable message-tracking options, and to add an extra security layer. These are all nice features. If you have to send large attachments, then CipherPost should be on your short list.

DataMotion SecureMail
DataMotion has a very mature offering that makes use of a gateway to process mail. Getting it set up will require a couple of hours, and most of that is in understanding the many mail processing rules. Users need to append a [SECURE] tag in the subject line to trigger the encryption process. You can also set up rules that will encrypt messages containing sensitive information. DataMotion doesn’t have any limits on the size of the user’s inbox. However, it does place a limit of up to 500MB worth of messages that can be sent in a user’s Track Sent Folder. Features include the ability to see exactly when your recipient opened the message and the attachment.
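
For illustration only, here is a toy version of that kind of gateway rule, not DataMotion’s actual rule engine: encrypt when the sender tags the subject with [SECURE], or when the body appears to contain sensitive data.

```python
# Toy mail-gateway rule (illustrative only): decide whether a message should
# be routed through the encryption process.
import re

SENSITIVE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # e.g., a U.S. SSN pattern

def should_encrypt(subject, body):
    """Encrypt on an explicit [SECURE] subject tag or a sensitive-data match."""
    if "[SECURE]" in subject.upper():
        return True
    return bool(SENSITIVE.search(body))

assert should_encrypt("[secure] Q3 payroll", "see attachment")
assert should_encrypt("Re: onboarding", "SSN is 123-45-6789")
assert not should_encrypt("Lunch?", "Pizza on Friday?")
```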

HP/Voltage SecureMail
Voltage was recently purchased by HP and rebranded. The technology is an email gateway: software that sits on either a Linux or Windows server, or in the cloud, and inserts the encryption process between mail client and server. There are numerous add-on modules that come as part of this ecosystem. You administer the gateway via a Web browser, and there are dozens of options to set, similar to the DataMotion product. Voltage has a “zero download” client, as it calls the software that can be used to exchange messages with someone not on the system. While parts of Voltage are showing their age, the overall experience is quite capable, and the add-ons for mobile and Outlook/Office are quite nifty.

Hushmail for Business
Hushmail is the easiest of the products we tested to set up and use. There is no software to install on the client side; all mail is accessed in one of two ways. The first is via a secure webmail client that connects to the Hush servers; this is the only way you can send encrypted email to someone who isn’t part of the Hush network. The second method is for users fond of their existing email clients who are communicating with other Hush users. In this situation there is literally nothing for them to do: they use their existing client to send an encrypted message. Between the client and the Hush server, mail is encrypted using either SSL or TLS. Once it arrives on the server, it is then encrypted via PGP. Hush has a 20MB limit on attachment size, which could be a deal breaker for some businesses.

ProtonMail

Proton is one of the newer encrypted email services that have come along post-Snowden, with an emphasis on keeping your emails private; it makes a point of being based in Switzerland. However, the company is still building out its product, and as a result it has a very simple Web UI for its client and admin tool. Proton uses double password protection. The first password is used to authenticate the user; after that, encrypted data is sent to the user. The second password is a decryption key used to decrypt data on your device. Proton never sees that second key, so it does not have access to the decrypted data. On top of all this encryption, the service also uses SSL connections, so your data is encrypted across the Internet to and from its servers. There is no option for on-premises servers. While Proton is not really suitable for an enterprise deployment, it shows what the latest encryption products can deliver.
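
The two-password idea can be sketched in a few lines. This is a simplified illustration of the concept only, not ProtonMail’s actual implementation (which is built on PGP); it assumes the third-party Python ‘cryptography’ package:

```python
# Conceptual sketch: one password authenticates to the server, a second derives
# a key that decrypts mail locally and never leaves the device.
import base64
import hashlib
from cryptography.fernet import Fernet  # third-party: pip install cryptography

def derive(password, salt, purpose):
    # Derive a 32-byte key; the 'purpose' label keeps the two keys independent.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt + purpose, 200_000)

salt = b"per-user-random-salt"                              # stored with the account
login_verifier = derive("login-password", salt, b"auth")    # sent to the server
mailbox_key = derive("mailbox-password", salt, b"decrypt")  # stays on the device

fernet = Fernet(base64.urlsafe_b64encode(mailbox_key))
token = fernet.encrypt(b"hello from an encrypted mailbox")  # ciphertext stored server-side
assert fernet.decrypt(token) == b"hello from an encrypted mailbox"
```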

Tutao Tutanota
Of the products tested, Tutanota is the least reliable and least feature-laden. Tutanota uses a variety of clients to set up encrypted mail connections across your existing email infrastructure. There are no changes to your servers, and you can continue using Outlook for sending unencrypted communications. We had some trouble with the installation, mainly because the installer’s instructions are in German and it installs the German version of the .NET Framework. Once installed, though, the menus and commands are in English. Tutanota is based in Germany, which could be important for customers concerned about American email privacy. One of its distinguishing features is that its zero knowledge encryption process hides the message subject; most of its competitors still send this information in the clear.

Virtru Pro
Virtru has a nice balance of plug-ins and mobile apps that support its easy-to-use encryption operations across a variety of email circumstances. If you have installed the necessary plug-in, when you want to send something, there is a small toggle switch on the top of the compose screen. Turning that on will bring up a “send secure” button to encrypt your message. There are tool tips that appear as you hover over the various options with your mouse, a nice touch. These include the ability to add an unencrypted introductory message that will introduce your recipient to the context of the message that you are sending, and why you want to encrypt the remainder of the message. You can also set when your message will expire or disable any forwarding for additional security.

Virtru also supports zero knowledge encryption, although it adds a separate activation step when a new user receives the first encrypted message.


Endless: A computer the rest of the world can afford

Designed for developing economies, the Endless computer (which runs Linux) aims to deliver affordable and useful computing

Rural Mexico, the backstreets of Guatemala City, the outskirts of Mumbai: these aren’t places where you find a lot of computers, for one simple reason. Most computers are far too expensive. What you do find are lots of TVs, so why not build a cheap, flexible computer without a display? And ship it without a keyboard and mouse, because those are items that can usually be sourced locally at low cost.

What would computers do for people in these places? They would deliver information, education, and opportunity. Record keeping for farmers, reading lessons for children, tools for creating and communicating … the potential for computers to improve the lot of millions of people is just waiting on the right gear and I think the right gear is what a new company, Endless, is about to launch.

The result of three years of development, the company’s eponymous machine is a slightly eccentric design which, I’m told, was very successfully tested in its target markets. The device uses an Intel® Celeron® N2807 1.7 GHz Dual-Core processor (burst speed 2.1 GHz) with 2 GB of RAM. It has an RJ-45 Gigabit Ethernet port, two USB 2.0 ports (front, lower rear), a USB 3.0 port (upper rear), stereo line out, and HDMI and VGA outputs.

There are two Endless models: The $169 version with 32 GB eMMC (embedded MultiMedia storage) and SD Storage, and the $229 version with a 500GB hard drive. They are both powered by 12V input (the included adapter handles 100V to 240V at 50Hz or 60Hz) and the versions draw 24W and 30W respectively. The 500GB hard drive version (the version I tested) also includes an integrated speaker, 802.11 b/g/n WiFi, and Bluetooth 4.0.

What sets the Endless apart from other low cost machines is Endless OS, a highly customized version of Ubuntu Linux with Gnome (and lots of other interesting technology such as Xapian and OStree) that not only handles TVs as output devices (it scales and formats video output for readability), but also includes a huge library of applications and educational content. This is important because in emerging markets the Endless system will be useful and well-featured even if you don’t have any kind of networking services available.

While it’s based on open source projects, the Endless OS is not completely open source because it contains proprietary commercial code. The company’s open source philosophy is:

We embrace the principles of free and open-source software and acknowledge a great debt to it in creating Endless OS. Whenever we can, we work upstream and contribute back to open source. Although not everything we create can be open source, we release most components of our system under free software licenses. Many members of our core team have a long history with open source projects, and continue to be an active part of those communities. You might notice that we maintain forks of many upstream packages. In most cases, this is because we submit our patches upstream and backport them to the stable versions that we ship.

Endless OS has been localized for a remarkable number of languages and installation is polished and simple. It was in the installation process I found the only issue I could identify in the whole system: I used a Vizio VP50 50-inch 720P HD Plasma TV via HDMI and when the setup asked me if I could see the menu bars at the top and bottom of the screen I clicked on “no” and the system adjusted the overscan. The result was that I could see a little of the menu bars but I had to go into the TV setup to fix the display. It’s a minor problem but Endless OS could do with a more comprehensive overscan adjustment system.

In operation, the system is smooth, fast, stable, and easy to understand and navigate. The applications (which include both productivity software as well as games) and content on the 500GB version I tested are extensive and the system includes a huge amount of Wikipedia and the Khan Academy (if an Internet connection is available, the system will automatically download software and content updates). You choose what content and software you want from what is essentially a built-in app store.

Endless also makes information available for developers, and while the operating system is only available on Endless’ own hardware, all open source modifications are available on GitHub (the company notes that it may make the disk images available in the future, which would likely spawn a wave of similar hardware products).

My main concern with the Endless system is that it doesn’t have a reset button or a recovery option at startup, so if you forget your password there’s no obvious way to wipe the machine and start again (I tried the usual way of entering Linux recovery mode – holding down Shift at boot – but that didn’t work). A similar concern applies to easily wiping the system, for example, if you were going to give your Endless computer to someone else.

So, who’s the Endless computer aimed at? Endless plans to sell its machines initially into markets such as Mexico and Guatemala, where it should be a good fit for schools and colleges as well as the emerging middle class. What I think is really powerful about the Endless concept is the operating system and its focus on being useful even when there’s no Internet connectivity. If we can add to that mesh networking and good old sneakernet for updates and enhancements, the potential for business and education in developing economies to get a computing boost is huge.

You can’t buy an Endless computer just yet (it’s due to ship in the near future), but you can register to be notified when it becomes available.

The Endless computer gets a Gearhead rating of 5 out of 5.

