SDN and NFV: The brains behind the “smart” city

In major metropolitan areas and smaller cities alike, governments are adopting software-defined networking (SDN) and network function virtualization (NFV) to deliver the agility and flexibility needed to support adoption of “smart” technologies that enhance the livability, workability and sustainability of their towns.

Today there are billions of devices and sensors being deployed that can automatically collect data on everything from traffic to weather, to energy usage, water consumption, carbon dioxide levels and more. Once collected, the data has to be aggregated and transported to stakeholders where it is stored, organized and analyzed to understand what’s happening and what’s likely to happen in the future.

There’s a seemingly endless list of potential benefits. Transportation departments can make informed decisions to alleviate traffic jams. Sources of water leaks can be pinpointed and proactive repairs scheduled. Smart payments can be made across city agencies, allowing citizens to complete official payments quickly and reducing the staff time required to process such transactions. Even public safety can be improved by using automated surveillance to help police monitor high-crime hotspots.

Of particular interest is how healthcare services can be improved. There is already a push to adopt more efficient and effective digital technology management systems to better store, secure and retrieve huge amounts of patient data. Going a step further, a smart city is better equipped to support telemedicine innovations that require the highest-quality, uninterrupted network service. Telesurgery, for example, could allow specialist surgeons to help local surgeons perform emergency procedures from remote locations. Reducing the wait time before surgery can save numerous lives in emergencies, and such capabilities can help cities and their hospital systems attract the brightest minds in medical research and practice.

The smart city of today

While the smart city is expected to become the norm, examples exist today. Barcelona is recognized for environmental initiatives (such as electric vehicle and bus networks), city-wide free Wi-Fi, smart parking and many more programs, all of which build on its smart city investments. With a population of 1.6 million, Barcelona shows that smart city technologies can be implemented regardless of city size.

But even smaller cities are benefitting from going “smart.” In 2013 Cherry Hill, New Jersey, with a population of only 71,000, began using a web-based data management tool along with smart sensors to track how electricity, water, fuel and consumables are used, then comparing usage between municipal facilities to identify ways to be more efficient. Chattanooga, Tennessee, population 170,000, building on its investment in providing the fastest Internet service in the U.S., has recently begun developing smart city solutions for education, healthcare and public safety.

How do cities become smart? The most immediate need is to converge disparate communications networks run by various agencies to ensure seamless connectivity. To achieve this, packet optical based connectivity is proving critical, thanks largely to the flexibility and cost advantages it provides. Then atop the packet optical foundation sits technology that enables NFV and the applications running on COTS (commercial off-the-shelf) equipment in some form of virtualized environment. SDN and NFV allow for the quick and virtual deployment of services to support multiple data traffic and priority types, as well as increasingly unpredictable data flows of IoT.

Decoupling network functions from the hardware means that architectures can be more easily tweaked as IoT requirements change. Also, SDN and NFV can yield a more agile service provision process by dynamically defining the network that connects the IoT end devices to back-end data centers or cloud services.

The dynamic nature of monitoring end points, locations, and scale will require SDN so that networks can be programmed and reconfigured to accommodate moving workloads. Take, for example, allocating extra bandwidth to a stadium for better streaming performance as the number of users watching an event remotely on demand goes up; this sort of dynamic network-on-demand capability is enabled by SDN. Additionally, NFV can play a key role because many of the monitoring points that make the city “smart” are not purpose-built, hardware-centric solutions but software-based functions that can run on demand.
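
As a rough illustration of that bandwidth-on-demand idea, the sketch below (Python, using the requests library) pushes a temporary bandwidth policy to a hypothetical SDN controller through a northbound REST API. The controller address, endpoint path and JSON fields are illustrative assumptions, not any specific vendor's API.

import requests

# Hypothetical northbound API of a city SDN controller (all names are illustrative).
CONTROLLER = "https://sdn-controller.city.example:8443"
API_TOKEN = "replace-with-a-real-token"

def boost_bandwidth(segment_id, mbps, duration_minutes):
    """Ask the controller to guarantee extra bandwidth on a network segment
    for a limited window, e.g. the stadium uplink during a big event."""
    policy = {
        "segment": segment_id,            # access segment serving the venue
        "guaranteed_mbps": mbps,          # bandwidth to reserve
        "duration_minutes": duration_minutes,
        "priority": "high",
    }
    resp = requests.post(
        f"{CONTROLLER}/policies/bandwidth",  # assumed endpoint
        json=policy,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # Reserve 2 Gbps for two hours while the event is streamed.
    print(boost_bandwidth("stadium-access-ring", 2000, 120))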

With virtual network functions (VNFs), the network can react in a more agile manner as the municipality requires. This is particularly important because the network underlying the smart city must be able to extract high levels of contextual insight through real-time analytics conducted on extremely large datasets if systems are to be able to problem-solve in real time; for example, automatically diverting traffic away from a street where a traffic incident has taken place.

SDN and NFV may enable the load balancing, service chaining and bandwidth calendaring needed to manage networks that are unprecedented in scale. In addition, SDN and NFV can ensure network-level data security and protection against intrusions – which is critical given the near-impossible task of securing the numerous sensor and device end points in smart city environments.
Smart city business models

In their smart city initiatives, cities large and small are addressing issues regarding planning, infrastructure, systems operations, citizen engagement, data sharing, and more. The scale might vary, but all are trying to converge networks in order to provide better services to citizens in an era of shrinking budgets. As such, the decision on how to go about making this a reality is important. There are four major smart city business models to consider, as defined by analysts at Frost & Sullivan (“Global Smart City Market a $1.5T Growth Opportunity In 2020”):

Build Own Operate (BOO): In a BOO model, municipalities own, control, and independently build the city infrastructure needed, and deliver the smart city services themselves. Both operation and maintenance of these services are under the municipality’s control, often headed up by its city planner.

Build Operate Transfer (BOT): Whereas in a BOO model the municipality is always in charge of the operation and management of smart city services, in a BOT model that is only the case after an initial period – the smart city infrastructure is first built, and services are initially operated, by a trusted partner appointed by the city planner. Then, once everything is built and in motion, operation is handed over to the city.

Open Business Model (OBM): In an OBM model, the city planner is open to any qualified company building city infrastructure and providing smart city services, so long as they stay within set guidelines and regulations.

Build Operate Manage (BOM): Finally, there is the BOM model, under which the majority of smart city projects are likely to fall. In this model, the smart city planner appoints a trusted partner to develop the city infrastructure and services. The city planner then has no further role beyond the appointment – the partner is in charge of operating and managing smart city services.

SDN and NFV: The keys to the (smart) city
With the appropriate business model in place and the network foundation laid out, technology then needs to be implemented to enable virtualization. Virtualized applications provide the flexibility to handle numerous data types and the scalability to transport the huge amounts of data the city aims to use in its analysis.

SDN and NFV reduce the hardware, power, and space requirements for deploying network functions through the use of industry-standard high-volume servers, switches and storage; they make network applications portable and upgradeable in software; and they give cities of all sizes the agility and scalability to tackle the needs and trends of the future as they arise. Like the brain’s neural pathways throughout a body, SDN and NFV are essential in making the smart city and its networks connect and talk to each other in a meaningful way.



Are wearables worth the cybersecurity risk in the enterprise?

How should the enterprise address the growing adoption of wearables?

The Internet of Things and wearable technology are becoming more integrated into our everyday lives. If you haven’t already, now is the time to begin planning for their security implications in the enterprise.

According to research firm IHS Technology, more than 200 million wearables will be in use by 2018. That’s 200 million more chances of a security issue within your organization. If that number doesn’t startle you, Gartner further predicts that 30% of these devices will be invisible to the eye. Devices like smart contact lenses and smart jewelry will be making their way into your workplace. Will you be ready to keep them secure even if you can’t see them?

According to TechTarget, “Although there haven’t been any major publicized attacks involving wearables yet, as the technology becomes more widely incorporated into business environments and processes, hackers will no doubt look to access the data wearables hold or use them as an entry point into a corporate network.”

While it’s true that IT cannot possibly be prepared for every potential risk, as an industry we need to do a better job of assessing risks before an attack happens. This includes being prepared for new devices and trends that will pose all new risks for our organizations.

How many of us read the news about a new data breach practically every day and have still yet to improve security measures within our own organizations? If you’re thinking “guilty,” you’re not alone. Organizational change can’t always happen overnight, but we can’t take our eyes off the ball either.

In a 2014 report, 86% of respondents expressed concern for wearables increasing the risk of data security breaches. IT Business Edge suggests, “With enterprise-sensitive information now being transferred from wrist to wrist, businesses should prepare early and create security policies and procedures regarding the use of wearables within the enterprise.” Updating policies is a smart move, but the hard part is anticipating the nature and use of these new devices and then following through with implementing procedures to address them. It seems it may be easier said than done.

We all know that wearables pose security challenges, but how do IT departments begin to address them? This can be especially challenging considering that some of the security risks lie with the device manufacturers rather than with the teams responsible for securing the enterprise network the technology connects to. Many wearables can store data locally without encryption, PIN protection, or user-authentication features, meaning that if the device is lost or stolen, anyone could potentially access the information.

Beyond the data breach threat of sensitive information being accessed by the wrong hands, wearables take it a step further by providing discreet access for people to use audio or video surveillance to capture sensitive information. Is someone on your own team capturing confidential information with their smartwatch? You may not realize it’s happening until it’s too late.

How can we effectively provide security on devices that appear insecure by design? It seems the safest option is to ban all wearables in the enterprise – there are too many risks associated with them, many of which seemingly cannot be controlled. If this thought has crossed your mind, I may have bad news for you. This isn’t really an option for most organizations, especially those looking to stay current in today’s fast-paced society. TechTarget’s Michael Cobb explains, “Banning wearable technology outright may well drive employees from shadow IT to rogue IT – which is much harder to deal with.”

If the threat of rogue IT isn’t enough to convince you, also consider that there may very well be real benefits of wearables for your organization. According to Forrester, the industries that will likely benefit from this technology in the short term are healthcare, retail, and public safety organizations. As an example in the healthcare field, Forrester suggests that “the ability of biometric sensors to continually monitor various health stats, such as blood glucose, blood pressure and sleep patterns, and then send them regularly to healthcare organizations for monitoring could transform health reporting.” There are many examples for other industries, and the market continues to evolve every day.

It all boils down to this: enterprise wearables present a classic case of risk versus reward. We know there are many security risks, but are the potential rewards great enough to make the risks worthwhile? This answer may vary based on your industry and organization, but chances are there are many real business opportunities that can come from wearable technology.

If you haven’t already, it’s time to start talking with your teams about what those opportunities are and the best ways to ease the associated risks. As we all know, the technology will move forward with or without us and the ones who can effectively adapt will be the ones who succeed. It’s our job to make sure our organizations are on the right side of that equation.



Sony BMG Rootkit Scandal: 10 Years Later

Object lessons from the infamous 2005 Sony BMG rootkit security/privacy incident are many — and Sony is still paying a price for its ham-handed DRM overreach today.

Hackers really have had their way with Sony over the past year, taking down its PlayStation Network last Christmas Day and creating an international incident by exposing confidential data from Sony Pictures Entertainment in response to The Interview, a comedy about a planned assassination of North Korea’s leader. Some say all this is karmic payback for what’s become known as a seminal moment in malware history: Sony BMG sneaking rootkits into music CDs 10 years ago in the name of digital rights management.

“In a sense, it was the first thing Sony did that made hackers love to hate them,” says Bruce Schneier, CTO for incident response platform provider Resilient Systems in Cambridge, Mass.

Mikko Hypponen, chief research officer at F-Secure, the Helsinki-based security company that was an early critic of Sony’s actions, adds:

“Because of stunts like the music rootkit and suing Playstation jailbreakers and emulator makers, Sony is an easy company to hate for many. I guess one lesson here is that you really don’t want to make yourself a target.

“When protecting its own data, copyrights, money, margins and power, Sony does a great job. Customer data? Not so great,” says Hypponen, whose company tried to get Sony BMG to address the rootkit problem before word of the invasive software went public. “So, better safe than Sony.”

The Sony BMG scandal unfolded in late 2005 after the company (now Sony Music Entertainment) secretly installed Extended Copy Protection (XCP) and MediaMax CD-3 software on millions of music discs to keep buyers from burning copies of the CDs via their computers and to inform Sony BMG about what these customers were up to. The software, which proved undetectable by anti-virus and anti-spyware programs, opened the door for other malware to infiltrate Windows PCs unseen as well. (As if the buyers of CDs featuring music from the likes of Celine Dion and Ricky Martin weren’t already being punished enough.)

The Sony rootkit became something of a cultural phenomenon. It wound up as a punch line in comic strips like Fox Trot, it became a custom T-shirt logo and even was the subject of class skits shared on YouTube. Mac fanboys and fangirls smirked on the sidelines.


Security researcher Dan Kaminsky estimated that the Sony rootkit made its mark on hundreds of thousands of networks in dozens of countries – so this wasn’t just a consumer issue, but an enterprise network one as well.

Once Winternals security researcher Mark Russinovich — who has risen to CTO for Microsoft Azure after Microsoft snapped up Winternals in 2006 — exposed the rootkit on Halloween of 2005, all hell broke loose.

Sony BMG botched its initial response: “Most people don’t even know what a rootkit is, so why should they care about it?” went the infamous quote from Thomas Hesse, then president of Sony BMG’s Global Digital Business. The company recalled products, issued and re-issued rootkit removal tools, and settled lawsuits with a number of states, the Federal Trade Commission and the Electronic Frontier Foundation.

Microsoft and security vendors were also chastised for their relative silence and slow response regarding the rootkit and malware threat. In later years, debate emerged over how the term “rootkit” should be defined, and whether intent to maliciously seize control of a user’s system should be at the heart of it.

In looking back at the incident now, the question arises about how such a privacy and security affront would be handled these days by everyone from the government to customers to vendors.

“In theory, the Federal Trade Commission would have more authority to go after [Sony BMG] since the FTC’s use of its section 5 power has been upheld by the courts,” says Scott Bradner, University Technology Security Officer at Harvard. “The FTC could easily see the installation of an undisclosed rootkit as fitting its definition of unfair competitive practices.”

Bill Bonney, principal consulting analyst with new research and consulting firm TechVision Research, says he can’t speak to how the law might protect consumers from a modern day Sony BMG rootkit, but “with the backlash we have seen for all types of non-transparent ways (spying, exploiting, etc.) companies are dealing with their customers, I think in the court of public opinion the response could be pretty substantial and, as happened recently with the EU acting (theoretically) because of [the NSA’s PRISM program], if the issue is egregious enough there could be legal or regulatory consequences. “

As for how customers might react today, we’ve all seen how quickly people turn to social media to take companies to task for any product or service shortcoming or any business shenanigans. Look no further than Lenovo, which earlier this year got a strong dose of negative customer reaction when it admittedly screwed up by pre-loading Superfish crapware onto laptops. That software injected product recommendations into search results and opened a serious security hole by interfering with SSL-encrypted Web traffic.

In terms of how security vendors now fare at spotting malware or other unsavory software, Schneier says “There’s always been that tension, even now with stuff the NSA and FBI does, about how this stuff is classified. I think [the vendors] are getting better, but they’re still not perfect… It’s hard to know what they still let by.”

Noted tech activist Cory Doctorow, writing for Boing Boing earlier this month, explains that some vendors had their reasons for not exposing the Sony rootkit right away. “Russinovich was not the first researcher to discover the Sony Rootkit, just the first researcher to blow the whistle on it. The other researchers were advised by their lawyers that any report on the rootkit would violate section 1201 of the DMCA, a 1998 law that prohibits removing ‘copyright protection’ software. The gap between discovery and reporting gave the infection a long time to spread.”

Reasons for hope, though, include recent revelations by the likes of Malwarebytes, which warned users that a malicious variety of adware dubbed eFast was hijacking and replacing the Chrome browser by making itself the default browser associated with common file types like jpeg and html.

Schneier says it’s important that some of the more prominent security and anti-virus companies — from Kaspersky in Russia to F-Secure in Finland to Symantec in the United States to Panda Security in Spain — are spread across the globe given that shady software practices such as the spread of rootkits are now often the work of governments.

“You have enough government diversity that if you have one company deliberately not finding something, then others will,” says Schneier, who wrote eloquently about the Sony BMG affair back in 2005.

The non-profit Free Software Foundation Europe (FSFE) has been calling attention to the Sony BMG rootkit’s 10th anniversary, urging the masses to “Make some noise and write about this fiasco” involving DRM. The FSFE, seeing DRM as an anti-competitive practice, refers to the words behind the acronym as digital restriction management rather than the more common digital rights management.


Even worse, as the recent scandal involving VW’s emissions-test circumvention software shows, businesses are still using secret software to their advantage without necessarily caring about the broader implications.

The object lessons from the Sony BMG scandal are many, and might be of interest to those arguing for building encryption backdoors into products for legitimate purposes, since such backdoors might themselves be turned into exploitable vulnerabilities.

One basic lesson is that you shouldn’t mimic the bad behavior that you’re ostensibly standing against, as Sony BMG did “in at least appearing to violate the licensing terms of the PC manufacturers,” TechVision’s Bonney says.

And yes, there is a warning from the Sony BMG episode “not to weaponize your own products. You are inviting a response,” he says.





Five steps to optimize your firewall configuration

95% of all firewall breaches are caused by misconfiguration. Here’s how to address the core problems

Firewalls are an essential part of network security, yet Gartner says 95% of all firewall breaches are caused by misconfiguration. In my work I come across many firewall configuration mistakes, most of which are easily avoidable. Here are five simple steps that can help you optimize your settings:

* Set specific policy configurations with minimum privilege. Firewalls are often installed with broad filtering policies, allowing traffic from any source to any destination. This happens because the network operations team doesn’t know exactly what is needed at the outset, so it starts with a broad rule and intends to work backwards. In reality, due to time pressures or simply not regarding it as a priority, the team never gets around to defining the firewall policies, leaving your network in this perpetually exposed state.

You should follow the principle of least privilege – that is, give the minimum level of privilege the user or service needs to function normally, thereby limiting the potential damage caused by a breach. You should also document properly – ideally mapping out the flows that your applications actually require before granting access. It’s also a good idea to regularly revisit your firewall policies to look at application usage trends and identify new applications being used on the network and what connectivity they actually require.
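
To make the audit part of this concrete, here is a minimal Python sketch that flags overly broad rules in an exported rule set. The CSV layout (name, source, destination, service, action) is an assumption for illustration; real exports differ by firewall vendor.

import csv

def audit_rules(path):
    """Flag permit rules that use 'any' for source, destination or service."""
    findings = []
    with open(path, newline="") as f:
        # Expected columns: name, source, destination, service, action (assumed layout).
        for row in csv.DictReader(f):
            if row["action"].strip().lower() != "permit":
                continue
            broad = [field for field in ("source", "destination", "service")
                     if row[field].strip().lower() == "any"]
            if broad:
                findings.append((row["name"], broad))
    return findings

if __name__ == "__main__":
    for name, fields in audit_rules("firewall_rules.csv"):
        print(f"Rule '{name}' uses 'any' for {', '.join(fields)} -- tighten to least privilege")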

* Only run required services. All too often I find companies running firewall services that they either don’t need or no longer use, such as dynamic routing, which as a best practice typically should not be enabled on security devices, and “rogue” DHCP servers distributing IP addresses on the network, which can lead to availability issues as a result of IP conflicts. It’s also surprising to see how many devices are still managed using unencrypted protocols like Telnet, despite the protocol being over 30 years old.

The solution is to harden devices and ensure that configurations are compliant before devices are promoted into production environments. This is something a lot of organizations struggle with. By configuring your devices based on the function you actually want them to fulfil, and following the principle of least-privilege access before deployment, you will improve security and reduce the chances of accidentally leaving a risky service running on your firewall.
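
A sketch in the same spirit, assuming device configurations can be exported as plain text; the keywords checked below are illustrative (roughly IOS-style) and vary by platform.

# Flag services that usually should not be running on a production firewall.
RISKY_KEYWORDS = {
    "telnet": "unencrypted management protocol; use SSH instead",
    "ip dhcp pool": "DHCP server running on a security device",
    "router ospf": "dynamic routing enabled on a security device",
    "router rip": "dynamic routing enabled on a security device",
}

def check_config(path):
    """Return the risky keywords found in one exported configuration file."""
    with open(path) as f:
        config = f.read().lower()
    return {kw: why for kw, why in RISKY_KEYWORDS.items() if kw in config}

if __name__ == "__main__":
    for keyword, reason in check_config("edge-fw-01.cfg").items():
        print(f"Found '{keyword}': {reason}")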

* Standardize authentication mechanisms. During my work, I often find organizations using routers that don’t follow the enterprise standard for authentication. One example I encountered was a large bank that had all the devices in its primary data centers controlled by a central authentication mechanism but did not use the same mechanism at its remote office. Because corporate authentication standards were not enforced, staff in the remote branch could access local accounts with weak passwords, and the branch had a different limit on login failures before account lockout.

This scenario reduces security and creates more opportunities for attackers, as it’s easier for them to access the corporate network via the remote office. Enterprises should therefore ensure that any remote offices they have follow the same central authentication mechanism as the rest of the company.
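
A quick check along these lines is to verify that every exported device configuration references the corporate AAA servers rather than relying only on local accounts. The server addresses and configuration keywords below are placeholders for illustration.

# Hypothetical corporate TACACS+/RADIUS servers that every device should reference.
APPROVED_AAA_SERVERS = {"10.0.0.11", "10.0.0.12"}

def uses_central_auth(path):
    """Return True if the config enables AAA and points at an approved server."""
    with open(path) as f:
        config = f.read()
    has_aaa = "aaa new-model" in config                      # IOS-style keyword; adjust per platform
    points_at_corporate = any(ip in config for ip in APPROVED_AAA_SERVERS)
    return has_aaa and points_at_corporate

if __name__ == "__main__":
    for device in ("hq-core.cfg", "branch-rtr-07.cfg"):
        status = "OK" if uses_central_auth(device) else "LOCAL AUTH ONLY -- review"
        print(device, status)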

* Use the right security controls for test data. Organizations tend to have good governance stating that test systems should not connect to production systems and collect production data, but this is often not enforced because the people who are working in testing see production data as the most accurate way to test. However, when you allow test systems to collect data from production, you’re likely to be bringing that data down into an environment with a lower level of security. That data could be highly sensitive, and it could also be subject to regulatory compliance. So if you do use production data in a test environment, make sure that you use the correct security controls required by the classification the data falls into.
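
One common control here is masking sensitive fields before production data ever reaches the test environment. A minimal Python sketch, assuming a CSV extract whose sensitive columns are named email and ssn:

import csv
import hashlib

SENSITIVE = {"email", "ssn"}   # columns assumed to exist in the production extract

def mask(value):
    """Replace a sensitive value with a stable, irreversible token."""
    return hashlib.sha256(value.encode()).hexdigest()[:12]

def mask_extract(src, dst):
    """Copy a CSV extract, masking sensitive columns on the way to the test copy."""
    with open(src, newline="") as fin, open(dst, "w", newline="") as fout:
        reader = csv.DictReader(fin)
        writer = csv.DictWriter(fout, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            for col in SENSITIVE & set(row):
                row[col] = mask(row[col])
            writer.writerow(row)

if __name__ == "__main__":
    mask_extract("prod_customers.csv", "test_customers.csv")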

* Always log security outputs. While logging properly can be expensive, the costs of being breached or of being unable to trace an attack are far higher. Failing to store the log output from your security devices, or not doing so with enough granularity, is one of the worst things you can do for network security: not only will you not be alerted when you’re under attack, but you’ll have little or no traceability when carrying out your post-breach investigation. By ensuring that all outputs from security devices are logged correctly, organizations will not only save time and money further down the line but will also enhance security by being able to properly monitor what is happening on their networks.
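
As a final illustration, Python’s standard logging module can forward security-relevant events to a central syslog collector so that nothing lives only on a box that might itself be compromised. The collector address is a placeholder.

import logging
from logging.handlers import SysLogHandler

logger = logging.getLogger("fw-audit")
logger.setLevel(logging.INFO)
# Send a copy of every event to the central collector as well as to local output.
logger.addHandler(SysLogHandler(address=("syslog.corp.example", 514)))
logger.addHandler(logging.StreamHandler())

logger.info("policy audit started")
logger.warning("rule 'temp-vendor-access' permits any->any and is older than 90 days")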

Enterprises need to continuously monitor the state of their firewall security, but by following these simple steps businesses can avoid some of the core misconfigurations and improve their overall security posture.




Tech pros make the most of the ‘gig economy’

Younger IT workers are increasingly choosing independence over full-time employment. Is the ‘open talent economy’ right for you too? Three 20- and 30-somethings share their experiences.

Call it what you will — the “open talent economy,” “freelancing,” the “gig economy,” “contracting” — working for yourself is having a moment, particularly in high tech.

Once upon a time, IT pros went freelance only when driven there by circumstances like a bad economy, a layoff or an overabundance of their particular skill set. Or they turned to consulting in the sunset of their careers, tired of cubicle farms and long commutes. Now, millennials, who this year became the largest proportion of the labor force, are leading the charge to change the tech industry’s perception of self-employment.

It’s common knowledge that the cohort of workers 35 and under prefer a flexible, DIY workstyle, using their personal mobile devices to communicate and work from anywhere at any time. What’s not so commonly known, however, is that some millennials — some say it’s a growing number — are eschewing traditional employment altogether to work as independents.

“A large number of millennials are choosing a different path in terms of what they want in their professional life,” says Alisia Genzler, executive vice president at Randstad Technologies, a high-tech talent and solutions company. “We are seeing more and more of them choose freelancing and contract work over traditional jobs, more so than in previous generations.”

Millennials came of age and graduated from college during the Great Recession, many saddled with debt and unable to find a job. While some eventually made their way into the corporate workforce, others stayed independent, either by choice or by circumstance. “We now have a generation of workers who never had full-time jobs,” says Can Erbil, an economics professor at Boston College who studies the labor market. “That is not the exception but more the norm for them.” What’s more, millennials grew up in an educational environment that stressed project-oriented work, he adds, so short-term sprints are a natural cadence for them.

The recession also taught millennials that a traditional job and long-term loyalty to an employer don’t necessarily mean security. “A lot of them look at their parents who had jobs with one company for a long time, only to be laid off, so [millennials] want to keep their options open,” says John Reed, senior executive director at Robert Half Technology.

And benefits are increasingly becoming decoupled from employers — with the Affordable Care Act guaranteeing individual access to health insurance, workers don’t have to be on a payroll to be covered. In fact, according to an article in Money magazine, only 31% of college graduates last year received employer-provided health insurance, compared to 53% in 2000.

High tech is gig-friendly
Millennials may be blazing the path, but freelancing is an option that can work for employees at any age, proponents argue. A 2014 study found that 53 million U.S. workers were freelancing to some extent — that’s 34% of the workforce. Millennials were the largest group of survey respondents who said they were freelancing, at 38%, according to the report, which was commissioned by Freelancers Union and Elance-oDesk, the freelance marketplace platform now called Upwork. But 32% of those over 35 likewise indicated they were working independently.

Daniel Masata, senior vice president at staffing and recruitment firm Adecco Engineering & Technology, says he’s seeing the trend across all age groups. Baby Boomers, for example, might freelance to keep their hand in or supplement retirement income. Gen-Xers may have been laid off during the last recession and either had difficulty getting rehired or just decided to go independent. Ten years ago, 75% of candidates for technology jobs were seeking full-time employment, Masata estimates. Today, it’s only about 50%.

The high-tech industry is particularly well-suited to the gig economy. The software development cycle, for example, has become well-defined and compartmentalized, making it easier to farm out, says Andrew Liakopoulos, principal within the human capital practice at Deloitte Consulting and an expert on what Deloitte calls the “open talent economy.”

In fact, IT is one of the first markets where Deloitte noticed the freelancing trend. “The millennials were the ones who, after being forced into [freelancing], actually have used what was happening in the macro environment to their advantage,” says Liakopoulos. “And IT was the first occupation where we saw them doing it.”

To discover what impact the gig economy might have on tech employees of any age, Computerworld sought out millennials who are working independently. Some are freelancing indefinitely, some are using freelancing as a stepping stone to a better job, and some of them say they are committed to contract work for their entire careers. Although freelancing has its downsides, specifically the risk of not finding enough good-paying work and the lack of benefits like paid time off and company-subsidized healthcare, all say their experience as independent workers offers many advantages.

Read on to hear their stories and determine whether gig work might be right for you.
Rejecting perfectly good jobs (at Microsoft!)

Erik Kennedy joined Microsoft as a program manager straight out of school after graduating in 2010 from Olin College of Engineering with a bachelor’s in electrical and computer engineering. But at age 25, after three years with the company, he decided to strike out on his own.

UI/UX programmer Erik Kennedy says he makes money at about the same rate as when he worked at Microsoft, but as a freelancer, he’s able to take significant time off for travel.

Although Microsoft was a good employer, Kennedy says, he felt stifled by the atmosphere of a large company. He wanted to pick his own projects. “Hypothetically, my boss’s boss’s boss’s boss’s boss could make a decision that could affect what I did on a day-to-day basis,” he explains. “I wanted a little more freedom and was willing to take a little more risk.”

The inherent insecurity of freelancing means that it’s not suitable for everyone, says Kennedy. “You kind of ‘lose your job’ every two to six months” as projects turn over, he says. “If you can handle that, then it’s a great deal.”

The area in which Kennedy specializes — UI/UX (user interface, user experience) — is in high demand, which lessens his risk. Based in the Seattle area, Kennedy works mostly for startups and nonprofits, with a few name-brand technology companies like Amazon in the mix for variety.

So far, two and a half years in, Kennedy’s been happy with his decision. “I make money at about the same rate [as I did at Microsoft], but I’ve taken off more time for travel since becoming a freelancer,” he says. He even got married last year, after which he and his bride travelled the world for eight months. “It’s such a millennial thing to do, and we would have never been able to do that if I had a full-time job,” says Kennedy.
Paying off the mortgage — in your 30s

Steven Boyd, 33, went freelance in 2011 after working as a developer in a series of full-time jobs. At one employer he learned SketchFlow, a part of Microsoft’s Visual Studio, and now specializes in it. “At first I was scared” to go independent, Boyd admits. “I felt that I needed that stability you get from a full-time permanent position.” But then he realized that security was an illusion. One startup where he worked couldn’t make its payroll one month. He was tired of being assigned projects, rather than choosing his own, and felt underappreciated.

SketchFlow developer Steven Boyd feels more appreciated as an independent contractor — and he’s paid off the mortgage on the family home.

Today, he picks his own projects and clients (which range from large corporations to startups and nonprofits), works when he wants to and by his estimation is financially secure. In fact, he makes much more money than he did at his previous positions, which topped out at $110,000 a year. “And I had to really negotiate hard for that.” In 2013, he made close to $250,000, but “worked way too much,” he says. In 2014, he scaled back to working 30 hours a week and still earned $180,000.

He’s paid off the mortgage on the family home in the Denver area, bought several rental properties and started a scholarship fund at his alma mater, Colorado State University, to encourage minority students to pursue computer science. “To be able to amass that sort of money in such a short period of time would be nearly impossible as a full-time employee,” he notes.

He doesn’t miss the benefits; his wife works full-time and so provides health insurance for him and their four-year-old son. Nor does he miss paid vacations — saying he never took them anyway — but relishes having the flexibility to take big chunks of time off when life requires it. Recently, for example, Boyd took a hiatus to care for his son for three months while their babysitter recovered from surgery.

Both Kennedy and Boyd recommend working a few years at a traditional job before trying freelancing. “I couldn’t see someone coming straight out of school and being successfully independent,” says Boyd. “It takes a while to learn how to deal with people and different types of scenarios.” By working a traditional job first, Kennedy says, he built up a good network that he could tap for business when he went solo.
Keeping skills sharp

Independent work can be as valuable to long-term career growth as a technical degree, says Katy Tynan, author of the book Free Agent: The Independent Professional’s Roadmap to Self-Employment Success. That’s because freelancers are typically required to pick up new skills quickly, says Tynan, who worked in IT for 15 years. Staying at a traditional IT job for years can cause employees to lose relevance, she says. “Things tend to stay the same within an organization; you don’t have to rapidly learn new things.”

In many enterprise shops, “You have to jump through all sorts of hoops just to learn a new technology,” says Ron Pastore, 35, who made the switch to freelancing two years ago. “You end up molded into what they need you to be, and then if they don’t need you anymore, you’re out there in the market with limited skills,” he says.


Software engineer Ron Pastore works primarily with startups — for a reduced rate plus equity. “Going back to traditional employment would be my worst-case scenario,” he says.

Pastore has no college degree, but excelled in programming at an early age. He worked as a software engineer in various full-time positions for 10 years, but ultimately wanted more flexibility and felt limited by traditional employment, he says.

Married with two children, the Rockland, N.Y.-based Pastore says he is more secure financially today than before, because he’s not depending on one source of income. He estimates he makes 15% to 20% more today than he did at corporate jobs, “though this is not an apples-to-apples comparison,” he says. “I work mainly with startups, at a reduced rate plus equity.” He also works many fewer hours than he did as an employee and says he has no trouble finding clients.

Pastore hopes he’ll never hold a corporate full-time job again. “Going back to traditional employment would be my worst-case scenario,” he says. For his part, Kennedy says he is not averse to going back to a full-time job, but for now freelancing makes sense for him.
The job you want, not the job that’s offered

Whether they stay in freelancing or not, younger programmers are showing just how confident they are in their ability to fashion the career they want, not the one that’s offered by corporations. If the job doesn’t suit, they have no problem walking away from it. Boyd, for example, says he recently rebuffed the advances of a recruiter for Microsoft. The job sounded attractive, “and I probably would’ve taken it if it wasn’t so much travel,” he says. “I like this flexibility of being independent.”

With the proportion of millennials in the workforce continuing to grow (some forecasts say they will make up 75% within the next decade), this is likely to be a permanent change in the labor market. “As you look where this is heading, there’s no turning back,” says Deloitte’s Liakopoulos. A substantial proportion of younger workers do not want to become part of the old economy, he says. “They don’t want to be tethered to an organization. They want to continue being entrepreneurial. And they [plan] to use freelancing to create the flexibility they want in their lives.”



13 more big data & analytics companies to watch


So many big data and analytics-focused startups are getting funding these days that I’ve been inspired to compile a second slideshow highlighting these companies (see “13 Big Data and Analytics Companies to Watch” for the previous collection). This new batch has raked in some $250 million this year as the companies seek to help organizations more easily access and make sense of the seemingly endless amount of online data.

Founded: 2012
Headquarters: Redwood City, Calif.
Funding/investors: $9M in Series A funding led by Costanoa Capital and Data Collective.

Focus: Its data accessibility platform is designed to make information more usable by the masses across enterprises. The company is led by former Oracle, Apple, Google and Microsoft engineers and executives, and its on-premises and virtual private cloud-based offerings promise to help data analysts get in sync, optimize data across Hadoop and other stores, and ensure data governance. Boasts customers including eBay and Square.

Founded: 2012
Headquarters: Menlo Park, Calif. (with operations in India, too)
Funding/investors: $15M in Series B funding led by Scale Venture Partners and Next World Capital, bringing total funding to $23M.

Focus: Data science-driven predictive analytics software for sales teams, including the newly released Aviso Insights for Salesforce. Co-founder and CEO K.V. Rao previously founded subscription commerce firm Zuora and worked for WebEx, while Co-founder and CTO Andrew Abrahams was head of quantitative research and model oversight at JPMorgan Chase. The two met about 20 years ago at the National Center for Supercomputing Applications.

Founded: 2004
Headquarters: San Francisco
Funding/investors: $156M, including a $65M round in March led by Wellington Management.

Focus: Cloud-based business intelligence and analytics that works across compliance-sensitive enterprises but also gives end users self-service data access. This company, formed by a couple of ex-Siebel Analytics team leaders, has now been around for a while, has thousands of customers and has established itself as a competitor to big companies like IBM and Oracle. And it has also partnered with big companies, such as AWS and SAP, whose HANA in-memory database can now run Birst’s software.

Founded: 2012
Headquarters: Mountain View
Funding/investors: $39M, including a $20M Series C round led by Intel Capital in August.

Focus: A founding team from VMware has delivered the EPIC software platform designed to enable customers to spin up virtual on-premises Hadoop or Spark clusters that give data scientists easier access to big data and applications. (We also included this firm in our roundup of hot application container startups.)

Founded: 2009
Headquarters: San Francisco
Funding/investors: $76M, including $40M in Series E funding led by ST Telemedia.

Focus: Big data analytics application for Hadoop designed to let any employee analyze and visualize structured and unstructured data. Counts British Telecom and Citibank among its customers.

Deep Information Sciences
Founded: 2010
Headquarters: Boston
Funding/investors: $18M, including an $8M Series A round in April led by Sigma Prime Ventures and Stage 1 Ventures.

Focus: The company’s database storage engine employs machine learning and predictive algorithms to enable MySQL databases to handle big data processing needs at enterprise scale. Founded by CTO Thomas Hazel, a database and distributed systems industry veteran.

Founded: 2012
Headquarters: Santa Cruz
Funding/investors: $48M, including a $30M Series B round in March led by Meritech.

Focus: Web-based business intelligence platform that provides access to data whether it sits in a database or in the cloud. A modeling language called LookML enables analysts to create interfaces that end users can employ for dashboards or to drill down and really analyze data. Founded by CTO Lloyd Tabb, a one-time principal engineer at Netscape, where he worked on Navigator and Communicator. Looker claims Etsy, Uber and Yahoo among its customers.

Founded: 2012
Headquarters: Palo Alto
Funding: $14M, including $11M in Series A funding in May, with backers including Chevron Technology Ventures and Intel Capital.

Focus: Semantic search engine that plows through big data from multiple sources and delivers information in a way that can be consumed by line-of-business application users. The company announced in June that its platform is now powered by Apache Spark. Co-founder Donald Thompson spent 15 years prior to launching Maana in top engineering and architect jobs at Microsoft, including on the Bing search project.

Founded: 2007
Headquarters: Cambridge, Mass.
Funding/investors: $20M, including $15M in Series B funding led by Ascent Venture Partners.

Focus: This company, which got its start in Germany under founder Ingo Mierswa, offers an open source-based predictive analytics platform for business analysts and data scientists. The platform, available on-premises or in the cloud, has been upgraded of late with new security and workflow capabilities. Peter Lee, a former EVP at Tibco, took over as CEO in June.

Founded: 2011
Headquarters: Redwood Shores, Calif.
Funding/investors: $10M in Series A funding in March, from Crosslink Capital and .406 Ventures.

Focus: The team behind Informatica/Siperian MDM started Reltio, which offers what it calls data-driven applications for sales, marketing, compliance and other users, as well as a cloud-based master data management platform. The company claims its offerings break down silos between applications like CRM and ERP to give business users direct access to and control over data.

Founded: 2014
Headquarters: Palo Alto
Funding/investors: $900K in seed funding from investors including Andreessen Horowitz and Formation8.

Focus: A “data science platform for the unstructured world.” Sensai’s offering makes it possible to quantify and analyze textual information, such as from news articles and regulatory filings. The company is focused initially on big financial firms, like UBS, though also has tech giant Siemens among its earlier customers. Two of Sensai’s co-founders come from crowdfunding company


Founded: 2014
Headquarters: Seattle
Funding/investors: $13.25M, including a $10M Series A round led by Foundry Group, New Enterprise Associates and Madrona Venture Group

Focus: This iPhone app enables businesses to tap into smartphone users (or “Fives”) to clean up big data in their spare time for a little spare cash. The idea is that computing power alone can’t be counted on to crunch and analyze big data. Micro-tasks include everything from SEO-focused photo tagging to conducting surveys.

Treasure Data
Founded: 2011
Headquarters: Mountain View
Funding/investors: $23M, including $15M in January in Series B funding led by Scale Venture Partners.

Focus: Provides cloud services designed to simplify the collection, storage and analysis of data, whether from mobile apps, Internet of Things devices, cloud applications or other sources of information. This alternative to Hadoop platforms and services handles some 22 trillion events per year, according to the company, which has a presence not just in Silicon Valley, but in Japan and South Korea as well.




OpenStack is redefining the business model for data solutions

Want proof? Industry leading vendors are snatching up OpenStack-based companies

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

IT is headed toward being something more akin to a utility service, transformed by OpenStack’s open standardized cloud architecture, which will improve interoperability and render vendor lock-in a thing of the past.

Initially a solution adopted by smaller ISVs lacking the capital to build private clouds, OpenStack-based cloud solutions are shaping up to be the logical choice for large enterprise as industry leaders, including IBM, Cisco, EMC, HP and Oracle, bet on its value for defining the next-generation model for business computing.

These industry giants have been snatching up OpenStack-based companies over the past couple of years, building up their capabilities around the architecture. IBM and Cisco are among the latest to close deals, with their respective acquisitions of Blue Box and Piston Cloud Computing. Other relevant acquisitions include EMC’s purchase of Cloudscaling, Oracle’s Nimbula acquisition, and Cisco’s MetaCloud acquisition.

OpenStack’s value for business lies in its capacity for facilitating seamless private-to-public scalability and extensive workload portability, while removing the need to lay out capital to acquire and maintain depreciating commodity hardware.

These companies see that innovations in open clouds will inevitably win out as the premier solution for business data management. The days of commodity hardware and internally managed datacenters are rapidly fading. With cloud services available on a pay-as-you-go basis and infrastructure as a service (IaaS) removing the need to invest in commodity hardware, customers will look at performance, pricing and quality of service as the most important factors in choosing a cloud provider, while maintaining the freedom to easily switch if a better option comes along.

OpenStack’s core strength is interoperability, allowing for seamless scaling across private and public environments, as well as easier transition and connectivity across vendors and networks.

Companies like IBM and Cisco buying up OpenStack-based providers to bolster their own hybrid cloud solutions does not mean the architecture will lose touch with its open-source roots. Open standards and interoperability go hand-in-hand and are at the heart of OpenStack’s unique capabilities.

What we are seeing is the maturation of OpenStack, with major names in business computing positioned to mainstream its adoption by leveraging their financial, IP, R&D resources and brand trust to meet complex demands and ensure confidence from large enterprise organizations transitioning to the cloud.

Cisco listed OpenStack’s capabilities for enhancing automation, availability and scale for hybrid clouds as playing a major role in its new Intercloud Network, while HP is utilizing OpenStack to facilitate its vendor-neutral Helion Network, which will pool the services of Helion partners to offer global workload portability for customers of vendors within their network.

Adoption of OpenStack by these providers signals a major shift for the industry, moving away from dependence on hardware sales and heavy contractual service agreements to a scalable IaaS utilities model, where customers pay for what they need when they need it and expect it to just work. Providers may need to shoulder the burden of maintaining datacenters but will reap the reward of pulling the maximum value from their commodity investments.

Interoperability may seem like a double-edged sword for companies that were built on their own software running exclusively on their own hardware. But the tide is shifting and they realize that closed platforms are losing relevance, while open architecture offers new opportunities to expand their business segments, better serve customers, and thrive with a broader customer base.

Cisco recently added new functionalities to its Intercloud offering, extending virtual machine on-boarding to support Amazon Virtual Private Cloud and extending its zone-based firewall services to include Microsoft Azure. Last year, IBM partnered with software and cloud competitor Microsoft, each offering its respective enterprise software across both Microsoft Azure and the IBM Cloud to help reduce costs and spur development across their platforms for their customers. OpenStack furthers these capabilities across the quickly expanding list of providers adopting the cloud architecture, enabling a vendor-agnostic market for software solutions.

Open standardized cloud architecture is the future of business IT, and OpenStack currently stands as the best and only true solution to make it happen. Its development was spurred by demand from small ISVs who will continue to require its capabilities and promote its development, regardless of whether large enterprise service providers are on board.

However, its continued development and obvious potential for enterprise applications are forcing IT heavyweights to conform. Regardless of whether they’d prefer to maintain the status quo for their customers, the progress we’ve seen won’t be undone and the path toward vendor neutrality has been set.




Dropbox security chief defends security and privacy in the cloud

Patrick Heim is the (relatively) new head of Trust & Security at Dropbox. Formerly Chief Trust Officer at Salesforce, he has served as CISO at Kaiser Permanente and McKesson Corporation. Heim has worked more than 20 years in the information security field. Heim discusses security and privacy in the arena of consumerized cloud-based tools like those that employees select for business use.

What security and privacy concerns do you still hear from those doing due diligence prior to placing their trust in the cloud?
A lot of them are just trying to figure out what to do with the cloud in general. Companies right now have really three choices, especially with respect to the consumer cloud (i.e., cloud tools like Dropbox). One of them is to kind of ignore it, which is always a horrible strategy because when they look at it, they see that their users are adopting it en masse. Strategy two is to build IT walls up higher and pretend it’s not happening. Strategy three is adoption, which is to identify what people like to use and convert it from the uncontrolled mass of consumerized applications into something security feels comfortable with, something that is compliant with the company’s rules with a degree of manageability and cost control.

Are there one or two security concerns you can name? Because if the cloud was always entirely safe in and of itself, the enterprise wouldn’t have these concerns.

If you look at the track record of cloud computing, it’s significantly better from a security perspective than the track record of keeping stuff on premise. The big challenge organizations have, when you look at some of these breaches, is they’re not able to scale up to secure the really complicated in-house infrastructures they have.

We’re [as a cloud company] able to attract some of the best and brightest talent in the world around security because we’re able to get folks that quite frankly want to solve really big problems on a massive scale. Some of these opportunities aren’t available if they’re not in a cloud company.

How do you suggest that enterprises take that third approach, which is to adopt consumerized cloud applications?
The first step is through discovery. Understand how employees use cloud computing. There are a number of tools and vendors that help with that process. With that, IT has to be willing to rethink their role. Employees should really be the scouts for innovation. They’re at the forefront of adopting new apps and cloud technology. The role of IT will shift to custodian or curator of those technologies. IT will provide integration services to make sure that there is a reasonable architecture for piecing these technologies together to add value and to provide security and governance to make sure those kinds of cloud services align with the overall risk objectives of the organization.
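
Discovery can start with something as simple as mining web proxy or firewall logs for known cloud-service domains. A minimal Python sketch, assuming each log line contains the visited hostname and using an illustrative, incomplete domain list:

from collections import Counter

# Illustrative sample of consumer-cloud domains to look for; extend as needed.
CLOUD_DOMAINS = ("dropbox.com", "drive.google.com", "box.com", "onedrive.live.com")

def discover(log_path):
    """Count requests to known cloud services in a proxy access log."""
    hits = Counter()
    with open(log_path) as f:
        for line in f:
            for domain in CLOUD_DOMAINS:
                if domain in line:
                    hits[domain] += 1
    return hits

if __name__ == "__main__":
    for domain, count in discover("proxy_access.log").most_common():
        print(f"{domain}: {count} requests")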


How can the enterprise use the cloud to boost security and minimize company overhead?
If you think about boosting security, there is this competition for talent and a lack of resources for the enterprise to do it in-house. If you look at the net risk concept, where you evaluate your security and risk posture before and after you invest in the cloud and understand what changes, one of those changes is what you no longer have to manage. Given the complexity of the tech stack, the enterprise shifts the vast majority of infrastructure-side security accountabilities to the cloud computing provider, which leaves your existing resources free to perform more value-added functions.

What are the security concerns in cloud collaboration scenarios?
When I think about collaboration, especially outside the boundaries of an individual organization, there is always the question of how you maintain reasonable control over information once it’s in the hands of somebody else. There is that underlying tension that the recipient of the shared information may not continue to protect it.

In response to that, there is ERM (enterprise rights management), which provides document-level control that’s cryptographically enforced. We’re looking at ways of minimizing the usability tradeoff that can come with adding these kinds of security advancements. We’re working with vendors in this space to identify what we have to do from an interface and API perspective so that the impact on the end user of adopting these advanced encryption capabilities is absolutely minimized, meaning that when you encrypt a document using one of these technologies you can still, for example, preview it and search for it.
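To make that idea concrete, here is a minimal sketch of document-level encryption that preserves preview and search, assuming a hypothetical scheme in which the document body is encrypted with a per-document key while a small cleartext metadata envelope (title, tags) remains searchable; the Python cryptography package’s Fernet primitive stands in for whatever the actual vendors implement.

# Hypothetical sketch: per-document encryption that keeps a small cleartext
# metadata envelope so the file can still be previewed and searched.
# Assumes the `cryptography` package; real ERM integrations differ.
import json
from cryptography.fernet import Fernet

def protect_document(body: bytes, title: str, tags: list[str]) -> dict:
    key = Fernet.generate_key()          # per-document key (would be escrowed by an ERM key server)
    token = Fernet(key).encrypt(body)    # cryptographically enforced document-level control
    return {
        "meta": {"title": title, "tags": tags},   # searchable metadata stays cleartext
        "ciphertext": token.decode(),
        "key": key.decode(),             # illustrative only; in practice the key is wrapped, not stored here
    }

def can_search(doc: dict, term: str) -> bool:
    # Search touches only the cleartext envelope, never the encrypted body.
    meta = doc["meta"]
    return term.lower() in meta["title"].lower() or term in meta["tags"]

if __name__ == "__main__":
    doc = protect_document(b"Q3 board deck", "Q3 results", ["finance", "board"])
    print(can_search(doc, "finance"))    # True
    print(Fernet(doc["key"].encode()).decrypt(doc["ciphertext"].encode()))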

How do enterprises need to power their security solutions in the current IT landscape?
When they look at security solutions, I think more and more they have to think beyond the old model of the network perimeter. When they send data to the cloud, they have to adopt a security strategy that also involves cloud security, where the cloud itself provides security as one of its functions.

There are a number of cloud-access security brokers, and the smart ones aren’t necessarily sitting on the network and monitoring; they interact through access and APIs, looking at the data people place into cloud environments, analyzing it for policy violations, and providing archiving, backup and similar capabilities.
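As an illustration of that API-driven model, here is a minimal sketch of a CASB-style scan, assuming a hypothetical provider endpoint (/files), a placeholder bearer token and two toy policy rules; a real broker would use each vendor’s published APIs and far more sophisticated detection.

# Hypothetical sketch of API-based CASB-style scanning: poll a cloud
# provider's file API and flag content that violates simple policies.
# The endpoint, token and rules are illustrative, not a real product API.
import re
import requests

POLICY_RULES = {
    "credit_card": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_tenant(api_base: str, token: str) -> list[dict]:
    headers = {"Authorization": f"Bearer {token}"}
    violations = []
    files = requests.get(f"{api_base}/files", headers=headers, timeout=30).json()
    for f in files:
        content = requests.get(f"{api_base}/files/{f['id']}/content",
                               headers=headers, timeout=30).text
        for rule, pattern in POLICY_RULES.items():
            if pattern.search(content):
                violations.append({"file": f["name"], "rule": rule})
    return violations

if __name__ == "__main__":
    for v in scan_tenant("https://api.example-cloud.test/v1", "demo-token"):
        print(f"policy violation: {v['rule']} in {v['file']}")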

The security tools companies focus on should be oriented around how these capabilities will scale across multiple cloud vendors, and around moving away from inserting tools directly into the network in favor of API integration with those vendors.


Virtual Mobile Infrastructure: Secure the data and apps, in lieu of the device

VMI offers an effective, efficient way to provide access to sensitive mobile apps and data without compromising security or user experience

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

Corporate use of smartphones and tablets, both enterprise- and employee-owned (BYOD), has introduced significant risk and legal challenges for many organizations.

Other mobile security solutions such as MDM (mobile device management) and MAM (mobile app management) have attempted to address this problem by either locking down users’ personal devices or creating “workspaces” on them. For BYOD, this approach has failed to adequately secure enterprise data and has created liability issues around ownership of the device, since it is now both a personal and an enterprise (corporate)-owned device.

MAM “wrap” solutions in particular require app modification in exchange for paper-thin security. You cannot secure an app running on a potentially hostile (unmanaged) operating system, and, critically, you cannot wrap commercial mobile applications.

By contrast, Virtual Mobile Infrastructure (VMI) offers an effective, efficient way to provide access to sensitive mobile apps and data without compromising enterprise security or user experience.

Like VDI for desktops, VMI offers a secure approach to mobility without heavy-handed management policies that impact user experience and functionality.

From IT’s perspective, VMI is a mobile-first platform that provides remote access to an Android virtual mobile device running on a secure server in a private, public or hybrid cloud. The operating system, the data, and the applications all reside on a back-end server — not on the local device.

From a user’s perspective, VMI is simply another app on their iOS, Android or Windows device that provides the same natural, undiluted mobile experience, with all the accustomed bells and whistles. Written as native applications, these client apps can be downloaded from the commercial app stores, or installed on devices using MAM or app wrapping technologies.

As Ovum states, “Put more simply, this [VMI] means in effect that your mobile device is acting only as a very thin client interface with all the functionality and data being streamed to it from a virtual phone running in the cloud.”

Getting started with VMI

After downloading and installing the VMI client, users go through an easy setup process, inputting server names, port numbers, account names and access credentials. When users connect to the VMI device they see a list of available applications, all running on a secure server that communicates with the client through encrypted protocols.
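For illustration, here is a minimal sketch of what that client-side setup might look like, assuming a hypothetical profile with the server name, port, account and a credential reference, plus a TLS-wrapped connection standing in for the encrypted protocols; the field names and flow are illustrative, not an actual product’s API.

# Hypothetical sketch of a VMI client profile and its encrypted connection.
# Field names and the handshake are illustrative; real clients differ.
import json
import socket
import ssl
from dataclasses import dataclass, asdict

@dataclass
class VmiProfile:
    server: str          # e.g. "vmi.corp.example"
    port: int            # e.g. 8443
    account: str         # enterprise account name
    credential_ref: str  # pointer to credentials held in the device key store

def save_profile(profile: VmiProfile, path: str = "vmi_profile.json") -> None:
    with open(path, "w") as fh:
        json.dump(asdict(profile), fh, indent=2)

def connect(profile: VmiProfile) -> ssl.SSLSocket:
    # All client/server traffic rides an encrypted channel (plain TLS here).
    ctx = ssl.create_default_context()
    raw = socket.create_connection((profile.server, profile.port), timeout=10)
    return ctx.wrap_socket(raw, server_hostname=profile.server)

if __name__ == "__main__":
    profile = VmiProfile("vmi.corp.example", 8443, "jdoe", "keystore:vmi-cert")
    save_profile(profile)
    # sock = connect(profile)   # after authentication, the client would list available apps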

The client accesses apps as if they were running on a local device, yet because they are hosted in a data center, no data is ever stored on the device. Enterprises can secure and manage the entire stack from a central location, neutralizing many of the risks that mobile devices often introduce to a network.

Two-factor authentication is supported via PKI certificates in the physical phone’s key store. When a certificate is present in the hardware-backed key store, the physical device forces the user to unlock the phone with a PIN or biometric. Additionally, the client supports variable session lengths with authentication tokens.
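A minimal sketch of those two pieces, assuming a hypothetical client certificate exported from the device key store for mutual TLS and a short-lived session token whose lifetime can vary; the paths and token format are illustrative.

# Hypothetical sketch: mutual TLS with a client certificate (the PKI factor)
# plus a short-lived session token to support variable session lengths.
# Certificate paths and the token format are illustrative.
import ssl
import time
import secrets

def build_mtls_context(cert_path: str, key_path: str) -> ssl.SSLContext:
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.load_cert_chain(certfile=cert_path, keyfile=key_path)  # certificate exported from the key store
    return ctx

def issue_session_token(lifetime_seconds: int) -> dict:
    # Server-side: a token with a configurable lifetime backs "variable session lengths".
    return {"token": secrets.token_urlsafe(32),
            "expires_at": time.time() + lifetime_seconds}

def token_is_valid(tok: dict) -> bool:
    return time.time() < tok["expires_at"]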

The server infrastructure that supports VMI clients can be implemented as multiple server clusters across geographic regions. As users travel, the client synchronizes with the server cluster closest to its physical location to access the applications on its virtual mobile device. The client continues to communicate with one server at a time, choosing the server location that provides the best performance.
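Here is a minimal sketch of that selection logic, assuming the client simply measures a TCP connect time to each regional cluster and keeps talking to the fastest one; the hostnames and probe method are illustrative.

# Hypothetical sketch: probe each regional cluster and pick the one that
# responds fastest; the client then communicates with that single server.
# Hostnames are illustrative.
import socket
import time

CLUSTERS = ["us-east.vmi.example", "eu-west.vmi.example", "ap-south.vmi.example"]

def probe(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Return the TCP connect time in seconds, or infinity if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")

def pick_cluster(clusters: list[str]) -> str:
    return min(clusters, key=probe)

if __name__ == "__main__":
    print("connecting to", pick_cluster(CLUSTERS))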

In a typical deployment, there are compute nodes that host the virtual mobile devices, a storage service that holds user settings and data, and controller nodes that orchestrate the system.

The controller node(s) can be connected to an enterprise directory service, such as Active Directory, for user authentication and provisioning, and systems management tools such as Nagios and Monit can be used to monitor all parts of the system to ensure they are up and behaving properly (e.g., not overloaded). The server hosting the devices creates detailed audit logs, which can be imported into a third-party auditing tool such as Splunk or ArcSight.
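As a rough illustration of how those pieces fit together, here is a hypothetical controller configuration with compute nodes, a storage service, directory integration, monitoring thresholds and an audit-log sink; every hostname and value is invented for the example.

# Hypothetical sketch of a VMI deployment description.
# All names and values are illustrative, not a vendor's actual schema.
DEPLOYMENT = {
    "compute_nodes": ["vmi-compute-01.corp.example", "vmi-compute-02.corp.example"],
    "storage_service": "vmi-storage.corp.example",        # holds user settings and data
    "controller_nodes": ["vmi-ctrl-01.corp.example"],      # orchestrate the system
    "directory": {                                         # user authentication and provisioning
        "type": "active_directory",
        "url": "ldaps://ad.corp.example:636",
        "base_dn": "OU=Users,DC=corp,DC=example",
    },
    "monitoring": {                                        # what Nagios/Monit-style checks watch
        "checks": ["cpu_load", "memory", "session_count"],
        "cpu_alert_threshold": 0.85,
    },
    "audit_log_sink": "syslog://splunk.corp.example:514",  # logs imported into Splunk/ArcSight
}

def is_overloaded(cpu_load: float) -> bool:
    # Simple health predicate a monitoring check might evaluate.
    return cpu_load >= DEPLOYMENT["monitoring"]["cpu_alert_threshold"]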

VMI is platform-neutral, which means organizations can write, test, run and enhance a single instance of an app on a ‘gold disk’ OS image, rather than building separate apps for each supported end-user platform. This represents significant time and cost savings for resource-constrained IT organizations.

And while VMI takes a different approach to securing mobile endpoints than MDM, it does not aim to replace those solutions. Instead, VMI can integrate with MDM, MAM and other container solutions allowing organizations to use MDM to configure and manage an enterprise-owned virtual mobile device running in a data center, and MAM to support version management and control upgrade scheduling of VMI thin clients.

Mobile by design
Because VMI is optimized for smartphones and tablets with small touch screens and many sensors, users enjoy native apps and a full mobile experience. VMI supports unmodified commercial apps, enabling richer workflows and greater productivity, and it complements sandbox container solutions that provide limited offline access to apps such as corporate email by delivering a richer user experience when the user is online, which is the vast majority of the time.

Users can also access separate work and personal environments from a single device, enjoying Facebook and Instagram and sending personal emails without worrying that corporate IT will seize or wipe their personal data. When an employee leaves the organization, IT simply revokes their access privileges to the virtual mobile device.

Similar to VDI, there are many different business scenarios in which organizations should evaluate VMI. The most common include:

Healthcare – Enables access to electronic health records and other sensitive apps and data from mobile devices, in compliance with HIPAA privacy requirements.
Financial Services – Facilitates access to more sensitive client transaction data and business processes, from both personally owned and enterprise-owned devices.
Retail – Supports secure Point of Sale as a Service for credit card transactions, protecting the confidentiality of customer data accessed both on and off premises.
Enterprise BYOD – Provides secure access to native apps from employee-owned mobile devices, keeping all data secure in the data center while not infringing on personal privacy.
Commercial Services – Extends the mobile enterprise to contractors, partners and customers.
Classified mobility – Allows government and security services to access data and applications from classified mobile devices, ensuring compliance with the thin client requirements of NSA’s Mobility Capability Package.

With 1.9 billion devices expected to hit the market by 2018, IT professionals are on the hunt for a more effective way to secure the enterprise. VMI provides the access they need without compromising security or user experience.


Top 5 factors driving domestic IT outsourcing growth

Despite insourcing efforts, the expansion of nearshore centers is not necessarily taking work away from offshore locations. Eric Simonson of the Everest Group discusses the five main drivers responsible for the rise in domestic outsourcing, why Indian providers dominate the domestic landscape and more.

IT service providers placed significant focus on staffing up their offshore delivery centers during the previous decade. Over the past five years, however, outsourcing providers have ramped up their U.S. domestic delivery center activity, according to recent research by outsourcing consultancy and research firm Everest Group.

The American outsourcing market currently employs around 350,000 full-time professionals and is growing between 3 and 20 percent a year depending on function, according to Everest Group’s research.

Yet the expansion of nearshore centers is not necessarily taking work away from offshore locations in India and elsewhere. Big insourcing efforts, like the one announced by GM, remain the exception. Companies are largely sticking with their offshore locations for existing non-voice work and considering domestic options for new tasks, according to Eric Simonson, Everest Group’s managing partner for research.

We spoke to Simonson about the five main drivers of domestic outsourcing growth, the types of IT services growing stateside, why Indian providers dominate the domestic landscape, and how providers plan to meet the growing demand for U.S. IT services skills.

Interest in domestic IT outsourcing is on the rise, but you say that does not indicate any dissatisfaction with the offshore outsourcing model.

Simonson: This isn’t about offshore not working and companies deciding to bring the work back. That’s happening a bit with some call center and help desk functions. But, by and large, these delivery center setups are more about bringing the wisdom of global delivery into the domestic market. The fundamental goal is industrializing the onshore model vs. fixing what’s broken offshore.

Can you talk about the five main drivers behind their increased interest in locating stateside?
Simonson: The first is diversification of buyer needs. As buyers have to support new types of services, certain types of tasks may be better delivered nearshore rather than offshore.

Secondly, there may be a desire to leverage the soft skills of onshore talent. This occurs when you need someone with a certain type of domestic business knowledge or dialect or cultural affinity.

Thirdly, domestic sourcing can be a way to overcome the structural challenges associated with offshore delivery, such as high attrition and burnout during graveyard shifts.

Fourth, companies may be seeking to manage certain externalities, like regulatory requirements or fears about visa availability. To some extent, these reasons are not based on true requirements so much as they are a convenient justification for choosing to outsource domestically rather than accept the potential risks of offshore delivery.

Finally, there may be client-specific needs that demand domestic solutions—a local bank that wants to keep jobs in the community or a company with no experience offshore looking to start the learning curve.

Within IT services, what types of work currently dominate the domestic landscape?
Simonson: Application development is most prominent, with 123 domestic delivery centers in tier-one and -two cities serving financial services, public sector, manufacturing, retail and consumer packaged goods clients. Just behind that is IT infrastructure in similar geographies focused on those verticals as well. There are 80 consulting and systems integration centers and 68 testing centers as well.

It’s interesting to note that while U.S.-based providers tend to operate larger IT service centers domestically, it’s actually the Indian providers that dominate the landscape.

Simonson: Traditional U.S.-based multinationals have captured more scale in individual centers and have been able to grow them, in some ways, more strategically. They’ve been able to set up shop in smaller tier-4 cities like Ann Arbor or Des Moines and have more proven local talent models.

But the majority of domestic centers are operated by India-centric providers. Part of that is driven by their desire to get closer to their customers. With application and systems integration work, the ability to work more closely with the client is increasingly valuable. And with infrastructure work, concerns about data and systems access have encouraged Indian companies to offer more onshore options.

In addition, some of the bad press they’ve received related to visa issues is encouraging them to balance out their delivery center portfolios.

But Indian providers are not necessarily staffing up their centers with American workers.
Simonson: Indian providers are more likely to use visas to bring citizens of other countries (predominantly India) into the U.S. to work on a temporary or permanent basis in a delivery center. About 32 percent of their domestic delivery center workforce consists of these 'landed resources.' Across all providers, landed resources account for 6 percent of domestic service delivery employees. However, tightening visa norms and higher visa rejection rates are making it more difficult for providers to rely on foreign workers.

You found that approximately 43 percent of the delivery centers are located in the South, with almost half of those concentrated in the South Atlantic. And Texas has more than 50. Is that simply because it’s cheaper to operate there?

Simonson: Cheap helps. But equally important are overall population trends. The South is growing, while regions like the Northeast or Midwest are either stable or on the decline. If you look at where people are going to school or moving and where corporations are relocating their headquarters, it’s taking place from the Carolinas down through Florida and over through Arkansas, Oklahoma and Texas. Those states are also more progressive about attracting services businesses (although there are some exceptions outside the South, like North Dakota and Missouri).

Do you expect the domestic IT outsourcing market to continue to grow?
Simonson: Yes, service providers expect an increase in demand for domestic outsourcing services from new and existing customers, and they plan to increase their domestic delivery capabilities by adding full-time employees to existing centers and establishing new delivery centers. In fact, 60 percent of delivery centers plan to add headcount over the next three years, with India-centric service providers expected to lead the expansion.

Tier-2 and tier-3 cities, like Orlando, Atlanta and Rochester, are poised for the greatest growth, with tier-1 and rural centers expecting the least amount of growth.

Will the supply of domestic IT talent keep up with this increased demand?
Simonson: The pressure to find IT talent has led service providers to adopt a range of approaches to extend their reach and develop ecosystems of talent. Many have developed educational partnerships, creating formal and informal relationships with colleges and technical institutes. They’re also basing themselves in cities known for their quality of life and recruiting entry-level and experienced talent from elsewhere. It all impacts what communities they decide to work in.

All service providers will have to expand their talent pools, particularly in IT. Automation of some tasks could increase capacity, but doesn’t provide the higher-complexity skills that are most valued onshore.
