Hi, I was educated in the U.K. and have 18 years of working experience in multinational companies as an IT Manager and IT Instructor. I am affiliated with this site, which provides IT exam study material: exam Q&A with explanations, study guides, training labs, exam simulations, training videos, and more, for certifications such as MCSE 2003, MCITP, CCNA, and CompTIA A+, with a 100% training guarantee. “Best Material, Great Results”


98-365 Windows Server Administration Fundamentals

Users report that they are unable to print. You verify that the print spooler service is running. What should you do next?

A. Purge the service
B. Disable the service
C. Pause the service
D. Restart the service

Answer: D
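As a practical aside (not part of the exam material), restarting a Windows service such as the Print Spooler can be scripted. The sketch below is a minimal, hypothetical helper: it separates building the `net stop`/`net start` commands from running them, so the logic can be checked anywhere, while actually running them requires an elevated prompt on a Windows host.

```python
# Minimal sketch: restart a Windows service such as the Print Spooler
# ("Spooler"). Command construction is kept separate from execution so
# the logic can be inspected or tested on any platform.
import subprocess

def restart_commands(service_name):
    """Return the stop/start command pairs that restart a service."""
    return [["net", "stop", service_name], ["net", "start", service_name]]

def restart_service(service_name):
    """Run the commands (Windows only; needs administrative rights)."""
    for cmd in restart_commands(service_name):
        subprocess.run(cmd, check=True)
```

Calling `restart_service("Spooler")` from an elevated prompt would stop and then start the Print Spooler, which matches answer D above.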

To protect a server in case of a blackout, you should use a/an:

A. Uninterruptible Power Supply.
B. Dedicated surge protector.
C. Power Supply Unit.
D. Redundant power supply.
E. Hot-swappable power supply.
F. Line conditioner.

Answer: A

The Power-On Self-Test (POST) runs when a computer first boots.
Which software component issues this test?

A. Complementary Metal Oxide Semiconductor
B. Northbridge On Board Chip
C. Basic Input/Output System
D. Southbridge On Board Chip

Answer: C

Explanation: The four main functions of a PC BIOS (Basic Input/Output System) are:
POST – Tests the computer hardware and makes sure no errors exist before loading the operating system. Additional information on the POST can be found on our POST and Beep Codes page.
Bootstrap Loader – Locates the operating system. If a capable operating system is located, the BIOS passes control to it.
BIOS drivers – Low-level drivers that give the computer basic operational control over its hardware.
BIOS (CMOS) Setup – A configuration program that allows you to configure hardware settings, including system settings such as computer passwords, time, and date.
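The boot flow described above can be modeled, purely as an illustration, with a toy function; the stage names below are just labels for the explanation, not real firmware interfaces.

```python
# Toy model of the BIOS boot flow: POST runs first, and the bootstrap
# loader hands control to the OS only if POST found no hardware errors.
def boot(post_errors=(), os_found=True):
    """Return the stages reached during a simulated BIOS boot."""
    stages = ["POST"]
    if post_errors:              # hardware errors halt the boot
        stages.append("halt: " + ", ".join(post_errors))
        return stages
    stages.append("bootstrap loader")
    if os_found:                 # BIOS passes control to the OS
        stages.append("OS handoff")
    else:
        stages.append("no bootable device")
    return stages
```

A clean run reaches POST, the bootstrap loader, and the OS handoff in that order; a POST error stops the sequence before the loader ever runs.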

You have an Active Directory infrastructure that contains one domain and seven domain controllers. How many forests can you have without creating any trusts?

A. 0
B. 1
C. 7
D. 8

Answer: B

Explanation: A single forest can contain many domains (here, one domain with seven domain controllers), and every domain within a forest is automatically connected by two-way transitive trusts. Joining a second, disjoint Windows Server 2003 forest would require explicitly creating a one-way or two-way forest trust, so without creating any trusts you can have only one forest.

You are troubleshooting a permissions issue with the Reports share. The permissions are shown in the following image:

The groups connect to the share.
Use the drop-down menus to select the answer choice that answers each question. Each correct selection is worth one point.


Which RAID level mirrors a set of disks and then stripes across the disks?

D. RAID 10

Answer: D

Explanation: RAID 1+0, sometimes written RAID 1&0 and commonly called RAID 10, first mirrors pairs of disks and then stripes data across the mirrored pairs: a stripe of mirrors.
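As an illustration (not part of the exam material), the placement of a logical block in a four-disk RAID 10 array can be sketched as follows; the disk-numbering convention here is an assumption made for the example.

```python
# Minimal sketch of RAID 10 block placement: data is striped across
# mirrored pairs, and each block is written to both disks of its pair.
def raid10_disks(logical_block, num_pairs=2):
    """Return (primary disk, mirror disk, stripe row) for a block."""
    pair = logical_block % num_pairs      # striping across mirror pairs
    stripe = logical_block // num_pairs   # row within each disk
    primary = 2 * pair                    # the two disks of the pair
    mirror = 2 * pair + 1                 # hold identical copies
    return (primary, mirror, stripe)
```

With four disks (two mirrored pairs), block 0 lands on disks 0 and 1, block 1 on disks 2 and 3, block 2 back on disks 0 and 1, and so on; losing one disk of a pair costs no data, because its mirror holds the copy.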

Click here to view complete Q&A of 98-365 exam

MCTS Training, MCITP Training

Best Microsoft MCTS Certification, Microsoft 98-365 Training at

74-344 Managing Programs and Projects with Project Server 2013

You are employed as an analyst at a company that makes use of Project Server 2013 in their environment.
You are currently performing a Portfolio Analysis. You want to identify projects that should be included in or excluded from the portfolio automatically.
Which of the following actions should you take?

A. You should consider making use of the Filtering options.
B. You should consider making use of the Sorting options.
C. You should consider making use of the Grouping options.
D. You should consider making use of the Force In and Force Out options.

Answer: D


You are employed as a project manager at a company that makes use of Project Server 2013 in their environment.
Edit permissions have been granted to all project managers. After successfully editing and
publishing a project in Project Web App (PWA), you are informed that other project managers are
unable to edit your project.
You then access the Project Center in PWA to fix the problem.
Which of the following actions should you take?

A. You should consider making use of the Resource Plan button.
B. You should consider making use of the Build Team button.
C. You should consider making use of the Check in My Projects button.
D. You should consider making use of the Project Permissions button.

Answer: C


You are employed as a portfolio manager at a company that makes use of Project Online in their environment.
The following have been set for a portfolio selection:
•Business drivers
•The main constraints to identify the efficient frontier.
The company has accumulated business cases for new proposals, of which a large number can apply to the same business requirement.
You have been instructed to make sure that the analysis generates the most suitable proposal
with regards to cost and resources. You also have to make sure that the portfolio selection does
not include any recurring efforts.
Which of the following actions should you take?

A. You should consider creating a mutual exclusion dependency among all these projects.
B. You should consider creating a mutual inclusion dependency among all these projects.
C. You should consider creating a specific exclusion dependency among all these projects.
D. You should consider creating a specific inclusion dependency among all these projects.

Answer: A


You are employed as a program manager at a company that makes use of Project Server 2013 in their environment. The company has a data warehouse that collects relational information from various business areas.
The execution of this data warehouse is currently your responsibility.
You want to make sure that project managers have the ability to administer the execution for a
business area as individual projects, while the dependencies are still accepted at a program level.
You have instructed the project managers to create, save, and publish sub-projects for every area.
Which of the following actions should you take NEXT?

A. You should consider defining dependencies.
B. You should consider creating a master project file.
C. You should consider inserting the sub-projects into a program-level project.
D. You should consider creating a shared project file.

Answer: C


You are employed as a program manager at a company that makes use of Project Server 2013 and Project Professional 2013 in their environment. The company is in the process of implementing a data warehouse. You have been given the responsibility of supervising this process.
Part of your duties is to configure a program master project that includes subprojects for every implementation area. It must be possible to alter the dependencies between projects.
You need to achieve your goal in the shortest time possible.
Which of the following actions should you take?

A. You should consider making use of Project Server 2013 to access the program-level project
from Project Web App (PWA).
B. You should consider making use of Project Professional 2013 to access the program-level
project from Project Web App (PWA).
C. You should consider making use of Project Server 2013 to access each of the required
subprojects from Project Web App (PWA).
D. You should consider making use of Project Professional 2013 to access each of the required
subprojects from Project Web App (PWA).

Answer: B




Click here to view complete Q&A of 74-344 exam

MCTS Training, MCITP Training

Best Microsoft MCTS Certification, Microsoft 74-344 Training at

Exam 70-697 Configuring Windows Devices (beta)

Published: September 1, 2015
Languages: English
Audiences: IT professionals
Technology Windows 10
Credit toward certification: Specialist

Skills measured
This exam measures your ability to accomplish the technical tasks listed below. The percentages indicate the relative weight of each major topic area on the exam. The higher the percentage, the more questions you are likely to see on that content area on the exam. View video tutorials about the variety of question types on Microsoft exams.

Please note that the questions may test on, but will not be limited to, the topics described in the bulleted text.

Do you have feedback about the relevance of the skills measured on this exam? Please send Microsoft your comments. All feedback will be reviewed and incorporated as appropriate while still maintaining the validity and reliability of the certification process. Note that Microsoft will not respond directly to your feedback. We appreciate your input in ensuring the quality of the Microsoft Certification program.

If you have concerns about specific questions on this exam, please submit an exam challenge.

Manage identity (13%)
Support Windows Store and cloud apps
Install and manage software by using Microsoft Office 365 and Windows Store apps, sideload apps by using Microsoft Intune, sideload apps into online and offline images, deeplink apps by using Microsoft Intune, integrate Microsoft account including personalization settings
Support authentication and authorization
Identifying and resolving issues related to the following: Multi-factor authentication including certificates, Microsoft Passport, virtual smart cards, picture passwords, and biometrics; workgroup vs. domain, Homegroup, computer and user authentication including secure channel, account policies, credential caching, and Credential Manager; local account vs. Microsoft account; Workplace Join; Configuring Windows Hello

Plan desktop and device deployment (13%)
Migrate and configure user data
Migrate user profiles; configure folder location; configure profiles including profile version, local, roaming, and mandatory
Configure Hyper-V
Create and configure virtual machines including integration services, create and manage checkpoints, create and configure virtual switches, create and configure virtual disks, move a virtual machine’s storage
Configure mobility options
Configure offline file policies, configure power policies, configure Windows To Go, configure sync options, configure Wi-Fi direct, files, powercfg, Sync Center
Configure security for mobile devices
Configure BitLocker, configure startup key storage

Plan and implement a Microsoft Intune device management solution (11%)
Support mobile devices
Support mobile device policies including security policies, remote access, and remote wipe; support mobile access and data synchronization including Work Folders and Sync Center; support broadband connectivity including broadband tethering and metered networks; support Mobile Device Management by using Microsoft Intune, including Windows Phone, iOS, and Android
Deploy software updates by using Microsoft Intune
Use reports and In-Console Monitoring to identify required updates, approve or decline updates, configure automatic approval settings, configure deadlines for update installations, deploy third-party updates
Manage devices with Microsoft Intune
Provision user accounts, enroll devices, view and manage all managed devices, configure the Microsoft Intune subscriptions, configure the Microsoft Intune connector site system role, manage user and computer groups, configure monitoring and alerts, manage policies, manage remote computers

Configure networking (11%)
Configure IP settings
Configure name resolution, connect to a network, configure network locations
Configure networking settings
Connect to a wireless network, manage preferred wireless networks, configure network adapters, configure location-aware printing
Configure and maintain network security
Configure Windows Firewall, configure Windows Firewall with Advanced Security, configure connection security rules (IPsec), configure authenticated exceptions, configure network discovery

Configure storage (10%)
Support data storage
Identifying and resolving issues related to the following: DFS client including caching settings, storage spaces including capacity and fault tolerance, OneDrive
Support data security
Identifying and resolving issues related to the following: Permissions including share, NTFS, and Dynamic Access Control (DAC); Encrypting File System (EFS) including Data Recovery Agent; access to removable media; BitLocker and BitLocker To Go including Data Recovery Agent and Microsoft BitLocker Administration and Monitoring (MBAM)

Manage data access and protection (11%)
Configure shared resources
Configure shared folder permissions, configure HomeGroup settings, configure libraries, configure shared printers, configure OneDrive
Configure file and folder access
Encrypt files and folders by using EFS, configure NTFS permissions, configure disk quotas, configure file access auditing
Configure authentication and authorization

Manage remote access (10%)
Configure remote connections
Configure remote authentication, configure Remote Desktop settings, configure VPN connections and authentication, enable VPN reconnect, configure broadband tethering
Configure mobility options
Configure offline file policies, configure power policies, configure Windows To Go, configure sync options, configure Wi-Fi direct

Manage apps (11%)
Deploy and manage Azure RemoteApp
Configure RemoteApp and Desktop Connections settings, configure Group Policy Objects (GPOs) for signed packages, subscribe to the Azure RemoteApp and Desktop Connections feeds, export and import Azure RemoteApp configurations, support iOS and Android, configure remote desktop web access for Azure RemoteApp distribution
Support desktop apps
Support considerations include the following: desktop app compatibility using Application Compatibility Toolkit (ACT) including shims and compatibility database; desktop application co-existence using Hyper-V, Azure RemoteApp, and App-V; installation and configuration of User Experience Virtualization (UE-V); deploy desktop apps by using Microsoft Intune

Manage updates and recovery (10%)

Configure system recovery
Configure a recovery drive, configure system restore, perform a refresh or recycle, perform a driver rollback, configure restore points
Configure file recovery
Restore previous versions of files and folders, configure File History, recover files from OneDrive
Configure and manage updates
Configure update settings, configure Windows Update policies, manage update history, roll back updates, update Windows Store apps



Click here to view complete Q&A of 70-697 exam

MCTS Training, MCITP Training

Best Microsoft MCTS Certification, Microsoft 70-697 Training at


SDN and NFV: The brains behind the “smart” city

In major metropolitan areas and smaller cities alike, governments are adopting software-defined networking (SDN) and network function virtualization (NFV) to deliver the agility and flexibility needed to support adoption of “smart” technologies that enhance the livability, workability and sustainability of their towns.

Today there are billions of devices and sensors being deployed that can automatically collect data on everything from traffic to weather, to energy usage, water consumption, carbon dioxide levels and more. Once collected, the data has to be aggregated and transported to stakeholders where it is stored, organized and analyzed to understand what’s happening and what’s likely to happen in the future.

There’s a seemingly endless list of potential benefits. Transportation departments can make informed decisions to alleviate traffic jams. Sources of water leaks can be pinpointed and proactive repairs scheduled. Smart payments can be made across city agencies, allowing citizens to complete official payments quickly and reducing the government employee time needed to facilitate such transactions. And even public safety can be improved by using automated surveillance to help police watch high-crime hotspots.

Of particular interest is how healthcare services can be improved. There is already a push to adopt more efficient and effective digital technology management systems to better store, secure and retrieve huge amounts of patient data. Going a step further, a smart city is better equipped to support telemedicine innovations that require the highest quality, uninterrupted network service. Telesurgery, for example, could allow for specialized surgeons to help local surgeons perform emergency procedures from remote locations — the reduction of wait time before surgery can save numerous lives in emergency situations, and can help cities and their hospital systems attract the brightest minds in medical research and practice.

The smart city of today

While the smart city is expected to become the norm, examples exist today. Barcelona is recognized for environmental initiatives (such as electric vehicles and bus networks), city-wide free Wi-Fi, smart parking, and many more programs, all of which benefit from smart city initiatives. With a population of 1.6 million citizens, Barcelona shows that smart city technologies can be implemented regardless of city size.

But even smaller cities are benefitting from going “smart.” In 2013 Cherry Hill, New Jersey, with a population of only 71,000, began using a web-based data management tool along with smart sensors to track the way electricity, water, fuel and consumables are being utilized, then compared usage between municipal facilities to identify ways to be more efficient. Chattanooga, Tennessee, population 170,000, along with its investment to provide the fastest Internet service in the U.S., has recently begun developing smart city solutions for education, healthcare and public safety.

How do cities become smart? The most immediate need is to converge disparate communications networks run by various agencies to ensure seamless connectivity. To achieve this, packet optical based connectivity is proving critical, thanks largely to the flexibility and cost advantages it provides. Then atop the packet optical foundation sits technology that enables NFV and the applications running on COTS (commercial off-the-shelf) equipment in some form of virtualized environment. SDN and NFV allow for the quick and virtual deployment of services to support multiple data traffic and priority types, as well as increasingly unpredictable data flows of IoT.

Decoupling network functions from the hardware means that architectures can be more easily tweaked as IoT requirements change. Also, SDN and NFV can yield a more agile service provision process by dynamically defining the network that connects the IoT end devices to back-end data centers or cloud services.

The dynamic nature of monitoring end-points, location, and scale will require SDN so that networks can be programmable and reconfigured to accommodate moving workloads. Take, for example, allocating bandwidth to a stadium for better streaming performance of an event as the number of users watching remotely on demand goes up; this sort of dynamic network-on-demand capability is enabled by SDN. Additionally, NFV can play a key role because many of the monitoring points that make the city “smart” are not purpose-built, hardware-centric solutions, but rather software-based solutions that can be run on demand.
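To make the network-on-demand idea concrete, the sketch below composes the kind of JSON request an SDN controller's northbound REST API might accept to reserve extra bandwidth for a venue during an event. The endpoint schema, field names, and controller are hypothetical, invented for illustration; real controllers define their own APIs.

```python
# Hypothetical sketch: build a bandwidth-calendaring request for an SDN
# controller's northbound API. All field names here are invented for
# illustration; a real controller publishes its own request schema.
import json

def bandwidth_request(site, mbps, start, end):
    """Build a JSON payload reserving extra bandwidth for a time window."""
    return json.dumps({
        "site": site,                      # edge node serving the venue
        "bandwidth_mbps": mbps,            # requested extra capacity
        "window": {"start": start, "end": end},
        "policy": "on-demand",             # release capacity afterwards
    }, sort_keys=True)

# Example: boost the stadium's link for the evening of an event.
payload = bandwidth_request("stadium-edge-01", 5000,
                            "2015-11-01T18:00Z", "2015-11-01T23:00Z")
```

The payload would then be POSTed to the controller, which reprograms the underlying switches; when the window closes, the capacity is released back to the shared pool.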

With virtual network functions (VNF), the network can react in a more agile manner as the municipality requires. This is particularly important because the network underlying the smart city must be able to extract high levels of contextual insight through real-time analytics conducted on extremely large datasets if systems are to be able to problem-solve in real-time; for example, automatically diverting traffic away from a street where a traffic incident has taken place.

SDN and NFV may enable the load balancing, service chaining and bandwidth calendaring needed to manage networks that are unprecedented in scale. In addition, SDN and NFV can ensure network-level data security and protection against intrusions – which is critical given the near-impossible task of securing the numerous sensor and device end points in smart city environments.
Smart city business models

In their smart city initiatives, cities large and small are addressing issues regarding planning, infrastructure, systems operations, citizen engagement, data sharing, and more. The scale might vary, but all are trying to converge networks in order to provide better services to citizens in an era of shrinking budgets. As such, the decision on how to go about making this a reality is important. There are four major smart city business models to consider, as defined by analysts at Frost & Sullivan (“Global Smart City Market a $1.5T Growth Opportunity In 2020”):

Build Own Operate (BOO): In a BOO model, municipalities own, control, and independently build the city infrastructure needed, and deliver the smart city services themselves. Both operation and maintenance of these services is under the municipality’s control, often headed up by their city planner.

Build Operate Transfer (BOT): Whereas in a BOO model the municipality is always in charge of the operation and management of smart city services, in a BOT model that is only the case after an initial period: the building of the smart city infrastructure and the initial service operation are first handled by a trusted partner appointed by the city planner. Then, once everything is built and in motion, operation is handed back over to the city.

Open Business Model (OBM): In an OBM model, the city planner is open to any qualified company building city infrastructure and providing smart city services, so long as they stay within set guidelines and regulations.

Build Operate Manage (BOM): Finally, there is the BOM model, under which the majority of smart city projects are likely to fall. In this model, the smart city planner appoints a trusted partner to develop the city infrastructure and services. The city planner then has no further role beyond the appointment: the partner is in charge of operating and managing smart city services.

SDN and NFV: The keys to the (smart) city
With the appropriate business model in place and the network foundation laid out, the technology needs to be implemented to enable virtualization. Virtualized applications allow for the flexibility of numerous data types, and the scalability to transport huge amounts of data the city aims to use in its analysis.

SDN and NFV reduce the hardware, power, and space requirements for deploying network functions through the use of industry-standard high-volume servers, switches and storage; they make network applications portable and upgradeable with software; and they give cities of all sizes the agility and scalability to tackle the needs and trends of the future as they arise. Like the brain’s neural pathways throughout a body, SDN and NFV are essential in making the smart city and its networks connect and talk to each other in a meaningful way.



Click here to view complete Q&A of 98-361 exam

MCTS Training, MCITP Training

Best Microsoft MCTS Certification, Microsoft 98-361 Training at


Are wearables worth the cybersecurity risk in the enterprise?

How should the enterprise address the growing adoption of wearables?

The Internet of Things and wearable technology are becoming more integrated into our everyday lives. If you haven’t already, now is the time to begin planning for their security implications in the enterprise.

According to research firm IHS Technology, more than 200 million wearables will be in use by 2018. That’s 200 million more chances of a security issue within your organization. If that number doesn’t startle you, Gartner further predicts that 30% of these devices will be invisible to the eye. Devices like smart contact lenses and smart jewelry will be making their way into your workplace. Will you be ready to keep them secure even if you can’t see them?

According to TechTarget, “Although there haven’t been any major publicized attacks involving wearables yet, as the technology becomes more widely incorporated into business environments and processes, hackers will no doubt look to access the data wearables hold or use them as an entry point into a corporate network.”

While it’s true that IT cannot possibly be prepared for every potential risk, as an industry we need to do a better job of assessing risks before an attack happens. This includes being prepared for new devices and trends that will pose all new risks for our organizations.

How many of us read the news about a new data breach practically every day and have still yet to improve security measures within our own organizations? If you’re thinking “guilty,” you’re not alone. Organizational change can’t always happen overnight, but we can’t take our eyes off the ball either.

In a 2014 report, 86% of respondents expressed concern for wearables increasing the risk of data security breaches. IT Business Edge suggests, “With enterprise-sensitive information now being transferred from wrist to wrist, businesses should prepare early and create security policies and procedures regarding the use of wearables within the enterprise.” Updating policies is a smart move, but the hard part is anticipating the nature and use of these new devices and then following through with implementing procedures to address them. It seems it may be easier said than done.

We all know that wearables pose security challenges, but how do IT departments begin to address them? This can be especially challenging considering that some of the security risks lie with the device manufacturers rather than with the teams responsible for securing the enterprise network the technology is connected to. Many wearables can store data locally without encryption, PIN protection, or user-authentication features, meaning that if a device is lost or stolen, anyone could potentially access the information.

Beyond the data breach threat of sensitive information being accessed by the wrong hands, wearables take it a step further by providing discreet access for people to use audio or video surveillance to capture sensitive information. Is someone on your own team capturing confidential information with their smartwatch? You may not realize it’s happening until it’s too late.

How can we effectively provide security on devices that appear insecure by design? It seems the safest option is to ban all wearables in the enterprise – there are too many risks associated with them, many of which seemingly cannot be controlled. If this thought has crossed your mind, I may have bad news for you. This isn’t really an option for most organizations, especially those looking to stay current in today’s fast-paced society. TechTarget’s Michael Cobb explains, “Banning wearable technology outright may well drive employees from shadow IT to rogue IT – which is much harder to deal with.”

If the threat of rogue IT isn’t enough to convince you, also consider that there may very well be real benefits of wearables for your organization. According to Forrester, the industries that will likely benefit from this technology in the short term are healthcare, retail, and public safety organizations. As an example in the healthcare field, Forrester suggests that “the ability of biometric sensors to continually monitor various health stats, such as blood glucose, blood pressure and sleep patterns, and then send them regularly to healthcare organizations for monitoring could transform health reporting.” There are many examples for other industries, and the market continues to evolve every day.

It all boils down to this: enterprise wearables present a classic case of risk versus reward. We know there are many security risks, but are the potential rewards great enough to make the risks worthwhile? This answer may vary based on your industry and organization, but chances are there are many real business opportunities that can come from wearable technology.

If you haven’t already, it’s time to start talking with your teams about what those opportunities are and the best ways to ease the associated risks. As we all know, the technology will move forward with or without us and the ones who can effectively adapt will be the ones who succeed. It’s our job to make sure our organizations are on the right side of that equation.


MCTS Training, MCITP Training

Best Microsoft MCP Certification, Microsoft MCSE Training at

Sony BMG Rootkit Scandal: 10 Years Later

Object lessons from infamous 2005 Sony BMG rootkit security/privacy incident are many — and Sony’s still paying a price for its ham-handed DRM overreach today.

Hackers really have had their way with Sony over the past year, taking down its PlayStation Network last Christmas Day and creating an international incident by exposing confidential data from Sony Pictures Entertainment in response to The Interview, a comedy about a plot to assassinate North Korea’s leader. Some say all this is karmic payback for what’s become known as a seminal moment in malware history: Sony BMG sneaking rootkits onto music CDs 10 years ago in the name of digital rights management.

“In a sense, it was the first thing Sony did that made hackers love to hate them,” says Bruce Schneier, CTO for incident response platform provider Resilient Systems in Cambridge, Mass.

Mikko Hypponen, chief research officer at F-Secure, the Helsinki-based security company that was an early critic of Sony’s actions, adds:

“Because of stunts like the music rootkit and suing Playstation jailbreakers and emulator makers, Sony is an easy company to hate for many. I guess one lesson here is that you really don’t want to make yourself a target.

“When protecting its own data, copyrights, money, margins and power, Sony does a great job. Customer data? Not so great,” says Hypponen, whose company tried to get Sony BMG to address the rootkit problem before word of the invasive software went public. “So, better safe than Sony.”

The Sony BMG scandal unfolded in late 2005 after the company (now Sony Music Entertainment) secretly installed Extended Copy Protection (XCP) and MediaMax CD-3 software on millions of music discs to keep buyers from burning copies of the CDs via their computers and to inform Sony BMG about what these customers were up to. The software, which proved undetectable by anti-virus and anti-spyware programs, opened the door for other malware to infiltrate Windows PCs unseen as well. (As if the buyers of CDs featuring music from the likes of Celine Dion and Ricky Martin weren’t already being punished enough.)

The Sony rootkit became something of a cultural phenomenon. It wound up as a punch line in comic strips like Fox Trot, it became a custom T-shirt logo and even was the subject of class skits shared on YouTube. Mac fanboys and fangirls smirked on the sidelines.

Security researcher Dan Kaminsky estimated that the Sony rootkit made its mark on hundreds of thousands of networks in dozens of countries – so this wasn’t just a consumer issue, but an enterprise network one as well.

Once Winternals security researcher Mark Russinovich — who has risen to CTO for Microsoft Azure after Microsoft snapped up Winternals in 2006 — exposed the rootkit on Halloween of 2005, all hell broke loose.

Sony BMG botched its initial response: “Most people don’t even know what a rootkit is, so why should they care about it?” went the infamous quote from Thomas Hesse, then president of Sony BMG’s Global Digital Business. The company recalled products, issued and re-issued rootkit removal tools, and settled lawsuits with a number of states, the Federal Trade Commission and the Electronic Frontier Foundation.

Microsoft and security vendors were also chastised for their relative silence and slow response regarding the rootkit and malware threat. In later years, debate emerged over how the term “rootkit” should be defined, and whether intent to maliciously seize control of a user’s system should be at the heart of it.

In looking back at the incident now, the question arises about how such a privacy and security affront would be handled these days by everyone from the government to customers to vendors.

“In theory, the Federal Trade Commission would have more authority to go after [Sony BMG] since the FTC’s use of its section 5 power has been upheld by the courts,” says Scott Bradner, University Technology Security Officer at Harvard. “The FTC could easily see the installation of an undisclosed rootkit as fitting its definition of unfair competitive practices.”

Bill Bonney, principal consulting analyst with new research and consulting firm TechVision Research, says he can’t speak to how the law might protect consumers from a modern-day Sony BMG rootkit, but “with the backlash we have seen for all types of non-transparent ways (spying, exploiting, etc.) companies are dealing with their customers, I think in the court of public opinion the response could be pretty substantial and, as happened recently with the EU acting (theoretically) because of [the NSA’s PRISM program], if the issue is egregious enough there could be legal or regulatory consequences.”

As for how customers might react today, we’ve all seen how quickly people turn to social media to take companies to task for any product or service shortcoming or any business shenanigans. Look no further than Lenovo, which earlier this year got a strong dose of negative customer reaction when it admittedly screwed up by pre-loading Superfish crapware onto laptops. That software injected product recommendations into search results and opened a serious security hole by interfering with SSL-encrypted Web traffic.

In terms of how security vendors now fare at spotting malware or other unsavory software, Schneier says “There’s always been that tension, even now with stuff the NSA and FBI does, about how this stuff is classified. I think [the vendors] are getting better, but they’re still not perfect… It’s hard to know what they still let by.”

Noted tech activist Cory Doctorow, writing for Boing Boing earlier this month, explains that some vendors had their reasons for not exposing the Sony rootkit right away. “Russinovich was not the first researcher to discover the Sony Rootkit, just the first researcher to blow the whistle on it. The other researchers were advised by their lawyers that any report on the rootkit would violate section 1201 of the DMCA, a 1998 law that prohibits removing ‘copyright protection’ software. The gap between discovery and reporting gave the infection a long time to spread.”

Reasons for hope, though, include recent revelations by the likes of Malwarebytes, which warned users that a malicious variety of adware dubbed eFast was hijacking the Chrome browser and replacing it by becoming the default browser associated with common file types like JPEG and HTML.

Schneier says it’s important that some of the more prominent security and anti-virus companies — from Kaspersky in Russia to F-Secure in Finland to Symantec in the United States to Panda Security in Spain — are spread across the globe given that shady software practices such as the spread of rootkits are now often the work of governments.

“You have enough government diversity that if you have one company deliberately not finding something, then others will,” says Schneier, who wrote eloquently about the Sony BMG affair back in 2005.

The non-profit Free Software Foundation Europe (FSFE) has been calling attention to the Sony BMG rootkit’s 10th anniversary, urging the masses to “Make some noise and write about this fiasco” involving DRM. The FSFE, seeing DRM as an anti-competitive practice, refers to the words behind the acronym as digital restriction management rather than the more common digital rights management.

F-Secure Chief Research Officer Mikko Hypponen: “I guess one lesson here is that you really don’t want to make yourself a target.”

Even worse, as the recent scandal involving VW’s emissions test circumvention software shows, is that businesses are still using secret software to their advantage without necessarily caring about the broader implications.

The object lessons from the Sony BMG scandal are many, and might be of interest to those arguing to build encryption backdoors into products for legitimate purposes but that might be turned into exploitable vulnerabilities.

One basic lesson is that you shouldn’t mimic the bad behavior that you’re ostensibly standing against, as Sony BMG did “in at least appearing to violate the licensing terms of the PC manufacturers,” TechVision’s Bonney says.

And yes, there is a warning from the Sony BMG episode “not to weaponize your own products. You are inviting a response,” he says.


Click here to view complete Q&A of 70-355 exam

MCTS Training, MCITP Training

Best Microsoft MCTS Certification, Microsoft 70-355 Training at



Exam 70-695 Deploying Windows Desktops and Enterprise Applications


Skills measured
This exam measures your ability to accomplish the technical tasks listed below. The percentages indicate the relative weight of each major topic area on the exam. The higher the percentage, the more questions you are likely to see on that content area on the exam. View video tutorials about the variety of question types on Microsoft exams.

Please note that the questions may test on, but will not be limited to, the topics described in the bulleted text.

Do you have feedback about the relevance of the skills measured on this exam? Please send Microsoft your comments. All feedback will be reviewed and incorporated as appropriate while still maintaining the validity and reliability of the certification process. Note that Microsoft will not respond directly to your feedback. We appreciate your input in ensuring the quality of the Microsoft Certification program.

If you have concerns about specific questions on this exam, please submit an exam challenge.

Implement the Operating System Deployment (OSD) infrastructure (21%)
Assess the computing environment
Configure and implement the Microsoft Assessment and Planning (MAP) Toolkit, assess Configuration Manager reports, integrate MAP with Microsoft System Center 2012 Configuration Manager, determine network load capacity
Plan and implement user state migration
Design considerations, including determining which user data and settings to preserve, hard-link versus remote storage, mitigation plan for non-migrated applications, and wipe-and-load migration versus side-by-side migration; estimate migration store size; secure migrated data; create a User State Migration Tool (USMT) package
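A migration of this shape is usually driven by the two USMT command-line tools; a minimal hard-link, wipe-and-load sketch (the store and log paths are illustrative, not prescriptive):

```bat
:: Capture user state on the existing installation; a hard-link store
:: leaves the data in place on disk instead of copying it out.
scanstate.exe C:\MigStore /o /hardlink /nocompress ^
    /i:MigUser.xml /i:MigApp.xml /v:13 /l:C:\Logs\scanstate.log

:: After the new operating system is laid down, restore the state.
loadstate.exe C:\MigStore /hardlink /nocompress ^
    /i:MigUser.xml /i:MigApp.xml /v:13 /l:C:\Logs\loadstate.log
```

Note that /nocompress is required alongside /hardlink, which is why it appears in both commands.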
Configure the deployment infrastructure
Configure Windows Deployment Services (WDS), install and configure Microsoft Deployment Toolkit (MDT), identify network services that support deployments, select Configuration Manager distribution points, support BitLocker
Configure and manage activation
Configure KMS, MAK, and Active Directory–based activation; identify the appropriate activation tool
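Each client-side activation model maps to a few slmgr.vbs invocations; a hedged sketch (the KMS host name and the product key are placeholders):

```bat
:: KMS scenario: point the client at a specific KMS host, then activate.
cscript //nologo slmgr.vbs /skms kms.contoso.com:1688
cscript //nologo slmgr.vbs /ato

:: MAK scenario: install a Multiple Activation Key, then activate.
cscript //nologo slmgr.vbs /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
cscript //nologo slmgr.vbs /ato
```

Active Directory–based activation, by contrast, is configured centrally through the Volume Activation Tools rather than per client.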

Implement a Lite Touch deployment (18%)
Install and configure WDS
Configure unicast/multicast, add images to WDS, configure scheduling, restrict who can receive images
Configure MDT
Configure deployment shares, manage the driver pool, configure task sequences, configure customsettings.ini
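CustomSettings.ini is a plain INI file whose [Default] section pre-answers deployment wizard prompts; a small illustrative fragment (the values shown are examples, not defaults):

```ini
[Settings]
Priority=Default

[Default]
OSInstall=Y
SkipAdminPassword=YES
SkipProductKey=YES
SkipCapture=YES
TimeZoneName=Pacific Standard Time
```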
Create and manage answer files
Identify the appropriate location for answer files, identify the required number of answer files, identify the appropriate setup phase for answer files, configure answer file settings, create autounattend.xml answer files
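An answer file is XML keyed to setup passes; a heavily trimmed autounattend.xml sketch covering only the windowsPE pass (the locale values are examples):

```xml
<?xml version="1.0" encoding="utf-8"?>
<unattend xmlns="urn:schemas-microsoft-com:unattend">
  <settings pass="windowsPE">
    <component name="Microsoft-Windows-International-Core-WinPE"
               processorArchitecture="amd64"
               publicKeyToken="31bf3856ad364e35"
               language="neutral" versionScope="nonSxS">
      <SetupUILanguage>
        <UILanguage>en-US</UILanguage>
      </SetupUILanguage>
      <InputLocale>en-US</InputLocale>
      <SystemLocale>en-US</SystemLocale>
      <UserLocale>en-US</UserLocale>
    </component>
  </settings>
</unattend>
```

A complete answer file would add further components and passes (for example, specialize and oobeSystem).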

Implement a Zero Touch deployment (20%)
Configure Configuration Manager for OSD
Configure deployment packages and applications, configure task sequences, manage the driver pool, manage boot and deployment images
Configure distribution points
Configure unicast/multicast, configure PXE, configure deployments to distribution points and distribution point groups
Configure MDT and Configuration Manager integration
Use MDT-specific task sequences; create MDT boot images; create custom task sequences, using MDT components

Create and maintain desktop images (21%)
Plan images
Design considerations, including thin, thick, and hybrid images, WDS image types, image format (VHD or WIM), number of images based on operating system or hardware platform, drivers, and operating system features
Capture images
Prepare the operating system for capture, create capture images using WDS, capture an image to an existing or new WIM file, capture an operating system image using Configuration Manager
Maintain images
Update images using DISM; apply updates, drivers, settings, and files to online and offline images; apply service packs to images; manage embedded applications
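Offline servicing with DISM follows a mount, modify, commit pattern; a sketch with illustrative paths:

```bat
:: Mount the image, inject an update and drivers, then commit the changes.
dism /Mount-Image /ImageFile:C:\Images\install.wim /Index:1 /MountDir:C:\Mount
dism /Image:C:\Mount /Add-Package /PackagePath:C:\Updates\update1.msu
dism /Image:C:\Mount /Add-Driver /Driver:C:\Drivers /Recurse
dism /Unmount-Image /MountDir:C:\Mount /Commit
```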

Prepare and deploy the application environment (20%)
Plan for and implement application compatibility and remediation
Planning considerations, including RDS, VDI, Client Hyper-V, and 32 bit versus 64 bit; plan for application version co-existence; use the Application Compatibility Toolkit (ACT); deploy compatibility fixes
Deploy Office 2013 by using MSI
Customize deployment, manage Office 2013 activation, manage Office 2013 settings, integrate Lite Touch deployment, re-arm Office 2013, provide slipstream updates
Deploy Office 2013 by using click-to-run (C2R)
Configure licensing, customize deployment, configure updates, monitor usage by using the Telemetry Dashboard



Click here to view complete Q&A of 70-695 exam

MCTS Training, MCITP Training

Best Microsoft MCTS Certification, Microsoft 70-695 Training at


Aruba succeeded where other Wi-Fi companies failed: A talk with the founder about the acquisition by HP, the future of Wi-Fi

Wireless LAN stalwart Aruba was acquired by HP last March for $3 billion, so Network World Editor in Chief John Dix visited Aruba co-founder Keerti Melkote to see how the integration is going and to get his keen insights on the evolution of Wi-Fi. Melkote has seen it all, growing Aruba from a startup in 2002 into the largest independent Wi-Fi company, with 1,800 employees. After Aruba was pulled into HP he was named CTO of the combined network business, which employs roughly 5,000. In this far-ranging interview Melkote talks about product integration and rationalization, the promise of location services and IoT, the competition, the arrival of gigabit Wi-Fi and what comes next.

Why sell to HP?
Aruba was doing really well as a company. We gained market share through every technology transition — from 802.11a to “b” to “g” and “n” and now “ac” — and today we’re sitting at roughly 15% global share and have a lot more than that in segments like higher education and the federal market. But we were at a point where we could win more if we had an audience at the CIO level, and increasingly we were getting exposed to global projects that required us to have a large partner in tow to give us the people onsite to execute on a worldwide basis.

So we began looking for what internally we called a big brother to help us scale to that next level. We talked to the usual suspects in terms of professional services, consulting companies, etc., but then HP approached us and said they were interested in partnering with us to go after the campus market, which is changing from wired to wireless.

HP has a good history on the wired side, so we felt this was an opportune moment to bring the sides together, but go to market with a mobile-first story. After all, as customers re-architect their infrastructure they’re not going with four cable drops to every desk, they’re looking at where the traffic is, which is all on the wireless networks these days. HP agreed with that and basically said, “Why don’t you guys come in and not only grow Aruba, but take all of networking within HP and make it a part of the whole ecosystem.”

So HP Networking and Aruba have come together in one organization and Dominic Orr [formerly CEO of Aruba] is the leader for that and I am Chief Technology Officer. We are focusing on integrating the Aruba products with the HP network products to create a mobile-first campus architecture.

Does the Aruba name go away and does everyone move to an HP campus?
No, and there is some exciting news there. The go-forward branding for networking products in the campus is going to be Aruba, including the wire line products. Over time you will start to see a shift in this mobile-first architecture with Aruba switching also coming to market.

Will that include the HP Networking operations in the area?
No, we have a global development model, so we have development sites here in Sunnyvale, Palo Alto and Roseville. And we have sites in India, China, Canada and in Costa Rica. There won’t be any changes to any of the development sites. As the business grows we’re going to have to grow most of those sites.

HP has bought other wireless players along the way, including Colubris and 3Com, so how does it all fit together?
Colubris was a pretty focused wireless acquisition back in 2008 and those products have done well for HP, but that customer base is ready for upgrades to 11ac, and as they upgrade they will migrate to Aruba. The former product line will be end-of-lifed over time, but we’re not going to end support for it. There is a small team supporting it, and it will continue to do so until customers are ready to migrate.

3Com was a much broader acquisition, involving data center campus products, routing, etc. Most of the R&D for 3Com is in China with H3C [the joint venture 3Com formed with Huawei Technologies before 3Com was acquired by HP in 2010]. There is a two-prong go-to-market approach for those products. There is a China go-to-market, which has done really well. In fact, they are number one, even ahead of Cisco, from an overall network market share perspective in China. For the rest of the world we were using the products to go after the enterprise.

As you probably heard recently, we are going to sell 51% of our share in H3C to a Chinese owned entity because there needs to be Chinese ownership for them to further grow share. H3C will be an independent entity on the Chinese stock market and will sell networking gear in China and HP servers and storage as well.

So that becomes our way to attack the China market while we will continue to sell the other network products to the rest of the world. Those products are doing very well, especially in the data center. They run some of the largest data centers in the world, names that are less familiar here in the U.S., but very large data centers for the likes of Alibaba, Tencent and other companies that are basically the Amazons and Facebooks of China.

3Com has a wireless portfolio called Unified Wireless. That product line will also be end-of-lifed but still supported, and as we migrate to next-generation architectures we will position Aruba for those buyers. The definitive statement we’ve made is Aruba will be the wireless LAN and mobility portfolio in general and Hewlett-Packard’s network products will be the go-forward switching products.

Two products that are really helping to integrate our product lines are: ClearPass, which is our unified policy management platform, which is going to be the first point where access management is integrated between wired and wireless; and AirWave, which is the network management product which will become the single console for the customer to manage the entire campus network. For the data center we will have a different strategy because data center management is about integrating with servers and storage and everything else, but for the campus the AirWave product will be the management product.

3Com has a product called IMC Intelligent Management Console that will continue if customers need deep wired management, but if you need to manage a mobile-first campus, AirWave will do the complete job for you.

Given your longevity and perspective in the wireless LAN business, are we where you thought we would be in terms of Wi-Fi usage when you first started on this path 13 years ago?
It’s taken longer than I thought it would, but it has certainly far surpassed my expectations. Back in 2002 there was no iPhone or iPad. Wireless was for mobile users on laptops and we believed it would become the primary means of connecting to the network and you would no longer need to cable them in. That was the basic bet we made when we started Aruba. My hope was we would get there in five to seven years and it took 15, but things always take a little bit longer than you think.

The seminal moment in our business was the introduction of the iPad. Even though the iPhone was around, most people were still connecting it to the cellular network and not Wi-Fi because of the convenience. Laptop-centric networking was still prominent, but when the iPad arrived there was no way to connect it to the wire, and that raised all sorts of challenges. How do you provide pervasive wireless connectivity when the executives who brought them in were taking them along wherever they went? Security was a big challenge because they were all personal devices.

We had developed and perfected answers for those questions over the years so it was all sort of right there for us. And the last five years has seen dramatic changes in terms of all-wireless offices, open office space architectures, etc. Microsoft Lync was also a big inflection point as well.

Why is that?
Whenever I talk to customers about pulling the cable out they always point to the phone and say, “I still need to pull a cable for that, which means I need power over Ethernet, I need an Ethernet switch in the closet, I need a PBX.” But when Lync was introduced in 2013 you could get your unified communications on your smart phone. Today, if you were to ask what is the most important device on the network, I’d say it’s the smart phone because it’s converging the computing and messaging and everything else on one device. Now you can provide a rich experience on a mobile device and do it anywhere, anytime.

Where do we stand on location-based services?
We’ve been talking about location services for a very long time. What happened was Wi-Fi based location alone wasn’t actually solving the problem. It was giving you a sense of where people were in a facility, but getting the technology to allow you to engage with somebody in physical space was not working, mostly because the operating systems on those mobile devices weren’t supporting Wi-Fi for location, just connectivity.

We have now integrated Bluetooth Low Energy (BLE) into our portfolio so you have two ways of connecting with the user; the Wi-Fi side gives you presence and Bluetooth Low Energy gives you the ability to engage on the user side so you can send notifications about where they are. That technology lets us provide tools for marketers, for retailers to send coupons, invite people into a store, and so on.

So it is finally picking up some?
It is. Actually Asia is doing well. There is a lot of construction in Asia and this is one of the demands. But the U.S. is picking up. We just implemented a large network at Levi’s Stadium right down the street here [which recently replaced Candlestick Park as home of the San Francisco 49ers].

One of the things the CEO imagined was that, as you drive from home to the game, their app would guide your experience. So they’ll take you to the right parking lot, then provide you directions to your seat, and once you are in the seat enjoying the game they wanted to provide amenities — so food and beverage ordering and the ability to watch instant replays and the like. All these things are available for a fee of course. In the first season of operation this app generated $2 million of additional sales for Levi’s Stadium.

That was a big win for us, not just for demonstrating high density Wi-Fi where we have seen regularly 3-4 gig of traffic going to the Internet, but also showing the revenue generating potential of location-based technology.

Speaking of networking a lot of things, what do you make of the Internet of Things movement?
Eventually where it all goes is integrating the Internet of Things. Every day I interact with customers there are new use cases coming up around the intersection of location-based technology and the Internet of Things. And that’s squarely in the purview of what we are doing. It’s not today. Today is still about this all-wireless workplace, but in the next five years I think you’ll see a lot more of this. There is a lot of innovation still to come.

There’s a hodgepodge of stuff used to connect sensors today, but you see Wi-Fi playing a prominent role?
Wi-Fi will definitely be an integral component, but Bluetooth Low Energy will also be important because some sensors will be battery operated. There may be a role for the evolution of ZigBee as well. That’s super low energy. ZigBee is not yet in the mainstream enterprise but I can see some successor of that happening. But sensors will look to wireless for connectivity because they need to go anywhere. You can’t have cable follow them. So the wireless fabric is becoming super-critical for that.

Switching gears a bit, how is competition changing?
We look at three key market segments: large and medium enterprises; small/medium businesses, which have completely different characteristics; and service providers. Aruba has done really well in the large and medium enterprise segment. We have done reasonably well in the small/medium segment, but there is more competition there. Ruckus has done well there. And service provider is the emerging battleground.

As a standalone company Aruba couldn’t afford to invest, frankly, in all three segments. We were focused on the large and medium enterprise and we built a good franchise. Clearly Cisco is the primary competitor there, but now as part of HP we have another go-to-market capability and investment to take on all three segments in a meaningful way, so that’s another big reason why we came together.

We just recently announced a partnership with Ericsson to go after the service provider Wi-Fi segment, and that will help us gain share. And HP has been a strong player in the small/medium business so we’re going to take Aruba down-market. We’re going to play in all three segments. I feel if we just keep executing, market share gains are possible.

Ruckus talks about optimizing the airwaves as being their key differentiator. How do you differentiate Aruba?
The four key things I talk about are the emergence of the all-wireless workplace, in-flight communications and voice, the need for deep security with bring-your-own-device, and the need for location-based services trending toward IoT.

We talked about the all-wireless workplace and location services. Regarding voice traffic, we have invested quite a bit of energy ensuring optimal utilization. Ruckus focused on the antenna technology, while we are focused on the software that goes on top of the antenna. The analogy I’ll give you is, as you walk away from an access point I can boost my antenna power to give you a better signal, and that problem is a good problem to solve if you’re in a home because you only have one access point. But in the enterprise there is a collection of access points and the problem isn’t about holding onto a client for as long as possible, but to move the client to the best access point. So the trick is to enable the client to roam from one access point to another in a very efficient way. We call this technology ClientMatch. That is the core differentiator for us over the air, and we’ve specifically optimized it for voice by working with the Microsoft team to enable Lync and Skype for Business.

Security is a place we cannot be touched. We’ve had deep security expertise for a very long time. The DoD, three of the armed forces, most of the federal market actually, uses Aruba. I can’t get into all the details, but we have significant penetration because of our security depth. For enterprises that is a big deal. They really want to make sure the security side is well covered.

What’s the hot button in wireless security today?
We know how to encrypt. We know how to authenticate. Basically it is the threat of an unmanaged device coming into the network. We’re looking at solving that problem as a mobile security problem and we solved one part of it with access management, but we have this Adaptive Trust architecture which integrates with mobile device management tools — VMware’s AirWatch, MobileIron, Microsoft’s Intune. We partner with those companies and the likes of Palo Alto Networks, and HP now brings its security and management platform ArcSight to the table. The idea is to secure the mobile edge so no matter where you are you have a secure connection back to the enterprise.

Let’s shift to the adoption of Gigabit Wi-Fi, or 802.11ac. How is that transition going?
The campus access network from your desktop to the closet has stagnated for a long time. That’s because there was really nothing driving the need for more than a gigabit’s worth of bandwidth to the desktop. Now with Gigabit Wi-Fi technologies the over the air rates are greater than if you were to connect to the wired LAN. So if you deploy Gigabit Wi-Fi and have signals going at 2G, let’s say, the wired line becomes a bottleneck. There is a technology called Smart Rate that HP Networking introduced for its switches which allows you to raise the data rates to 2.5Gbps and even 5Gbps. At that point your access points don’t have to contend with the bottleneck and can pick up the bits over the air and put them on the wire without dropping them.

So you will need wired ports faster than a gigabit as you transition to this mobile workplace, but you won’t need as many ports as before. That is a transition, I think, that will happen over the next 2-3 years.

Did many people buy into Wave 1 of Gigabit Wi-Fi or did they hold off?
We’ve had tremendous success with Wave 1. The need for bandwidth is truly insatiable. And there is a ton of demand still to be put on the network. Video is a significant driver of bandwidth and most companies are throttling video. So the more you open the pipe, the more capacity I think people will consume. Wave 1 has done very well. I think Wave 2 will continue to do well, and then there’s .11ax, which will take capacity even higher.

So people bought into Wave 1 even though Wave 2 requires them to replace hardware?

I tell customers, if you’re going to wait for the next best thing you’re going to wait forever, because there’s always going to be the next best thing on the horizon. So it’s really a question of where you are in your lifecycle for an investment. If the customer is at a point where they’ve had five years of investment and they’re hurting, it’s a good time. Wave 1 can actually solve a lot of problems. There’s no need to wait another 18 months for Wave 2 technology. You know you’re going to refresh that too in five years and there will be new technology at that point in time.

Will anybody buy anything but Wave 2 at this point?
It depends. Wave 1 technology you can buy today at multiple price points in the industry. Wave 2 is still at the very top end of the range. So if you’re looking for, let’s say, lighting up a retail store and you don’t need all the capacity of Wave 2, then Wave 1 will do just fine. That’s typical of most technologies, to start at the top and eventually work its way down. We’re right in the beginning of the Wave 2 transition.

How about in carpeted office space? Would you just drop Wave 2 into key points to satisfy demand?
Wi-Fi has always basically been single user. Only one user could speak on a wireless LAN at a time. With Wave 2 you can have multiple conversations at the same time; each access point can serve four streams. So that boosts capacity in a significant way and can also improve spectrum efficiency. For that reason alone, I think Wave 2 should be used pretty much anywhere you go. You could start with a high density zone and then work your way up. That’s typically how people do it, but I would encourage most customers to take advantage of this technology.

In the industry we’ve always used speed as a measure of the next generation of technology. Never have we given attention to efficiency. This is the first time where we’re saying efficiency gains are pretty significant.

And Wave 2 ultimately will be able to support up to eight streams, right?
Yes, the technology allows you to do eight streams, although it is not possible to pack eight antennas into the form factor at this point. But it will come.

I think the targets are up to 10 gig. Let’s see how far they get. At that point, the Gigabit Ethernet backhaul will become an even more interesting problem. You’ll need 10 gig of backhaul from the access point.

In terms of the coming year, what should people look for?
They should expect a streamlined roadmap with unified management for wired and wireless, and unified security for wired and wireless in the campus. And they should expect changes in wiring closet switches to support Wave 2.

The other piece cooking in the labs is the next-generation controller technology. We invented the controller back in 2002 and that has gone through multiple generations of upgrades. The first controller had something like a 2G backplane that could support 1,000 users, and now we have a 40G controller that supports 32,000 users. So how do you get from there to 500G? That will require us to rethink the architecture, because these campuses are getting there.

We used to talk about tens of thousands of devices on a campus. Today campuses have hundreds of thousands of devices. How do you support them in a single architecture? Right now you add more controllers, but that creates a management problem. We are working on a unified solution for very large campuses and taking it to the next level for service providers as well.


MCTS Training, MCITP Training

Best Aruba Certification, HP Training at

Google Graveyard: What Google has killed off in 2015

Six feet deep
Google is truly a company that has more technology and products than it can handle sometimes, and in 2015 the company with the recent name change shed a host of tools and products to let it focus on more pressing needs. Here’s a look back at what Google has offed or announced plans to off this year. (To go back even further, check out 2014’s Google Graveyard.)

Google Code
Google in March said it would be axing its Google Code platform in January 2016, acknowledging increased adoption of alternatives like GitHub and Bitbucket. “As developers migrated away from Google Code, a growing share of the remaining projects were spam or abuse. Lately, the administrative load has consisted almost exclusively of abuse management,” wrote Google open-source director Chris DiBona. Google Code launched in 2006.

Chrome extensions
At the risk of making itself look controlling, Google has for years been taking steps to protect Google Chrome users from extensions that inject ads and malware. In May it really put the kibosh on such software coming from any Windows channel, specifying that all extensions now need to originate in the Chrome Web Store. Extensions for Chrome for OS X got the same treatment in July. “Extending this protection is one more step to ensure that users of Chrome can enjoy all the web has to offer without the need to worry as they browse,” a Google product manager wrote in announcing the changes.

Pwnium hacking contest
Google’s big one-day hacking contest at the CanSecWest event, through which it had doled out hundreds of thousands of dollars since 2012, has been shuttered in favor of year-round opportunities for hackers to snag bounties for uncovering flaws in its Chrome technology. Among other things, Google was concerned that hackers were hoarding bugs until the contest came around.

Bookmarks Manager
Technically, Google didn’t kill the Bookmarks Manager in June, but it did relent to widespread hatred of the free Chrome extension and revert to including the old bookmark tool with its browser. Those few who did cotton to the new UI are still able to access the Bookmarks Manager if they know where to look. Meanwhile, Google’s Sarah Dee blogged: “Our team will continue to explore other ways to improve the bookmarks experience.”

PageSpeed Service
Google alerted users of its PageSpeed Service, a tool for making websites zippier, that it would be killing off the tool as of Aug. 3. Google had pitched its 4.5-year-old hosted PageSpeed optimizing proxy as a way to improve website performance without requiring site owners to know any code.

Google TV
Google kicked off 2015 by announcing it would ditch the Google TV brand that few probably knew existed and focus its living-room entertainment efforts instead on Android TV and Google Cast. The company said Google TV libraries would no longer be available, but Google TV devices would continue to work.

Google logo
Google nixed its colorful longtime serif typeface logo, around since 1999, in favor of a new sans serif colorful logo with a typeface dubbed Product Sans. With the emergence of the Alphabet parent company came a new look for its Google business.

Google Talk
Google Talk had a good run, starting up in 2005, but it’s never good when Google pulls out the term “deprecated,” as it did in February in reference to this chat service’s Windows app. Google said it was pulling the plug on GTalk in part to focus on Google Hangouts in a world where people have plenty of other ways to chat online. However, Google Talk does live on via third-party apps.

Maps Coordinate for mobile workforces
Google in January emailed users of its mobile enterprise workforce management offering, which debuted in 2012, that the service would be shutting down come January 2016. Google has been folding various mapping-related products into one another in recent years, and is putting focus on its mapping APIs in its Maps for Work project going forward.

Google Moderator
This tool, launched in 2008, was used to “create a meaningful conversation from many different people’s questions, ideas, and suggestions.” The White House, among others, used it to organize feedback for online and offline events during the 2012 elections. But Google gave up on the tool in July due to its overall lack of use.

Google Helpouts
There’s no more helping Google Helpouts, which was discontinued in April. This online collaboration service was short-lived, launching in November 2013. While alive, it allowed users to share their expertise, for free or for a fee, through live video and get real-time help from their computers or mobile devices. It exploited Google Hangouts technology but was largely redundant given the many help videos on Google’s very own YouTube.

Eclipse developer tools
Google informed developers over the summer that it was time for them to switch over to Android Studio, now firmed up at Version 1.0, as the company would be “ending development and official support for the Android Developer Tools (ADT) in Eclipse at the end of the year. This specifically includes the Eclipse ADT plugin and Android Ant build system.”

Flu Trends
Google in August said it was discontinuing its Flu and Dengue Trends, which provided estimates of flu and dengue fever activity based on search patterns. Flu Trends launched in 2008 as an early example of “nowcasting,” and Google is now leaving the publishing of disease data to the health organizations it will work with. Historical data remains available from Google.

Google+ ?
Google’s social networking technology has never had much life in the first place and isn’t “really most sincerely dead” like the Wicked Witch, but Google keeps messing around with it, such as extracting the Google Photos app from it, as announced at Google I/O this year, while adding a feature called Collections. Google also has stopped requiring people to have Google+ accounts to tap into other services, such as YouTube channel creation.



MCTS Training, MCITP Training

Best Microsoft MCTS Certification, Microsoft MCITP Training at


70-341 Core Solutions of Microsoft Exchange Server 2013

You need to prepare the environment for the implementation of phase 1.
What changes must be made to the environment before you can install Exchange Server 2013?

A. The operating system or service pack level of TexDC1 needs to be upgraded.
B. The Windows 2008 R2 domain controllers in Washington and Boston need to be upgraded.
C. A server running Exchange Server 2007 or Exchange Server 2010 needs to be installed in Texas.
D. The PDC emulator role needs to be transferred to a domain controller in Washington or Boston.

Answer: A


You are evaluating whether the proposed Exchange solution will meet the current and future
capacity requirements.
You want to gather statistics about the current Exchange environment.
Which of the following tools would you use to determine the number of emails sent to and received
by the current users?

A. Remote Server Administration Tools.
B. Microsoft Exchange Server Profile Analyzer.
C. Microsoft Exchange Server Deployment Assistant.
D. ESEUtil.exe.
E. Microsoft Exchange Server Jetstress.

Answer: B


You need to apply the required size restriction to the mailboxes in the new environment.
Which of the following commands should you run?

A. Get-MailboxDatabase | Set-MailboxDatabase -ProhibitSendReceiveQuota
B. Get-MailboxDatabase | Set-Mailbox -ProhibitSendReceiveQuota
C. Get-Mailbox | Set-Mailbox -ProhibitSendReceiveQuota
D. Get-MailboxDatabase | Get-Mailbox | Set-Mailbox -ProhibitSendReceiveQuota

Answer: A
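
The pipeline in answer A can be sketched as follows in the Exchange Management Shell; the 2 GB quota and 1.9 GB warning threshold below are illustrative values I've assumed, not figures from the question:

```powershell
# Run in the Exchange Management Shell (Exchange 2013).
# Pipes every mailbox database into Set-MailboxDatabase, applying the
# size restriction once at the database level rather than per mailbox.
# The 2GB / 1.9GB values are illustrative assumptions.
Get-MailboxDatabase | Set-MailboxDatabase -ProhibitSendReceiveQuota 2GB -IssueWarningQuota 1.9GB

# Verify the new limits across all databases.
Get-MailboxDatabase | Format-Table Name,ProhibitSendReceiveQuota,IssueWarningQuota
```

A database-level quota takes effect for every mailbox whose UseDatabaseQuotaDefaults property is $true (the default), which is why piping databases into Set-MailboxDatabase (answer A) restricts all mailboxes in one command.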


You are evaluating whether the proposed Exchange solution will meet the current and future
capacity requirements.
You want to gather statistics about the current Exchange environment.
Which of the following tools would you use to determine the number of IOPS (Input/Output
Operations Per Second) required for the mailbox database storage?

A. ESEUtil.exe.
B. Microsoft Exchange Server Jetstress.
C. Microsoft Exchange Server Deployment Assistant.
D. Exchange Mailbox Server Role Requirements Calculator.
E. SQL Server Analysis Services.

Answer: D


You need to install and configure anti-spam and antimalware filtering.
On which servers should you install the anti-spam agents, and on which servers should you enable the anti-spam and antimalware filtering? (Choose two.)

A. You should install the anti-spam agents on the Client Access Servers only.
B. You should install the anti-spam agents on the Mailbox servers only.
C. You should install the anti-spam agents on the Client Access Servers and the Mailbox Servers.
D. You should enable antimalware filtering on the Client Access Servers only.
E. You should enable antimalware filtering on the Mailbox servers only.
F. You should enable antimalware filtering on the Client Access Servers and the Mailbox Servers.

Answer: B,E
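
A sketch of how answers B and E are typically carried out in Exchange 2013, assuming a default installation path; Exchange ships the installation script in its Scripts folder:

```powershell
# Run on each Mailbox server in the Exchange Management Shell.
# Installs the anti-spam transport agents (answer B); Exchange 2013
# only supports these agents on the Mailbox server role.
& $env:ExchangeInstallPath\Scripts\Install-AntiSpamAgents.ps1

# The transport service must be restarted for the agents to load.
Restart-Service MSExchangeTransport
```

Antimalware filtering (answer E) runs on the Mailbox servers and is enabled there by default; if it was previously disabled, the Enable-AntimalwareScanning.ps1 script in the same Scripts folder turns it back on.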




Click here to view complete Q&A of 70-341 exam

