Archive for September, 2014

12 amazing tips to tune your Wi-Fi network

Wi-Fi networks can be very tricky to properly design and configure, especially in the small, crowded 2.4 GHz frequency band. In addition to interference from neighboring wireless networks, capacity issues arise when there is a high number of users on the network or a high density of users in a certain area.

In the early days of Wi-Fi, there weren't many Wi-Fi users or devices in the world. Today, the situation is much different. Offices and other buildings with a wireless network may need to support one, two, or even more Wi-Fi devices per worker, plus access for guests. And more and more people are looking for Wi-Fi connectivity on their laptops, smartphones and tablets, especially at public venues, to help conserve cellular data usage.

1. Design for throughput and capacity
When there weren't many Wi-Fi users, you could design wireless networks pretty much based on coverage alone. You could perform an RF site survey and find the optimum locations for access points to ensure they provided adequate coverage. Now you should also design for throughput and capacity.

When designing a wireless network, you should evaluate the Wi-Fi client devices that will be using it and how they'll use it. Then you can do some calculations to estimate the required throughput and the number of access points needed to support those devices, while also accounting for future growth and changes.
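To make that concrete, here's a back-of-the-envelope sketch of the kind of capacity math involved. Every number in it (device counts, per-device throughput, usable throughput per access point) is an illustrative assumption, not a vendor figure:

```python
import math

def aps_needed(num_devices, mbps_per_device, usable_mbps_per_ap, growth=1.3):
    """Estimate access points required from aggregate throughput demand,
    padded by a growth factor for future devices."""
    demand_mbps = num_devices * mbps_per_device * growth
    # Round up: a fractional access point still means one more radio.
    return math.ceil(demand_mbps / usable_mbps_per_ap)

# 60 workers with two devices each, 2 Mbps per device on average,
# and roughly 50 Mbps of usable throughput per access point:
aps_needed(120, 2, 50)  # → 7 access points
```

A real design would also weigh coverage, client density per area, and band support, but the arithmetic above is the throughput side of the estimate.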

With 802.11b/g/n in the 2.4GHz band, there are only three non-overlapping channels. Thus co-channel interference becomes an issue when you bunch more than three access points in close proximity. Ideally, you don’t want an access point to hear any other access point on the same or overlapping channel. Though the 802.11 standards have mechanisms in place to deal with interference like this, co-channel interference will decrease performance.

2. Think about airtime
In areas where there is a high density of Wi-Fi users, like in public venues, you may find that the three 2.4GHz channels aren't enough. However, before overlapping channels and accepting co-channel interference, there are some techniques you may be able to use to increase capacity with the access points you already have.

Remember, wireless networks are all about airtime. Wi-Fi clients must contend for airtime with the access points, since only one device, whether an access point or client, can transmit at any one time on a given channel. The higher the data rate at which data is transferred, the less airtime is required, and generally the more clients can connect and use the wireless network.
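A quick calculation shows why data rates matter so much for airtime. The sketch below counts only payload transmission time, deliberately ignoring preambles, acknowledgments, and contention overhead:

```python
def airtime_us(frame_bytes, rate_mbps):
    """Microseconds a frame's payload occupies the channel, ignoring
    preambles, ACKs, and contention (a deliberate simplification)."""
    return frame_bytes * 8 / rate_mbps  # Mbps equals bits per microsecond

# The same 1,500-byte frame at a legacy rate versus a higher one:
airtime_us(1500, 1)   # 12,000 µs at 1 Mbps
airtime_us(1500, 54)  # ~222 µs at 54 Mbps, over 50x less channel time
```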

There are many settings you can configure to help boost performance and trim airtime.

3. Utilize 5GHz band steering

To help alleviate the crowded 2.4 GHz band, try to get Wi-Fi users onto the larger, less congested 5GHz band. Consider using dual-band 802.11n or 802.11ac access points that support band steering. When the feature is supported and enabled on the access points, dual-band clients will be guided or forced onto 5GHz instead of leaving it up to the user or device to decide which band to connect to.

Most access points implement this type of functionality by responding to a client's probe and association requests only in the 5-GHz band once they have seen that same client send a probe or association request in the 2.4-GHz band. Thus, once the access point knows a client is dual-band capable, it only allows connections from that client in the 5-GHz band.
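The gist of that behavior can be modeled in a few lines. This is a toy sketch of the general idea, not any vendor's actual implementation: once a client has revealed itself on 5GHz, the access point stops answering its 2.4GHz probes.

```python
class BandSteeringAP:
    """Toy model of probe suppression for band steering."""

    def __init__(self):
        self.capable_5ghz = set()  # client MACs seen probing on 5 GHz

    def respond_to_probe(self, client_mac, band_ghz):
        """Return True if the AP should answer this probe request."""
        if band_ghz == 5:
            self.capable_5ghz.add(client_mac)  # client is dual-band capable
            return True                        # always answer on 5 GHz
        # On 2.4 GHz, ignore probes from clients known to support 5 GHz.
        return client_mac not in self.capable_5ghz

ap = BandSteeringAP()
ap.respond_to_probe("aa:bb:cc:dd:ee:ff", 2.4)  # True: client still unknown
ap.respond_to_probe("aa:bb:cc:dd:ee:ff", 5)    # True, and now it's recorded
ap.respond_to_probe("aa:bb:cc:dd:ee:ff", 2.4)  # False: steered toward 5 GHz
```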

In the 5-GHz band you have many more channels, and more Wi-Fi devices these days are supporting this band. However, do understand that 5GHz generally has less range due to the higher frequency. Thus you may have to do more Wi-Fi surveying to design for good 5GHz coverage.

If 5-GHz coverage isn't up to par, consider configuring any band steering thresholds supported by the access point. Some let you set a minimum signal level a client must have before band steering is applied, or a number of unanswered 5GHz probe/association attempts from a client before it is allowed to connect on 2.4GHz.

4. Only use WPA2 security
Although both WPA and WPA2 security will work with 802.11n and 802.11ac, data rates are limited to 54Mbps when WPA (TKIP) is used. You should select WPA2-only security on the private SSID(s) to allow maximum throughput with the newer wireless standards. Any legacy clients that don't support the newer security should be upgraded.

5. Limit the number of virtual SSIDs
When creating additional SSIDs, keep in mind that each one increases the overall overhead of the wireless network. Each SSID will generate additional beacons, probes, and other management traffic, taking up more airtime, even if the SSID isn’t being used. So consider limiting the number of virtual wireless networks; perhaps one for private access and another for public access. If needed, you can further segregate private access levels via dynamic VLAN assignment using 802.1X authentication for instance.
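A rough estimate shows how that overhead stacks up. The frame size and rate below are assumed values (beacons are typically sent at a low basic rate, here taken as 1Mbps):

```python
def beacon_airtime_fraction(num_ssids, beacon_bytes=250, rate_mbps=1.0,
                            interval_ms=100):
    """Fraction of total airtime one AP spends just on beacons."""
    beacons_per_second = num_ssids * (1000 / interval_ms)
    seconds_per_beacon = beacon_bytes * 8 / (rate_mbps * 1_000_000)
    return beacons_per_second * seconds_per_beacon

beacon_airtime_fraction(1)  # 0.02 → about 2% of airtime
beacon_airtime_fraction(4)  # 0.08 → about 8%, before any user traffic flows
```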

6. Disable lower data rates
Consider disabling the lower data rates to force packets, including management frames, to be sent at higher data rates and to ensure clients connect at higher rates. This also encourages clients to roam to a better access point sooner, rather than clinging to their current access point until the last second as they otherwise might.

If you still have legacy 802.11b clients on the network you should really consider upgrading/replacing them, but you could still disable the lowest data rates (1M, 2M, and 5.5Mbps) and leave the highest (11Mbps) enabled.

If you don't have any 802.11b clients, consider disabling all data rates at and below 11Mbps.

You'll likely still need to support 802.11g clients, but if your Wi-Fi coverage is good enough you may also be able to disable some of the lower 802.11g data rates: 12M, 18M, 24M, 36M, and 48Mbps.

7. Configure proper channel-widths
On access points that support channels larger than the legacy 20MHz, you likely want to disable the Auto 20/40 MHz selection for 2.4GHz and only use the 20-MHz channels. In this band, it’s only possible to have one non-overlapping 40-MHz-wide channel. Thus larger channels are only really useful for areas where only one access point or channel will be used, including any neighboring networks.
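The three-channel limit falls out of simple arithmetic: 2.4GHz channel centers sit 5MHz apart, but each channel is roughly 20MHz wide. A quick check:

```python
# Channel n in the 2.4 GHz band is centered at 2407 + 5n MHz.
def center_mhz(channel):
    return 2407 + 5 * channel

def channels_overlap(a, b, width_mhz=20):
    """True if two channels' roughly 20 MHz footprints overlap."""
    return abs(center_mhz(a) - center_mhz(b)) < width_mhz

channels_overlap(1, 6)   # False: part of the non-overlapping 1/6/11 trio
channels_overlap(6, 11)  # False
channels_overlap(1, 4)   # True: close neighbors interfere with each other
```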

For 5GHz, however, you may be able to use larger channel widths, since there is more frequency spectrum. Just ensure the bonded channels won't cause co-channel interference with your own or neighboring networks.

8. Shorten transmission times

Shortening packet sizes or transmission times can help increase performance as well. Here are a few settings you may want to enable:

9. Limit broadcast traffic
Broadcast traffic can also slow down the overall throughput of a wireless network, thus consider these two techniques to decrease broadcast traffic:

Enable wireless client isolation to prevent Wi-Fi devices from broadcasting to each other, if the user-to-user communication isn’t required. The Wi-Fi devices will still be able to communicate to wired clients, but not directly with wireless clients.
Separate the LAN and WLAN broadcast domains to cut down on the amount of broadcast traffic on the WLAN side.

10. Adjust the beacon interval
As mentioned earlier, each access point broadcasts a beacon packet for each individual SSID, containing basic information about the wireless network. The default interval at which beacon packets are sent is usually 100ms.

Increasing the interval will decrease the number of beacons and the airtime they consume, but it can also cause unwanted side effects. Typically, the smaller the interval, the more quickly clients connect to and roam between access points. The bigger the interval, the longer clients take to connect and roam, and the longer the delay in sending and receiving data for clients with power save mode enabled.

11. Adjust the fragmentation and RTS thresholds
Lowering the fragmentation and Request to Send (RTS) thresholds can help increase performance on wireless networks suffering a high rate of collisions and/or interference (say, more than 5% of frames).

If it appears you have a hidden node issue, where clients are far apart and can't hear each other but both can hear the access point, start by reducing the RTS threshold. Perhaps begin with a threshold of around 500 bytes.

If hidden nodes don't appear to be an issue, start by reducing the fragmentation threshold instead. Perhaps begin with a threshold of around 1,000 bytes.
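To get a feel for the fragmentation threshold, the sketch below counts how many pieces a frame is split into; the 2,346-byte figure is the common default that effectively leaves typical frames whole:

```python
import math

def fragment_count(frame_bytes, frag_threshold):
    """Number of fragments a frame is split into at a given threshold."""
    return math.ceil(frame_bytes / frag_threshold)

fragment_count(1500, 1000)  # 2: the suggested ~1,000-byte starting point
fragment_count(1500, 2346)  # 1: the typical default disables splitting
```

Smaller fragments mean a collision costs less airtime to retransmit, at the price of more per-fragment header overhead, which is why these thresholds should only be lowered when collisions are actually a problem.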

Keep in mind, reducing these thresholds can also slow the network if they're not truly needed. I recommend making slight changes and then testing to ensure you're actually seeing an improvement.

12. Additional site surveys
In addition to tweaking these settings, you may want to perform additional RF site surveys if capacity issues still arise. You may find that adjusting access point transmit levels and locations can shrink cell sizes, enabling you to fit more access points into an area. Also look into other network configurations that could affect capacity, for instance an adequate DHCP range.

MCTS Training, MCITP Training

Best Microsoft MCTS Certification, Microsoft MCITP Training at

First Look: BlackBerry Passport

BlackBerry does an about-face, back towards its enterprise roots.

So BB 10 didn’t work out so well, did it?
Which helps explain why, with the new Passport smartphone, BlackBerry is ditching the years-late emphasis on competing for consumers and refocusing on the enterprise users on which the company was built. The Passport is uniquely focused on being a device for work first and personal stuff second – take a look at how it’s turned out.

It’s hip to be square
We're just not used to square screens anymore, are we? I think the last one I used was on a flip-phone, circa 2005. So in a sense, BlackBerry's not putting the Passport in great company there. Given that this screen is 4.5 inches and boasts 1440×1440 resolution, though, it's probably OK.

Big in Canada
It’s a big device, there’s no getting around that – as the name suggests, it’s the size of a U.S. passport. That said, it’s no more outsized than other recently released phablets like the Samsung Galaxy Note 4 or the iPhone 6 Plus.

Of course it has a keyboard
It’s a new design, and it incorporates some intriguing touchpad functionality, like swiping to select auto-suggest entries. And it’s a business-focused BlackBerry device – of course it has a physical keyboard.

A voice search thingy!
One of many catch-up boxes checked by the Passport, the new voice search functionality appears to work more or less the same way as Siri/Cortana/Google Voice search, et al.

The impressive BlackBerry Blend system provides an app that can run on other mobile devices, as well as on desktops and laptops, that brings files and messages from the Passport to whichever device you happen to be using at the time, and segregates them into personal and enterprise spaces.

Some apps
BlackBerry bolsters its own somewhat limited app offerings with access to the Amazon App Store, which provides a larger selection of Android apps for use on the Passport.

Under the hood
The Passport’s specs bring it into line with the latest Androids and iPhones – a 2.2GHz, quad-core Snapdragon processor, 3GB of RAM, a 13MP camera with optical image stabilization and 32GB of on-board storage, with a microSD slot for expandability. It’s also got a big 3450 mAh battery, which BlackBerry was eager to talk up.

The nitty-gritty
The Passport goes on sale tomorrow from Amazon and BlackBerry directly, for $600 unlocked. It’ll be available on-contract from as-yet unspecified carriers for about $250, BlackBerry said.






10 Hot Internet of Things Startups

As Internet connectivity gets embedded into every aspect of our lives, investors, entrepreneurs and engineers are rushing to cash in. Here are 10 hot startups that are poised to shape the future of the Internet of Things (IoT).

As Internet connectivity gets embedded into everything from baby monitors to industrial sensors, investors, entrepreneurs and engineers are rushing to cash in. According to Gartner, Internet of Things (IoT) vendors will earn more than $309 billion by 2020. However, most of those earnings will come from services.

Gartner also estimates that by 2020, the IoT will consist of 26 billion devices. All of those devices, Cisco believes, will end up dominating the Internet by 2018. You read that right: In less time than it takes to earn a college degree (much less time these days), machines will communicate over the Internet a heck of a lot more than people do.

With the IoT space in full gold-rush mode, we evaluated more than 70 startups to find 10 that look poised to help shape the future of IoT.

Note: These 10 are listed in alphabetical order and are not ranked.
1. AdhereTech

What they do: Provide a connected pill bottle that ensures patients take their medications.

Headquarters: New York, N.Y.

CEO: Josh Stein. He received his MBA from Wharton in 2012, and, before that, he worked for a number of successful startups in New York City, including Lot18, PlaceVine and FreshDirect.

Why they’re on this list: There are plenty of companies trying to cash in on IoT by tethering it to healthcare. Let’s call it the Internet of Health (IoH). What’s impressive about AdhereTech, though, is that it focuses on a discrete problem and knocks it out of the park with its solution. It’s simple and smart.

Prescription adherence — sticking to your prescribed medication regimen — is one of the biggest problems plaguing medicine. Current levels of adherence are as low as 40 percent for some medications. Poor adherence to appropriate medication therapy has been shown to result in complications, increased healthcare costs, and even death. Medication adherence for patients with chronic conditions, such as diabetes, hypertension, hyperlipidemia, asthma and depression, is an even more significant problem, often requiring intervention.

According to AdhereTech, of all medication-related hospital admissions in the United States, 33 to 69 percent are related to poor medication adherence. The resulting costs are approximately $100 billion annually, and as many as 125,000 deaths per year in the U.S. can be attributed to medication non-adherence.

AdhereTech’s pill bottle seeks to increase adherence and reduce the costs associated with missed or haphazard medication dosage. The bottle uses sensors to detect when one pill or one liquid milliliter of medication is removed from the bottle. If a patient hasn’t taken his/her medication, the service reminds them via phone call or text message, as well as with on-bottle lights and chimes. The company’s software also asks patients who skip doses why they got off schedule. In addition to helping people remember, AdhereTech aggregates data anonymously to give a clearer picture of patient adherence overall to pharmaceutical companies and medical practitioners.

Customers: AdhereTech has trials running with Boehringer Ingelheim for a TBD medication, The Walter Reed National Military Medical Center for type 2 diabetes medication and Weill Cornell Medical College for HIV medication.

Competitive Landscape: Vitality GlowCap is the most direct competitor for AdhereTech. Other less direct competitors include RXAnte, an analytics company that helps to identify patients most at risk for falling off their prescription regimen, and Proteus Digital Health, which puts tiny digestible sensors inside of pills to give doctors a clearer picture of patient compliance.



Popular Android apps fail basic security tests, putting privacy at risk

Instagram and Grindr stored images on their servers that were accessible without authentication, study finds

Instagram, Grindr, OkCupid and many other Android applications fail to take basic precautions to protect their users' data, putting their privacy at risk, according to a new study.


The findings come from the University of New Haven's Cyber Forensics Research and Education Group (UNHcFREG), which earlier this year found vulnerabilities in the messaging applications WhatsApp and Viber.

This time, the researchers expanded their analysis to a broader range of Android applications, looking for weaknesses that could put data at risk of interception. The group will release one video a day this week on its YouTube channel highlighting the findings, which the researchers say could affect upwards of 1 billion users.

“What we really find is that app developers are pretty sloppy,” said Ibrahim Baggili, UNHcFREG’s director and editor-in-chief of the Journal of Digital Forensics, Security and Law, in a phone interview.

The researchers used traffic analysis tools such as Wireshark and NetworkMiner to see what data was exchanged when certain actions were performed. That revealed how and where applications were storing and transmitting data.

Facebook’s Instagram app, for example, still had images sitting on its servers that were unencrypted and accessible without authentication. They found the same problem in applications such as OoVoo, MessageMe, Tango, Grindr, HeyWire and TextPlus when photos were sent from one user to another.

Those services were storing the content with plain “http” links, which were then forwarded to the recipients. But the problem is that if “anybody gets access to this link, it means they can get access to the image that was sent. There’s no authentication,” Baggili said.
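As a simple illustration of what those plain "http" links imply, a checker can flag them from the URL scheme alone. The URLs below are made up for the example:

```python
from urllib.parse import urlparse

def insecure_links(urls):
    """Return the links served over plain HTTP: anyone holding such a
    URL can fetch the content, and the transfer itself is unencrypted."""
    return [u for u in urls if urlparse(u).scheme == "http"]

# Hypothetical media links of the sort a traffic capture might reveal:
links = [
    "http://media.example.com/photos/abc123.jpg",   # readable by anyone
    "https://media.example.com/photos/def456.jpg",  # encrypted in transit
]
insecure_links(links)  # → only the plain-http entry
```

Note that even an https link remains fetchable by anyone who obtains it if the server skips authentication, which is the second half of the problem the researchers describe.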

The services should either ensure the images are quickly deleted from their servers or that only authenticated users can get access, he said.

Many applications also didn’t encrypt chat logs on the device, including OoVoo, Kik, Nimbuzz and MeetMe. That poses a risk if someone loses their device, Baggili said.

“Anyone who gets access to your phone can dump the backup and see all the chat messages that were sent back and forth,” he said. Other applications didn’t encrypt the chat logs on the server, he added.

Another significant finding is how many of the applications either don't use SSL/TLS (Secure Sockets Layer/Transport Layer Security), which uses digital certificates to encrypt data traffic, or use it insecurely, Baggili said.

Hackers can intercept unencrypted traffic over Wi-Fi if the victim is in a public place, a so-called man-in-the-middle attack. SSL/TLS is considered a basic security precaution, even though in some circumstances it can be broken.

OkCupid’s application, used by about 3 million people, does not encrypt chats over SSL, Baggili said. Using a traffic sniffer, the researchers could see text that was sent as well as who it was sent to, according to one of the team’s demonstration videos.

Baggili said his team has contacted developers of the applications they’ve studied, but in many cases they haven’t been able to easily reach them. The team wrote to support-related email addresses but often didn’t receive responses, he said.


Are breaches inevitable?

Security managers have to do a lot more to stay a step ahead of determined hackers

Is there a reason that data breaches have been happening at a rapid clip lately? And is there more that we, as security managers, should be doing to make sure that our own companies don’t join the ranks of the breached?

Home Depot is the latest company to make headlines for a potentially big data breach, and it just might be the biggest one yet. The current record holder is Target, and we’ve more recently seen the company that owns grocery store chains Supervalu, Albertsons, Acme Markets, Jewel-Osco and Shaw’s compromised by hackers. J.P. Morgan and four other major banks appear to have fallen victim to security breaches. UPS stores were also hit by hackers, and several hundred Norwegian companies were compromised. These victims have joined the ranks of Neiman-Marcus, Michael’s, Sally Beauty, P.F. Chang’s and Goodwill. What’s going on?

The motivation for attacks like these is usually financial. The attackers are stealing credit card and debit card numbers, along with personal information, which they then sell in underground markets. We don’t yet know whether this is the case with the banks that were hit; those attacks may have been politically motivated, or we may learn that fraudulent transactions were used to steal money. In any case, there seems to be a big jump in electronic data theft for profit. But the stolen information is only valuable for a few days, and its value diminishes rapidly by the hour. Some security researchers are saying that this loss of value is motivating today’s data thieves to move quickly. Another factor may be Microsoft’s termination of support for Windows XP, which could be prompting hackers to go for one last all-out heist to grab what they can while many systems are still vulnerable. Perhaps, knowing that all the vulnerabilities of Windows XP would soon vanish, our thieves had a fire sale.

But I suspect there is more to the story. Most big businesses use standard security procedures and technologies that have been around for years, if not decades. Many of these defenses have not kept up with current threats. Take antivirus, for example. Signature-based malware detection has long been ineffective against modern malware, yet most companies continue to rely on it as a key defense. We know from the details of some of the retail breaches that those who have implemented advanced heuristic malware detection have ignored the alarms set off by the point-of-sale malware (for reasons I cannot fathom). Patching will always be a game of catch-up, with the attackers having the upper hand. And password-based authentication will evidently be with us forever, much as I might rail against it. Attackers use all of these to get through their victims’ defenses.

The simple fact of the matter is that attackers will always have several vulnerabilities to choose from at any potential victim they want to target. And security managers, even those who are really good at their jobs, will never be able to close every single hole. And it only takes one.

So if traditional information security practices are not enough, what else can we do? I’ve been giving that question a lot of thought lately, and I think part of the answer is to evolve our security technologies, just as the attackers evolve their techniques. That heuristic behavior-based malware detection technology I keep talking about is pretty cool, but is it still cutting-edge? It’s been around for three or four years. Is there anything newer out there? And how can we choose the right technologies that are going to be effective against emerging threats but still stand the test of time so their manufacturers will be around three years from now?

There are some new products starting to go to market, and venture capitalists are funding a lot of new security technology. I think we should all keep a close eye on them. I’m beginning to believe that in the cutthroat rivalry between attacker and defender, the best technology wins. The only way we can keep one step ahead of today’s hackers is to take two steps forward and advance our defensive capabilities to the point where we can reliably repel, or at least detect, today’s data thieves.





Chromebook Pixel revisited: 18 months with Google’s luxury laptop

Is it crazy to pay $1300 for a Chromebook? Some reflections after a year and a half of living with Google’s luxurious Pixel.

When you stop and think about it, it’s kind of astonishing how far Chromebooks have come.

It was only in February of last year, after all, that Google's Chromebook Pixel came crashing into our lives and made us realize how good an experience Chrome OS could provide.

At the time, the Pixel was light-years ahead of any other Chromebook in almost every possible way: From build quality to display and performance, the system was just in a league of its own. And its price reflected that status: The Pixel sold for a cool $1300, or $1450 if you wanted a higher-storage model with built-in LTE support.

Today, the Pixel remains the sole high-end device in the Chromebook world (and its price remains just as high). But the rest of the Chrome OS universe has evolved — and the gap between the Pixel and the next notch down isn’t quite as extreme as it used to be.

So how has the Pixel held up 18 months after its release, and does it still justify the lofty price? I’ve owned and used the Pixel since last spring and have evaluated almost every other Chromebook introduced since its debut.

Here are some scattered thoughts based on my experiences:

1. Hardware and design
As I said when I revisited the device a year ago, the Chromebook Pixel is hands-down the nicest computer I’ve ever used. The laptop is as luxurious as it gets, with a gorgeous design, premium materials, and top-notch build quality that screams “high-end” from edge to edge.

We’re finally starting to see some lower-end Chromebooks creep up in the realms of design and build quality — namely the original HP Chromebook 11 (though it’s simply too slow to recommend for most people) and the ThinkPad Yoga 11e Chromebook (which is sturdy and well-built but not exactly sleek) — and that’s a very good thing. In fact, that’s a large part of what Google was ultimately trying to accomplish by creating the Pixel in the first place. Think about it.

While those devices may be a step up from the status quo, though, they’re not even close to the standard of premium quality the Pixel delivers. When it comes to hardware, the Pixel is first-class through and through while other products are varying levels of economy.

The Pixel’s backlit keyboard and etched-glass trackpad also remain unmatched in their premium nature. Typing and navigating is a completely different experience on this laptop than on any other Chromebook (and, for that matter, on almost any non-Chrome-OS laptop, too).

The same goes for the Pixel’s spectacular speakers. Other Chromebooks are okay, but none is anywhere near this outstanding.

2. Display
The display — man, oh man, the display. The Pixel’s 12.85-in. 2560-x-1700 IPS screen is like candy for your eyes. The vast majority of Chromebook screens (yes, even those that offer 1080p resolution) are still using junky TN panels and consequently look pretty awful. The two exceptions are the same systems mentioned above — the HP 11 and the ThinkPad Yoga 11e — but while those devices’ displays reign superior in the sub-$500 category, their low resolution is no match for the Pixel’s crystal-clear image quality.

I continue to appreciate the Pixel’s touchscreen capability to this day, too: While I certainly don’t put my fingers on the screen all the time, it’s really nice to have the ability to reach up and tap, scroll, or pinch when I feel the urge. For as much time as I spend using smartphones and tablets, it seems completely natural to be able to do that with a laptop as well. (Admit it: You’ve tried to touch a non-touchscreen laptop at some point. We all have.)

I will say this, though: The time I’ve spent recently with the Yoga 11e has definitely gotten me keen on the idea of a Chromebook being able to convert into a tablet-like setup. After using that device, I sometimes find myself wishing the Pixel’s display could tilt back further and provide that sort of slate-style experience.

3. Stamina and performance
At about five hours per charge, the Pixel’s battery life is passable but not exceptional — especially compared to the eight to 10 hours we’re seeing on some systems these days. As I’ve mused before, stamina is the Pixel’s Achilles’ heel.

Performance is where things get particularly interesting: When the Pixel first came out, its horsepower was unheard of for a Chrome OS device. I could actually use the system in my typical power-user way, with tons of windows and tabs running at the same time and no slowdowns or multitasking misery. Compared to the sluggish Chrome OS systems we’d seen up to that point, it felt like a full-fledged miracle.

The Pixel’s performance is no less impressive today, but what’s changed is that other Chrome OS systems have actually come close to catching up. These days, you can get solid performance in a Chromebook for around $200 with the various Haswell-based systems. The newer Core i3 devices give you a little more punch for around $300. Neither quite reaches the Pixel’s level of snappiness and speed, but in practical terms, they’re not too far behind.

So for most folks, performance alone is no longer a reason to own the Pixel. It’s an important part of the Pixel, for sure, but if that’s the only thing you’re interested in, you’d do far better to save yourself the cash and get a lower-end Chromebook with decent internals.

To Pixel or not to Pixel?
What is a reason to own the Pixel, then? Simple: to enjoy a top-of-the-line Chrome OS experience with all the amenities you could ask for. The device’s hardware quality and design, keyboard and trackpad, speakers, and display add up to make a wonderful overall user experience no other Chromebook can match.

As for whether it’s worth the price, well, that’s a question only you can answer. Is a high-end car worth the premium over a reliable but less luxurious sedan? For someone like me, probably not. But for someone who’s passionate about cars, spends a lot of time in a vehicle and appreciates the elevated quality, it just might be.

The same concept applies here. The Pixel remains a fantastic luxury option for users sold on the Chrome OS concept — people like me who rely heavily on cloud storage and spend most of their time using Web-centric apps and services.

As with any luxury item, the level of quality the Pixel provides certainly isn't something anyone needs, but its premium nature is something a lot of folks will enjoy, and that's as true today as it was last year.


