Gartner: IT careers – what’s hot?

Do you know smart machines, robotics and risk analysis? Gartner says you should

ORLANDO — If you are to believe the experts here at the Gartner IT Symposium, IT workers and managers will need to undergo widespread change if they are to compete effectively for jobs in the next few years.

How much change? Gartner says that by 2018, digital business will require 50% fewer business process workers and 500% more key digital business jobs than traditional models. IT leaders will need to develop new hiring practices to recruit for these nontraditional IT roles.

“Our recommendation is that IT leaders have to develop new practices to recruit for non-traditional IT roles…otherwise we are going to keep designing things that will offend people,” said Daryl Plummer, managing vice president, chief of Research and chief Gartner Fellow. “We need more skills on how to relate to humans – the people who think people first are rare.”

Gartner also suggested that within large companies there are smaller, startup-like businesses that need these new skills.

“The new digital startups in your business units are thirsting for data analysts, software developers and cloud vendor management staff, and they are often hiring them faster than IT,” said Peter Sondergaard, senior vice president and global head of Research. “They may be experimenting with smart machines, seeking technology expertise IT often doesn’t have.”

So what are the hottest skills? Gartner says right now, the hottest skills CIOs must hire or outsource for are:
Mobile
User Experience
Data sciences

Three years from now, the hottest skills will be:
Smart Machines (including the Internet of Things)
Robotics
Automated Judgment
Ethics

Over the next seven years, there will be a surge in new specialized jobs. The top jobs for digital will be:
Integration Specialists
Digital Business Architects
Regulatory Analysts
Risk Professionals


 


Biggest, baddest, boldest software backdoors of all time

These 12 historically insidious backdoors will have you wondering what’s in your software — and who can control it

The boldest software backdoors of all time

It’s always tough to ensure the software you’re using is secure, but it’s doubly difficult if the creators of the software — or some malicious unknown third party — have surreptitiously planted a back way in.

Here’s a look at 12 of the trickiest, subtlest, and nastiest software backdoors yet found in the wild.

Back Orifice
Far from being the first backdoor, Back Orifice brought backdoor awareness to a wider audience. Created in 1998 by folks from the Cult of the Dead Cow hacker collective, Back Orifice allowed computers running Microsoft Windows to be controlled remotely over a network (and cleverly played off the name of Microsoft BackOffice Server, a precursor to Windows Small Business Server).

Back Orifice was devised to demonstrate deep-seated security issues in Microsoft Windows 98, and so it sported such features as being able to hide itself from the user — something that endeared it to a generation of black hat hackers because it could be used as a malicious payload.

The DSL backdoor that wouldn’t die
Having a backdoor in your hardware product is bad enough; promising to fix it and then only covering up its existence is even worse. But that’s what happened at the end of 2013 with a number of DSL gateways that used hardware made by Sercomm, all of which sported a manufacturer-added backdoor on port 32764. A patch was later released in April 2014 to fix the problem, but the “fix” only concealed access to the port until a specially crafted packet (a “port knock”) was sent to reveal it. We’re still waiting for a real fix.

The PGP full-disk encryption backdoor
Here’s one for the “not a backdoor, but a feature” department: PGP Whole Disk Encryption, now marketed by Symantec, allows an arbitrary static password to be added to the boot process for an encrypted volume. (By default the password expires the first time it’s used.) When first unearthed in 2007, PGP replied that other disk-encryption products had similar functionality, although the lack of public documentation for the feature was unnerving. At least now we know it’s in there, but the jury’s still out on whether it should be there to begin with.

Backdoors in pirated copies of commercial WordPress plug-ins
WordPress may be one of the most popular and powerful blogging and content management systems out there, but its track record on security leaves a lot to be desired. Some of the sneakiest breaches have come by way of pirated copies of premium plug-ins surreptitiously patched to include backdoors, at least one of which was obfuscated so well that expert WordPress users might have trouble detecting it.

Yet another reason to avoid pirated software (as if you needed any more).

The Joomla plug-in backdoor
WordPress isn’t the only major CMS that’s experienced backdoor issues with plugins. Joomla installations have been victimized in a similar way — for instance, via a free plug-in, the code of which was apparently modified after the fact.

Such sneak attacks are generally performed as a means for getting back into a website that’s been hacked because few think twice about checking whether a CMS plug-in was the point of entry of an attack.

The ProFTPD backdoor
ProFTPD, a widely used open source FTP server, nearly had a backdoor planted in it as well. Back in 2010, attackers gained access to the source code hosting server and added code which allowed an attacker to spawn a root shell by sending the command “HELP ACIDBITCHEZ.” Irony abounded in this case: The attackers used a zero-day exploit in ProFTPD itself to break into the site and plant the malicious code!

The Borland Interbase backdoor
This one’s guaranteed to raise hairs. From 1994 through 2001, Borland (later Inprise) Interbase Versions 4.0 through 6.0 had a hard-coded backdoor — one put there by Borland’s own engineers. The backdoor could be accessed over a network connection (port 3050), and once a user logged in with it, he could take full control over all Interbase databases. The kicker, and a sign of some strange programmer humor at work, was the credentials that were used to open the backdoor. Username: politically. Password: correct.

The Linux backdoor that wasn’t
Back in 2003, someone attempted to insert a subtle backdoor into the source code for the Linux kernel. The code was written to give no outward sign of a backdoor and was added to the Linux source by someone who broke into the server where the code was hosted.

Two lines of code were changed — something that might have breezed past most eyes. Theoretically, the change could have allowed an attacker to give a specific, flagged process root privileges on a machine. Fortunately, the backdoor was found and yanked when an automatic code audit detected the change. Speculation still abounds about who might have been responsible; perhaps a certain three-letter agency that asked Linus Torvalds to add backdoors to Linux might know.

The tcpdump backdoor
One year before someone tried to backdoor the Linux kernel, someone tried to sneak a backdoor into a common Linux (and Unix) utility, tcpdump. A less stealthy hack than the Linux one — the changes were fairly obvious — it added a command-and-control mechanism to the program that could be activated by traffic over port 1963. As with the Linux backdoor, it was added directly to the source code by an attacker who broke into the server where the code was hosted. As with the Linux backdoor attempt, it was quickly found and rooted out (no pun intended).


The NSA’s TAO hardware backdoors
Never let it be said that the NSA doesn’t have some clever tricks up its sleeve. Recent revelations about its TAO (Tailored Access Operations) program show that one of the NSA’s tricks involves intercepting hardware slated for delivery overseas, adding backdoors to the device’s firmware, and then sending the bugged hardware on its merry way. Aside from network gear, the NSA also apparently planted surveillance software in the firmware for various PCs, and even in PC peripherals like hard drives.

The Windows _NSAKEY backdoor that might have been
Speaking of the NSA, in 1999 researchers peered into Windows NT 4 Service Pack 5 and found a variable named _NSAKEY with a 1024-bit public key attached to it. Speculation ran wild that Microsoft was secretly providing the NSA with some kind of backdoor into encrypted data on Windows or into Windows itself. Microsoft denied any such activity, and security expert Bruce Schneier also doubted anything nefarious was going on. But rumors have swirled ever since concerning unpluggable backdoors into Windows.

The dual elliptic curve backdoor
Yet another from the NSA, and perhaps the sneakiest yet: a deliberate, stealthy weakening of a random number generator commonly used in cryptography. Theoretically, messages encrypted with the Dual_EC_DRBG (Dual Elliptic Curve Deterministic Random Bit Generator) standard, ratified by NIST, had a subtle weakness that could allow them to be decrypted by an attacker. Only after Edward Snowden leaked internal NSA memos did it come to light that said agency had manipulated the approval process for the standard to allow the backdoor to remain in the algorithm. Fortunately, plenty of other random number generators exist, and NIST has since withdrawn its recommendations for Dual_EC_DRBG. Small wonder people speculate what else the NSA may have hidden up its (and other people’s) sleeves.



Gartner: Top 10 strategic predictions for businesses to watch out for

For a session that is high-tech oriented, this year’s Gartner strategic predictions were decidedly human.

That is to say, many were related to improving the customer’s experience with technology and systems rather than the usual techno-calculations.

“Machines are taking an active role in enhancing human endeavors,” said Daryl Plummer, managing vice president, chief of Research and chief Gartner Fellow. “Our predictions this year may not be directly tied to the IT or CIO function, but they will affect what you do.”

Plummer outlined the following predictions, each paired with a brief recommendation on how IT can prepare. Read on:

1. By 2018, digital business will require 50% fewer business process workers and 500% more key digital business jobs, compared to traditional models. IT leaders need to develop new hiring practices to recruit for these nontraditional IT roles.

2. By 2017, a significant disruptive digital business will be launched that was conceived by a computer algorithm. CIOs must begin to simulate technology-driven transformation options for business.

3. By 2018, the total cost of ownership for business operations will be reduced by 30% through smart machines and industrialized services. CIOs must experiment with precursor “almost smart machine” technologies and phantom robotic business process automation.

4. By 2020, developed world life expectancy will increase by 0.5 years due to widespread adoption of wireless health monitoring technology. Business leaders must examine the impact of increased wellness on insurance and employee healthcare costs as a competitive factor.

5. By year-end 2016, $2.5 billion in online shopping will be performed exclusively by mobile digital assistants. Apple’s Siri is one type of assistant, but many online vendors offer some sort of software assistant that you may or may not be aware of. Marketing executives must develop marketing techniques that capture the attention of digital assistants as well as people.

6. By 2017, U.S. customers’ mobile engagement behavior will drive U.S. mobile commerce revenue to 50% of U.S. digital commerce revenue. Recommendation: Mobile marketing teams should investigate mobile wallets such as Apple’s Passbook and Google Wallet as consumer interest in mobile commerce and payments grows.

7. By 2016, 70% of successful digital business models will rely on deliberately unstable processes designed to shift as customer needs shift. CIOs need to create an agile, accountable, responsive workforce that supports organizational liquidity.

8. By 2017, more than half of consumer product and service R&D investments will be redirected to customer experience innovations. Consumer companies must invest in customer insight through persona and ethnographic research.

9. By 2017, nearly 20% of durable goods e-tailers will use 3D printing to create personalized product offerings. CIOs, product development leaders, and business partners should evaluate gaps between the existing “as is” and future “to be” states (process, skills, and technology).

10. By 2018, retail businesses that utilize targeted messaging in combination with internal positioning systems (systems that know you are in or near a store) will see a 20% increase in customer visits. CIOs must help expand good customer data to support real-time offers.



7 killer open source monitoring tools

Network and system monitoring is a broad category. There are solutions that monitor for the proper operation of servers, network gear, and applications, and there are solutions that track the performance of those systems and devices, providing trending and analysis. Some tools will sound alarms and notifications when problems are detected, while others will even trigger actions to run when alarms sound. Here is a collection of open source solutions that aim to provide some or all of these capabilities.

Cacti
Cacti is a very extensive performance graphing and trending tool that can be used to track just about any monitored metric that can be plotted on a graph. From disk utilization to fan speeds in a power supply, if it can be monitored, Cacti can track it — and make that data quickly available.
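Cacti is typically fed by SNMP or by small data-input scripts that print metric values for it to graph. As a rough illustration of that idea, here is a minimal Python sketch of such a script; the space-separated name:value output reflects Cacti's usual data-input convention (check your version's documentation to confirm), and the metric names are made-up examples rather than part of any standard template.

#!/usr/bin/env python3
"""Minimal sketch of a custom Cacti data-input script. Assumes Cacti's
usual space-separated name:value stdout format; metric names below are
hypothetical examples."""

import os

def cpu_load_1min():
    # 1-minute load average, available on Linux/Unix hosts
    return os.getloadavg()[0]

def uptime_seconds():
    # Rough uptime derived from /proc/uptime (Linux only)
    with open("/proc/uptime") as f:
        return float(f.read().split()[0])

if __name__ == "__main__":
    metrics = {
        "load1": round(cpu_load_1min(), 2),
        "uptime": int(uptime_seconds()),
    }
    # Cacti parses "name:value name:value ..." from stdout
    print(" ".join(f"{k}:{v}" for k, v in metrics.items()))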

Nagios
Nagios is the old guard of system and network monitoring. It is fast, reliable, and extremely customizable. Nagios can be a challenge for newcomers, but the rather complex configuration is also its strength, as it can be adapted to just about any monitoring task. What it may lack in looks it makes up for in power and reliability.
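Much of that power comes from a very simple plugin contract: a check is just a command that prints one line of status (optionally with performance data after a pipe) and exits 0, 1, 2 or 3 for OK, WARNING, CRITICAL or UNKNOWN. Here is a small, hypothetical disk-usage check in Python written against that convention; the path and thresholds are example values, not anything Nagios ships with.

#!/usr/bin/env python3
"""Sketch of a Nagios-style check plugin: exit codes 0/1/2/3 map to
OK/WARNING/CRITICAL/UNKNOWN, and the first output line is what the UI
shows. Path and thresholds are illustrative defaults."""

import shutil
import sys

OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

def main(path="/", warn_pct=80.0, crit_pct=90.0):
    try:
        usage = shutil.disk_usage(path)
        used_pct = 100.0 * usage.used / usage.total
    except OSError as err:
        print(f"DISK UNKNOWN - {err}")
        return UNKNOWN

    perfdata = f"used={used_pct:.1f}%;{warn_pct};{crit_pct}"
    if used_pct >= crit_pct:
        print(f"DISK CRITICAL - {used_pct:.1f}% used on {path} | {perfdata}")
        return CRITICAL
    if used_pct >= warn_pct:
        print(f"DISK WARNING - {used_pct:.1f}% used on {path} | {perfdata}")
        return WARNING
    print(f"DISK OK - {used_pct:.1f}% used on {path} | {perfdata}")
    return OK

if __name__ == "__main__":
    sys.exit(main())

A command definition in the Nagios configuration can then point at a script like this just as it would at any bundled plugin.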

Icinga
Icinga is an offshoot of Nagios that is currently being rebuilt from the ground up. It offers a thorough monitoring and alerting framework that’s designed to be as open and extensible as Nagios is, but with several different Web UI options. Icinga 1 is closely related to Nagios, while Icinga 2 is the rewrite. Both versions are currently supported, and Nagios users can migrate to Icinga 1 very easily.

NeDi
NeDi may not be as well known as some of the others, but it’s a great solution for tracking devices across a network. It continuously walks through a network infrastructure and catalogs devices, keeping track of everything it discovers. It can provide the current location of any device, as well as a history.

NeDi can be used to locate stolen or lost devices by alerting you if they reappear on the network. It can even display all known and discovered connections on a map, showing how every network interconnect is laid out, down to the physical port level.

Observium
Observium combines system and network monitoring with performance trending. It uses both static and auto discovery to identify servers and network devices, leverages a variety of monitoring methods, and can be configured to track just about any available metric. The Web UI is very clean, well thought out, and easy to navigate.

Observium can also display the physical location of monitored devices on a geographical map, alongside heads-up panels showing active alarms and device counts.

Zabbix
Zabbix monitors servers and networks with an extensive array of tools. There are Zabbix agents for most operating systems, or you can use passive or external checks, including SNMP to monitor hosts and network devices. You’ll also find extensive alerting and notification facilities, and a highly customizable Web UI that can be adapted to a variety of heads-up displays. In addition, Zabbix has specific tools that monitor Web application stacks and virtualization hypervisors.

Zabbix can also produce logical interconnection diagrams detailing how certain monitored objects are interconnected. These maps are customizable, and maps can be created for groups of monitored devices and hosts.
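One common way to extend a Zabbix agent is a UserParameter entry that maps an item key to a command printing a single value, which Zabbix then collects and graphs like any built-in item. The sketch below is a hypothetical example of such a command, counting established TCP connections on a Linux host; the key name and install path in the comment are placeholders, not anything Zabbix defines.

#!/usr/bin/env python3
"""Sketch of a custom check a Zabbix agent could call via a
UserParameter line, e.g.
  UserParameter=custom.tcp.established,/usr/local/bin/tcp_established.py
(key and path are made up). The agent expects a single value on stdout."""

TCP_ESTABLISHED = "01"  # state code used in /proc/net/tcp

def count_established(path="/proc/net/tcp"):
    # Parse the Linux kernel's TCP socket table; column 4 ("st") holds
    # the connection state.
    count = 0
    with open(path) as f:
        next(f)  # skip header line
        for line in f:
            fields = line.split()
            if len(fields) > 3 and fields[3] == TCP_ESTABLISHED:
                count += 1
    return count

if __name__ == "__main__":
    print(count_established())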

Ntop
Ntop is a packet sniffing tool with a slick Web UI that displays live data on network traffic passing by a monitoring interface. Instant data on network flows is available through an advanced live graphing function. Host data flows and host communication pair information is also available in real-time.



12 tips to tune your Wi-Fi network

Wi-Fi networks can be very tricky to properly design and configure, especially in the small, crowded 2.4 GHz frequency band. In addition to interference from neighboring wireless networks, capacity issues arise when there are a high number of users on the network or a high density in a certain area.

In the early days of Wi-Fi, there weren’t that many Wi-Fi users or devices in the world. Today, the situation is much different. Private offices and buildings that have a wireless network may provide access to one, two, or even more Wi-Fi devices per worker and then maybe provide access for guests as well. More and more people are looking for Wi-Fi connectivity, especially at public venues — on their laptops, smartphones and tablets — to help conserve cellular data usage.

1. Design for throughput and capacity
When there weren’t many Wi-Fi users, you could design wireless networks pretty much based on coverage. You could perform an RF site survey and find the optimum locations for access points to ensure they provided adequate coverage. Now you should also design for throughput and capacity.

When designing a wireless network, you should evaluate the Wi-Fi client devices that will be using it and how they’ll use it. Then you can do some calculations to estimate the throughput and the number of access points needed to support them, while also accounting for future growth and changes.
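The arithmetic itself is simple; what matters is being explicit about your assumptions. The Python sketch below shows one way to size an access point count from an estimated client load. The client count, per-client demand, per-AP usable throughput, per-AP association limit, and growth headroom are all illustrative assumptions to be replaced with figures from your own environment.

import math

def access_points_needed(clients, mbps_per_client, usable_mbps_per_ap,
                         max_clients_per_ap=30, growth_factor=1.25):
    """Return an AP count sized for both aggregate throughput and the
    per-AP association limit, whichever is the larger constraint."""
    demand_mbps = clients * mbps_per_client * growth_factor
    by_throughput = demand_mbps / usable_mbps_per_ap
    by_association = clients * growth_factor / max_clients_per_ap
    return max(1, math.ceil(max(by_throughput, by_association)))

if __name__ == "__main__":
    # Example: 200 users at roughly 2 Mbps each at peak, ~100 Mbps of
    # real-world throughput per dual-band 802.11n AP, 25% headroom.
    print(access_points_needed(clients=200, mbps_per_client=2,
                               usable_mbps_per_ap=100))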

With 802.11b/g/n in the 2.4GHz band, there are only three non-overlapping channels. Thus co-channel interference becomes an issue when you bunch more than three access points in close proximity. Ideally, you don’t want an access point to hear any other access point on the same or overlapping channel. Though the 802.11 standards have mechanisms in place to deal with interference like this, co-channel interference will decrease performance.

2. Think about airtime
In areas where there is a high density of Wi-Fi users, like public venues, you may find that the three 2.4GHz channels aren’t enough. However, before overlapping channels and causing co-channel interference, there are some techniques you may be able to use to increase capacity with the access points you already have.

Remember, wireless networks are all about airtime. Wi-Fi clients must contend for airtime with the access points, as only one device, whether an access point or client, can transmit at any one time on a given channel. The higher the data rate at which traffic is transferred, the less airtime is required, and generally the more clients can connect to and use the wireless network.
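A quick back-of-the-envelope calculation makes the point. The snippet below estimates how long a 1,500-byte frame occupies the channel at different PHY rates; it ignores preamble, MAC overhead, ACKs and contention, so treat the numbers as optimistic lower bounds rather than real-world throughput.

FRAME_BYTES = 1500

def airtime_ms(phy_rate_mbps, frame_bytes=FRAME_BYTES):
    # Time to transmit the payload bits at the given PHY rate, in ms
    return (frame_bytes * 8) / (phy_rate_mbps * 1_000_000) * 1000

if __name__ == "__main__":
    for rate in (1, 11, 54, 150, 300):
        t = airtime_ms(rate)
        print(f"{rate:>4} Mbps: {t:6.3f} ms per frame "
              f"(~{1000 / t:,.0f} frames/sec on an ideal channel)")

At 1Mbps that frame ties up the channel for 12ms; at 54Mbps it takes well under a quarter of a millisecond, which is why keeping clients on high data rates matters so much for capacity.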

There are many settings you can configure to help boost performance and trim airtime.

3. Utilize 5GHz band steering

To help alleviate the crowded 2.4GHz band, try to get Wi-Fi users onto the larger, less congested 5GHz band. Consider using dual-band 802.11n or 802.11ac access points that support band steering. When supported and enabled on the access points, dual-band clients will be guided or forced onto 5GHz instead of just leaving it up to the user or device to decide which band to use.

Most access points implement this type of functionality by responding only to probe and association requests in the 5GHz band once they have seen the same client send a probe/association request in the 2.4GHz band. Thus, once the access point knows a client is dual-band capable, it only allows connections from that client in the 5GHz band.
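Exactly how this is implemented varies by vendor, but the core decision can be sketched in a few lines: remember which clients have been heard probing on 5GHz, and ignore their 2.4GHz probes a few times before giving up. The Python model below is a simplified illustration of that logic, not any vendor's actual code.

class BandSteeringAP:
    def __init__(self, max_ignored_probes=3):
        self.seen_on_5ghz = set()       # clients known to be dual-band
        self.ignored_24_probes = {}     # client -> 2.4GHz probes ignored
        self.max_ignored_probes = max_ignored_probes

    def handle_probe(self, client_mac, band):
        """Return True if the AP should respond to this probe request."""
        if band == "5ghz":
            self.seen_on_5ghz.add(client_mac)
            return True
        # 2.4GHz probe from a known dual-band client: ignore it a few
        # times to nudge the client toward 5GHz, then give up so the
        # client isn't locked out entirely.
        if client_mac in self.seen_on_5ghz:
            ignored = self.ignored_24_probes.get(client_mac, 0)
            if ignored < self.max_ignored_probes:
                self.ignored_24_probes[client_mac] = ignored + 1
                return False
        return True

if __name__ == "__main__":
    ap = BandSteeringAP()
    ap.handle_probe("aa:bb:cc:dd:ee:ff", "5ghz")           # learn dual-band
    print(ap.handle_probe("aa:bb:cc:dd:ee:ff", "2.4ghz"))  # False: steered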

In the 5-GHz band you have many more channels, and more Wi-Fi devices these days are supporting this band. However, do understand that 5GHz generally has less range due to the higher frequency. Thus you may have to do more Wi-Fi surveying to design for good 5GHz coverage.

If 5GHz coverage isn’t up to par, consider configuring any band steering thresholds supported by the access point. Some allow you to set a minimum signal level a client must have before band steering will be used, or a number of missed probe/association requests to 5GHz from the client before connections are allowed on 2.4GHz.

4. Only use WPA2 security
Although both WPA and WPA2 security versions will work with 802.11n and 802.11ac, the data rates are limited to 54Mbps with WPA. You should select WPA2 only for the security on the private SSID(s) to allow maximum throughput when using the newer wireless standards. Any legacy clients not supporting the newer security should be upgraded.

5. Limit the number of virtual SSIDs
When creating additional SSIDs, keep in mind that each one increases the overall overhead of the wireless network. Each SSID will generate additional beacons, probes, and other management traffic, taking up more airtime, even if the SSID isn’t being used. So consider limiting the number of virtual wireless networks; perhaps one for private access and another for public access. If needed, you can further segregate private access levels via dynamic VLAN assignment using 802.1X authentication for instance.

6. Disable lower data rates
Consider disabling the lower data rates to force packets, including those for management, to be sent at higher data rates and to ensure clients connect at higher data rates. This also encourages clients to roam to better access points more quickly, rather than staying connected to an access point until the last second as they normally might.

If you still have legacy 802.11b clients on the network you should really consider upgrading/replacing them, but you could still disable the lowest data rates (1M, 2M, and 5.5Mbps) and leave the highest (11Mbps) enabled.

If you don’t have any 802.11b clients, consider disabling all data rates at and below 11Mbps.
You’ll likely still need to support 802.11g clients, but if your Wi-Fi coverage is good enough you may also be able to disable some of the lower 802.11g data rates: 12M, 18M, 24M, 36M, and 48Mbps.

7. Configure proper channel-widths
On access points that support channels larger than the legacy 20MHz, you likely want to disable the Auto 20/40 MHz selection for 2.4GHz and only use the 20-MHz channels. In this band, it’s only possible to have one non-overlapping 40-MHz-wide channel. Thus larger channels are only really useful for areas where only one access point or channel will be used, including any neighboring networks.

For 5GHz, however, you may be able to use larger channel-widths since there is more frequency spectrum. Just ensure the bonded channels will not cause co-channel interference with yours or neighboring networks.

8. Transmission times

Shortening packet sizes or transmission times can also help increase performance; check which related settings your access points support and enable them where appropriate.

9. Limit broadcast traffic
Broadcast traffic can also slow down the overall throughput of a wireless network, thus consider these two techniques to decrease broadcast traffic:

Enable wireless client isolation to prevent Wi-Fi devices from broadcasting to each other, if the user-to-user communication isn’t required. The Wi-Fi devices will still be able to communicate to wired clients, but not directly with wireless clients.
Separate the LAN and WLAN broadcast domains to cut down on the amount of broadcast traffic on the WLAN side.

10. Adjust the beacon interval
As mentioned earlier, each access point will broadcast a beacon packet for each individual SSID, which contains the basic information about the wireless network. The default interval rate at which beacon packets are sent over the airwaves is usually 100ms.

Increasing the interval will decrease the number of beacons and the airtime they take up, but that can also cause other unwanted side effects. Typically, the smaller the interval, the quicker clients will connect to and roam between access points. The bigger the interval, the longer it takes clients to connect and roam, and the longer the delay in sending and receiving data for clients that have power save mode enabled.
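You can estimate the overhead involved before changing anything. The sketch below approximates the share of airtime a single AP spends on beacons as the SSID count grows, assuming a 100ms interval, a roughly 300-byte beacon frame, and the 1Mbps lowest mandatory rate common on 2.4GHz; all three figures are typical assumptions rather than values fixed by the standard.

def beacon_airtime_pct(ssids, beacon_interval_ms=100, beacon_bytes=300,
                       beacon_rate_mbps=1.0):
    """Approximate percentage of airtime one AP spends on beacons."""
    per_beacon_ms = beacon_bytes * 8 / (beacon_rate_mbps * 1000)
    beacons_per_sec = ssids * (1000 / beacon_interval_ms)
    return beacons_per_sec * per_beacon_ms / 1000 * 100

if __name__ == "__main__":
    for n in (1, 2, 4, 8):
        print(f"{n} SSID(s): ~{beacon_airtime_pct(n):.1f}% of airtime "
              f"at a 100ms interval and a 1Mbps beacon rate")

Under those assumptions, each extra SSID costs roughly 2 to 3 percent of airtime per AP before any client traffic flows, which is why trimming unused SSIDs and tuning the interval both help.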

11. Adjust the fragmentation and RTS thresholds
Lowering the fragmentation and Request to Send (RTS) thresholds can help increase performance on wireless networks with a large number (at least over 5%) of collisions and/or interference.

If it appears you have a hidden node issue, where clients are far apart and can’t hear each other but both can hear the access point, then start by reducing the RTS threshold. Perhaps start with a threshold of around 500 bytes.

If hidden nodes don’t appear to be an issue, start by reducing the fragmentation threshold. Perhaps start with a threshold of around 1,000 bytes.

12. Additional site surveys
Keep in mind, reducing these thresholds can also slow the network if not truly needed. I recommend making slight changes and then performing testing to ensure you’re seeing an improvement.

In addition to tweaking these settings, you may want to perform additional RF site surveys if capacity issues still arise. You may find that adjusting access point transmit levels and locations can help make cell sizes smaller, enabling you to put more access points into an area. Also look into other network configurations that could affect capacity, such as ensuring an adequate DHCP range.



First Look: BlackBerry Passport

BlackBerry does an about-face, back towards its enterprise roots.

So BB 10 didn’t work out so well, did it?
Which helps explain why, with the new Passport smartphone, BlackBerry is ditching the years-late emphasis on competing for consumers and refocusing on the enterprise users on which the company was built. The Passport is uniquely focused on being a device for work first and personal stuff second – take a look at how it’s turned out.

It’s hip to be square
We’re just not used to square screens anymore, are we? I think the last one I used was on a flip-phone, circa 2005. So in a sense, BlackBerry’s not putting the Passport in great company there. Given that this screen is 4.5 inches and boasts 1440×1440 resolution, though, it’s probably OK.

Big in Canada
It’s a big device, there’s no getting around that – as the name suggests, it’s the size of a U.S. passport. That said, it’s no more outsized than other recently released phablets like the Samsung Galaxy Note 4 or the iPhone 6 Plus.

Of course it has a keyboard
It’s a new design, and it incorporates some intriguing touchpad functionality, like swiping to select auto-suggest entries. And it’s a business-focused BlackBerry device – of course it has a physical keyboard.

A voice search thingy!
One of many catch-up boxes checked by the Passport, the new voice search functionality appears to work more or less the same way as Siri/Cortana/Google Voice search, et al.

Blend
The impressive BlackBerry Blend system is an app that runs on other mobile devices, as well as on desktops and laptops, bringing files and messages from the Passport to whichever device you happen to be using at the time and segregating them into personal and enterprise spaces.

Some apps
BlackBerry bolsters its own somewhat limited app offerings with access to the Amazon App Store, which provides a larger selection of Android apps for use on the Passport.

Under the hood
The Passport’s specs bring it into line with the latest Androids and iPhones – a 2.2GHz, quad-core Snapdragon processor, 3GB of RAM, a 13MP camera with optical image stabilization and 32GB of on-board storage, with a microSD slot for expandability. It’s also got a big 3450 mAh battery, which BlackBerry was eager to talk up.

The nitty-gritty
The Passport goes on sale tomorrow from Amazon and BlackBerry directly, for $600 unlocked. It’ll be available on-contract from as-yet unspecified carriers for about $250, BlackBerry said.


 


10 Hot Internet of Things Startups

As Internet connectivity gets embedded into every aspect of our lives, investors, entrepreneurs and engineers are rushing to cash in. Here are 10 hot startups that are poised to shape the future of the Internet of Things (IoT).

As Internet connectivity gets embedded into everything from baby monitors to industrial sensors, investors, entrepreneurs and engineers are rushing to cash in. According to Gartner, Internet of Things (IoT) vendors will earn more than $309 billion by 2020. However, most of those earnings will come from services.

Gartner also estimates that by 2020, the IoT will consist of 26 billion devices. All of those devices, Cisco believes, will end up dominating the Internet by 2018. You read that right: In less time than it takes to earn a college degree (much less time these days), machines will communicate over the Internet a heck of a lot more than people do.

With the IoT space in full gold-rush mode, we evaluated more than 70 startups to find 10 that look poised to help shape the future of IoT.

Note: These 10 are listed in alphabetical order and are not ranked.
1. AdhereTech

What they do: Provide a connected pill bottle that ensures patients take their medications.

Headquarters: New York, N.Y.

CEO: Josh Stein. He received his MBA from Wharton in 2012, and, before that, he worked for a number of successful startups in New York City, including Lot18, PlaceVine and FreshDirect.

Why they’re on this list: There are plenty of companies trying to cash in on IoT by tethering it to healthcare. Let’s call it the Internet of Health (IoH). What’s impressive about AdhereTech, though, is that it focuses on a discrete problem and knocks it out of the park with its solution. It’s simple and smart.

Prescription adherence — sticking to your prescribed medication regimen — is one of the biggest problems plaguing medicine. Current levels of adherence are as low as 40 percent for some medications. Poor adherence to appropriate medication therapy has been shown to result in complications, increased healthcare costs, and even death. Medication adherence for patients with chronic conditions, such as diabetes, hypertension, hyperlipidemia, asthma and depression, is an even more significant problem, often requiring intervention.

According to AdhereTech, of all medication-related hospital admissions in the United States, 33 to 69 percent are related to poor medication adherence. The resulting costs are approximately $100 billion annually, and as many as 125,000 deaths per year in the U.S. can be attributed to medication non-adherence.

AdhereTech’s pill bottle seeks to increase adherence and reduce the costs associated with missed or haphazard medication dosage. The bottle uses sensors to detect when one pill or one liquid milliliter of medication is removed from the bottle. If a patient hasn’t taken their medication, the service reminds them via phone call or text message, as well as with on-bottle lights and chimes. The company’s software also asks patients who skip doses why they got off schedule. In addition to helping people remember, AdhereTech aggregates data anonymously to give pharmaceutical companies and medical practitioners a clearer picture of patient adherence overall.

Customers: AdhereTech has trials running with Boehringer Ingelheim for a TBD medication, The Walter Reed National Military Medical Center for type 2 diabetes medication and Weill Cornell Medical College for HIV medication.

Competitive Landscape: Vitality GlowCap is the most direct competitor for AdhereTech. Other less direct competitors include RXAnte, an analytics company that helps to identify patients most at risk for falling off their prescription regimen, and Proteus Digital Health, which puts tiny digestible sensors inside of pills to give doctors a clearer picture of patient compliance.

 


Popular Android apps fail basic security tests, putting privacy at risk

Instagram and Grindr stored images on their servers that were accessible without authentication, study finds

Instagram, Grindr, OkCupid and many other Android applications fail to take basic precautions to protect their users’ data, putting their privacy at risk, according to a new study.


The findings come from the University of New Haven’s Cyber Forensics Research and Education Group (UNHcFREG), which earlier this year found vulnerabilities in the messaging applications WhatsApp and Viber.

This time, they expanded their analysis to a broader range of Android applications, looking for weaknesses that could put data at risk of interception. The group will release one video a day this week on their YouTube channel highlighting their findings, which they say could affect upwards of 1 billion users.

“What we really find is that app developers are pretty sloppy,” said Ibrahim Baggili, UNHcFREG’s director and editor-in-chief of the Journal of Digital Forensics, Security and Law, in a phone interview.

The researchers used traffic analysis tools such as Wireshark and NetworkMiner to see what data was exchanged when certain actions were performed. That revealed how and where applications were storing and transmitting data.

Facebook’s Instagram app, for example, still had images sitting on its servers that were unencrypted and accessible without authentication. They found the same problem in applications such as OoVoo, MessageMe, Tango, Grindr, HeyWire and TextPlus when photos were sent from one user to another.

Those services were storing the content with plain “http” links, which were then forwarded to the recipients. But the problem is that if “anybody gets access to this link, it means they can get access to the image that was sent. There’s no authentication,” Baggili said.

The services should either ensure the images are quickly deleted from their servers or that only authenticated users can get access, he said.
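The kind of check the researchers describe is easy to reproduce against your own test account. The Python sketch below takes a media link captured from app traffic and reports whether it travels over plain HTTP and whether it can be fetched with no credentials at all; the URL is a placeholder, and the script assumes the third-party requests library is installed.

# Only test URLs tied to accounts you control.
import requests  # third-party: pip install requests

def check_media_link(url):
    findings = []
    if url.startswith("http://"):
        findings.append("transported in cleartext (no TLS)")
    resp = requests.get(url, timeout=10, allow_redirects=True)
    if resp.status_code == 200:
        findings.append("retrievable without any authentication")
    else:
        findings.append(f"server returned HTTP {resp.status_code}")
    return findings

if __name__ == "__main__":
    # Placeholder link: substitute one observed in your own capture.
    for finding in check_media_link("http://example.com/shared/photo.jpg"):
        print("-", finding)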

Many applications also didn’t encrypt chat logs on the device, including OoVoo, Kik, Nimbuzz and MeetMe. That poses a risk if someone loses their device, Baggili said.

“Anyone who gets access to your phone can dump the backup and see all the chat messages that were sent back and forth,” he said. Other applications didn’t encrypt the chat logs on the server, he added.

Another significant finding is how many of the applications either don’t use SSL/TLS (Secure Sockets Layer/Transport Layer Security), which uses digital certificates to encrypt data traffic, or use it insecurely, Baggili said.

Hackers can intercept unencrypted traffic over Wi-Fi if the victim is in a public place, a so-called man-in-the-middle attack. SSL/TLS is considered a basic security precaution, even though in some circumstances it can be broken.

OkCupid’s application, used by about 3 million people, does not encrypt chats over SSL, Baggili said. Using a traffic sniffer, the researchers could see text that was sent as well as who it was sent to, according to one of the team’s demonstration videos.

Baggili said his team has contacted developers of the applications they’ve studied, but in many cases they haven’t been able to easily reach them. The team wrote to support-related email addresses but often didn’t receive responses, he said.


Are breaches inevitable?

Security managers have to do a lot more to stay a step ahead of determined hackers

Is there a reason that data breaches have been happening at a rapid clip lately? And is there more that we, as security managers, should be doing to make sure that our own companies don’t join the ranks of the breached?

Home Depot is the latest company to make headlines for a potentially big data breach, and it just might be the biggest one yet. The current record holder is Target, and we’ve more recently seen the company that owns grocery store chains Supervalu, Albertsons, Acme Markets, Jewel-Osco and Shaw’s compromised by hackers. J.P. Morgan and four other major banks appear to have fallen victim to security breaches. UPS stores were also hit by hackers, and several hundred Norwegian companies were compromised. These victims have joined the ranks of Neiman-Marcus, Michael’s, Sally Beauty, P.F. Chang’s and Goodwill. What’s going on?

The motivation for attacks like these is usually financial. The attackers are stealing credit card and debit card numbers, along with personal information, which they then sell in underground markets. We don’t yet know whether this is the case with the banks that were hit; those attacks may have been politically motivated, or we may learn that fraudulent transactions were used to steal money. In any case, there seems to be a big jump in electronic data theft for profit. But the stolen information is only valuable for a few days, and its value diminishes rapidly by the hour. Some security researchers are saying that this loss of value is motivating today’s data thieves to move quickly. Another factor may be Microsoft’s termination of support for Windows XP, which could be prompting hackers to go for one last all-out heist to grab what they can while many systems are still vulnerable. Perhaps, knowing that all the vulnerabilities of Windows XP would soon vanish, our thieves had a fire sale.

But I suspect there is more to the story. Most big businesses use standard security procedures and technologies that have been around for years, if not decades. Many of these defenses have not kept up with current threats. Take antivirus, for example. Signature-based malware detection has long been ineffective against modern malware, yet most companies continue to rely on it as a key defense. We know from the details of some of the retail breaches that those who have implemented advanced heuristic malware detection have ignored the alarms set off by the point-of-sale malware (for reasons I cannot fathom). Patching will always be a game of catch-up, with the attackers having the upper hand. And password-based authentication will evidently be with us forever, much as I might rail against it. Attackers use all of these to get through their victims’ defenses.

The simple fact of the matter is that attackers will always have several vulnerabilities to choose from at any potential victim they want to target. And security managers, even those who are really good at their jobs, will never be able to close every single hole. And it only takes one.

So if traditional information security practices are not enough, what else can we do? I’ve been giving that question a lot of thought lately, and I think part of the answer is to evolve our security technologies, just as the attackers evolve their techniques. That heuristic behavior-based malware detection technology I keep talking about is pretty cool, but is it still cutting-edge? It’s been around for three or four years. Is there anything newer out there? And how can we choose the right technologies that are going to be effective against emerging threats but still stand the test of time so their manufacturers will be around three years from now?

There are some new products starting to go to market, and venture capitalists are funding a lot of new security technology. I think we should all keep a close eye on them. I’m beginning to believe that in the cutthroat rivalry between attacker and defender, the best technology wins. The only way we can keep one step ahead of today’s hackers is to take two steps forward and advance our defensive capabilities to the point where we can reliably repel, or at least detect, today’s data thieves.


 


Chromebook Pixel revisited: 18 months with Google’s luxury laptop

Is it crazy to pay $1300 for a Chromebook? Some reflections after a year and a half of living with Google’s luxurious Pixel.

When you stop and think about it, it’s kind of astonishing how far Chromebooks have come.

It was only last February, after all, that Google’s Chromebook Pixel came crashing into our lives and made us realize how good an experience Chrome OS could provide.

At the time, the Pixel was light-years ahead of any other Chromebook in almost every possible way: From build quality to display and performance, the system was just in a league of its own. And its price reflected that status: The Pixel sold for a cool $1300, or $1450 if you wanted a higher-storage model with built-in LTE support.

Today, the Pixel remains the sole high-end device in the Chromebook world (and its price remains just as high). But the rest of the Chrome OS universe has evolved — and the gap between the Pixel and the next notch down isn’t quite as extreme as it used to be.

So how has the Pixel held up 18 months after its release, and does it still justify the lofty price? I’ve owned and used the Pixel since last spring and have evaluated almost every other Chromebook introduced since its debut.

Here are some scattered thoughts based on my experiences:

1. Hardware and design
As I said when I revisited the device a year ago, the Chromebook Pixel is hands-down the nicest computer I’ve ever used. The laptop is as luxurious as it gets, with a gorgeous design, premium materials, and top-notch build quality that screams “high-end” from edge to edge.

We’re finally starting to see some lower-end Chromebooks creep up in the realms of design and build quality — namely the original HP Chromebook 11 (though it’s simply too slow to recommend for most people) and the ThinkPad Yoga 11e Chromebook (which is sturdy and well-built but not exactly sleek) — and that’s a very good thing. In fact, that’s a large part of what Google was ultimately trying to accomplish by creating the Pixel in the first place. Think about it.

While those devices may be a step up from the status quo, though, they’re not even close to the standard of premium quality the Pixel delivers. When it comes to hardware, the Pixel is first-class through and through while other products are varying levels of economy.

The Pixel’s backlit keyboard and etched-glass trackpad also remain unmatched in their premium nature. Typing and navigating is a completely different experience on this laptop than on any other Chromebook (and, for that matter, on almost any non-Chrome-OS laptop, too).

The same goes for the Pixel’s spectacular speakers. Other Chromebooks are okay, but none is anywhere near this outstanding.

2. Display
The display — man, oh man, the display. The Pixel’s 12.85-in. 2560-x-1700 IPS screen is like candy for your eyes. The vast majority of Chromebook screens (yes, even those that offer 1080p resolution) are still using junky TN panels and consequently look pretty awful. The two exceptions are the same systems mentioned above — the HP 11 and the ThinkPad Yoga 11e — but while those devices’ displays reign superior in the sub-$500 category, their low resolution is no match for the Pixel’s crystal-clear image quality.

I continue to appreciate the Pixel’s touchscreen capability to this day, too: While I certainly don’t put my fingers on the screen all the time, it’s really nice to have the ability to reach up and tap, scroll, or pinch when I feel the urge. For as much time as I spend using smartphones and tablets, it seems completely natural to be able to do that with a laptop as well. (Admit it: You’ve tried to touch a non-touchscreen laptop at some point. We all have.)

I will say this, though: The time I’ve spent recently with the Yoga 11e has definitely gotten me keen on the idea of a Chromebook being able to convert into a tablet-like setup. After using that device, I sometimes find myself wishing the Pixel’s display could tilt back further and provide that sort of slate-style experience.

3. Stamina and performance
At about five hours per charge, the Pixel’s battery life is passable but not exceptional — especially compared to the eight to 10 hours we’re seeing on some systems these days. As I’ve mused before, stamina is the Pixel’s Achilles’ heel.

Performance is where things get particularly interesting: When the Pixel first came out, its horsepower was unheard of for a Chrome OS device. I could actually use the system in my typical power-user way, with tons of windows and tabs running at the same time and no slowdowns or multitasking misery. Compared to the sluggish Chrome OS systems we’d seen up to that point, it felt like a full-fledged miracle.

The Pixel’s performance is no less impressive today, but what’s changed is that other Chrome OS systems have actually come close to catching up. These days, you can get solid performance in a Chromebook for around $200 with the various Haswell-based systems. The newer Core i3 devices give you a little more punch for around $300. Neither quite reaches the Pixel’s level of snappiness and speed, but in practical terms, they’re not too far behind.

So for most folks, performance alone is no longer a reason to own the Pixel. It’s an important part of the Pixel, for sure, but if that’s the only thing you’re interested in, you’d do far better to save yourself the cash and get a lower-end Chromebook with decent internals.

To Pixel or not to Pixel?
What is a reason to own the Pixel, then? Simple: to enjoy a top-of-the-line Chrome OS experience with all the amenities you could ask for. The device’s hardware quality and design, keyboard and trackpad, speakers, and display add up to make a wonderful overall user experience no other Chromebook can match.

As for whether it’s worth the price, well, that’s a question only you can answer. Is a high-end car worth the premium over a reliable but less luxurious sedan? For someone like me, probably not. But for someone who’s passionate about cars, spends a lot of time in a vehicle and appreciates the elevated quality, it just might be.

The same concept applies here. The Pixel remains a fantastic luxury option for users sold on the Chrome OS concept — people like me who rely heavily on cloud storage and spend most of their time using Web-centric apps and services.

As with any luxury item, the level of quality the Pixel provides certainly isn’t something anyone needs, but its premium nature is something a lot of folks will enjoy — and that’s as true today as it was last year.



 
