Posts tagged privacy
Patrick Heim is the (relatively) new head of Trust & Security at Dropbox. Formerly Chief Trust Officer at Salesforce, he has also served as CISO at Kaiser Permanente and McKesson Corporation, and has worked in the information security field for more than 20 years. Here, Heim discusses security and privacy around consumerized cloud-based tools, the kind employees select for business use.
What security and privacy concerns do you still hear from those doing due diligence prior to placing their trust in the cloud?
A lot of them are just trying to figure out what to do with the cloud in general. Companies right now have really three choices, especially with respect to the consumer cloud (i.e., cloud tools like Dropbox). One of them is to kind of ignore it, which is always a horrible strategy because when they look at it, they see that their users are adopting it en masse. Strategy two is to build IT walls up higher and pretend it’s not happening. Strategy three is adoption, which is to identify what people like to use and convert it from the uncontrolled mass of consumerized applications into something security feels comfortable with, something that is compliant with the company’s rules with a degree of manageability and cost control.
Are there one or two security concerns you can name? Because if the cloud was always entirely safe in and of itself, the enterprise wouldn’t have these concerns.
If you look at the track record of cloud computing, it’s significantly better from a security perspective than the track record of keeping stuff on premise. The big challenge organizations have, when you look at some of these breaches, is they’re not able to scale up to secure the really complicated in-house infrastructures they have.
We’re [as a cloud company] able to attract some of the best and brightest talent in the world around security because we’re able to get folks that quite frankly want to solve really big problems on a massive scale. Some of these opportunities aren’t available if they’re not in a cloud company.
How do you suggest that enterprises take that third approach, which is to adopt consumerized cloud applications?
The first step is through discovery. Understand how employees use cloud computing. There are a number of tools and vendors that help with that process. With that, IT has to be willing to rethink their role. Employees should really be the scouts for innovation. They’re at the forefront of adopting new apps and cloud technology. The role of IT will shift to custodian or curator of those technologies. IT will provide integration services to make sure that there is a reasonable architecture for piecing these technologies together to add value and to provide security and governance to make sure those kinds of cloud services align with the overall risk objectives of the organization.
“If you look at the track record of cloud computing, it’s significantly better from a security perspective than the track record of keeping stuff on premise.”
Patrick Heim, Head of Trust & Security, Dropbox
How can the enterprise use the cloud to boost security and minimize company overhead?
If you think about boosting security, there is this competition for talent and the lack of resources for the enterprise to do it in-house. If you look at the net risk concept, where you evaluate your security and risk posture prior to and after you invest in the cloud, and you understand what changes, one of those changes is: what do I not have to manage anymore? If you look at the complexity of the tech stack, there are security accountabilities, and the enterprise shifts the vast majority of security accountabilities on the infrastructure side to the cloud computing provider; that leaves your existing resources free to perform more value-added functions.
What are the security concerns in cloud collaboration scenarios?
When I think about collaboration especially outside of the boundaries of an individual organization, there is always the question of how do you maintain reasonable control over that information once it’s in the hands of somebody else? There is that underlying tension that the recipient of that shared information may not continue to protect it.
In response to that, there is ERM (enterprise rights management), which provides document-level control that's cryptographically enforced. We're looking at ways of minimizing the usability tradeoff that can come with adding some of these kinds of security advancements. We're working with vendors in this space to identify what we have to do from an interface and API perspective to integrate this so that the impact on the end user of adopting these advanced encryption capabilities is absolutely minimized, meaning that when you encrypt a document using one of these technologies you can still, for example, preview it and search for it.
How do enterprises need to power their security solutions in the current IT landscape?
When they look at security solutions, I think more and more they have to think beyond the old model of the network perimeter. When they send data to the cloud, they have to adopt a security strategy that also involves cloud security, where the cloud actually provides the security as one of its functions.
There are a number of cloud-access security brokers, and the smart ones aren’t necessarily sitting on the network and monitoring, but the smart ones are interacting, using access and APIs, and looking at the data people are placing into cloud environments, analyzing them for policy violations, and providing for archiving and backup and similar capabilities.
The security tools companies focus on should be oriented toward how these capabilities will scale across multiple cloud vendors, and toward getting away from inserting tools directly into the network in favor of API integration with those cloud vendors.
Instagram and Grindr stored images on their servers that were accessible without authentication, study finds
Instagram, Grindr, OkCupid and many other Android applications fail to take basic precautions to protect their users' data, putting their privacy at risk, according to a new study.
The findings come from the University of New Haven's Cyber Forensics Research and Education Group (UNHcFREG), which earlier this year found vulnerabilities in the messaging applications WhatsApp and Viber.
This time, they expanded their analysis to a broader range of Android applications, looking for weaknesses that could put data at risk of interception. The group will release one video a day this week on their YouTube channel highlighting their findings, which they say could affect upwards of 1 billion users.
“What we really find is that app developers are pretty sloppy,” said Ibrahim Baggili, UNHcFREG’s director and editor-in-chief of the Journal of Digital Forensics, Security and Law, in a phone interview.
The researchers used traffic analysis tools such as Wireshark and NetworkMiner to see what data was exchanged when certain actions were performed. That revealed how and where applications were storing and transmitting data.
Facebook’s Instagram app, for example, still had images sitting on its servers that were unencrypted and accessible without authentication. They found the same problem in applications such as OoVoo, MessageMe, Tango, Grindr, HeyWire and TextPlus when photos were sent from one user to another.
Those services were storing the content with plain “http” links, which were then forwarded to the recipients. But the problem is that if “anybody gets access to this link, it means they can get access to the image that was sent. There’s no authentication,” Baggili said.
The services should either ensure the images are quickly deleted from their servers or that only authenticated users can get access, he said.
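The flaw the researchers describe can be sketched as a simple link audit: a shared-content URL that uses plain HTTP, or that carries no per-recipient token, is fetchable by anyone who obtains the link. The URL formats and query-parameter names below are hypothetical illustrations, not the actual link scheme of any app named in the study.

```python
# A minimal sketch of auditing share links for the two problems the study
# describes: unencrypted transport and missing authentication.
from urllib.parse import urlparse, parse_qs

def audit_share_link(url: str) -> list[str]:
    """Return a list of problems found with a shared-content link."""
    problems = []
    parsed = urlparse(url)
    if parsed.scheme == "http":
        problems.append("unencrypted transport (plain http)")
    # A link embedding no token, signature, or expiry is open to anyone
    # who sees it forwarded, logged, or sniffed.
    query = parse_qs(parsed.query)
    if not any(k in query for k in ("token", "signature", "expires")):
        problems.append("no per-recipient authentication or expiry")
    return problems

print(audit_share_link("http://cdn.example.com/img/12345.jpg"))
print(audit_share_link("https://cdn.example.com/img/12345.jpg?token=abc"))
```

A link that passes both checks can still leak if the token never expires, which is why the researchers also suggest deleting images promptly.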
Many applications also didn’t encrypt chat logs on the device, including OoVoo, Kik, Nimbuzz and MeetMe. That poses a risk if someone loses their device, Baggili said.
“Anyone who gets access to your phone can dump the backup and see all the chat messages that were sent back and forth,” he said. Other applications didn’t encrypt the chat logs on the server, he added.
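The fix for the at-rest problem is to encrypt the log before it touches disk, so a dumped backup yields ciphertext rather than readable messages. A minimal sketch using the widely used `cryptography` package (an assumption for illustration; the study does not say which library, if any, the apps should use):

```python
# Sketch: encrypting a chat message before it is written to a local log file.
# In a real app the key would be derived from a user secret or held in the
# platform keystore, not generated per run.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
f = Fernet(key)

message = b"meet at 6?"
stored = f.encrypt(message)        # what actually lands in the chat-log file

assert stored != message           # backup dumps see ciphertext only
assert f.decrypt(stored) == message
```

The same pattern applies server-side: logs encrypted under keys the storage layer does not hold are useless to anyone who merely copies the files.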
Another significant finding is how many of the applications either fail to use SSL/TLS (Secure Sockets Layer/Transport Layer Security), which uses digital certificates to encrypt data traffic, or use it insecurely, Baggili said.
Hackers can intercept unencrypted traffic over Wi-Fi if the victim is in a public place, a so-called man-in-the-middle attack. SSL/TLS is considered a basic security precaution, even though in some circumstances it can be broken.
OkCupid’s application, used by about 3 million people, does not encrypt chats over SSL, Baggili said. Using a traffic sniffer, the researchers could see text that was sent as well as who it was sent to, according to one of the team’s demonstration videos.
Baggili said his team has contacted developers of the applications they’ve studied, but in many cases they haven’t been able to easily reach them. The team wrote to support-related email addresses but often didn’t receive responses, he said.
Twitter has reservations about granting the wishes of kin
Twitter said late Tuesday it will remove images and videos of deceased people upon the request of family members, but it put conditions on the policy.
The microblogging service made the announcement a week after the daughter of the late comedian Robin Williams said she would quit Twitter after receiving gruesome images of him from online trolls.
The move also comes as Twitter tries to delete images and video depicting the death of U.S. photojournalist James Foley, who was apparently killed by the militant group Islamic State, better known as ISIS.
“In order to respect the wishes of loved ones, Twitter will remove imagery of deceased individuals in certain circumstances,” Twitter spokesman Nu Wexler said in a message about the update to its policies.
“When reviewing such media removal requests, Twitter considers public interest factors such as the newsworthiness of the content and may not be able to honor every request.”
Twitter, which boasts 271 million active monthly users, posted details of the policy, which requires the estate or a family member to provide documents such as a copy of a death certificate and government-issued identification.
Family members or other authorized people can request the removal of photos or video of deceased people on Twitter “from when critical injury occurs to the moments before or after death,” it said.
Twitter still refuses to provide account access to anyone, even if they are related to the person who has died.
Women have been the target of threats and abuse on Twitter, and critics have urged the company to change its Twitter Rules. A year ago, it introduced an "in-tweet" abuse button to report violations.
But some have complained that it’s still impossible to stop determined trolls.
“Ive endured this for two years, and so have countless others,” Twitter user Imani Gandy recently wrote about the racist invective she suffers at the hands of one particular troll.
“He creates hundreds of accounts to tweet his inane ramblings to my friends, online acquaintances and even my work. He latches on to any tweet of mine and harasses anyone that I interact with.”
She criticized Twitter for being slow to act and having no solutions beyond suspending accounts, adding she and other users are trying to get Twitter CEO Dick Costolo to strengthen the service’s abuse policies.
“Your privacy is very important to us,” Microsoft is fond of saying. But if a former Microsoft Privacy Chief no longer trusts Microsoft, should you?
Caspar Bowden, formerly Microsoft's chief privacy adviser, made the statements during a conference about privacy and surveillance held in Lausanne, Switzerland, and reported on by the Guardian. At one point, Bowden's presentation slide showed an "NSA surveillance octopus" to help illustrate the evils of surveillance in the U.S. cloud, but this was not a PowerPoint presentation: he was using LibreOffice 3.6 because he no longer trusts Microsoft software at all. In fact, he said he only uses open source software so he can examine the underlying code.
An attendee pointed out that free software has been subverted too, but Bowden called open source software "the least worst" and the best option if you are trying to avoid surveillance. Another privacy tip: the privacy pro does not carry a personal tracker either, meaning Bowden gave up carrying a mobile phone two years ago.
No privacy in the cloud: zero, zip, none
According to Bowden, “In about 2009 the whole industry turned on a dime and turned to cloud computing – massively parallel computation sold as a commodity at a distance.” He said, “Cloud computing leaves you no privacy protection.” However, “cloud computing is too useful to be disinvented. Unlike Echelon, though, which was only interception, potentially all EU data is at risk. FISA (Foreign Intelligence Surveillance Act) can grab data after it’s stored, and decrypted.”
Bowden authored a paper about “the U.S. National Security Agency (NSA) surveillance programs (PRISM) and Foreign Intelligence Surveillance Act (FISA) activities and their impact on EU citizens’ fundamental rights.” While it mostly dissects how “surveillance activities by the U.S. authorities are conducted without taking into account the rights of non-U.S. citizens and residents,” it also looks at some “serious limitations to the Fourth Amendment for U.S. citizens.”
“The thoughts prompted in the mind of the public by the revelations of Edward Snowden cannot be unthought. We are already living in a different society in consequence,” Bowden wrote [pdf]. He again pointed out the dangers to privacy in cloud computing. “The scope of FAA creates a power of mass-surveillance specifically targeted at the data of non-U.S. persons located outside the U.S., including data processed by ‘Cloud computing’, which eludes EU Data Protection regulation.”
Data can only be processed whilst decrypted, and thus any Cloud processor can be secretly ordered under FISA 702 to hand over a key, or the information itself in its decrypted state. Encryption is futile to defend against NSA accessing data processed by US Clouds (but still useful against external adversaries such as criminal hackers). Using the Cloud as a remote disk-drive does not provide the competitiveness and scalability benefits of Cloud as a computation engine. There is no technical solution to the problem.
He concluded that there is an “absence of any cognizable privacy rights for ‘non-U.S. persons’ under FISA.”
Microsoft’s strategy: Grind down people’s privacy expectations
It is Bowden's former role overseeing privacy policies at Microsoft that makes his point of view important. This man, a privacy expert, no longer trusts Microsoft as a company, nor its software. Yet Microsoft (like nearly all other companies) loves to publicize the line, "Your privacy is very important to us." But does Microsoft really care about your privacy?
During an interview with Bowden, the London School of Economics and Political Science (LSE) asked, “Do you think the general public understands how much privacy they have in the digital world?”
Bowden replied, “There’s been a grinding down of people’s privacy expectations in a systematic way as part of the corporate strategy, which I saw in Microsoft.”
Regarding the Guardian’s report that Bowden does not trust the Redmond giant, Microsoft sent this PR-damage control statement to CNET:
“We believe greater transparency on the part of governments – including the U.S. government – would help the community understand the facts and better debate these important issues. That’s why we’ve taken a number of steps to try and secure permission, including filing legal action with the U.S. government.”
About that transparency…LSE asked Bowden, “What’s your view on the transparency policies of tech-companies?”
Bowden replied, “It is purely public relations strategy – corporate propaganda aimed at the public sphere – and due to the existence of secret mass-surveillance laws will never be truly transparent.”
From anywhere on the planet, a hacker could open and close the lid to your smart toilet, turn your child’s smart toy into a covert surveillance device, or unlock the doors of your smart home.
Disregard for a moment why you would ever want to connect a toilet to the Internet to "record a toilet diary," and instead ask why a person would hack a smart toilet. Because it's there, it's vulnerable, and it helps highlight the new security risks of the web-connected smart devices that make up the Internet of Things.
LIXIL Satis Bluetooth smart toilet
Since the Japanese-manufactured LIXIL Satis smart toilet is extremely expensive, about $6,000, and not readily available in the U.S., researchers at the security firm Trustwave instead reverse-engineered the Android app for the Bluetooth-controlled Satis. The app has a hard-coded PIN of "0000," according to the security advisory, and:
any person using the “My Satis” application can control any Satis toilet. An attacker could simply download the “My Satis” application and use it to cause the toilet to repeatedly flush, raising the water usage and therefore utility cost to its owner. Attackers could cause the unit to unexpectedly open/close the lid, activate bidet or air-dry functions, causing discomfort or distress to user.
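The advisory's point generalizes: even if "0000" were not baked into every unit, a four-digit PIN space has only 10,000 candidates, trivially searchable by machine. A toy illustration (the oracle function here is hypothetical; it stands in for whatever pairing check a Bluetooth device performs):

```python
# Sketch: exhaustively searching a 4-digit PIN space. With a hard-coded
# default of "0000", the very first candidate succeeds.
def find_pin(try_pin) -> str:
    """Brute-force a 4-digit PIN given an oracle that says if a guess works."""
    for candidate in (f"{n:04d}" for n in range(10_000)):
        if try_pin(candidate):
            return candidate
    raise ValueError("no PIN matched")

HARDCODED_PIN = "0000"   # per the Trustwave advisory for the "My Satis" app
print(find_pin(lambda guess: guess == HARDCODED_PIN))  # "0000", first candidate tried
```

Proper pairing schemes defeat this by rate-limiting attempts or requiring per-device, user-settable secrets, neither of which a hard-coded PIN provides.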
Although that hack is more of a prank, you might take the security risk more seriously if an attacker could secretly access the webcam in your child’s toy, capture video and then upload it to a remote server.
Violet’s Karotz Smart Rabbit
The toy in question is a Karotz plastic bunny that "can connect to the Internet (to download weather forecasts, read its owner's email, etc.)," stated the bunny security advisory. It "can be controlled from a smartphone app and is outfitted with a video camera, microphone, RFID chip and speakers." In fact, an attacker could "take control of it from a computer and remotely watch live video, turning it into an unwitting surveillance camera."
Hacking smart houses
At the Black Hat Home Invasion v2.0 presentation, Trustwave researchers covered serious topics as well, such as how someone other than the home or business owner can unlock doors from anywhere in the world. As an example, Trustwave security researcher Dan Crowley took a random four-digit number from a conference attendee and remotely changed a lock's PIN to it. The researchers also discussed poor security discovered when testing a Belkin WeMo Switch, Linksys Media Adapter, Radio Thermostat, and Sonos Bridge.
Although one of the benefits of having a smart home is that you remotely control it via a smartphone, tablet or PC, that convenience comes with a plethora of personal security and privacy risks. During the Black Hat session [pdf slides], the researchers showed how the home automation gateways Mi Casa Verde Veralite and Insteon Hub have “vulnerabilities that, if not fixed, could result in covert audio and video surveillance, physical access to buildings or even personal harm.”
“The big risk is that a compromise could give you access to hundreds of thousands of homes all at once,” Crowley stated. “I could see that as an attack someone could actually use to launch a crime spree.” He added that if someone broke into your house, but there was no sign of forced entry, then how would you get your insurance company to pay?
Granted, the toilet hack is invasive but more of a prank. Still, an attacker could seriously mess with a person's mind simply by running a web search for smart homes with Insteon and then remotely taking control of the lights as if the house were "haunted."
The potential for hacking smart homes and the Internet of Things—from exploiting network connected toys, thermostats, wireless speakers, to automated door locks—will only continue to grow as more people adopt these technologies. There are plenty of privacy risks in addition to the security vulnerability issues as their white paper [pdf] states:
There are also privacy concerns in the compromise of these devices. Compromise of a device with a built-in microphone or camera comes with the ability to perform audio and video surveillance. Compromise of a motion sensor could be used to determine when there are people at a physical location. Reading the status of door locks and alarm systems as could be achieved by compromising the VeraLite could be used to determine when the building in which it resides is occupied.
Legally, devices that store data on third party servers also enjoy a lower level of privacy protections due to the 3rd Party Doctrine. Many of the devices in this paper fall into this category.