16 – 27 October 2016

Biometrics

Facebook Class-Action Asks Court to Decide on Facial-Recognition Tool

Here’s what we know: Every time you tag a friend in a Facebook photo, Facebook stores their image in its database. And here’s what we’re about to find out: whether that’s an illegal violation of users’ privacy. This week, a class-action lawsuit alleging that the world’s largest social network is violating its users’ privacy will enter phase two. Specifically, a San Francisco court will assess whether Facebook is breaking the law by using its facial-recognition tool to identify faces in photographs uploaded by users, or by collecting those photographs into a central database. Facebook’s facial-recognition tool has been in use since 2010, and the company claims it is now 97.35% accurate, which is great news if you’re trying to tag overcrowded party pictures, but less so if you’re worried about privacy. Plaintiffs in the case are concerned on a number of fronts: Facebook could be selling identifying information to retailers or other third parties. More importantly, they worry that biometric data is just as susceptible to theft, hacking, and the long and invasive arm of law enforcement as other types of data. “Unique and unchangeable biometric identifiers are proprietary to individuals,” the complaint reads (paywall). It also alleges that Facebook failed to acquire consent before collecting “faceprints.” The class-action suit hinges on a unique Illinois law passed in 2008, the Biometric Information Privacy Act. It states that if companies fail to get consent from users before storing biometric information, they can be liable for $1,000 in damages per negligent violation, rising to $5,000 per intentional or reckless violation. That’s per violation. For a company with 7 million users in Illinois, that could mean liability as high as $35 billion. There is some precedent here. In April, photo-sharing website Shutterfly reached a settlement over its facial-recognition technology.
Snapchat faced a similar suit over the summer, but has denied storing any biometric information (the company says it uses “object recognition,” not facial recognition). Alphabet’s cloud-based Google Photos service also uses similar technology, and Google is facing privacy lawsuits of its own. [Quartz]
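The headline figure is simple arithmetic; a quick sketch of the maximum statutory exposure (assuming, for illustration, that BIPA's top per-violation damages figure of $5,000 applies to every one of the 7 million Illinois users):

```python
illinois_users = 7_000_000
max_per_violation = 5_000  # BIPA's highest per-violation damages figure

exposure = illinois_users * max_per_violation
print(f"${exposure:,}")  # $35,000,000,000
```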

US – Researchers Find Flaws in Police Facial Recognition Technology

Nearly half of all American adults have been entered into searchable law enforcement facial recognition databases, according to a recent report from Georgetown University’s law school. But there are many problems with the accuracy of the technology that could affect a large number of innocent people. Police can run any photo through a facial recognition program to see if it matches any of the driver’s license photos in those databases. It’s kind of like a very large digital version of a lineup, says Jonathan Frankle, a computer scientist and one of the authors of the report, titled “The Perpetual Line-Up.” “Instead of having a lineup of five people who’ve been brought in off the street to do this, the lineup is you. You’re in that lineup all the time.” Frankle says the photos that police may have of a suspect aren’t always that good — they’re often from a security camera. “Security cameras tend to be mounted on the ceiling,” he says. “They get great views of the top of your head, not very great views of your face. And you can now imagine why this would be a very difficult task, why it’s hard to get an accurate read on anybody’s face and match them with their driver’s license photo.” Frankle says the study also found evidence that facial recognition software didn’t work as well with people who have dark skin. There’s still limited research on why this is. Some critics say the developers aren’t testing the software against a diverse enough group of faces; others point to lighting.

Findings

  • Law enforcement face recognition networks include over 117 million American adults — and may soon include many more.
  • By running face recognition searches against 16 states’ driver’s license photo databases, the FBI has built a biometric network that primarily includes law-abiding Americans.
  • Major police departments are exploring real-time face recognition on live surveillance camera video.
  • Law enforcement face recognition is unregulated.
  • Police face recognition could be used to stifle free speech.
  • Most law enforcement agencies do little to ensure that their systems are accurate.
  • Without specialized training, human users make the wrong decision about a match half the time.
  • Police face recognition will disproportionately affect African-Americans.

Recommendations

  • Law enforcement face recognition searches should be conditioned on an individualized suspicion of criminal conduct.
  • Mug shot databases used for face recognition should exclude people who were found innocent or who had charges against them dropped or dismissed.
  • Searches of driver’s license and ID photos should occur only under a court order issued upon a showing of probable cause.
  • Limit searches of license photos — and after-the-fact investigative searches — to investigations of serious offenses.
  • Real-time video surveillance should only occur in life-threatening public emergencies under a court order backed by probable cause.
  • Use of face recognition to track people on the basis of their race, ethnicity, religious or political views should be prohibited.
  • The FBI should test its face recognition system for accuracy and racially biased error rates, and make the results public.

[Study: Police Use of Facial Recognition Goes Unregulated | NPR.org | The Perpetual Line-Up | Facial recognition technology is taking over US, says privacy group | Study Urges Tougher Oversight for Police Use of Facial Recognition | Half of US adults are profiled in police facial recognition databases | Maryland’s use of facial recognition software questioned by researchers, civil liberties advocates]

Big Data

CA – RCMP’s Counterterrorism Centre Facilitates Information Sharing

The RCMP have created a permanent place for counterterrorism detectives to work shoulder-to-shoulder – and database to database – with federal border guards, immigration officials and spy-agency analysts. The national-security joint-operations centre (NSJOC) in Ottawa is a “real-time and rapid information-sharing” crossroads where federal agents can efficiently swap files, according to recently released records. However, critics fear it will go places no watchdog can follow. The counterterrorism centre was largely unknown until RCMP Commissioner Bob Paulson made a brief reference to it in Parliament earlier this year. The Globe and Mail has acquired the centre’s terms of reference under Access to Information laws. The federal agencies constantly collect data, but under different mandates than that of the Mounties. Federal agents typically shield their files from each other unless they have a compelling reason to share. In some cases, warrants are needed for information handovers. Yet federal agents want to knock down institutional walls in times of crisis, and the RCMP-led centre seeks to keep the bureaucratic barriers to information-sharing low. The centre’s terms of reference state that criminal charges are just one approach to fighting terrorism. Pooling knowledge among federal agents makes other interventions possible – such as revoking suspects’ passports, adding people to no-fly lists, or even warning the family and friends of radicalized young people “of the risks associated with violent extremist activity.” Nothing in the terms of reference suggests the agencies got new powers to share information. Federal watchdog agencies have complained for years that they cannot track what information agencies share in the name of national security. Even as federal-security agencies increasingly swap files, none of their review bodies are legally empowered to see what is happening as it happens, or within more than one agency.
“A body like this makes the case for why we need more robust real-time oversight,” says Carmen Cheung, a professor at the University of Toronto’s Munk School of Global Affairs. “It looks like they are all co-located in essentially one room, and that room has direct access to all the databases of all the respective agencies, which is amazing.” A decade ago, a judicial inquiry recommended Canada create a watchdog to track all security agencies at once, but the concept never got off the ground. The finding followed a Canadian counterterrorism investigation in which federal agents swapped information carelessly and several Canadians were wrongly jailed as presumed terrorists in Middle East prisons. [The Globe and Mail]

US – 75% of US Citizens Back Use of Data Fusion Tools: TransUnion

A TransUnion study found 75 percent of Americans support the use of data fusion tools in law enforcement investigations. Of the 1,002 respondents, 81% said law enforcement “has an obligation” to use publicly available information to solve crimes, including names, addresses, phone numbers and bankruptcy records. Support hinged on the fact that non-public data, such as phone records, internet search histories, and banking statements, is not included in the data gathering, with 59% saying they support data fusion tools because the tools do not use non-public data. “Law enforcement agencies continue to expand their use of data fusion tools. The value of linking hundreds of millions of records in a short period of time to find cyber evidence on criminals is critical in cases which need timely outcomes — such as solving a murder or finding an abducted child,” said TransUnion’s Jonathan McDonald. [MarketWire]

WW – Google, OpenAI Create Algorithms to Use Personal Data, Protect Privacy

OpenAI and Google have created a method by which artificial intelligence can learn from and use personal data without having direct access to the information. The two companies created a “student” algorithm, one designed to mimic decisions learned from “teacher” algorithms through millions of simulated decisions. Numerous teacher algorithms send information to a student algorithm, allowing the student to process the information, but making it impossible for the information to be deciphered if the student were reverse-engineered. “All the research in this space explores a tension between privacy and utility, as more privacy means utility goes down,” said machine learning security researcher Thomas Ristenpart. Meanwhile, artificial intelligence and robotics were a hot topic last week at the 38th International Conference of Data Protection and Privacy Commissioners. [Quartz]
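The teacher/student scheme described above can be sketched in miniature. This is an illustrative reconstruction, not either company's actual code: each "teacher" votes for a label, and Laplace noise added to the vote counts hides the contribution of any single teacher (and therefore of any single person's data) before the "student" ever sees a label.

```python
import numpy as np

def noisy_aggregate(teacher_votes, num_classes, epsilon, rng):
    """Pick a label from noise-perturbed teacher vote counts.

    teacher_votes: per-teacher predicted labels for one input.
    epsilon: privacy knob; smaller means more noise, more privacy.
    """
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    # Laplace noise masks whether any single teacher changed its vote.
    counts += rng.laplace(0.0, 1.0 / epsilon, size=num_classes)
    return int(np.argmax(counts))

rng = np.random.default_rng(0)
# Simulated ensemble: 50 teachers, with a strong consensus for class 2.
votes = np.array([2] * 40 + [0] * 6 + [1] * 4)
label = noisy_aggregate(votes, num_classes=3, epsilon=0.5, rng=rng)
```

The student model would then be trained only on such noisy labels, which is why reverse-engineering the student reveals little about the teachers' underlying training data.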

WW – AI’s Effect on Insurance Industry Could Lead to Privacy Issues

Advances in big data analytics and artificial intelligence could have a major impact on the insurance industry. Insurance firms could mine social media to determine proper pricing on premiums. An insurance company could look at users’ Twitter accounts and make offers based on the tone of their posts, using analytics to determine their health outlook. While companies such as reinsurer Swiss Re say the advances will drop the price of insurance protection and assist individuals in making better choices through incentive programs, those against the idea say it would violate user privacy, lead to personalized pricing, and minimize any form of shared risk. “In a relatively short period of time, maybe a few years, most of the major insurers will have integrated lessons from behavioral research,” said Swiss Re’s Daniel Ryan. “Undoubtedly, it will lead to a different interaction between insurer and policyholders.” [Reuters]

WW – Why AI May Be the Next Big Privacy Trend (Opinion)

In the past month, we have seen the launch of a major industry effort to explore the policy ramifications of AI, and the U.S. Department of Transportation has released a policy roadmap for autonomous vehicles, suggesting that regulators and policymakers are eager to get into the AI game. Even the White House got involved this spring when it announced a series of workshops to explore the benefits and risks of AI. The first fruits of that White House effort were unveiled last Wednesday with an initial report on the immediate future of these exciting technologies. It includes 23 recommendations aimed at the U.S. government and various federal agencies, and while privacy and data protection are not major focuses of the report, it does introduce a new vocabulary and raises issues that will implicate the privacy space, writes attorney Joseph Jerome. “If the phenomenon of big data encouraged nearly every company to view itself as a data company, fueling the privacy profession, AI looks to have a similar trajectory for influencing how organizations do business,” he notes. In this post for Privacy Perspectives, Jerome details why “getting a handle on the contours of AI” and how it intersects with privacy “could be increasingly important.” [Full Story]

Canada

CA – Submissions on OPC Consultation Show Lack of Consensus for Trustmarks and Codes of Practice

The OPC has released the submissions provided in response to its consultation on the consent model and possible alternatives. Several submissions argue that “one-size-fits-all” sectoral codes of practice, trustmarks, and privacy seals do not reflect the diversity of practices and needs of businesses in the digital economy, and reject the voluntary, industry-driven trustmark model; suggestions include support for a trustmark overseen by a credible organization independent of industry influence (e.g., the OPC or an independent organization supervised by the OPC). [OPC Canada – Overview of Consent Submissions]

WW – Guidelines for Privacy Certifications and Trustbrands

Privacy certifications, or “trustbrands,” are seals licensed by third parties for organizations to place on their homepage or within their privacy policy. The seals typically state, or imply, that the organization which has displayed the seal has high privacy or security standards, or has had its privacy or security practices reviewed by a third party. Some seals also imply that the organization has agreed to join a self-regulatory program that may provide consumers with additional rights, such as a mechanism for resolving privacy-related disputes. A snapshot of information concerning privacy certifications:

  • Percentage of consumers that are worried about online privacy: 92%
  • Percentage of consumers who claim they look for privacy certifications and seals on a website: 76%
  • Percentage of consumers who say that they would share their interests with advertisers if the advertiser’s privacy policy was “certified”: ~50%
  • The number of certifying agencies the FTC has alleged offered deceptive seals: 2

What to think about when considering whether your organization should purchase a privacy certification:

  1. Does the certifying agency have its own privacy or security standards?
  2. Do the certifying agency’s standards exceed legal requirements?
  3. Do your organization’s practices meet the certifying agency’s standards?
  4. If the certifying agency’s standards change, is your organization prepared to modify its practices accordingly?
  5. Has the certifying agency been investigated by the FTC, or another consumer protection authority, for deceptive or unfair practices?
  6. If so, are you confident that the certifying agency’s seal and review process is non-deceptive and that association with the agency will not result in negative publicity?
  7. Have consumers complained to the FTC about the certifying agency?
  8. Does your organization have a mechanism in place to ensure that the license for the seal is renewed each year and/or that the seal is removed from your website if the license expires?
  9. Have plaintiffs’ attorneys used the seal against other organizations by alleging that those organizations agreed to a higher standard of care by adopting the seal? [Source]

CA – Feds Love to Shred: Spending on Document Shredding Spiked

The Government of Canada has apparently accumulated too much paper. Public Accounts documents show a sudden surge in spending on document shredding and storage. The federal government spent approximately $12 million more on hiring companies that offer services like document shredding and storage in the last fiscal year than it did ten years ago. During the 2005-2006 fiscal year, the Health and Transport departments spent about $389,247 on two separate contracts with companies that are in the business of destroying and storing physical and digital documents. By 2013-2014, when the Harper government was in its third term, that number had increased to nearly $3 million. But it was the following year that the government went all out. Public Accounts documents for the 2014-2015 fiscal year show the federal government spent nearly $13 million on similar contracts. By that time, many more departments were using these services — including the Canada Revenue Agency, Employment and Social Development, and the Justice department. This past fiscal year — during which Canada underwent a change of government — saw a slight decrease in spending, to just under $12.4 million. The biggest spender in the 2015-2016 fiscal year, by a long shot, was the Canada Revenue Agency, which spent a whopping $8.4 million on contracts with Mobilshred and Shred-It. The year prior, it dished out approximately $10.3 million — which is largely responsible for the sudden spike in document spending by the government that year. The Alberta government experienced a “shred-gate” in early January 2016, when the privacy and public interest commissioners found that the outgoing Progressive Conservative government had improperly destroyed nearly 350 boxes of documents. [iPolitics]

CA – NWT’s Protection of Health Records Still Needs Work: Commissioner

The Northwest Territories Department of Health has received a slap on the wrist from the territory’s privacy commissioner for the way it handles confidential patient information. The annual report of the N.W.T.’s Information and Privacy Commissioner was tabled in the legislative assembly. In it, commissioner Elaine Keenan-Bengts criticizes the territory’s health department for the way it has implemented the N.W.T. Health Information Act that came into effect in October 2015. The act is meant to govern how personal health information is collected and disclosed. In the six months after the act became law, the commissioner says there were seven separate privacy complaints. She says it’s clear that a number of people who deal with private health information don’t properly understand the act. “While there was some training done before the act came into effect, it does not appear that the training was mandatory,” Keenan-Bengts wrote. Keenan-Bengts also says little has been done to educate the public about their rights when it comes to their personal health information. She says the majority of patients don’t know the act gives them the right to put conditions on who has access to their records, such as barring a practitioner, nurse, clerical staff or other employee in any particular office from accessing their file. Despite patients having this right, Keenan-Bengts says the health department doesn’t actually have the ability to enforce such conditions. Keenan-Bengts recommends better training for health staff on the act as well as better education campaigns for the public. [CBC News]

CA – Does the Surrey RCMP Need A Surveillance Camera Database?

Surrey will soon launch Project Iris, a program based on a CCTV initiative out of Philadelphia: residents and business owners will be able to register their surveillance cameras with the RCMP. Terry Waterhouse [Surrey’s director of Public Safety Strategies] says he has collaborated with the B.C. Privacy Commissioner’s Office to ensure the program doesn’t violate anyone’s rights. “The important parts are that it is completely voluntary and also, it’s voluntary in the sense that if they do have the footage, whether or not they provide it [to police] is voluntary as well,” he said. B.C. Civil Liberties Association policy director Micheal Vonn says she has one small concern about Project Iris. “We don’t want to encourage businesses to over-collect information. If you are collecting information on your property and you have appropriate signage, all of that is fine. What you can’t do is, you can’t collect footage in a public space as a private entity. You’re governed by the private sector privacy legislation.” [CBC News]

Consumer

WW – Millennials ‘Extremely Reluctant’ to Share Data: Study

A LexisNexis Risk Solutions study has found millennials are “extremely reluctant” to share personal information even though they use connected devices in large numbers. The study found that more than a quarter of millennials across the globe had no trust that retailers or mobile wallet programs will treat their data “correctly or securely.” “The general discomfort millennials are expressing with information sharing, beyond a couple of the most basic data points, shines a light on the need to educate this major and growing portion of the consumer population,” said LexisNexis Risk Solutions’ Kimberly Little Sutherland. “Likewise, it begs the question, are retailers and financial institutions optimizing their business processes for the millennial customer?” [Multichannel Merchant]

E-Government

US – California Attorney General Releases CalOPPA Violation Reporting Tool

California Attorney General Kamala Harris has announced a new tool to help consumers report organizations and other entities that are not complying with the California Online Privacy Protection Act, the California Office of the Attorney General said. “In the information age, companies doing business in California must take every step possible to be transparent with consumers and protect their privacy,” said Harris. “As the devices we use each day become increasingly connected and more Americans live their lives online, it’s critical that we implement robust safeguards on what information is shared online and how. By harnessing the power of technology and public-private partnerships, California can continue to lead the nation on privacy protections and adapt as innovations emerge.” [OAG]

CA – Watchdogs Find Lax Management of Smartphones and Tablets by BC Government

BC government workers sometimes waited months to report a lost or stolen smartphone or tablet, according to a report on mobile device management by the Acting Information and Privacy Commissioner, Drew McArthur. “On average it took employees two to six days to make a report. At one ministry, employees were advised not to report lost devices for up to three days in case the device was found.” Investigators also found that records of lost and stolen devices were not properly maintained or analysed, so management missed an opportunity to provide additional training. McArthur said investigators found policies were often overlapping, inconsistent and confusing. The ministries also did not keep track of personal information stored on mobile devices or categorise the sensitivity of such personal information. “Government is not meeting its statutory obligation to protect personal information stored on mobile devices.” Privacy training was not specific to mobile devices nor was it conducted frequently. Risk assessments were poor and breach and incident protocols were not consistently followed when privacy breaches happened. Auditor General Carol Bellringer also released a report looking at the security aspects of government mobile device management. She noted the size and portability of devices makes them easy to lose or steal and they often become obsolete, meaning fewer security updates as they age. Unlike desktop or laptop computers, mobile devices often remain connected around the clock, putting them in jeopardy of unauthorised access. Bellringer found there were policy gaps, the full life cycle of mobile devices is not well managed, appropriate security controls are not always in place and there is no central monitoring and logging by government of mobile device activity. Both reports said that the government began to make improvements to its policies and procedures while the investigations were underway. [Business Vancouver]

WW – This Is Why We Still Can’t Vote Online

Online voting sounds like a dream: the 64 percent of citizens who own smartphones and the 84 percent of American adults with access to the internet would simply have to pull out their devices to cast a ballot. And Estonia—a northern European country bordering the Baltic Sea and the Gulf of Finland—has been voting online since 2005. But ask cybersecurity experts and they’ll tell you it’s really a nightmare.

We are nowhere close to having an online voting system that is as secure as it needs to be. Ron Rivest, a professor at MIT with a background in computer security and a board member of Verified Voting, said it is a “naive expectation” to even think online voting is on the horizon. In 2010, the District of Columbia’s Board of Elections & Ethics conducted a pilot project where they built an Internet voting system for overseas and military voters in an effort to expedite the absentee voting process. The system was simple: voters would log in, receive a ballot, print the ballot, cast their vote, and upload their ballot to the Internet. In the weeks prior to the general election, a public trial was held to see if the system could be infiltrated. J. Alex Halderman, professor of computer science and engineering at the University of Michigan, welcomed the opportunity to try to legally break into government software with his students. Within 36 hours, they found a tiny error that gave them full control of the system. “The flaw that we exploited was just such a small error — in tens of thousands of lines of computer source code, in one specific line the programmer had used double quotation marks instead of single quotation marks and that was enough to let us remotely change all the votes,” said Halderman. [Motherboard]
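The quotation-mark flaw Halderman describes can be illustrated generically (this is hypothetical code, not the D.C. system's source): when attacker-controlled text, such as an uploaded ballot's filename, is placed inside double quotes in a shell command, the shell still performs command substitution; single quotes, or avoiding the shell entirely, keep the text literal.

```python
import subprocess

# An attacker-controlled filename containing a shell command substitution.
filename = "ballot_$(echo INJECTED).pdf"

# Unsafe: double quotes do NOT stop $() expansion, so the embedded
# command runs and its output is spliced into the string.
unsafe = subprocess.run(f'echo "{filename}"', shell=True,
                        capture_output=True, text=True)

# Safer: single quotes keep the text literal (the robust fix is to
# avoid shell=True and pass arguments as a list instead).
safe = subprocess.run(f"echo '{filename}'", shell=True,
                      capture_output=True, text=True)
```

In the unsafe case the output contains `INJECTED` where the attacker's command ran; in the safe case the filename survives verbatim.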

EU Developments

EU – CJEU Judgment: Dynamic IP Addresses Constitute Personal Data

On October 19, 2016, the Court of Justice of the European Union (CJEU) decided that the dynamic IP address of a website visitor is “personal data” under Directive 95/46/EC (Data Protection Directive) in the hands of a website operator that has the means to compel an internet service provider to identify an individual based on the IP address. The case was brought by Patrick Breyer, a German Pirate Party politician. Breyer asserted that the German government’s storage of IP addresses of users visiting German government websites allowed the creation of user profiles and, therefore, was impermissible under Section 15 of the German Telemedia Act (TMA). The CJEU sided with Breyer, largely following the opinion issued by the court’s Advocate General on May 12, 2016. The CJEU relied on Recital 26 of the Data Protection Directive, which states that in determining whether a person is identifiable, “account should be taken of all the means likely reasonably to be used either by the controller or by any other person to identify the said person.” In applying the test to the German government’s program, the Court found that the website operators were collecting the IP addresses to identify cyber attackers and, in some cases, to bring criminal proceedings against them. In this context, the government would likely have a legitimate reason to demand that the internet service provider correlate the IP address to the account holder, and thus allow the government to re-identify the individual. Therefore, the court held that the reasonable likelihood test was met, concluding that the dynamic IP addresses in these circumstances were personal data.
[Data Protection Report | The Ever-Expanding Concept of Personal Data | Your dynamic IP address is now protected personal data under EU law | Websites free to store IP addresses to prevent cyber attacks: EU court | IAPP: CJEU Defines Personal Data in Breyer Decision]

UK – Surveillance by Consent: Commissioner Launches UK-Wide CCTV Strategy

Surveillance Camera Commissioner Tony Porter says there are six million CCTV cameras across the UK, but many of them are poor quality or in the wrong place. Mr Porter said he wants to ensure surveillance cameras are protecting members of the public, rather than spying on them, and has issued a 16-page draft national strategy to raise regulatory standards regarding the surveillance of public spaces. Only a year ago, less than 2% of public authorities operating surveillance cameras were doing so in compliance with “any British standard” according to Porter, who says that as of today 85% are now demonstrably “having regard” for the Home Office’s Surveillance Camera Code of Practice. [Daily Mail | The Register]

Finance

WW – New PCI Data Security Standard Release Introduces Critical Changes

The Payment Card Industry Data Security Standard (PCI DSS) is an information security standard for organisations that handle credit and debit cards from the major card companies, including Visa, MasterCard and American Express. Organisations that accept, process or store card details are obliged to meet the standard. Those who fail to observe it can find themselves excluded from receiving credit card payments, and those who lose credit card numbers, or have them stolen, can face hefty fines. A new release (3.2) of the standard has significant implications for card providers and their service providers. The standard consists of twelve broad principles:

  1. Install and maintain a firewall configuration to protect cardholder data;
  2. Do not use vendor-supplied defaults for system passwords and other security parameters;
  3. Protect stored cardholder data;
  4. Encrypt transmission of cardholder data across open, public networks;
  5. Use and regularly update anti-virus software on all systems commonly affected by malware;
  6. Develop and maintain secure systems and applications;
  7. Restrict access to cardholder data by business need-to-know;
  8. Assign a unique ID to each person with computer access;
  9. Restrict physical access to cardholder data;
  10. Track and monitor all access to network resources and cardholder data;
  11. Regularly test security systems and processes; and
  12. Maintain a policy that addresses information security.

The standard document describes the processes, policies and settings required to conform to these principles in quite granular detail. Since its release in 2004, only two major releases or revisions have been made to the standard. A new Version 4.0 is expected in early 2017. However, a number of ‘sub-releases’, containing revisions and clarifications, have been made between the three major releases. The most recent, Release 3.2, contains a number of significant changes which may have significant implications and costs for organisations required to conform to the standard. According to the PCI SSC, these new standards must be implemented by organisations before 31st October 2016, when the prior release, version 3.1, will no longer be valid. Of the changes required by the new PCI DSS Release 3.2, a number appear to arise directly from lessons learned from the large recent hacking incidents in the US. These include:

  • New Rule 8.3 requires multi-factor authentication for administrative access to the PCI segment of a network
  • Rule 3.3 requires that card numbers be partially masked when displayed, with at most the first six and last four digits shown
  • New Rule 10.8 requires that card service providers implement a process for the timely detection and reporting of failures of critical security control systems, setting out a sizeable list of devices over which such reporting is required.
  • Additional Rule 10.8.1 requires service providers to respond to failures in these systems in a timely manner, setting out in some detail what actions such responses should include.
  • New Rule 11.3.4.1 requires that penetration tests be run on networks every six months to ensure that the PCI segment is effectively isolated from the rest of the network.
  • New Rule 12.4.1 requires that a named member of the executive management is responsible and accountable for the maintenance of PCI DSS compliance. It requires that a charter be established, setting out what information must be provided by those directly responsible for PCI compliance to the executive with direct authority.
  • Rule 12.11.1 requires organisations to perform reviews at least quarterly to confirm personnel are following security policies and operational procedures and to correctly document such reviews. The operational procedures which should be reviewed include daily log reviews.
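To make the masking requirement in Rule 3.3 concrete, here is a minimal sketch of PAN display masking. The function name and defaults are illustrative, not from the standard; PCI DSS permits at most the first six and last four digits of a primary account number to be displayed.

```python
def mask_pan(pan: str, visible_prefix: int = 6, visible_suffix: int = 4) -> str:
    """Mask a primary account number (PAN) for display.

    PCI DSS Rule 3.3 allows at most the first six and last four
    digits to be shown; everything in between is replaced with
    asterisks before the value is rendered anywhere.
    """
    digits = pan.replace(" ", "").replace("-", "")
    if len(digits) <= visible_prefix + visible_suffix:
        raise ValueError("PAN too short to mask safely")
    hidden = len(digits) - visible_prefix - visible_suffix
    return digits[:visible_prefix] + "*" * hidden + digits[-visible_suffix:]
```

Note that masking is a display control only; Rule 3.3 sits alongside the separate requirements to protect stored cardholder data (Rule 3) and encrypt it in transit (Rule 4).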

[Mondaq] See also: The PCI SSC said if breaches continue at their current rate, U.K. businesses could face up to 122 billion GBP in fines once the GDPR comes into effect, and recommends organizations work to prevent cyberattacks before 2018.

HK – E-Wallet Programs Store Data Too Long, Consumer Council Finds

The Consumer Council has revealed that some e-wallet companies have problematic data storage procedures, with information on “Alipay customers was stored permanently while Bank of Communications, O!ePay and TNG Wallet would retain the information for six to seven years.” An Alipay spokeswoman countered that only a “small portion” of consumers’ records was stored in the event of a money laundering investigation, and TNG Wallet said it maintained customer records to “meet the same standard established by the Hong Kong Monetary Authority,” the report states. However, council member Michael Hui King-man argued that the Personal Data (Privacy) Ordinance specified that “personal data should not be kept longer than is necessary.” [South China Morning Post]

FOI

CA – IPC ON Orders Disclosure of Consultant Report on Public Transit System

The Information and Privacy Commissioner in Ontario reviewed a decision by the Toronto Transit Commission to deny access to records requested, pursuant to the Municipal Freedom of Information and Protection of Privacy Act. The transit system can withhold information detailing its financial exposure and risk, since disclosure would cause it severe economic and financial disadvantage during contractor negotiations; however, it must disclose a review of the project that assessed performance, identified areas of improvement and recommended improvements for project efficiency. [IPC ON – Order MO-3347 – Toronto Transit Commission]

CA – OIPC SK Issues Guidelines on Conducting a Search for PHI

The Office of the Saskatchewan Information and Privacy Commissioner issued guidance on handling access requests for personal health information, pursuant to The Health Information Protection Act. A trustee of personal health information must make every reasonable effort to assist an applicant and respond to each request openly, accurately, and completely; organizations should communicate with the applicant to clarify the request, talk to people “in the know” (such as record managers), document the search strategy, and keep details of the actual search. [OIPC SK – The Search For Personal Health Information]

CA – OIPC SK Finds Disclosure of Emails, Trip Details and Public Information Does Not Qualify as Commercial Information

The Office of the Information and Privacy Commissioner in Saskatchewan reviewed a decision by Global Transportation Hub Authority to deny access to records requested, pursuant to the Freedom of Information and Protection of Privacy Act. A public body incorrectly withheld details of a trip to China, government invitations, public information about an association, and emails about a meeting; disclosure of the information would not harm the public body or a third party, and emails between the parties (where the third party objected to disclosure) cannot retroactively serve as proof that both parties intended for the information to be held in confidence. [OIPC SK – Review Report 158-2016 – Global Transportation Hub Authority]

CA – OIPC BC Orders Transportation Agency to Disclose Smart Card Defects

This OIPC order reviewed the decision by the South Coast British Columbia Transportation Authority to deny access to records requested under British Columbia’s Freedom of Information and Protection of Privacy Act. Disclosure of the records would not impede a third party’s ability to obtain new work (the third party did not say how many competitors it has or refer to cases in which prospective customers rejected its bids, and it did not deny that it had been successful in recent bids despite negative media coverage); the third party could not prove that disclosure would give competitors “commercially valuable insight” into its business. [OIPC BC – Order F16-45 – South Coast British Columbia Transportation Authority (Translink)]

CA – NS Court Orders Hospital to Produce De-Identified Medical Records

The Supreme Court of Nova Scotia considered a motion for the production of records by Aberdeen Hospital in a lawsuit, pursuant to the Personal Health Information Act. The doctor seeks the disclosure of patient records with respect to his whereabouts when he was not with a patient leading up to the birth of her infant; the names and personal health information of the patients do not need to be disclosed to meet this requirement, but the hospital must produce this information for the doctor as it is relevant to the lawsuit and the records are in the hospital’s control (the doctor could seek patient consent for the release of records, however the hospital is custodian of the records). [Finney v. Joshi – 2016 NSSC 227 – Supreme Court of Nova Scotia]

Genetics

CA – Canada’s Genetic Privacy Bills and How They Compare

Timothy Banks writes about two new bills addressing genetic privacy in Canada. “News reports frequently suggest that Canada is alone amongst G-7 countries in not having a law specifically addressing genetic discrimination.” Analyzing these bills and putting them up against laws in the U.K. and U.S., Banks writes that “Canada might be late to the table, but the Canadian anti-discrimination laws, if either were passed, would prohibit the use of genetic testing and genetic characteristics to make distinctions between individuals in far more circumstances than is currently the case in either the U.K. or the U.S.” [Privacy Tracker]

Health / Medical

US – New HHS Guidance Makes Clear HIPAA Applies in the Cloud

The Department of Health and Human Services (HHS) Office for Civil Rights (OCR) has released guidance making clear that cloud service providers (CSPs) that create, receive, maintain, or transmit electronic protected health information (PHI) are covered by HIPAA. The guidance is notable for its broad scope [and] clarifies how and when HIPAA applies in the cloud service context. [Hogan & Lovells]

US – Health Care Lawyers Say Industry at Greatest Risk of Breach: Study

A study conducted by the American Health Lawyers Association and Bloomberg Law found 87% of health law attorneys believe their health care clients are more likely to suffer a cyberattack than those in other industries. The study polled 290 health care lawyers, with 97% saying they anticipate having greater involvement in their clients’ cybersecurity efforts within the next three years, and 75% saying their practices are developing cybersecurity experience to meet the demand. However, 40% fear their plans to respond to an attack are “too generic and lack specific guidance for the types of incidents their organizations or clients might face.” Only 21% of respondents are involved with cybersecurity efforts before a breach, while 46% are asked for counsel after an attack. [Modern Health Care]

US – ONC, OCR Announce Updates to HIPAA Security Tool

The Office of the National Coordinator and the Office for Civil Rights have revised and updated the HIPAA Security Risk Assessment Tool. The updates include increased Windows compatibility, a Save As feature, and expanded customization of PDF files, the report states. “You can use the tool as your local repository for your answers, comments and plans,” said the ONC’s Ebony Brice and the OCR’s Nick Heesters. “Your answers are stored wherever you store the tool and neither OCR nor ONC can access your answers. You can use the tool as often as you need to reassess your organization’s health information security risks. We encourage you to conduct risk assessments on an annual basis.” [HealthITSecurity]

US – Doctors Continue to Wage War on HIPAA Requirements, Bad Yelp Reviews

Physicians are working to strike back against sometimes unfair Yelp reviews while working to stay within HIPAA confidentiality requirements. “Yelp is the bane of many doctors’ existence,” said Dr. Jonathan Kaplan. “A patient can be really vocal, but you cannot. It’s not a fair playing field.” Yelp has said it will only remove those reviews that include “hate speech, threats or harassment,” conflict of interest, or exclude “direct experience with the provider.” Otherwise, doctors are on their own. “Patients can post very detailed information about themselves and their providers, but the providers have to be very vague when they respond,” said Planet Hipaa’s Danika Brinda. Many doctors have taken to encouraging patients to leave positive reviews to offset the negative remarks, or have begun reaching out to disgruntled reviewers directly. [San Francisco Chronicle]

Horror Stories

WW – Weebly Suffers Data Breach, Compromising 43M User Accounts

Data breach notification site LeakedSource has said web design platform Weebly suffered a data breach in February, compromising the usernames and passwords of 43 million users. Weebly sent an email to users saying IP addresses were also taken in the breach. The company contends it does not believe any customer websites have been improperly accessed. The passwords taken in the breach are protected by a strong hashing algorithm, and Weebly said it does not store any credit card information, making it unlikely any users will be affected by fraudulent charges. LeakedSource was notified of the breach when an anonymous source gave the site Weebly’s database. LeakedSource then notified Weebly of the incident, and now said Weebly is in the process of resetting user passwords. [TechCrunch]
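The report does not say which algorithm Weebly used, but as an illustration of what “a strong hashing algorithm” means in practice, here is a hedged sketch of salted password hashing using scrypt (a memory-hard key derivation function) from Python’s standard library. Parameter choices are illustrative only.

```python
import hashlib
import hmac
import os


def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a fresh random salt using scrypt.

    A unique salt per user means identical passwords produce
    different digests, and the memory-hard KDF slows brute-force
    attempts against a leaked database.
    """
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest


def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```

This is why a breach of properly hashed passwords is less damaging than one of plaintext or unsalted hashes: attackers must attack each password individually rather than reversing the whole table at once.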

US – Report Details OPM’s 2015 Hack

A WIRED report covers the 2015 Office of Personnel Management hack, from agency employees’ discovery of the breach, their realization that the attack was most likely an advanced persistent threat, and their subsequent investigation. It also looks to the future, exploring the faults of security tools like unpaired encryption and how the agency can best rebuild. To remedy the loss of data in the attack, “a cybersecurity overhaul of this magnitude will, of course, require an abundance of talent,” the report states. “And that means much depends on how well government recruiters can convince the best engineers that being locked in a high-stakes competition with supervillain-esque adversaries is more exciting than working in Silicon Valley.” [WIRED]

Identity Issues

UK – Porn Age Verification Proposal Outrages Privacy Advocates

The UK has an online age-checking plan to stop kids from watching porn, but porn-browsing adults would also hit an Age Gate which might verify age via banking or social media accounts. GCHQ has already floated a Chinese-esque plan to create a Great UK Firewall, and now the UK, which previously dabbled in porn blocking, wants online age verification services to ensure that people viewing porn are age 18 or over; the proposed implementation of the system has outraged privacy advocates. [ComputerWorld | Porn viewers could all be added to a country-wide database of viewing habits under new age verification scheme] Protesters gathered around Parliament to voice their disapproval of a digital economy bill penalizing online pornographic websites that do not ask for “robust” proof that visitors are over 18 before accessing the content. The Department for Culture, Media and Sport cited credit cards or electoral records as possible examples of verification, but gave no specifics. “There is an impact on privacy,” said lawyer and obscenity laws campaigner Myles Jackman. “We could see age verification be done by private companies; there is no guarantee that your personal details will be kept private, will not be sent to a third party, will not be leaked or hacked Ashley Madison-style.” The bill is currently going through the House of Commons. [BuzzFeed: Protesters Voice Disapproval of Bill Requesting ID to Watch Pornography]

EU – Ireland DPC Publishes Guidance on Anonymisation and Pseudonymisation

The Irish Data Protection Commissioner has published guidance on the use of data anonymisation and pseudonymisation. This follows similar guidance published by EU regulators in 2014. The DPC’s guidance focuses on the effectiveness of anonymisation techniques and provides recommendations for organisations wishing to use these techniques. Anonymisation of data is a technique used to irreversibly prevent an individual being identified from that data. Pseudonymisation, on the other hand, is not a method of anonymisation. Instead, it is a method of replacing one attribute in a record, such as a name, with another, such as a unique number. Given this, pseudonymisation still allows an individual to be identified, but indirectly. Importantly, the DPC warns that while pseudonymisation is a useful security measure, pseudonymised data remains ‘personal data’ as defined in the Acts. By contrast, the DPC recognises that effectively anonymised data is not personal data and therefore falls outside the scope of the Acts. In the DPC’s view, the threshold for truly anonymised data is extremely high. To meet this threshold, organisations must take appropriate steps to ensure that individuals are not identified by or identifiable from the data in question. In other words, organisations must ensure that the information can no longer be considered personal data. In order to determine whether an individual is identified or identifiable, the DPC suggests that organisations should consider whether a person can be distinguished from other members of a group. A person is identifiable even if identification is merely a possibility (in other words, even if the person has not actually been identified). The effectiveness and strength of any anonymisation technique is primarily based on the likelihood of re-identifying an individual. There are a number of ways in which data can be re-identified, such as ‘singling out’, ‘data linking’, ‘inference’ and ‘personal knowledge’.
The DPC accepts that it is impossible to state with any certainty that an individual will never be identified from an anonymised data set. This is because more advanced data de-identification technologies may be developed and additional data sets may be released into the public domain allowing for cross-comparison of data. This, again, sets the bar very high for true anonymisation. In assessing the risk of re-identification, the DPC suggests that organisations should consider whether the data can be re-identified with reasonable effort by someone within the organisation or by a potential “intruder”. In carrying out this analysis, organisations should take into account technological capabilities along with the information that is available for re-identification. If organisations intend to make anonymised data available to the public, the DPC warns that there is a much higher burden on ensuring that the information is effectively anonymised so that individuals cannot be identified. Importantly, the DPC advises that if an organisation retains the underlying source data following anonymisation, the “anonymised” data will still be considered to be personal data. The main takeaway from the DPC’s guidance is the considerable threshold for rendering data truly anonymous. Pseudonymisation alone is not sufficient to render personal data anonymous and the DPC recommends using a combination of anonymisation techniques. [MHC]
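The DPC’s distinction can be made concrete with a toy sketch (the class and token format are ours, purely illustrative). Pseudonymisation replaces a name with an opaque token, but the mapping table that makes this possible is exactly what makes the data re-identifiable, and hence still personal data while that table (or the underlying source data) is retained.

```python
import itertools


class Pseudonymiser:
    """Replace direct identifiers with opaque tokens.

    The internal mapping table is why the DPC treats pseudonymised
    data as personal data: whoever holds the table can reverse the
    substitution at will.
    """

    def __init__(self):
        self._counter = itertools.count(1)
        self._forward = {}   # identifier -> token
        self._reverse = {}   # token -> identifier

    def tokenise(self, identifier: str) -> str:
        """Return a stable token for an identifier, minting one if new."""
        if identifier not in self._forward:
            token = f"subject-{next(self._counter):06d}"
            self._forward[identifier] = token
            self._reverse[token] = identifier
        return self._forward[identifier]

    def reidentify(self, token: str) -> str:
        """Reverse the substitution -- possible only because the
        mapping table exists, which is the DPC's point."""
        return self._reverse[token]
```

True anonymisation, by contrast, would require destroying the mapping and the source data, and still leaves the residual re-identification risks (singling out, data linking, inference) the guidance describes.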

Internet / WWW

WW – International DPAs Adopt New Resolutions

At the 38th International Privacy Conference in Marrakech, Morocco, the International Conference of Data Protection & Privacy Commissioners adopted several resolutions, including a resolution for the adoption of an International Competency Framework on Privacy Education, Developing New Metrics of Data Protection Regulation, Human Rights Defenders, and International Enforcement Cooperation. The group also released an International Competency Framework for school students on data protection and privacy. In past years, the ICDPPC has issued resolutions on cooperating with the U.N. Special Rapporteur on the Right to Privacy, big data, web tracking, and cloud computing. [ICDPPC]

WW – Skype, Snapchat Low on Amnesty International Privacy Rankings

Amnesty International has graded 11 of the most popular messaging apps in its Message Privacy Ranking, in which Snapchat and Skype received some of the lowest scores. Amnesty International’s ‘Message Privacy Ranking’ ranks technology companies on a scale of one to 100 based on how well they do five things:

  • Recognize online threats to their users’ privacy and freedom of expression
  • Apply end-to-end encryption as a default
  • Make users aware of threats to their rights, and the level of encryption in place
  • Disclose details of government requests to the company for user data, and how they respond
  • Publish technical details of their encryption systems

Snapchat received a 26 out of 100 on the organization’s scale, while Skype received 40 out of 100. Neither app uses end-to-end encryption, which Amnesty argues should be a given in messaging apps. “It is up to tech firms to respond to well-known threats to their users’ privacy and freedom of expression, yet many companies are falling at the first hurdle by failing to provide an adequate level of encryption,” said Amnesty. [Press Release | The Huffington Post | Easy guide to encryption and why it matters]

WW – Common Thread Network Launches New Website

U.K. Information Commissioner Elizabeth Denham and Privacy Commissioner of Canada Daniel Therrien co-chaired the Common Thread Network’s Annual General Meeting in Marrakech, Morocco on Oct. 18, where they also announced the group’s new website, the U.K. Information Commissioner’s Office said in a statement. Established in 2014, the Common Thread Network is comprised of 20 data protection leaders from across the globe who work to “further a common approach to respecting citizens’ privacy, to promote and build capacity in the sharing of knowledge and good practices for effective data protection.” “The new website is one among many features which the Common Thread Network intends to use to foster a common approach and create synergies among commonwealth nations to uphold individuals’ privacy and data protection rights.” [ICO.uk]

CA – Can “Cloud Sovereignty” Keep Canadian Data Safe from Global Hacks?

Montreal cloud computing company CloudOps and Chatham, Ontario-based independent telecom provider Teksavvy are partnering to scale cloud.ca, an independent cloud infrastructure services company, in order to give Canadian businesses a stronger domestic platform on which they can more securely innovate on the global stage. Cloud.ca’s Infrastructure-as-a-Service platform appears to be a well-timed answer to the question of whether or not it’s a great idea to run a business on servers south of the border, or to use their term, to reclaim “end-to-end data sovereignty” over how our data crosses borders. “The cloud.ca partnership between TekSavvy and CloudOps brings together leaders in regional networking, data centre, and cloud IaaS that offers a unique competitive advantage for jurisdiction-conscious Canadian customers,” said Philbert Shih, Managing Director of Structure Research, a Toronto-based independent research and consulting firm focused on hosting and cloud infrastructure. Whether the appeal to independence, in either a national or “free from the Big Telcos” sense, or nationalism, for either patriotic or pragmatic reasons, is enough to attract a large enough user base of Canadian businesses to keep cloud.ca viable remains an open question. [Can Tech]

Location

WW – “Anonymous” Yik Yak Users Can Be Tracked Down, Say Researchers

Researchers have found that Yik Yak anonymity can be erased even without a warrant or Yik Yak’s compliance with US laws that force it to turn over user information. The researchers did it by relying on publicly available location data from the app, mixed with location-spoofing and message-recording on a device outfitted with simple machine learning. [Naked Security]
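The researchers’ exact method isn’t detailed here, but the geometry behind locating a user from multiple spoofed vantage points is ordinary trilateration: given distances to three known positions, the unknown position is determined. A simplified 2D sketch under ideal assumptions (flat plane, exact distances):

```python
def trilaterate(p1, r1, p2, r2, p3, r3):
    """Locate a point on a plane from its distances to three known points.

    Subtracting the circle equations pairwise eliminates the squared
    unknowns, leaving a 2x2 linear system A*x + B*y = C, D*x + E*y = F.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    A, B = 2 * (x2 - x1), 2 * (y2 - y1)
    C = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    D, E = 2 * (x3 - x2), 2 * (y3 - y2)
    F = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = A * E - B * D
    if det == 0:
        raise ValueError("reference points are collinear")
    x = (C * E - B * F) / det
    y = (A * F - C * D) / det
    return x, y
```

In practice an attacker who can spoof their own reported location and observe how an app orders or filters nearby posts can estimate such distances, which is why serving precise proximity data to untrusted clients undermines anonymity.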

Online Privacy

WW – Google Has Quietly Dropped Ban on Personally Identifiable Web Tracking

Google is the latest tech company to drop the longstanding wall between anonymous online ad tracking and users’ names. This summer, Google quietly erased that last privacy line in the sand – literally crossing out the lines in its privacy policy that promised to keep the two pots of data separate by default. In its place, Google substituted new language saying that browsing habits “may be” combined with what the company learns from the use of Gmail and other tools. The change is enabled by default for new Google accounts; existing users were prompted to opt in to the change. To opt out of Google’s identified tracking, visit the Activity controls on Google’s My Account page, and uncheck the box next to “Include Chrome browsing history and activity from websites and apps that use Google services.” You can also delete past activity from your account. [ProPublica | Google’s ad tracking is as creepy as Facebook’s. Here’s how to disable it]

US – Advertising Alliance to Begin Enforcing Cross-Device Tracking Code in 2017

The Digital Advertising Alliance has announced that it will begin enforcing the industry’s “privacy code for cross-device tracking” beginning in February of 2017. The November 2015-released code “sets out privacy rules governing ad networks, publishers and other companies that collect data from one type of computer in order to serve ads to different devices used by the same consumer,” the report states. “This restriction means that if a user opts out on a laptop, marketers can’t use data collected from that laptop to serve ads on any device linked to the person.” The DAA’s Lou Mastria added that the agency established its February 2017 start date to allow companies time to adhere to the new code. [MediaPost]

WW – Journal Issue Focuses on Privacy and Ethics in Educational Data Analytics

An issue of the Springer journal, “Education Technology Research and Development,” covered the relationship between ethics and privacy in learning analytics. Professors Dr. Dirk Ifenthaler and Dr. Monica Tracey guest edited the issue, explaining why the growth of educational big data doesn’t necessarily result in better learning environments. Education institutions can use student data such as “socio-demographic information, grades on higher education entrance qualifications, or pass and fail rates” to allocate resources, or determine whether a student will drop out of school. “Consequently, higher education institutions need to address ethics and privacy issues linked to educational data analytics. They need to define who has access to which data, where and how long the data will be stored, and which procedures and algorithms to implement for further use of the available data,” said Ifenthaler. [phys.org] See also: [Educational tech, balancing students’ privacy a challenge]

Other Jurisdictions

AU – Australian Bill to Create Mandatory Breach Reporting Regime

Australia’s Privacy Amendment (Notifiable Data Breaches) Bill 2016 received first reading. Notification of a data breach must be provided to both affected individuals and the OAIC if there is a risk of serious harm to affected individuals (determined by consideration of various factors, including the sensitivity of the information and the security measures that were in place) or if directed to do so by the OAIC; notification to affected individuals is to generally take place using the normal method of communication with the individual. [Privacy Amendment (Notifiable Data Breaches) Bill 2016 – House of Representatives, The Parliament of the Commonwealth of Australia Bill | Explanatory Memorandum | Progress of Bill] [New Mandatory Data Breach Notification Bill] See also: The Australian Senate has passed a bill allowing for a cancer screening register after the government amended it with stronger privacy protections suggested by Privacy Commissioner Timothy Pilgrim.

WW – Cavoukian Launches Global Council on Privacy by Design

Ryerson University Executive Director Ann Cavoukian has launched the International Council on Global Privacy and Security, by Design. The mission, according to a press release, “is to dispel the commonly held view that organizations must choose between privacy and public safety or business interests,” and its “goal is to educate stakeholders that public- and private-sector organizations can develop policies and technologies where privacy and public safety, and privacy and big data, can work together” for a better outcome. The council will work with businesses, data protection authorities, and technology professionals to educate and raise awareness of these privacy and public safety issues. [GPSbyDesign.org]

Privacy (US)

US – New FTC Data Breach Response: A Guide for Business

This week, the FTC announced on its Business Blog the release of Data Breach Response: A Guide for Business. The Guide’s release seems to be part of the FTC’s push to position itself as the main federal regulator of data security practices, and it is available for free on the FTC’s website. The Guide outlines the steps to take and the parties to contact when there is a data breach, and includes advice on securing systems, handling service providers, and network segmentation. In addition, it has tips on notifying law enforcement, affected businesses and individuals. The Guide even has a model data breach letter to notify people whose Social Security numbers have been stolen. The FTC smartly drafted the Guide so that those who are not security and data privacy professionals can understand it. Along with the 16-page Guide, the FTC released a video. Accompanying the release of the video and blog is an update to the FTC’s guide Protecting Personal Information: A Guide for Business. The FTC has been very active in this area, last year releasing both Start with Security: A Guide for Business and Careful Connections: Building Security in the Internet of Things. The new Data Breach Response: A Guide for Business gives insight into what the FTC expects businesses to do in the case of a data breach, and following the guide will go a long way toward convincing the FTC or state regulators that a business took the necessary and sufficient steps after a breach occurred. Note that the date of the Guide is September 2016, although the announcement occurred this week. [InfoLawGroup]

US – FTC to Host Public Conference on Identity Theft

The FTC announced it will host an all-day conference studying the current state of identity theft and what it may look like in the future. “Planning for the Future: A Conference About Identity Theft” will take place on May 24, 2017, in Washington and will bring together academics, business and industry representatives, government officials, and consumer advocates to discuss the ways identity theft affects consumers. “The FTC event will look at the full life cycle of identity theft, addressing how identity thieves acquire consumers’ information and what information they seek most often, as well as the cost and ease with which consumers’ data can be acquired. In addition, the conference will examine how identity thieves use information, and how they may attempt to use it in the future.” [FTC]

US – DOJ Wants to Overturn Microsoft V. United States

In July, the Second Circuit Court of Appeals in New York overturned a ruling in Microsoft v. United States that had forced Microsoft to hand over private email correspondence and other data to US law enforcement from servers based in Dublin, Ireland. It was a victory for privacy because the Department of Justice (DOJ) was unable to compel compliance under the Stored Communications Act. But last week, the DOJ expressed interest in re-hearing Microsoft v. United States, once again jeopardizing domestic and international privacy rights. If the decision is overturned, not only will Microsoft’s security be threatened, but so too will that of all foreign nations that house data owned by any US-based company. If the July ruling is indeed overturned, the Fourth Amendment will be seriously weakened and taxpayers will have no assurance that continued overreach by the DOJ will be stopped. Not only will future domestic investigations not need a warrant, but neither will those of an international scope. The utter lack of safeguards in place would point to a foreseeable overreach by U.S. investigators and the destruction of the nation’s diplomatic efforts. The U.S. government would assuredly be mad if a foreign country took private data and intelligence from our soil without a warrant. After all, the U.S. has started wars over more trivial matters. So why would any reasonable court believe that the U.S. has a special “hall-pass” to do whatever it pleases with other nations’ data? [IJR.com] See also: [US government wants Microsoft ‘Irish email’ case reopened | Microsoft Cloud Warrant Case Edges Closer to Supreme Court | Government Seeks Do-Over On Win For Microsoft And Its Overseas Data | Lawmakers question DOJ’s appeal of Microsoft Irish data case]

US – Other US Privacy News

Privacy Enhancing Technologies (PETs)

UK – Wearable Badge Could Blur Your Face in Unwanted Social Media Photos

More than 1.8 billion photos are uploaded to the internet every day. From baby showers to funerals and street photography to office parties, nearly every aspect of our life is documented and stored in the cloud indefinitely — sometimes, whether we like it or not. Now, a new physical badge is designed to give people control over their own image by signalling to algorithms that the wearer does not wish to be photographed, so their face can be automatically blurred in photos. The Do Not Snap badge is a physical, wearable symbol. It works by pairing up with software capable of identifying this symbol in different settings, which will then flag it up and automatically blur the face of the wearer on whatever platform the photo is on. Upload a photo of a friend or child wearing it to a social network and that network could censor out their face, for example, respecting their wishes not to have images of them shared online. It’s up to social networks to decide whether to honour the Do Not Snap. [UK Business Insider]
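The Do Not Snap implementation is not described in detail here, but the pairing step such software might perform can be sketched hypothetically: given face and badge bounding boxes from a detector, flag faces that have a badge roughly at chest height below them. The geometry thresholds below are our assumptions, not the project’s.

```python
def faces_to_blur(faces, badges, max_gap=1.5):
    """Pair detected badges with the faces above them.

    Boxes are (x, y, w, h) tuples with y increasing downward, as in
    typical image coordinates. A face is flagged for blurring when a
    badge is horizontally aligned with it and sits within max_gap
    face-heights below the chin, as a badge worn on the chest would.
    """
    flagged = []
    for fx, fy, fw, fh in faces:
        face_cx = fx + fw / 2
        for bx, by, bw, bh in badges:
            badge_cx = bx + bw / 2
            horizontally_aligned = abs(badge_cx - face_cx) < fw
            below_face = fy + fh <= by <= fy + fh + max_gap * fh
            if horizontally_aligned and below_face:
                flagged.append((fx, fy, fw, fh))
                break
    return flagged
```

A real pipeline would first run a trained classifier to find the badge symbol and a face detector to find faces, then apply a blur to each flagged region before the photo is published; this sketch covers only the decision of which faces to blur.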

Security

WW – Over 80% of Employees Lack Security/Privacy Awareness: Report

A new study has revealed worryingly low levels of employee cybersecurity and privacy awareness, with 88% described as lacking the requisite skills to prevent an incident. The MediaPro 2016 State of Privacy and Security Awareness Report was compiled from interviews with over 1,000 US employees. Only 12% were classed as ‘hero’ — meaning they are able to identify and dispose of information safely, recognize malware and phishing attacks, and keep information safe when working remotely. Unfortunately, 72% were classed as ‘novice’ while 16% were judged to exhibit the kind of behaviors that could put their organization at serious risk of a major privacy or security incident. Some 39% of respondents claimed to discard password hints insecurely, for example in a bin; a quarter failed to recognize a phishing email with a suspicious-looking attachment and questionable “from” address; and 26% said they thought it was fine to use a personal USB drive to transfer work documents outside of the office. What’s more, 30% said they thought it was fine to post on behalf of their company to a personal social media account. “This survey clearly shows the human threat vector is still largely unsecured, and most organizations don’t really know whether their employees have the necessary level of data protection awareness to avoid preventable incidents,” said MediaPro founder Steve Conrad. The most recent stats from the Information Commissioner’s Office (ICO) revealed an increase in human error-related data breach incidents reported to the UK privacy watchdog. Incidents involving data being sent by email to an incorrect recipient increased by 60% between the first and second quarters of 2016, while the number of incidents involving failure to redact data jumped by 64% from Q1 to Q2. Yet some experts at Infosecurity Europe this year argued that current training programs are largely ineffective. The focus should be on changing people’s behavior rather than raising awareness, they argued, as the latter does little to improve information security. [InfoSecurity]

Smart Cars

US – NHTSA Releases Guidelines for Automotive Cybersecurity

The National Highway Traffic Safety Administration released a set of guidelines to help improve cybersecurity in vehicles. The 22-page set of best practices is designed to help auto manufacturers handle hacking attempts and to encourage car companies to incorporate security protocols into their vehicles. The NHTSA best practices recommend a “layered approach” that prioritizes the security of safety-critical vehicle systems, and endorse information sharing “as close to real time as possible” in the event of a cybersecurity incident. The NHTSA also encourages disclosing any potential vulnerabilities, as well as retaining any data used for a self-audit. [TechCrunch]

Surveillance

US – Surveillance Up 500% in D.C. Area Since 2011 – Almost All Cases Sealed

Secret law enforcement requests to conduct electronic surveillance in domestic criminal cases have surged in federal courts. In Northern Virginia, electronic-surveillance requests increased 500% in the past five years, from 305 in 2011 to a pace set to exceed 1,800 this year. Only one of the total 4,113 applications in those five years had been unsealed as of late July, according to information from the Alexandria division of the U.S. District Court for the Eastern District of Virginia, which covers northern Virginia. The federal court for the District of Columbia had 235 requests in 2012, made by the local U.S. attorney’s office. By 2013, requests in the District had climbed to about 564 — some 240% of the 2012 figure — according to information released by the court’s chief judge and clerk. Three of the 235 applications from 2012 have been unsealed. [Washington Post]

US – Police Convinced Courts to Let Them Track Cellphones Without Warrant

The Chicago Police Department has acquired and used several varieties of advanced cellphone trackers since at least 2005 to target suspects in robberies, murders, kidnappings, and drug investigations. In most instances, officers only lightly described the devices’ advanced technical surveillance capabilities to courts, which allowed the police to use them, often without a warrant. Now, after a lengthy legal battle waged by Freddy Martinez, a Chicago software technician, court orders and case notes have been released, painting a more detailed picture of how the second-largest police department in the U.S. uses surveillance technology to track cellphones. According to the purchase records, some of which Martinez had previously obtained in more heavily redacted form, the Chicago Police Department’s Organized Crime Division spent hundreds of thousands of dollars over more than 10 years buying several models of IMSI catchers (cellphone trackers and cell-site simulators), as well as upgrades, training programs, software, and attachments. The department purchased Harris Corporation’s Stingray, a popular model used by many police departments across the country, and the KingFish, a more powerful cellphone tracker. It also bought DRT boxes, known as “dirt boxes” — military-grade trackers made by Digital Receiver Technology Inc., a subsidiary of Boeing. The Chicago PD also turned over 43 records of deployments of cellphone trackers in the past 10 years — which Martinez suggests is likely lower than the actual number of times the devices were used. [The Intercept]

Telecom / TV

US – Broadband Privacy Rules Approved Despite Industry Pushback

Federal regulators have approved new broadband privacy rules that make internet service providers like Comcast and Verizon ask customers’ permission before using or sharing much of their data, potentially making it more difficult for them to grow advertising businesses. Under the measure, for example, a broadband provider has to ask a customer’s permission before it can tell an advertiser exactly where that customer is by tracking her phone, and what interests can be gleaned from the websites she’s visited and the apps she’s used on it. For some information that’s not considered as private, like names and addresses, there’s a more lenient approach: customers should assume that broadband providers can use that information, but they can “opt out” of letting them do so. The Federal Communications Commission’s measure was scaled back from an earlier proposal, but was still criticized by the advertising, telecommunications and cable industries, which want to increase revenue from ad businesses of their own. Companies and industry groups say it’s confusing and unfair that the regulations are stricter than the Federal Trade Commission standards that digital-advertising behemoths such as Google and Facebook operate under. The FCC does not regulate such web companies. FCC officials approved the rules on a 3-2 vote Thursday, the commission’s latest contentious measure to pass on party lines. “It is the consumer’s information. How it is to be used should be the consumer’s choice, not the choice of some corporate algorithm,” said Tom Wheeler, the Democratic chairman of the FCC who has pushed for the privacy measure and other efforts that have angered phone and cable companies. AT&T and other players have fought the “net neutrality” rules, which went into effect last year and say ISPs can’t favor some internet traffic over other traffic. Another measure that could make the cable-box market more competitive is still waiting for an FCC vote. [Associated Press]

US – Ohm: FCC’s Privacy Proposal is ‘Sensible’

In a post for the Benton Foundation, Georgetown University Law Center professor Paul Ohm argues the pending FCC broadband consumer privacy proposal is “sensible.” He contends ISPs “jeopardize” consumer privacy “in ways the phone company and postal service” do not, pointing out that an ISP is the “mandatory first hop to the rest of the internet,” giving ISPs “a nearly-comprehensive picture” of what a user does. He concludes: “If the FCC’s commissioners hold on to their commitments over the next few weeks and resist the continuing barrage from those urging them to water down the new privacy rules, they will accomplish something truly important. They will long be remembered and celebrated for protecting the kind of privacy we need to ensure safe, dynamic, and innovative online spaces.” [Benton.org]

Workplace Privacy

US – DOT Screening Program Doesn’t Violate Drivers’ Privacy

The Transportation Department didn’t violate truck drivers’ privacy by providing information about their non-serious safety violations to prospective employers, a federal appeals court decided (Flock v. U.S. Dep’t of Transp., 2016 BL 351349, 1st Cir., No. 15-2310, 10/21/16). The ruling leaves intact the pre-employment screening program, or PSP, launched in 2010 by the DOT’s Federal Motor Carrier Safety Administration. For a fee, the program gives employers access to commercial driver applicants’ crash and inspection information. Driver consent is required before information is disclosed by the government. In the present case, drivers contended that the PSP database should include only serious safety violations. They claimed that the inclusion of non-serious offenses, such as speeding tickets and other fines, violated their rights under the Privacy Act. The U.S. Court of Appeals for the First Circuit disagreed, upholding the dismissal of the drivers’ claim. The law allowing the FMCSA to collect safety information doesn’t restrict the agency’s discretion to disclose non-serious violations to employers, provided they have the drivers’ consent, the court said. The court also rejected the drivers’ argument that the PSP’s consent forms are coercive because they must be signed in order for the drivers to seek employment. Employer use of the PSP is optional, and the drivers didn’t present evidence that their employment chances are “doomed entirely” because of the inclusion of non-serious violations, the court said. [bna.com]

WW – The Changing Face of IT Training

It’s the second-most universal aspect of the privacy professional’s job: organizing and providing privacy-related awareness and training. Not only must privacy pros be steeped in knowledge of privacy law, but the IAPP-EY Privacy Governance Report says 78% of privacy pros also need to know how to convey some portion of that knowledge to others. Whether it’s HR, marketing or IT, different areas of the organization need different information. [IAPP.org]

+++
