15-28 February 2019


CN – Facial-Recognition Database Exposed 2.5M Users

According to security researcher Victor Gevers, one of the facial-recognition databases used in China’s Xinjiang region was left open on the internet for months. Gevers stated the MongoDB database, belonging to Chinese company SenseNets, contained information on 2,565,724 users, including names, ID numbers, nationalities, addresses, dates of birth, employer information and GPS data. Gevers also noted that the database was active and had received nearly 6.7 million GPS coordinates within a 24-hour period. After Gevers informed SenseNets, the company secured the database, which now blocks access from non-Chinese IP addresses. [ZDNet]

EU – Swedish Data Authority to Investigate School’s Use of Facial Recognition

The Swedish Data Inspection Authority, the Datainspektionen, plans to investigate the use of facial-recognition technology at a Skelleftea school. The Anderstorp school has used facial recognition to take attendance. Datainspektionen Legal Adviser Ranja Bunni will lead the inquiry to see whether the school is in compliance with the EU General Data Protection Regulation. The authority seeks to find out what personal information has been collected, the way the technology has been used, and whether the school board conducted a risk assessment for the technology. [Telecompaper]


CA – Commissioner Calls for Revamped Privacy Laws in Nova Scotia

Nova Scotia Information and Privacy Commissioner Catherine Tully called for an overhaul of the province’s privacy laws. Appearing before the Nova Scotia Legislature’s public accounts committee, Tully said the province’s privacy laws need to be revamped, as they do not include features such as data breach notification requirements and rules on risk assessments. Her appearance follows a report she issued last month, scathing in its criticism of the way the Nova Scotia government handled the province’s largest privacy breach [CBC coverage]. “This law will not do, it will not protect Nova Scotians,” Tully said [watch starting at 14:34]. “Europe is miles ahead of us, why would we want this? Why would we want less for Nova Scotians than everybody else has?” Opposition members offered their full endorsement of Tully’s call for an overhaul, with NDP MLA Lisa Roberts calling for reform of the act. Tory MLA Tim Halman voiced a similar opinion. Nova Scotia Premier Stephen McNeil said he will commit to updating the province’s privacy laws but added he does not plan to give the privacy commissioner’s office more order-making authority, as Tully has previously requested. [The Canadian Press | CBC News | Commissioner makes plea for an overhaul of Nova Scotia privacy law]

CA – Nova Scotia Receives Recommendations After Review of FOI Site Breach

Deloitte offered the Nova Scotian government 35 recommendations after it conducted a review of the breach of the province’s freedom-of-information website. The audit was ordered after Information and Privacy Commissioner for Nova Scotia Catherine Tully recommended the government put forth “an internal post-incident review.” Deloitte held “a no-fault, lessons learned discussion” with the individuals who helped run the portal. After the interviews were finished, Deloitte released its recommendations, which included creating guidance to define “responsible parties,” designating an incident leader earlier in the event of a breach, and considering whether any misinformation reported by the media should be corrected. [CBC]

CA – Police Are Tracking People’s ‘Negative’ Behavior in a ‘Risk’ Database

Documents obtained by Motherboard from Ontario’s Ministry of Community Safety and Correctional Services (MCSCS) through an access-to-information request show that at least two provinces—Ontario and Saskatchewan—maintain a “Risk-driven Tracking Database” as part of the Hub model, a collaborative approach to policing [described in a 2015 Public Safety Canada report] that partners police, school staff, social workers, health care workers, and the provincial government. The database is used to amass detailed but “de-identified” information about people’s lives, including whether a person uses drugs, has been the victim of an assault, or lives in a “negative neighborhood.” The information is culled from conversations the subject has with police, social services, health workers, and more. Police, social services, and health workers use the shared databases to track the behaviour of vulnerable people—including minors and people experiencing homelessness—with little oversight and often without consent. Information is added to the database when a person is being evaluated for a rapid intervention intended to lower their risk levels. Interventions can range from a door knock and a chat to forced hospitalization or arrest. Saskatchewan and Ontario officials say data in the RTD is “de-identified,” though experts said that scrubbing data so it can never be used to identify an individual is difficult if not impossible. A Motherboard investigation—which involved combing through MCSCS, police, and city documents—found that in 2017, children aged 12 to 17 were the most prevalent age group added to the database in several Ontario regions, and that some interventions were performed without consent. In some cases, children as young as six years old have been subject to intervention.
More than 100 Hubs are now operating in cities and towns across Canada and the US, with 37 in Ontario (where Hubs are usually called Situation Tables) contributing to the Risk-driven Tracking Database as of April 2018, according to MCSCS documents. In total, 55 are expected to be contributing by the end of this year. The remainder of this lengthy piece examines in some detail: 1) How does people’s information get added to the database? 2) What’s in the database? and 3) Predictive policing concerns. [Motherboard | Canada’s ‘Pre-Crime’ Model of Policing Is Sparking Privacy Concerns | The Canadian Government Is Going to Scan Social Media to See If You Smoke Pot | Academics Confirm Major Predictive Policing Algorithm is Fundamentally Flawed]
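The experts’ skepticism about “de-identified” data can be shown with a toy example. The sketch below uses entirely invented records and field names: it joins a name-stripped risk record to a public list on shared quasi-identifiers (age and a postal prefix), re-attaching an identity that field removal was supposed to protect.

```python
# "De-identified" risk records: names removed, quasi-identifiers kept.
# All values here are fabricated for illustration.
risk_db = [
    {"age": 16, "postal": "S7K", "flag": "drug use"},
    {"age": 43, "postal": "M5V", "flag": "assault victim"},
]

# A public dataset (e.g. a voter list) sharing the same quasi-identifiers.
public_list = [
    {"name": "A. Example", "age": 43, "postal": "M5V"},
]

# Joining on (age, postal) re-attaches a name to a "de-identified" record.
reidentified = [
    (p["name"], r["flag"])
    for r in risk_db
    for p in public_list
    if (r["age"], r["postal"]) == (p["age"], p["postal"])
]
```

With richer quasi-identifiers (full birth date, complete postal code, gender), the join becomes even more discriminating, which is why experts treat simple field removal as weak de-identification.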

CA – B.C. Privacy Commissioner Launches PIPA Educational Campaign

Michael McEvoy, British Columbia’s information and privacy commissioner, has announced [Press Release] his office has launched new programs to help the roughly 1 million firms, non-profits, doctors’ offices, trade unions and other organizations in the province understand their obligations under the provincial Personal Information Protection Act (PIPA). The campaign, largely aimed at small and medium-sized organizations, will include webinars, animated pop-up online videos, podcasts and guidance documents. New tools will be published on the first Wednesday of each month throughout 2019 on the OIPC website. The idea for the campaign started when McEvoy looked at responses to a privacy self-assessment toolkit sent randomly to B.C. businesses asking for details about their privacy management program: “Many of them didn’t really understand the most fundamental aspects under our personal information protection act. We also had the same sense from the kind of calls we receive from businesses.” McEvoy officially launched the project on March 7. [IT World Canada]

CA – Ontario Privacy Commissioner to Investigate Sale of Health Data

The Office of the Information and Privacy Commissioner of Ontario announced it will investigate a data-sharing arrangement between U.S. health organization IQVIA and a company that sells electronic medical record software to doctors in the province. The Star first reported the unnamed software company sold anonymized data to IQVIA. The company has access to the medical records of 5 million Ontario citizens. “The article indicates that information from patient records is being provided to private sector organizations,” the privacy commissioner’s office said in a statement. “We have reason to believe that these arrangements may be contrary to the law.” [Toronto Star]

CA – Élections Québec: Political Parties Should Be Covered by Privacy Laws

Élections Québec released a report recommending that political parties be covered by privacy legislation. The group suggests any data held by political parties should receive the same level of protection as the information held by public and private organizations in Quebec. Those privacy laws would also extend to municipal political parties, representatives and candidates for school elections. Élections Québec also wants the National Assembly to create a special committee to examine these issues. “Technologies are evolving, as are the challenges surrounding the protection of personal information,” the group states. “Laws must evolve to reflect these new realities. Our recommendations are meant to provide the basis for a broader reflection on the oversight of political parties with respect to the protection of personal information.” [Élections Québec]

CA – Supreme Court Rules Students Have Privacy Rights in Schools

In a decision that is expected to impact future privacy-related cases, the Supreme Court of Canada ruled that a teacher who secretly filmed female students’ chests with a camera pen is guilty of voyeurism. While lower courts had previously ruled that students had no reasonable expectation of privacy while at school, the Supreme Court’s unanimous decision found that students had a reasonable expectation of privacy, even if the school employs security cameras. Writing for the majority, Chief Justice Richard Wagner wrote, “The explicit focus of the videos on the bodies of the students recorded, including their breasts, leaves me in no doubt that the videos were made in violation of the students’ reasonable expectations of privacy.” [CBC | Commentary: Insights (Dentons) | Insights (McInnes Cooper) | Teresa Scassa Blog | Canadian Lawyer ]


CA – CIRA Report on Canadians’ Views on Fake News, Privacy, Cybersecurity and Internet Access

The Canadian Internet Registration Authority (CIRA) released a research report based on a December 2018 survey of over 1,200 Canadian internet users, detailing Canadians’ opinions and experiences regarding the internet, fake news, privacy, cybersecurity and access. Of those surveyed:

  • 72% are willing to disclose some or a little personal information in exchange for a valuable/convenient service.
  • 87% are concerned that businesses with access to customers’ personal data willingly share it with third parties without consent.
  • 86% believe it is important that government data, including the personal information of Canadians, be stored and transmitted in Canada only.
  • 87% are concerned about a potential cyberattack against organizations with access to their personal data.
  • Only 19% say they would continue to do business with an organization if their personal data were exposed in a cyberattack.
  • 78% are concerned about the potential security threats related to the Internet of Things.

CIRA’s report offers several recommendations to improve Canada’s internet, including enhanced investments by the Canadian government, actions around cybersecurity and privacy that Canadian businesses can take right away, and opportunities for Canadian citizens to improve the internet they rely on every day. [Canadian Internet Registration Authority | CIRA report highlights Canadians’ concerns about cybersecurity, privacy and fake news]

Electronic Records

US – GAO Identifies Challenges in EHR Accuracy

The Government Accountability Office reviewed current approaches to matching electronic health records. The review identified difficulties in collecting correct patient demographic information, in using the same fields across all providers (due to proprietary software), and in assessing the accuracy of matching algorithms. Recommended improvements include implementing a national unique patient identifier, adopting common standards for demographic data, allowing patients to share data with their providers using smartphones, and improving the matching algorithms themselves. [GAO – Health Information Technology – Approaches and Challenges to Electronically Matching Patients Records Across Providers]
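To illustrate why inconsistent demographic fields defeat naive record matching, here is a hypothetical sketch; the field names, sample records and normalization rules are assumptions for illustration, not the GAO’s. It normalizes name formatting before comparing records, so superficial differences in case and punctuation no longer block a match:

```python
import re

def normalize(record):
    # Collapse formatting differences that commonly break exact matching:
    # case, punctuation and spacing in names, assuming dates already share
    # one agreed format.
    name = re.sub(r"[^a-z]", "", record["name"].lower())
    return (name, record["dob"], record.get("zip", ""))

def records_match(a, b):
    # Exact match on normalized demographics; real systems layer
    # probabilistic scoring on top to tolerate typos and missing fields.
    return normalize(a) == normalize(b)

# Two providers storing the same (hypothetical) patient differently.
r1 = {"name": "O'Brien, Mary", "dob": "1980-04-02", "zip": "10001"}
r2 = {"name": "obrien mary", "dob": "1980-04-02", "zip": "10001"}
```

Common demographic standards, as the GAO recommends, amount to agreeing on this kind of normalization up front so every provider’s records compare cleanly.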


WW – International Civil Liberties and Technology Coalition Files Submission Regarding Australia’s Encryption Laws

A coalition of civil liberties advocates and technology companies has filed a submission regarding Australia’s encryption laws. The submission argues against Australia’s plan to force service providers to allow law enforcement to be secretly added to encrypted communications as “ghost users.” The group also voiced its opposition to plans to force companies to reveal source code to the government, to require phone makers to take screenshots and send them to law enforcement, and to impose gag orders on companies that receive technical capabilities requests from the government.

  • www.zdnet.com: Tech giants and civil liberty groups call out ghost cops and source code demands under Australian encryption laws

EU Developments

EU – EDPS Releases 2018 Annual Report

The European Data Protection Supervisor has released its 2018 Annual Report. The document covers the data protection authority’s efforts to prepare for the EU General Data Protection Regulation and its work with the European Data Protection Board. In the introduction letter to the report, EDPS Giovanni Buttarelli also cited the work to reach an adequacy decision with Japan and continued efforts with the EU-U.S. Privacy Shield agreement as other notable occurrences in 2018. The EDPS also issued its guidelines “for assessing the proportionality of measures that limit the fundamental rights to privacy and to the protection of personal data.” [EDPS]

EU – Ireland’s Data Regulator Lists 16 Big Tech Investigations

Helen Dixon, Ireland’s data regulator, has listed details for the first time of 16 investigations into Facebook, Twitter, Apple and LinkedIn and warned that the companies will inevitably face significant fines if they are found to have violated the EU’s new privacy rules under the General Data Protection Regulation (GDPR), which came into force last May. The investigations into Facebook include a single case involving 12 separate data breaches since May, which is separate from Ms Dixon’s investigation into a cyberattack on Facebook last September in which hackers gained access to up to 50m accounts. She published a report on her current slate of investigations [DPC Annual Report 25 May – 31 December 2018 – see PR]. She said that “what’s important to know is that where we do identify that there are infringements, I am obliged to apply a fine.” Breaches of the GDPR carry a maximum penalty of €20m, or up to 4% of a company’s annual worldwide revenues. Ms Dixon is expected to start publishing her findings over the summer, while some cases will not be finished until later in the year. She said the initial investigations are likely to set a precedent for enforcement elsewhere. Her office has initiated seven inquiries of its “own volition” since May: four concerning Facebook, one WhatsApp, and two Twitter. Another nine inquiries began with complaints to her office, including three on Facebook itself, one each on WhatsApp and Instagram, two on Apple, and one each on Twitter and LinkedIn. “The issues that we’re covering between the complaint-driven and own-volition inquiries will allow us set very important standards that we anticipate all entities should reach under the different articles of the GDPR,” she said. [FT.com]

UK – ICO Releases Video on Sandbox Workshop

The U.K. Information Commissioner’s Office has released a video on a workshop it held on its sandbox initiative. The ICO used the workshop to inform stakeholders about the sandbox, where organizations can pitch their solutions that use personal data. Sandbox participants can work with the ICO to ensure their product is in compliance with data protection laws. Attendees of the workshop had the opportunity to offer feedback on the initiative. “I think there is a general perception that privacy and innovation can’t sit comfortably together,” Refinitiv Chief Privacy Officer and IAPP Board Member Vivienne Artz said. “I see this initiative as an opportunity to dispel that myth and to demonstrate that both innovation and privacy can work together to generate excellent results.” [Full Story]

Facts & Stats

WW – Human Negligence to Blame for the Majority of Insider Threats

Behavioral intelligence firm Dtex found that in 98% of the employee assessments conducted for its 2019 Insider Threat Intelligence Report [PR], employees exposed proprietary company or customer information on the web – a 20% jump from 2018. Nearly two-thirds (64%) of insider threats are caused by users who introduce risk through careless behavior or human error, with 13% of threats due to compromised credentials and 23% caused by users intent on harming the organization. The study also found that in 95% of the assessments, employees looked to circumvent company security policies – a notable jump from 60% last year. In many instances, people are using private VPNs and Tor browsers in the hope of shielding their activities, Koo says. While employees are often simply looking to bypass security so they can do their work more efficiently, Dtex has found the use of such tools is often motivated by malicious intent. In related research released this week, Endera [PR] reported that companies suffer from at least three workforce-related incidents per week, adding up to 156 incidents per year. And, according to Egress Technologies [PR & report], more than four out of five companies (83%) have had employees expose customer or business data. [Dark Reading | Analytics, Intelligence & Response: Getting Ahead of the Insider Threat in 2019]


US – N.Y. Department of Financial Services Issues Guidance Regarding Life Insurers’ Use of External Consumer Data in Underwriting

On January 18, 2019, the New York State Department of Financial Services (NYDFS) issued Circular Letter 2019-1, addressing insurers’ use of external consumer data and information sources in underwriting for life insurance. The Circular Letter follows an investigation commenced by NYDFS regarding life insurers’ use of external data, which was initiated in light of reports that insurers were using algorithms and predictive models that include unconventional sources or types of external data. Among other things, the Circular Letter provides guidance that when insurers use external data sources in connection with underwriting decisions: 1) the use of external data sources must not result in any unlawful discrimination; 2) the underwriting or rating guidelines must be based on sound actuarial principles; and 3) life insurers must have adequate consumer disclosures to notify insureds or potential insureds of the right to receive the specific reasons for any adverse underwriting decision based on such data. The bulk of this blog post discusses these issues in some detail and concludes with the following: Life insurers interested in using external data sources, algorithms or predictive modeling in accelerated underwriting processes should exercise caution to ensure that the use of data contained in such materials is not unlawfully discriminatory and would otherwise be permitted by law or regulation. In addition, life insurers should be aware that they are responsible for establishing that the external data sources, algorithms or predictive models are based on sound actuarial principles and that they must notify insureds or potential insureds of their right to receive the specific reasons for any adverse underwriting decision. Finally, life insurers are ultimately responsible for ensuring compliance with such laws through diligence, even when such external data, algorithms or predictive models are provided by a third-party vendor. [Data Matters Blog (Sidley)]

UK – Financial Services See Fivefold Increase in Data Breaches

According to the U.K.’s Financial Conduct Authority, financial services companies in the U.K. experienced a fivefold increase in data breaches for 2018 when compared to 2017, the Financial Times reports. Companies reported 145 breaches for the year, compared to just 25 reported breaches in 2017. The article notes that while the EU General Data Protection Regulation’s breach-reporting requirements are likely to explain part of the increase, bank executives also cite an increased and constant hacking threat. [FT.com]


CA – Alberta Court Overturns OIPC’s Disclosure Order

The Court considered law enforcement’s appeal of an OIPC AB order to disclose records believed to be privileged, pursuant to the Freedom of Information and Protection of Privacy Act. The OIPC erred in its considerations for asserting privilege over records requested for public disclosure; a lawyer who provides legal information to their client is doing so for the purpose of advising their client, and, when asserting privilege, the asserter does not need to consider the public interest or whether their exercise of discretion is consistent with the purpose of FOIP. [Calgary Police Service v Alberta OIPC – 2019 ABQB 109 CanLII – Court of Queen’s Bench of Alberta]

CA – BC Supreme Court Denies Disclosure of Medical Information

The British Columbia Supreme Court considered whether a Health Authority must produce personal information of individuals for litigation purposes. A Health Authority does not have to disclose personal information of individuals who contracted E. Coli while attending a pet farm; the Public Health Act prohibits disclosure of the information to identify potential witnesses for use in legal proceedings, and there are no benefits to the disclosure that would outweigh the privacy interests of the individuals. [Svangtun v. Pacific National Exhibition – 2019 BCSC 121 CanLII – Supreme Court of BC]

Health / Medical

CA – OIPC AB Investigating Security of Patient Health Records

Alberta’s privacy commissioner is investigating whether Alberta Health Services (AHS) properly safeguards the public’s personal health information after a 2018 audit revealed lax cybersecurity practices [CBC coverage]. An assessment by external security firm Procyon Security Group found several “significant risks” with the health authority’s administration of the Alberta Netcare Portal. The system gives health-care providers access to key information from a patient’s medical file, such as laboratory test results and hospital visits. According to OIPC spokesperson Scott Sibbald, commissioner Jill Clayton launched her investigation on Aug. 8. “The investigation is examining safeguards of patient health records at AHS.” The audit found 108 security risks in the Portal and its associated infrastructure: 11 critical, 34 high, and 63 medium. Of particular concern to Procyon was the Alberta Netcare Portal’s “highly insecure” database access. The firm discovered AHS last applied security updates to its system in July 2014 — three and a half years before the company conducted its review — and that the health authority did not securely store users’ passwords. The portal protects users’ passwords through a common method called hashing. But Procyon was able to obtain the password hashes of database users and crack nearly 40% of their actual passwords. From there, the firm would have been able to “exfiltrate all data in the database,” including the password hashes of Alberta Netcare Portal users, and also to access “personally identifiable medical records.” As a condition of its operating agreement with Alberta Health, AHS must conduct vulnerability assessments every two years and meet certain service-level targets. Procyon’s review concluded the health authority is “in breach” of its targets. [CBC News]
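The cracked hashes illustrate a classic weakness: a fast, unsalted hash lets anyone who obtains the hash table test candidate passwords offline at high speed. The toy sketch below is illustrative only – the hash function, passwords and wordlist are invented, and the details of AHS’s actual scheme have not been published:

```python
import hashlib

def fast_hash(password):
    # An unsalted, fast hash (MD5 here, purely for illustration) is what
    # makes offline dictionary attacks cheap: each guess costs one hash.
    return hashlib.md5(password.encode()).hexdigest()

# A leaked table of unsalted password hashes (hypothetical users).
leaked_hashes = {fast_hash(p) for p in ["winter2014", "Passw0rd!", "hunter2"]}

# The attacker hashes each wordlist candidate and checks for membership.
wordlist = ["123456", "hunter2", "winter2014", "letmein"]
cracked = [w for w in wordlist if fast_hash(w) in leaked_hashes]
```

Salting each hash and switching to a deliberately slow function such as bcrypt, scrypt or Argon2 raises the per-guess cost by orders of magnitude, which is the standard mitigation.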

CA – EHRs: Circle of Care is a Misleading Concept

The OIPC Saskatchewan investigated a privacy breach by a health authority, pursuant to the Health Information Protection Act (HIPA). The investigation found that physicians accessed personal health information without a need to know after the patients were transferred out of the physicians’ care. The concept of a “circle of care” is not recognized under HIPA; it focuses on physicians rather than patients, suggests a static kind of entitlement to information, and is often misinterpreted to include only trustees and their employees (non-trustees such as police or teachers may also have a need to know). [OIPC SK – Investigation Report 180-2018_181_2018_226_2018 – Saskatchewan Health Authority involving Dr. R Dr. L and Dr. F]

US – All-Time Record Year for HIPAA Enforcement

The U.S. Department of Health and Human Services Office for Civil Rights (OCR) announced that 2018 was an all-time record year for Health Insurance Portability and Accountability Act (“HIPAA“) enforcement activity [see PR]. Enforcement actions in 2018 resulted in the assessment of $28.7 million in civil money penalties [see summary of 2018 OCR HIPAA actions]. Enforcement activity focused primarily on breaches of electronic protected health information (ePHI). Another enforcement theme in 2018 focused on physical theft of PHI or devices containing ePHI. OCR’s record-breaking enforcement activities in 2018 serve as a reminder to covered entities and business associates to conduct frequent and meaningful assessment of the security of any PHI they hold, to swiftly remediate any vulnerabilities discovered, and to carefully document the assessment, remediation, and general HIPAA policies and procedures. This blog post is part of our ongoing coverage of HIPAA issues, which includes, among others: 1) HHS Releases Voluntary Cybersecurity Guidance; 2) HHS Announces More HIPAA Enforcement Actions; 3) Twenty-First Century Cures Act Includes HIPAA Provisions; and 4) Significant HIPAA Fine Follows Business Associate’s Stolen iPhone

[Inside Privacy (Covington) | HIPAA Data Breach Reports Due to OCR by 2/28/19]

Horror Stories

US – Marriott Service Allows Users to Check if They are Data Breach Victims

Marriott has released a tool to help individuals discover whether they were victims of the hotel chain’s data breach. The tool, which is hosted by OneTrust, also will determine whether a person’s passport numbers were stolen in the incident. Marriott has not announced how long it will take to answer each request. Meanwhile, a report from Risk Based Security found about 5 billion records were stolen around the world in 2018, down from 7.9 billion in 2017. ZDNet also reports on a hacker who has sold three databases that contained the data of millions of users. [TechCrunch]

Identity Issues

US – Potential Privacy Lapse Found in Americans’ 2010 Census Data

An internal team at the Census Bureau found that basic personal information collected from more than 100 million Americans during the 2010 head count could be reconstructed from obscured data, but with lots of mistakes, chief scientist John Abowd told the American Association for the Advancement of Science annual meeting on Saturday. The age, gender, location, race and ethnicity of 138 million people were potentially vulnerable. So far, however, only internal hacking teams have discovered such details at possible risk, and no outside groups are known to have grabbed data intended to remain private for 72 years. In the internal tests, Abowd said, officials were able to match 45% of the people who answered the 2010 census with information from public and commercial data sets such as Facebook. But errors in this technique meant that only data for 52 million people would be completely correct – little more than 1-in-6 of the U.S. population. The Census Bureau is now scrapping its old data-shielding technique for a state-of-the-art method that Abowd claimed is far better than Google’s or Apple’s. He said, “I promise the American people they will have the privacy that they deserve. The 2020 census will be the safest and best protected ever.” The new system involves complex mathematical algorithms that inject “noise” into the data, making it harder to get accurate information and providing “a very strong guarantee” of privacy. This increases privacy while lowering the accuracy for researchers who use the statistics. [Phys.Org | Court will review census citizenship dispute this term | Supreme Court to Hear Case on Census Citizenship Question | Supreme Court to decide whether citizenship question can be included in 2020 census | Supreme Court Fast Tracks Census Case, Because They Already Know How They’re Going To Rule]
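The “noise” Abowd describes is, in broad strokes, differential privacy: random noise calibrated to a query’s sensitivity is added before statistics are released. The sketch below is a minimal illustration with invented parameters; the Census Bureau’s actual mechanism is considerably more elaborate:

```python
import random

def laplace_noise(scale):
    # The difference of two independent exponential draws is
    # Laplace-distributed with the given scale.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(true_count, epsilon):
    # A counting query changes by at most 1 when one person is added or
    # removed (sensitivity 1), so Laplace noise of scale 1/epsilon gives
    # epsilon-differential privacy for the released count.
    return true_count + laplace_noise(1.0 / epsilon)

# A hypothetical published count of 1,000 people at a moderate epsilon.
noisy = dp_count(1000, epsilon=0.5)
```

Smaller epsilon means more noise and stronger privacy, which is exactly the privacy-versus-accuracy trade-off the article notes for researchers using the published statistics.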

Law Enforcement

CA – Vancouver Police Suspends Camera Registration Program Over Privacy Concerns

The Vancouver Police Department announced it has suspended its recently launched Community Camera Registration Program over citizens’ privacy concerns. The department asked residents and business owners to register information about their privately owned camera systems in order to assist in law enforcement investigations. Vancouver Police Department Spokeswoman Kim Kapp said in an email individuals were concerned personal information may be leaked to the public through the program. “We decided to temporarily suspend the program so that we could modify the registration form in an effort to better protect the privacy of citizens while still collecting meaningful information for investigators,” Kapp said. [The Columbian]

US – Privacy Advocate Held at Gunpoint After License Plate Reader Database Mistake: Lawsuit

Bay Area police pulled over a California privacy advocate and held him at gunpoint after a database error caused a license plate reader to flag a car as stolen, alleges a lawsuit filed by Brian Hofer, chair of Oakland’s Privacy Advisory Commission, in December. According to the suit, Hofer had rented a car and was traveling with his brother when he was pulled over by a Contra Costa Sheriff’s Office deputy, and more police cars joined. Hofer alleges that an officer had a gun drawn and told him and his brother to exit the rental car, that a deputy injured his brother by throwing him to the ground, and that the officers searched the car without consent. After allegedly spending about 40 minutes detained by the officers, the two brothers were released. The complaint, filed against multiple officers, alleges that the incident was a violation of Hofer’s Fourth Amendment rights. A spokesperson for the sheriff’s office said in a statement: “The Deputy Sheriffs involved in this case followed procedure and acted appropriately,” adding that the car was stolen at one point but that, for an unknown reason, the system wasn’t updated after the stolen car was recovered. [The Verge | ‘They went cowboy on us’: Privacy advocate says East Bay police held him at gunpoint over license plate reader mix-up]


US – Survey: 83% of US Citizens Know Companies Track Location Data

A survey conducted by location-intelligence company Blis found the majority of Americans are more aware of marketers’ use of their personal data. Of the 2,000 U.S. citizens polled, 63% said they have become more cognizant of marketers’ data use compared to a year ago. When asked about location tracking, 83% said they were aware companies “actively track their location data.” Respondents were also asked to put a price on their personally identifiable information. Nearly 60% said they would sell their data for a price, and of that group, 57% said their data was worth a minimum of $10. [MarTech Today]

WW – Rise of 5G Could Lead to Location Data Privacy Issues

Columbia University’s Steve Bellovin has identified potential privacy issues related to 5G networks. Since 5G signals in the U.S. will have a shorter range, Bellovin said, more cell towers will need to be erected. Bellovin believes this gives companies the ability to collect more precise location data, and since 5G cannot go through walls, the computer science professor said towers will be placed inside buildings for even more accurate location information. “We need much clearer regulation of what carriers can do with location data,” Bellovin said. [The Wall Street Journal]

Online Privacy

WW – Study Finds 17K Android Apps Gather Information from Smartphones

A study conducted by the International Computer Science Institute found about 17,000 Android apps gather identifying information to create a record of users’ activity on their devices. The researchers found the apps link a user’s Advertising ID with other identifiers found on a smartphone, such as the MAC address and Android ID. Google announced it has looked into the report and has taken action against a number of app developers. “We take these issues very seriously,” a Google spokesperson said in a statement. “Combining Ad ID with device identifiers for the purpose of ads personalization is strictly forbidden. We’re constantly reviewing apps — including those listed in the researcher’s report — and will take action when they do not comply with our policies.” [CNET]

US – EFF Puts a Spotlight on Apps’ ‘Dark Patterns’

The Electronic Frontier Foundation (EFF) looks at the tactics tech companies use to nudge users into consenting to more data collection. “Dark patterns” are design techniques in which visual cues steer users toward choices that favor the company; an app may, for example, place the button that connects it to another service in a bright box to entice users to agree. The flip side is “opinionated design,” which uses the same techniques to warn users about potential online dangers: when an app warns that a connection may not be private, the highlighted box instead takes the individual back to a safe website. The EFF states the best way to combat “dark patterns” is to build apps with a focus on privacy by design. [EFF.org]

WW – How Mobile Data Collection Could Raise Taxes and Privacy Concerns

The growing practice of collecting mobile-tracking data raises questions for consumers. Beyond concerns about what data is collected, who has access to it and how it is shared, “the potential for this same data to be monetized via auditing and compliance fees is even more problematic.” As more state and local governments move to use such data to generate revenue through congestion pricing and infrastructure fees, it creates business opportunities for companies that harvest and analyze mobile-tracking data, but, as the article points out, its use to create an additional tax will raise some “pretty significant privacy concerns.” [TechCrunch]

WW – Majority of Chrome Apps, Extensions Do Not Have Privacy Policies: Study

Research conducted by Duo Security found 85% of Google Chrome apps and extensions do not have privacy policies. Duo Security examined 120,463 Chrome extensions and apps found in the Chrome Web Store as of January 2019. The company discovered 35% of the apps and extensions can read the data of the sites a user visits, and 32% use third-party libraries with previously known vulnerabilities. Meanwhile, a survey conducted by ERP Maestro found only 9% of 2,000 Americans polled take the proper steps to protect themselves from identity theft. [Engadget]

WW – Some Apps Stop Sharing Data with Facebook After WSJ Report

New York Governor Andrew Cuomo has called for an investigation into reports that health apps were sending sensitive data to Facebook. The Wall Street Journal reported that the apps were sending health and financial data to Facebook even when users were not logged into Facebook or did not have Facebook accounts. Facebook’s response to the initial WSJ report was to contact app developers and advertisers to remind them that Facebook’s terms of service prohibit sending it sensitive user data. Facebook maintains that the app developers are the ones who should be under scrutiny; others say Facebook should take responsibility for what it has created. Some of the apps have since stopped sending the data to Facebook.

  •  www.wsj.com: Popular Apps Cease Sharing Data With Facebook (paywall)
  •  www.wsj.com: You Give Apps Sensitive Personal Information. Then They Tell Facebook. (paywall)
  •  www.wsj.com: Eleven Popular Apps That Shared Data With Facebook (paywall)
  •  www.bleepingcomputer.com: NY Governor Cuomo Calls For Investigation on Facebook Health Data Collection
  •  www.zdnet.com: Another Facebook privacy scandal, this time involving its mobile analytics SDK

Other Jurisdictions

AU – Political Parties Should be Stripped of Privacy Act Exemptions After Hack: Experts

In the wake of an online attack that left political parties exposed to fears that a “sophisticated” agency had obtained highly confidential records [see PM’s statement], several former privacy commissioners have called for an end to the exemption from privacy and security laws that political parties have been afforded since the Privacy Act in 2000, an exemption some experts say spares parties from the obligation to protect valuable information based on the electoral roll. Former federal privacy commissioner Malcolm Crompton [see bio], who opposed the exemption for political parties when he held the office in 2000, said the parties should be compelled by law to notify authorities of a “notifiable data breach” under legislation that came into effect one year ago and applies to all federal agencies, health service providers and businesses with a turnover of more than $3 million. “The political parties have access to some of the most accurate details about voters in Australia that there is. They have access to the electoral roll and it contains more demographic detail than simply name and address. In addition, each of the major parties have their own systems for ingesting further detailed information about voters, for example in regard to every contact somebody might have with the electoral office of each parliamentarian, and other information that they can purchase from information aggregators.” While it has become common for banks or others to notify customers of a privacy breach, there is no similar requirement for the political parties. The Australian Law Reform Commission recommended an end to the special treatment in 2008 [see ALRC Report 108 here: 833 pg PDF vol 1 here, 870 pg PDF vol 2 here & 997 pg PDF vol 3 here].
The Sydney Morning Herald | Australia’s major political parties hacked in ‘sophisticated’ attack ahead of election | With elections weeks away, someone “sophisticated” hacked Australia’s politicians | Australia’s major political parties targeted by ‘sophisticated state actor’, PM says | China rejects claims it hacked Australian political parties

Privacy (US)

US – Department of Education Issues Updated Student Privacy Guidance Following Federal Commission on School Safety Report

On February 5, 2019, the U.S. Department of Education (ED) released new regulatory guidance entitled “School Resource Officers, School Law Enforcement Units, and the Family Educational Rights and Privacy Act” [see PR]. The guidance document was prepared in response to the December 2018 final report of the Federal Commission on School Safety [see 180 pg PDF report here]. One of that report’s specific recommendations was that ED clarify the parameters of information sharing between school staff, school resource officers (SROs) and school safety officers (SSOs), with special consideration and training regarding the privacy requirements of the Family Educational Rights and Privacy Act (FERPA) and the Health Insurance Portability and Accountability Act (HIPAA). The new guidance aims to give school officials a better understanding of how FERPA applies to circumstances that threaten the health or safety of individuals, and thus to empower such officials to act more quickly and decisively when challenges arise. It consolidates multiple prior guidance and technical assistance documents into a single resource. DBR on DATA (DrinkerBiddle)

US – FTC’s Record Fine to TikTok Makes One Thing Clear: Illegally Collecting Kids’ Data Won’t Be Tolerated

Musical.ly, the lip-dubbing app that became part of video app TikTok last year, agreed to pay the Federal Trade Commission $5.7 million over allegations that it collected personal information from children under the age of 13 without getting parental consent [see FTC blog post here, PR here & 11 pg complaint and 25 pg Order]. The TikTok app allows users to create short 15-second videos, showing everything from dancing to silly stunts, which can then be strung together into minute-long stories shared with other people on the social platform. “The operators of Musical.ly—now known as TikTok—knew many children were using the app, but they still failed to seek parental consent before collecting names, email addresses, and other personal information from users under the age of 13,” FTC Chairman Joe Simons said. It is the largest civil penalty ever collected in a privacy case related to children, according to the FTC’s news release. In a statement, the company said it will create “a limited, separate app experience that introduces additional safety and privacy protections designed specifically” for a younger audience, under which “users cannot do things like share their videos on TikTok, comment on others’ videos, message with users, or maintain a profile or followers. However, they will be able to experience what TikTok is at its core—showcasing creativity—as they enjoy curated content and experiment with TikTok’s unique, fanciful, and expressive features.” Fortune | FTC Hits Musical.ly With Record-Setting $5.7M Fine | FTC Exacts Record Fine for Kids Privacy Violation

US – Privacy Issues With School Safety Technology

As schools embrace technologies to monitor and collect data, there is a need to balance privacy and security in a school environment. A recent panel discussion at SXSW EDU brought together privacy advocates, a school administrator and a school safety software product manager to discuss this balance and left one assistant principal to admit feeling “caught in the middle” when it came to ensuring student safety and protecting student data. New Knowledge Researcher Bill Fitzgerald noted, “The vast majority of people in this space are doing it for the right reasons,” but added, “a lot of these tools are less about what’s better for kids, and more about what’s easier for adults.” [EdSurge]

NZ – Pilot Project Introduces AI to Track Truancy

A pilot project implementing facial-recognition technology to monitor student attendance at two New Zealand tertiary providers is set to complete in the coming month. Run by Aware Group, the $150,000 project utilizes artificial intelligence to track arrivals and departures in an effort to monitor “safety and truancy” in real time. Aware Group CEO Brandon Hutcheson did not confirm which institutions had used the technology but noted that students involved in the project had signed consent forms. [Stuff]

US – Lack of K–12 CPOs Isn’t Likely to End

While employing a chief privacy officer in K–12 education would offer benefits to student privacy, the reality is that the education system is far away from actually employing dedicated privacy employees. After reaching out to various educational nonprofits, a technology association and several privacy professionals, EdSurge states that none could identify a single K–12 CPO. Linnette Attai, founder of global compliance consulting firm PlayWell, said the main reason the role is sidelined is due to funding constraints. “It should be a leadership position, but it’s not,” she said. “We’re a really long way off from it ever being there, and we may never be there.” [EdSurge]

US – Children’s Advocacy Groups File FTC Complaint Against Facebook

The U.S. Federal Trade Commission received a complaint from 17 children’s advocacy groups accusing Facebook of deceiving children into accruing fees from games on the platform. The groups argue that in-app purchases were often completed without parental consent. In the complaint, the groups said, “Facebook’s exploitative practices targeted a population universally recognized as vulnerable — young people.” The groups have asked the FTC to investigate. In its response, Facebook noted, “As part of our long history of working with parents and experts to offer tools for families navigating Facebook and the web, Facebook also has safeguards in place regarding minors’ purchases.” [The New York Times]

Privacy Enhancing Technologies (PETs)

WW – Google Announces Shift in Approach to Global Policy

An internal email from Google Global Policy Chief Karan Bhatia outlined a reorganization of the company’s approach to global policy. According to a source familiar with the decision, the move will include additional resources for emerging markets and is a reaction to the ability of policymakers worldwide to regulate the company’s core businesses. Meanwhile, Google announced an “error“ in not disclosing a built-in microphone in devices belonging to the company’s Nest Secure home security system. The company said, “The on-device microphone was never intended to be a secret and should have been listed in the tech specs. That was an error on our part. The microphone has never been on and is only activated when users specifically enable the option.” [Bloomberg]

WW – Blockchain May Be More Vulnerable Than Previously Thought

Blockchain technology, once revered as impenetrable to hacking, appears to be more vulnerable than previously thought. The flip side of the technology’s unique security features is a set of equally unique vulnerabilities. While some vulnerabilities arise from development mistakes, the article highlights that in other instances the weakness stems from “more of a gray area — the complicated result of interactions between the code, the economics of the blockchain, and human greed.” [MIT Technology Review]


WW – Employees’ Emails, File Sharing Are Data Breach Trojan Horses: Survey

Employees’ emailing and file-sharing practices are the leading cause of accidental data breaches, according to a new survey of 1,000-plus U.S. companies conducted by Opinion Matters research group on behalf of data security platform Egress [see here – see PR here & download the full survey and report here]. The respondents were senior and midlevel security professionals. Eighty-three percent of organizations surveyed said they experienced an accidental data breach. When an employee has unintentionally exposed sensitive data, 51% of respondents said it was through an external email provider, such as Gmail and Yahoo, while 46% said corporate email was used in an accidental data breach. Common employee email pitfalls include sending emails to the wrong address, forwarding sensitive information and sharing attachments with hidden sensitive content, according to the survey. Collaboration and file-sharing services like Dropbox and Slack are becoming common at organizations, and as a result, sensitive information is being exposed: 40% said file-sharing technology was used in employee-caused breach accidents, followed closely (38%) by collaboration tools. The survey singled out encryption as a standard best practice for securing and sharing sensitive data through email and file sharing. However, only 79% of respondents said employees are required to use encryption when externally sharing personally identifiable information (PII) or critical business data, while 64% were required to use encryption when internally sharing PII or critical business data. While most respondents said their biggest IT security risks were ransomware and malware (48%) and external attacks (45%), only 40% named accidental data breaches by employees as a risk. New regulations such as the GDPR and the pending California Consumer Privacy Act have influenced 54% of respondents to invest in new security technology, according to the survey. Data privacy regulations have also led 52% of organizations to invest in employee training, and 44% have restricted the use of external data-sharing tools. Meanwhile, only 8% said new regulations haven’t changed their organization’s data-sharing habits. [Legal Tech News (Law.com) | Human Negligence to Blame for the Majority of Insider Threats | Analytics, Intelligence & Response: Getting Ahead of the Insider Threat in 2019]


CN – Surveillance Project Raises Privacy Concerns

By the end of March, 12 surveillance cameras will be mounted on streetlights in Hong Kong’s Kowloon East district as part of a pilot project to identify whether a car is parked illegally. The project, put forward by the Development Bureau’s Energising Kowloon East Office, would automatically read license plates and alert police. The report notes that various legal and privacy concerns were raised during the project’s public consultation process. [The South China Morning Post]

WW – Google Says Nest’s Secret Microphone Was ‘Never Intended to be a Secret’

When Google announced earlier this month that its Nest Secure smart home hub [here & wiki here] would double as a Google Assistant [here & wiki here], it sparked anger: Google hadn’t told anyone that the security hub had a microphone inside to begin with. There was no mention of the microphone on the initial list of tech specifications, nor was it mentioned after the company announced Google Assistant integration. (It’s there now.) In an email to TechCrunch, Google spokesperson Nicol Addison said: “The on-device microphone was never intended to be a secret and should have been listed in the tech specs. That was an error on our part. The microphone has never been on and is only activated when users specifically enable the option.” Not disclosing the inclusion of a microphone in a device that sits in your home looks bad, and it could not come at a worse time for tech giants as they try to claw back any ounce of respect they have from privacy-conscious consumers. It makes you wonder how many other devices you have in your home, and out in the world, that could be used to spy on you. TechCrunch | Google says the built-in microphone it never told Nest users about was ‘never supposed to be a secret’ (GOOG, GOOGL) | Users alarmed by undisclosed microphone in Nest Security System | Google calls Nest’s hidden microphone an ‘error’

US Government Programs

US – U.S. Shared Terror Watchlist With 1,400 Private Groups

The U.S. federal government has acknowledged that it shares its terrorist watchlist [see Terrorist Screening Database here & wiki here] with more than 1,400 private entities, including hospitals and universities, prompting concerns from civil libertarians that those mistakenly placed on the list could face a wide variety of hassles in their daily lives. The watchlist is supposed to include only known or suspected terrorists but contains hundreds of thousands of names. The government’s no-fly list is culled from a small subset of the watchlist. The exact number on the list is kept secret, but the government has acknowledged that it adds hundreds of thousands of names every year; it also emphasized that names are routinely removed. Critics say the watchlist is wildly overbroad and mismanaged, and that large numbers of people wrongly included on it suffer routine difficulties and indignities as a result. The government’s admission that it shares the list so broadly comes, after years of insistence that the list is generally not shared with the private sector, in a constitutional challenge/class-action lawsuit filed in federal court in the Eastern District of Virginia [see El Hady v. Kable: Case No. 1:16-cv-375 (AJT/JFA), PR here] by Gadeir Abbas [LinkedIn here], a lawyer with the Council on American-Islamic Relations [here & wiki here], on behalf of Muslims who say they regularly experience difficulties in travel, financial transactions and interactions with law enforcement because they have been wrongly added to the list [see CAIR blog post here]. Abbas said that now the government has disclosed how many private entities receive access to [the database], it needs to explain exactly which private entities are receiving it and what they are doing with it. He has asked a judge to require the government to be more specific; a hearing is scheduled for Friday.
In some quarters, the government has been criticized for failing to widely disseminate the list to private agencies who might need to know about suspected terrorists. A 2007 report from a government watchdog criticized the government for just that [see 106 pg PDF here]. CTV News | More than 1,000 private entities have access to terrorism watch list, government says

US Legislation

US – Democrats Vow Congress Will ‘Assert Itself’ Against Tech — Starting With Silicon Valley’s Privacy Practices

The House Energy and Commerce Consumer Protection Subcommittee [see here] embarked on a wide-ranging campaign to probe Facebook, Google and their peers in the tech industry, a new burst of oversight that could bring heightened attention to some of Silicon Valley’s controversial business practices. At the first major tech policy hearing since Democrats took control of the House [see Democrat PR, details & witnesses & watch starting at 11:37], members charged that long-standing inaction on Capitol Hill had left consumers unprotected in the digital age. They pledged to grill tech companies, shine a harsher light on their missteps and write tough federal laws, including new rules to protect web users’ online privacy. Chairperson Rep. Jan Schakowsky (D-Ill.) opened the session by saying some tech giants’ data-collection practices “undoubtedly give Americans the creeps,” pointing to recent reports that apps pass sensitive information, including details about women’s menstrual cycles, along to Facebook. “Without a comprehensive federal privacy law,” she said, “the burden has fallen completely on consumers to protect themselves, and this has to end.” Rep. Greg Walden (R-Ore.), who chaired hearings with top tech executives on the Energy and Commerce Committee last year, also endorsed a federal law during the session Tuesday. In the Republican-controlled Senate, lawmakers [in the Senate Commerce, Science and Transportation Committee] plan to hold their own hearing on online privacy on Wednesday [Feb. 27 at 10:00 am – see details]. Adding to the pressure on Silicon Valley are new, tech-savvy Republicans who arrived this year in the Senate, including Sen. Josh Hawley (Mo.). The former Missouri attorney general has spent his first two months in the chamber sharply criticizing tech giants’ privacy practices and urging the Trump administration to take a closer look at the industry.
[The Washington Post | House United on Privacy, Still Divided on Details | US lawmakers kick off debate over online privacy]

US – GAO to Congress: It’s Time for Data Privacy Legislation

A report from the US Government Accountability Office (GAO) [see report, highlights and recommendations – also see House Energy & Commerce Committee PR] recommends that Congress develop data privacy protection legislation much like the European Union’s General Data Protection Regulation (GDPR). Among the incidents referenced in the report is the Cambridge Analytica scandal, in which “Facebook disclosed that a Cambridge University researcher may have improperly shared the data of up to 87 million of [its] users with a political consulting firm.” The report is the result of a request from the House Energy and Commerce Committee two years ago.

US – The FTC Decides to Uphold the CAN-SPAM Rule Without Any Changes

On February 12, 2019, the Federal Trade Commission [FTC] announced that it completed its first review of the CAN-SPAM Rule [see here & text here], the rule governing commercial e-mail. Based on its review, the FTC announced its decision, available here, to “retain the Rule in its present form” [see FTC’s Federal Register confirmation of the Rule]. The FTC reviewed public comments and proposals in making its determination [see FTC notice for public comment]. According to the FTC’s confirmation of the Rule, of the 92 comments received, most were submitted by individual consumers and many suggested modifications to the Rule. Many comments responded to specific issues raised by the FTC regarding whether it should: 1) modify the type of messages treated as “transactional or relationship messages”; 2) shorten the time period for processing opt-out requests; or 3) identify additional practices that constitute “aggravated violations.” In rejecting all of the suggested modifications, the FTC found that no proposal presented sufficient evidence that its added consumer benefit would outweigh its increased burden on businesses. However, the FTC stated that it would monitor matters and, if necessary, amend the Rule in the future. The FTC also stated that it will “review and consider revising its existing Compliance Guide for Business” to help businesses “more easily understand the Rule’s protections and requirements.” Finally, the FTC noted that many of the suggested modifications could “inform industry best practices” even if they ultimately do not become requirements under the Rule. Privacy & Data Security Blog (Alston & Bird) | No Change to CAN-SPAM | FTC Retains Current CAN-SPAM Rules | FTC Enforcement Trends in Consumer Protection

US – ACLU, Advocacy Groups Voice Support of Calif. ‘Privacy for All’ bill

The American Civil Liberties Union of California and a collection of advocacy organizations have voiced their support for the “Privacy for All” state bill introduced by Assemblymember Buffy Wicks, D-Calif. Bill AB 1760 would ensure companies get consumer consent before any of their data is shared. The “Privacy for All” bill seeks to close a loophole within the California Consumer Privacy Act, as in its current form, companies only need to obtain consent when data is sold. AB 1760 would give California citizens the ability to find out what data companies share and who it is shared with and would prohibit discrimination against anyone who exercises their data rights. [ACLU]





01-14 February 2019


US – SF lawmaker Seeks Ban on Use of Facial Recognition by Cops

San Francisco could become the first city in America to outright ban the use of facial-recognition technology by its police department or any other city agency if a new municipal ordinance proposed by Supervisor Aaron Peskin [called the “Stop Secret Surveillance Ordinance” – see Fact Sheet] passes in the coming months. The city would also impose a new pre-emptive “Surveillance Technology Policy” for city agencies that want to acquire any new gear that could impact privacy. Such a requirement would put San Francisco in line with its neighboring cities of Oakland and Berkeley. The bill states unequivocally that the risks involved in using the technology “substantially outweigh its purported benefits, and the technology will exacerbate racial injustice and threaten our ability to live free of continuous government monitoring.” A legislative aide said, however, that the board of supervisors still does not have a full inventory of what surveillance technology city agencies have. Facial recognition historically has resulted in more false positives for African-Americans. As Ars has reported before: if the training data is heavily skewed toward white men, the resulting recognizer may be great at identifying other white men but useless at recognizing anyone outside that particular demographic. Last May, the Congressional Black Caucus wrote to Amazon CEO Jeff Bezos expressing concern over the “profound negative consequences” of the use of such technology. Nevertheless, law enforcement, at airports in particular, has recently expanded its use of the technology. The SFPD would not give its opinion on the bill. The bill is set to go to the Board’s Rules Committee in 30 days and could be in front of the entire Board within months. It requires six votes to pass but could be vetoed by the mayor; eight votes (of the 11 total supervisors) would constitute a veto-proof majority. Ars Technica

WW – WhatsApp Update Adds Biometric Authentication Option

A recent update to WhatsApp allows users to lock the app with biometric tools. WhatsApp version 2.19.20 on iPhones lets users lock the app with Face ID or Touch ID. A caveat: if users have set their notifications to allow message previews, those will still be visible and can be replied to without opening the app. Calls to WhatsApp can also be answered without unlocking the app. A version of WhatsApp with a biometric protection feature for Android is reportedly in beta testing.

  • cyberscoop.com: WhatsApp adds biometric feature to help protect messages
  • theverge.com: WhatsApp can now be locked using Face ID or Touch ID


CA – Extend Freedom of Information Law to B.C. Legislature, Say Watchdogs

B.C.’s Information and Privacy Commissioner Michael McEvoy, Merit Commissioner Fiona Spencer and Ombudsperson Jay Chalke have recommended changes to require public disclosure of B.C. legislature expenses, in the wake of the suspension of the legislature’s top two managers for financial irregularities. On Tuesday, NDP house leader Mike Farnworth said the B.C. legislature will open up its secret operations, though the legislative changes may have to wait for the fall session before being presented to MLAs for a vote. In a February 4 letter addressed to Darryl Plecas, Speaker of the B.C. Legislative Assembly [see Press Release], McEvoy, Spencer and Chalke write: “We wish to emphasize that we are making these suggestions regardless of the resolution of the status of the suspended permanent officers” [on reference to suspended officers see here]. Their recommended changes are for the “corporate Legislative Assembly,” historically controlled only by the Speaker, the letter adds. Recommendations include extending the Freedom of Information and Protection of Privacy Act to the legislature, which has a $70 million budget to run MLA constituency offices and the parliament buildings, covering everything from a restaurant to security to the chamber itself. Surrey Now – Leader

CA – N.L. looking for New Privacy Watchdog as Donovan Molloy Takes Judge Job

The search is on for a new information and privacy commissioner in Newfoundland and Labrador as Donovan Molloy heads to the bench in the Northwest Territories [see coverage here]. Molloy, who was named commissioner in 2016, will become a territorial judge on Feb. 20. Over the years, Molloy rebuked the justice department for its handling of an access-to-information request and scolded the province for breaking its own information laws. He has also reviewed high-profile privacy concerns, including the town of Paradise’s use of security cameras as well as the widespread sharing of videos appearing to show shoplifters. Perry Trimper, N.L.’s house speaker, is responsible for convening the committee that will find Molloy’s replacement, something he said will not be easy. He said the committee will first find an acting commissioner but would not say how long a permanent hire might take. “We will be starting immediately to put someone in there. So we’ve got some ideas but I’ll let the process unfold,” he said. CBC News

CA – SK Minister of Justice Hopes to Clear the Way for Police to Name Homicide Victims

Justice Minister Don Morgan says bringing Saskatchewan police services under freedom-of-information regulations had unintended consequences when police services decided the new rules meant they should not release victims’ names after a homicide. On Jan. 22, the province amended The Local Authority Freedom of Information and Protection of Privacy Amendment Regulations. The amendment, recommended by Morgan, explicitly states that police services are permitted to “disclose to the public the name of a deceased person whose death is being investigated as a homicide.” Morgan said some police services interpreted the wording of the unamended regulations to mean that the deceased in a homicide investigation had a right to privacy until a charge was laid; as a result, the Regina Police Service decided to stop naming some homicide victims in its public releases. The amendment also applies to Investigative Services and Security Intelligence Units within the Ministry of Corrections and Policing. Despite the amendment, Saskatoon Police Service spokesperson Julie Clark said SPS will not change its conduct around the naming of homicide victims: “We are continuing our practice of not naming homicide victims, unless requested otherwise by the family of the deceased.” The Regina Police Service did not comment by publication time. In the past, the RPS has opted to act in line with recommendations from the information and privacy commissioner and will not [automatically] release a homicide victim’s name. Morgan said the privacy commissioner has taken the position that a deceased person still has privacy rights; however, he takes a different view and hopes the amendment will allow the old practice of naming homicide victims by default to return to common use in Saskatchewan. Source: CBC News



WW – Report Predicts Companies Will Give Users More Privacy and Control of Data

In its “Technology Vision 2019” report, Accenture predicts organizations will give consumers more privacy and control over their data. Accenture’s annual report covers the trends it believes will impact businesses over the next three years. The report states that in order to build trust with consumers, companies need to place an emphasis on transparency and on consumers’ ability to manage their own information. “Companies are amassing tremendous amounts of information about consumers,” Accenture Chief Technology and Innovation Officer Paul Daugherty said. “The key thing for companies to think about is just because you can do something doesn’t mean you should do something.” [Fortune | Accenture]


CA – Ontario Launches Consultations on Data Collection to Create Provincial Strategy

The Government of Ontario has launched data strategy consultations to gather information to create a provincial strategy addressing concerns around personal data collection, privacy and security. Progressive Conservative MPP Bill Walker, who is also the Ontario Minister of Government and Consumer Services, said the government is “seeking to get a better understanding” of how it can drive innovation while protecting data at the same time. He said that through the consultation process the government will look at whether current laws and policies “provide sufficient protection in an age of widespread data collection, sharing and use.” He noted that data collection practices are now shaping many key decisions regarding health, finances and education. The public can participate in the consultation through an online survey until March 7. Walker said the consultations will focus on three topics: promoting public trust and confidence, creating economic benefits, and enabling a better, smarter, more efficient government. He said the government intends on “introducing world-leading, best-in-class privacy protections” and on helping Ontario firms build data-driven businesses able to “seize the commercial value of data.” The consultations are intended to inform the creation of a Task Force on Data; the task force will later draft a Data Strategy document based on the consultation responses. Walker added that the government will seek further public consultations on the strategy before finalizing it. He did not say when the report will be finalized or when the consultations will end, but said they will continue throughout 2019. He also noted that no decision has been made on “the composition of the task force.” Mobile Syrup | Ontario launches consultations on data collection to create provincial strategy

CA – British Columbia Political Parties Illegally Gather Voters’ Data: OIPC BC

Information and Privacy Commissioner for British Columbia Michael McEvoy said political parties have illicitly gathered the personal information of citizens within the province. While political parties can conduct efforts to learn about voters, McEvoy said many attempts to do so happen without consent, a violation of provincial law. The commissioner cited canvassers who record symbols to hint at voters’ religious preferences or ethnicities. McEvoy said political parties have sent email addresses to Facebook in order for the social media company to find demographic patterns. “Essentially, they have to have the consent of people they’re collecting information from,” McEvoy said. “You need to ask permission. That’s the basis of the law.” [StarMetro]


US – FTC Completes Review of CAN-SPAM Rule

The Federal Trade Commission announced that it has completed its first review of the CAN-SPAM Rule [see here & text here — & wiki here], which establishes requirements for commercial e-mail messages and gives recipients the right to opt out of receiving them. The Commission voted to keep the Rule with no changes. The Rule requires that a commercial e-mail contain accurate header and subject lines, identify itself as an advertisement, include a valid physical address, and offer recipients a way to opt out of future messages. As part of its regular, systematic review of all its rules and guides, the FTC in June 2017 sought public comment on the Rule, including whether it is still needed, the costs and benefits of the Rule, and whether changes needed to be made to the Rule in response to technological and economic developments. The FTC also sought comment on three specific issues related to the CAN-SPAM Rule, including whether the Commission should change the categories of messages treated as “transaction or relationship messages,” shorten the time period for processing opt-out requests, or specify additional activities or practices that might be considered as aggravated violations. The FTC received 92 comments, which overwhelmingly favored keeping the Rule. After reviewing the comments, the Commission concluded that the Rule does benefit consumers and does not impose substantial economic burdens, and that no changes to the Rule were needed at this time. The Commission voted 5-0 to approve publication of the confirmation of the Rule [see 24 pg PDF here] in the Federal Register. Source: FTC News & Events (US Federal Trade Commission)

Electronic Records

WW – Increased Digitization Turning Privacy Pros into Strategic Advisors

As digitization continues to influence organizations’ business models, there has been a growing need for privacy professionals to become strategic advisors. Gartner Director and Team Manager Stephanie Quaranta said privacy pros have seen their roles expand to cover the management of risk and help senior leaders understand the value of information. “Information is becoming the most valuable asset organizations hold, but that value can be trapped if organizations don’t understand how they should use that information,” Quaranta said. “Privacy executives can help navigate not only the regulatory environment, but increasingly, also questions about customer, board and other external expectations.” Gartner also offers advice on ways privacy pros can manage consumers’ privacy appetite, such as the creation of consumer-facing policies and stronger data rules with third parties. [Gartner]

EU Developments

EU – Key Takeaways from the Privacy Shield Annual Review

In January, the European Data Protection Board issued its 29-page report on the second annual review of the EU-US Privacy Shield. It provides some valuable compliance reminders for organizations that have certified or intend to certify to the Privacy Shield program. The report, which mainly focuses on the EU regulators’ ongoing concerns about the US government’s access to personal data and their desire to see more substantive certification reviews by the US government, details oversight efforts currently being undertaken by the Department of Commerce (Commerce) and the Federal Trade Commission (FTC). Both Commerce and the FTC have significantly increased their oversight and enforcement of the Privacy Shield program. The report provides a useful roadmap for organizations to avoid getting caught in the US government’s enforcement crosshairs. This alert highlights some of the key findings of the report with respect to: 1) the commercial functioning of the Privacy Shield; 2) the US government’s current Privacy Shield compliance oversight initiatives; and 3) compliance tips for Privacy Shield-certified organizations. Source: Client Alert (Morrison & Foerster)

EU – Update on Status of the Draft e-Privacy Regulation

It looks unlikely that the draft e-Privacy Regulation will come into effect before 2021. European Council negotiations on the text of the draft Regulation are currently ongoing, and trilogue discussions by the Council, Parliament and Commission will then take place. However, the upcoming May 2019 European elections may lead to a delay in the Council adopting a common position and the trilogue discussions commencing. The latest draft text of the Regulation was published by the European Council on October 19, 2018 and will apply 24 months from the date it is adopted, with the result that even if it is adopted imminently, it may not come into effect until 2021. Late last year, various industry associations raised concerns about the draft Regulation in a joint letter urging the EU institutions not to rush negotiations, stating that many substantive issues raised since the draft Regulation was first put forward have not yet been addressed and that the expanded scope of the draft Regulation would create a large overlap with the GDPR, effectively replacing large portions of the GDPR for a vast majority of data processing activities. It called for closer consideration of the legal bases for both electronic communications data and terminal equipment data and alignment with those available under the GDPR. The Council of the EU has released a progress report on the draft Regulation, highlighting the main topics where further work is necessary. In particular, the report notes that Article 10 (the provision on privacy settings) has raised a lot of concerns, including with regard to the burden for browsers and apps, the competition aspect, the impact on end-users, and the ability of this provision to address the issue of consent fatigue. The original aim of Article 10 was to address the issue of users being overloaded with pop-up windows requesting consent to the use of cookies. Source: Ireland IP & Technology Law Blog (A&L Goodbody)

EU – EDPB Issues Guidance on Clinical Trials Regulation and the GDPR

The European Data Protection Board (EDPB) recently adopted its opinion [see 9 pg PDF here] on the interplay between the Clinical Trials Regulation 536/2014 (CTR) [see here & 76 pg PDF here] and the General Data Protection Regulation 2016/679 (GDPR) [see here & wiki here]. The opinion was given at the request of the European Commission. The opinion distinguishes between the primary use of data, which includes: 1) processing for reliability and safety purposes; and 2) processing for research activities, and the secondary use of data in clinical trials, which is the processing of data for scientific purposes outside the scope of the clinical trial protocol and requires a separate legal basis. However, the EDPB suggests that the GDPR’s presumption of compatibility applies here: the secondary use is presumed not incompatible with the original purpose if the data is processed for archiving purposes in the public interest, scientific research, historical research or statistical purposes, and there are appropriate safeguards. The EDPB’s opinion provides some clarity on the relationship between the CTR and the GDPR. Sponsors will particularly benefit from the guidance on legal bases. The interplay between secondary use under the CTR and the GDPR’s presumption of compatibility needs to be addressed further; the EDPB plans to issue guidance on this in the future. The CTR is expected to enter into force in 2020. Technology Law Dispatch (ReedSmith)

EU – German Regulators Prohibit Facebook from Merging User Data Without Consent

German regulators have forbidden Facebook from combining user data from its different platforms (such as Instagram and WhatsApp) without explicit user permission. The decision from Germany’s Bundeskartellamt also forbids Facebook from combining user data with information from third-party sources without user consent. Bundeskartellamt president Andreas Mundt notes that “an obligatory tick on the box to agree to the company’s terms of use is not an adequate basis for such intensive data processing. The only choice the user has is either to accept the comprehensive combination of data or to refrain from using the social network. In such a difficult situation the user’s choice cannot be referred to as voluntary consent.” Facebook disagrees with the regulator’s decision, writing in a blog post, “While we’ve cooperated with the Bundeskartellamt for nearly three years and will continue our discussions, we disagree with their conclusions and intend to appeal so that people in Germany continue to benefit fully from all our services.”

  • bundeskartellamt.de: Bundeskartellamt prohibits Facebook from combining user data from different sources
  • bundeskartellamt.de: Background information on the Bundeskartellamt’s Facebook proceeding
  • zdnet.com: Facebook broad data collection ruled illegal by German anti-trust office
  • scmagazine.com: Germany bans Facebook from combining user data without permission
  • bbc.com: Facebook ordered by Germany to gather and mix less data

EU – Bavarian DPA Conducts Website Cookie Practices Sweep, Announces Fines

The Data Protection Authority (DPA) of the German state of Bavaria announced it was considering fining a number of companies under the GDPR for their website cookie practices. It conducted a sweep of the website cookie and user tracking practices of 40 large companies. None of the 40 companies it audited had built GDPR-compliant cookie/tracking practices into their websites. While the identities of these companies have not been published, the Bavarian DPA identified the industries in which the companies were active: (a) online retail; (b) sports; (c) banking & insurance; (d) media; (e) automotive & electronics; (f) home and residential; and (g) other. Notably, no company was identified as a technology or ‘tech’ company. The Bavarian DPA’s action potentially signals that cookies, user tracking, and online advertising are not a ‘tech industry issue,’ but instead a priority issue for companies irrespective of their industry. The action is also [evidence] that cookie compliance appears to be becoming a front-burner issue for EU privacy regulators – and an issue that can generate fines. Source: Privacy & Data Security Blog (Alston & Bird)

EU – Irish DPC Opened Seven Different Probes Against Facebook

Irish Data Protection Commissioner Helen Dixon announced Facebook faces seven different data-protection investigations by the DPC. Dixon said the inquiries are part of 16 cases the DPC has launched against tech companies, which include Twitter, Apple and LinkedIn. “We’re at various concrete stages in all of them, but they’re all substantially advanced,” said Dixon, who added final decisions in the investigations may not come until the summer. CNBC reports Facebook has seen an increase in users and strong earnings despite its privacy issues, and CNET reports Apple has reinstated Facebook’s enterprise certificates to run internal-tested iPhone apps. [Bloomberg]

UK – ICO Releases Discussion Paper on Regulatory Sandbox Beta Phase

On January 30, 2019, the UK Information Commissioner’s Office (“ICO”) released a discussion paper on the upcoming beta phase of its regulatory sandbox initiative [see blog post here]. The ICO had launched a call for views on creating a regulatory sandbox in September 2018, and the feedback received helped it develop the systems and processes necessary to launch the beta phase. According to the ICO, the purpose of the sandbox is to support the use of innovative products and services that are in the public interest, to assist in developing a shared understanding of what compliance in innovative areas looks like and to support the UK in being an innovative economy. [also see “what is a regulatory sandbox?” here] The discussion paper outlines the application process for entering the beta phase of the sandbox, how the ICO sees the sandbox working in practice and the types of support it will offer organizations in the sandbox. It also presents various questions on its proposed approach, to which it seeks feedback. The ICO has launched an “intention to apply” survey to allow organizations to express interest in applying and to provide information about any product or service they plan to enter into the beta phase [see here]. Full details of the beta phase will be made available by the end of March, with formal applications opening towards the end of April. The beta phase is expected to run from July 2019 to September 2020. Source: ICO.org and Privacy & Information Security Law Blog (Hunton Andrews Kurth)

EU – Dutch Ministry Concerns Prompt Microsoft to Update Office Pro Plus

After privacy concerns were raised by the Dutch justice ministry, Microsoft has agreed to update its Office Pro Plus products by the end of April. The ministry’s primary concern centered on the transfer of diagnostic data by the Microsoft products from Europe to the U.S. The Dutch ministry could raise the concern with European data protection authorities should Microsoft implement “unsatisfactory” changes, a ministry spokesman said. “The ministry commissioned the report in its capacity as a customer to clarify how our services are run and we’re working with the ministry’s staff to share additional information and help resolve its questions as we would for all enterprise customers,” Microsoft Corporate Vice President and Deputy General Counsel Julie Brill said. [Politico]

Facts & Stats

US – Report Finds 447M Records Breached in 2018

A 2018 End-of-Year Data Breach Report from the Identity Theft Resource Center found that hackers stole 447 million customer records involving sensitive data, representing a 126% increase from the previous year. Despite the increase, the report also found that the number of data breaches went down 23% but still concluded, “Data breaches are now a normal, everyday occurrence.” Meanwhile, one of the largest hospital networks in the U.S., Community Health Systems, reached a settlement with 4.5 million patients impacted by a 2014 malware attack that, if approved, could reimburse those impacted for up to $5,000 in losses. [NBC News]

EU – Businesses Reported 59K Data Breaches Since GDPR: Study

A study from DLA Piper found European businesses have reported 59,000 data breaches since the EU General Data Protection Regulation went into effect. The Netherlands reported the most breach notifications with 15,400, followed by the U.K. with 10,600. Liechtenstein had the fewest incidents with 15. DLA Piper Partner Ross McKean said the GDPR is “driving personal data breach out into the open.” The report found 91 fines have been administered since the GDPR became law. “We anticipate that regulators will treat [a] data breach more harshly by imposing higher fines given the more acute risk of harm to individuals,” DLA Piper Partner Sam Millar said. “We can expect more fines to follow over the coming year as the regulators clear the backlog of notifications.” [IT Pro Portal]


WW – Cryptocurrency Funds Frozen After Death of Founder

The founder of Canadian cryptocurrency exchange QuadrigaCX, the only individual holding the passwords to the company’s “cold wallets,” has died, leaving the company unable to access as much as US$190 million in cryptocurrency and fiat currency (legal tender). The company continued to operate for a month after the founder’s death using funds in its hot (live) wallet and in its fiat accounts. Canadian authorities have frozen the company’s assets.

  • zdnet.com: $145 million funds frozen after death of cryptocurrency exchange admin
  • com: Crypto Exchange Says It Can’t Repay $190 Million to Clients After Founder Dies With Only Password


CA – PEI Whistleblowers File Lawsuit Against Ex-Govt Officials Over Data Leaks

The three former Prince Edward Island government employees who had their information leaked to the press after they came forward with allegations of corruption have sued several former provincial officials. Former Premier Robert Ghiz has been named in the lawsuit, as well as former Innovation Minister Allan Campbell, former Deputy Minister of Economic Development Michael Mayne and former Liberal Party Spokesman Spencer Campbell. The plaintiffs allege in the lawsuit the defendants created a “strategy … to undermine the plaintiffs’ credibility by portraying them as liars, ‘crazy,’ or partisan towards the Prince Edward Island Conservative Party.” [Canadian Press]

CA – How to Avoid a Paper Trail: The Reliable — Sometimes Illegal — Tricks Used by Bureaucrats and Political Staff

The practice of leaving no paper trail is a well-known strategy among political staff and bureaucrats. The underlying idea of avoiding the creation of written records is deeply embedded in governments across Canada — and it has been exposed time and time again. Not all cases are illegal, but the practice violates the principle that governments are supposed to be accountable to the people who elect them. The issue has surfaced in the criminal trial of Vice Admiral Mark Norman, as his defence team wages battle to collect subpoenaed documents across seven government departments and agencies. Over the years, a few common themes have emerged over how government officials, bureaucrats and political staff avoid leaving a paper trail — or, when the trail does exist, attempt to block its disclosure. [This post reviews] a few of the best-known tactics: 1) Don’t write it down; 2) Code words and pseudonyms; 3) Sticky notes; 4) Delete, destroy or rename; 5) Use personal phones for government business; and 6) Claim the documents don’t exist — even when they do. Source: The London Free Press | See also: Several witnesses in Norman trial still haven’t searched personal records for evidence, court told | As Liberal insider takes the stand, Norman’s lawyers hint at more ‘code names’ | Trudeau asked about Scott Brison’s emails in the Mark Norman case | Scott Brison seeks standing at Mark Norman hearing, looking to protect his ‘privacy’ | Twin investigations launched into whether military blocked access to information in Mark Norman case | Code Name ‘Kraken’: How Mark Norman’s lawyers allege military used pseudonyms to hide records | In the Mark Norman case, the Crown doesn’t seem to be curious about the truth | Military never investigated leak of Mark Norman letter from HQ, says it wasn’t a breach of security


US – At-Home DNA Testing Company Grants FBI Selected Access to Database

At-home DNA testing company Family Tree DNA is allowing the U.S. Federal Bureau of Investigation to search its genealogy database to help solve violent crimes. Though the FBI cannot freely browse the genetic profiles, the access “would help law enforcement agencies solve violent crimes faster than ever,” the company said. According to the report, Family Tree does not have a contract with the agency but “has agreed to test DNA samples and upload the profiles to its database on a case-by-case basis since last fall.” The company said customers can opt out of familial matching, which would prevent them from being searchable by the FBI. [Buzzfeed]

Health / Medical

US – ONC Releases Proposed Rule on Patient Data Access

The U.S. Office of the National Coordinator for Health Information Technology proposed a new rule on patient data access. The proposed rule would require health care organizations to give patients their data electronically for free in order to prevent any form of information blocking. The ONC rule also seeks to have the health care industry adopt standardized application programming interfaces to help patients examine their records on their smartphones and mobile devices. Meanwhile, the U.S. Department of Health and Human Services announced it will require health care professionals and organizations to use digital health records by 2020. [Healthcare IT News]

WW – Microsoft Announces New Health Care Tools

Microsoft has announced a new suite of capabilities for its cloud network offerings and communication tools aimed at addressing the needs of the health care industry and access to medical records. Through partnerships with interoperability providers, Microsoft announced features of the Teams app, including its priority notification feature for users, and an artificial intelligence–powered virtual assistant chatbot at the Healthcare Information and Management Systems Society conference. The company reported that Quest Diagnostics recently introduced a version of the health care chatbot in compliance with various privacy regulations, including the Health Insurance Portability and Accountability Act and the EU General Data Protection Regulation. [FierceBiotech]

US – OCR Reaches $3M Settlement to Conclude Record Year for HIPAA Actions

The U.S. Department of Health and Human Services’ Office for Civil Rights reached a settlement with Cottage Health for $3 million over violations of the Health Insurance Portability and Accountability Act. Cottage Health suffered two data breaches that impacted more than 62,500 individuals. The first incident involved a server left accessible on the internet, while the second involved a misconfigured server an IT team worked on in response to a troubleshooting ticket. The $3 million settlement added to an all-time record year for HIPAA enforcement activity for the OCR. The agency received $28.7 million from enforcement actions in 2018, up from the previous high of $23.5 million in 2016. [HHS]

US – Public-Private Partnership Guidelines for Protecting Patient Data

A public-private healthcare group partnership has published a four-volume guide to protecting patients and patients’ information in the digital age. The first volume “discusses the current cybersecurity threats facing the health care industry and sets forth a call to action for the health care industry with the goal of raising general awareness of the issue.” The second and third “technical” volumes address cybersecurity practices for small and medium-to-large healthcare organizations. The fourth volume comprises supplemental references and resources.

  • Federal News Network: Industry, gov’t groups publish cyber guide to protecting patients’ information
  • phe.gov: Health Industry Cybersecurity Practices: Managing Threats and Protecting Patients

CA – Not All Privacy Breaches of Humboldt Broncos’ Records by Doctors are ‘Snooping,’ Says College

Saskatchewan’s privacy commissioner found several doctors had inappropriately accessed electronic health records of Humboldt Broncos [see three January 29 Sask OIPC Investigation Reports: a) 11 pg PDF here; b) 10 pg PDF here; and, c) 10 pg PDF here] involved in a fatal bus crash last April [see wiki here], but a spokesperson for the College of Physicians and Surgeons of Saskatchewan [here] says some of the cases don’t meet the bar of “snooping.” Saskatchewan’s eHealth branch began monitoring the files of people involved in the high-profile tragedy almost immediately after the fatal crash. “Anytime the profiles were accessed, eHealth would receive an alert,” confirmed Shaylene Salazar, VP of Strategy, Quality and Risk for eHealth Saskatchewan. A few of the circumstances pointed out by the privacy commissioner involved three doctors who had provided emergency care to some Broncos at the Nipawin Hospital. Those doctors later reviewed patient records of those they treated, believing they were in the patient’s “circle of care.” The privacy commissioner’s findings indicate that “there is a misunderstanding among a lot of health care practitioners — when they provided care to a particular patient — when it may be appropriate to look at information about the patient they provided care to,” said Bryan Salte, legal counsel for the College of Physicians and Surgeons of Saskatchewan. Salte said the physicians appear to be in “a very different circumstance than what we regard as snooping, which is something that is significant and serious.” Salte said current legislation requires that doctors only look at a file in order to provide care, regardless of past or potential future encounters; this is problematic, he said, because they may not know if they provided adequate care. The privacy commissioner issued several recommendations in the reports, including that eHealth conduct regular monthly audits of the physicians involved for the next three years. He also recommended that the organization comply with a need-to-know principle rather than a circle-of-care concept. About 10,000 healthcare workers in the province have access to the electronic viewer that contains patient records, Salazar said. People can also request increased security on their personal information: “Patients can ask eHealth to fully block or mask their information.” Patients can also request a report that details everyone who has viewed their information. CBC News and Doctors snooped into Humboldt Broncos patient records, privacy commissioner finds

US – 2018 An All-Time Record Year for HIPAA Enforcement Actions by HHS-OCR

The Office for Civil Rights at the U.S. Department of Health and Human Services (HHS-OCR) had a record-breaking year in 2018 with Health Insurance Portability and Accountability Act (HIPAA) enforcement activity [see PR here]. HHS-OCR entered into 10 settlements and received summary judgment in a case before an Administrative Law Judge, totaling nearly $28.7 million in enforcement actions [see summary of 2018 settlements & judgments]. According to the HHS-OCR Director, Roger Severino, this record year underscores the need for covered entities to be proactive about their HIPAA data security. Here are three overarching themes from HHS-OCR’s 2018 HIPAA enforcement activity for HIPAA Covered Entities to consider: 1) Several settlements indicate failures to obtain written business associate agreements from business associates that maintain protected health information (PHI) and electronic protected health information (ePHI) on behalf of Covered Entities; 2) HHS-OCR is citing failures to conduct thorough risk analyses of potential risks and vulnerabilities to Covered Entities’ ePHI; and 3) PHI disclosures to the media are thoroughly assessed for compliance with the HIPAA exception. Source: DBR on Data (Drinker Biddle) See also: HIPAA Enforcement Update: Areas of Focus | Protecting Patient Privacy: HIPAA Compliance in the Electronic Age | Past cyberattacks offer clues to future threats HIT execs may face | Cottage Health Settles with OCR for $3M | HIPAA enforcements hit record $28 million in 2018 | What Can We Learn From the Healthcare Data Breach ‘Wall of Shame’?

US – HHS Proposes New Rules to Improve the Interoperability of Electronic Health Information

The U.S. Department of Health and Human Services (HHS) has proposed new rules to support seamless and secure access, exchange, and use of electronic health information. The rules, issued by the Centers for Medicare & Medicaid Services (CMS) [see 251 pg PDF proposed rule CMS-9115-P & fact sheet] and the Office of the National Coordinator for Health Information Technology (ONC) [see 724 pg PDF proposed rule RIN 0955-AA01 & fact sheets], aim to increase choice and competition while fostering innovation that promotes patient access to and control over their health information. The proposed ONC rule would require that patient electronic access to this electronic health information (EHI) be made available at no cost. CMS’ proposed changes to the healthcare delivery system support the MyHealthEData initiative [see here] and [aim to] increase the seamless flow of health information, reduce burden on patients and providers, and foster innovation by unleashing data for researchers and innovators. In 2018, CMS finalized regulations that use potential payment reductions for hospitals and clinicians to encourage providers to improve patient access to their electronic health information. For the first time, CMS is now proposing requirements that Medicaid, the Children’s Health Insurance Program, Medicare Advantage plans and Qualified Health Plans in the Federally-facilitated Exchanges must provide enrollees with immediate electronic access to medical claims and other health information by 2020. CMS would also require these health care providers and plans to implement open data sharing technologies to support transitions of care as patients move between these plan types. The CMS rule also proposes to publicly report providers or hospitals that participate in “information blocking,” practices that unreasonably limit the availability, disclosure, and use of electronic health information and undermine efforts to improve interoperability.
ONC’s proposed rule calls on the healthcare industry to adopt standardized application programming interfaces (APIs), which would allow individuals to securely and easily access structured and unstructured EHI using smartphones and other mobile devices. It also implements the information blocking provisions of the 21st Century Cures Act, including identifying reasonable and necessary activities that do not constitute information blocking. The proposed rule helps ensure that patients can electronically access their health information at no cost. It also asks for comments on pricing information that could be included as part of patients’ EHI and would help the public see the prices they are paying for their healthcare. Sources: U.S. Department of Health and Human Services | With new proposed rules, HHS takes major stab at interoperability framework | CMS proposes interoperability rules to increase EHR access | HHS unwraps long-awaited new information blocking rule
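The standardized APIs contemplated here are widely expected to be HL7 FHIR-based REST endpoints. As a rough illustration only (the endpoint URL, patient ID and token below are hypothetical placeholders, not anything specified in the rule), a client request for a Patient resource might be assembled like this:

```python
import urllib.request

def build_patient_request(base_url: str, patient_id: str, token: str) -> urllib.request.Request:
    """Build (but do not send) a FHIR R4 read request for a Patient
    resource; base_url and token are hypothetical placeholders."""
    return urllib.request.Request(
        f"{base_url}/Patient/{patient_id}",
        headers={
            "Authorization": f"Bearer {token}",   # SMART-on-FHIR-style bearer token
            "Accept": "application/fhir+json",    # FHIR JSON media type
        },
    )

req = build_patient_request("https://ehr.example.com/fhir", "12345", "demo-token")
print(req.full_url)  # https://ehr.example.com/fhir/Patient/12345
```

In a real deployment the request would be sent over TLS after an OAuth 2.0 authorization flow; the sketch only shows the resource-path and header conventions.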

US – What Can We Learn From the Healthcare Data Breach ‘Wall of Shame’?

Covered entities under the Health Insurance Portability and Accountability Act (HIPAA) are required to report breaches to the Department of Health & Human Services’ (HHS) Office for Civil Rights (OCR). But the pain doesn’t end there. If a breach reported to HHS involved more than 500 individuals, it is published for the world to see on an HHS website colloquially referred to as the “wall of shame.” The site has existed since 2009, but questions have arisen regarding its value, how the data is presented and how long the data should remain available to the public. The persistence of the information on the site has drawn angst and criticism. Nevertheless, it has proved valuable for academics. A recent paper in JAMA Internal Medicine examined the causes of 1,138 breaches reported on the wall of shame between Oct. 21, 2009, and Dec. 31, 2017, affecting 164 million patients. The researchers found that theft of protected health information (PHI) by outsiders caused a significant portion of the breaches analyzed: 370 breaches, or 32.5%. The next-largest category was mailing mistakes (via either email or physical mail), which accounted for 119 of the reported breaches (10.5%). Overall, however, the analysis concluded that more than half of the breaches (53%) could be attributed to internal mistakes or neglect rather than outside causes. These internal issues, in addition to mailing and emailing mistakes, included employees clicking on phishing emails, forwarding PHI to personal accounts and accessing PHI without authorization. Healthcare providers may want to look more closely at their own houses and what their own employees may be doing (or not doing). While additional employee training and more effective system controls and monitoring will not stop all employee mistakes, such steps could go a long way toward reducing the number of breaches.
Data Privacy Monitor (Baker Hostetler)
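As a sanity check, the category shares the study reports follow directly from the raw counts cited above:

```python
# Breach counts from the JAMA Internal Medicine analysis cited above
total_breaches = 1138
outside_theft = 370
mailing_mistakes = 119

theft_share = round(100 * outside_theft / total_breaches, 1)
mail_share = round(100 * mailing_mistakes / total_breaches, 1)

print(theft_share, mail_share)  # 32.5 10.5
```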

Horror Stories

WW – Report: Details on 617 Million User Accounts Up For Sale On Dark Web

Citing details provided by the seller, The Register reports that Dream Market, a Tor network-based dark web marketplace, this week began selling stolen data linked to roughly 617 million user accounts from 16 different websites. The affected online services consist of video messaging application Dubsmash (162 million accounts affected); health apps MyFitnessPal (151 million) and 8fit (20 million); genealogy platform MyHeritage (92 million); content sharing service ShareThis (41 million); Nordstrom’s member-only shopping website HauteLook (28 million); cloud-based video creation service Animoto (25 million); photography sites EyeEm (22 million), Fotolog (16 million) and 500px (15 million); online directory Whitepages (18 million); game portal website Armor Games (11 million); e-book subscription service BookMate (8 million); dating site CoffeeMeetsBagel (6 million); art appreciation website Artsy (1 million); and online learning platform DataCamp (700,000). … MyFitnessPal [here], Animoto [here] and MyHeritage [here] each disclosed a data breach last year that corresponds to this latest incident, while the remaining websites have not (possibly because they were unaware they were victimized). Compromised data primarily consists of individuals’ names, email addresses and hashed or encrypted passwords. But depending on the website, other lifted information includes usernames, IP addresses, birthdays, locations, countries, languages, interests, account creation dates and security questions and answers. Presumably, cybercriminals who engage in spamming and credential stuffing campaigns would be able to make use of this information. Reportedly, the seller has set the value of the entire data set at approximately $20,000, but is offering each website’s data individually.
This latest data breach headache follows news of a series of major data dumps known as Collection #1 [see wiki here] and Collection #2-5, which left billions of email addresses and associated passwords exposed on the web. Sources: SC Magazine | 617 million accounts stolen in latest online data breach | Hackers Have Just Put 620 Million Accounts Up For Sale On The Dark Web — Are You On The List?

Law Enforcement

CA – Montreal Rejects Body Cameras for Police Officers

Montreal last week became the latest city in North America to decide against making body cameras standard police equipment.

  • The Montreal police force’s 235-page report on the results of a $3.4-million pilot project, in which 78 officers wore cameras between May 2016 and April 2017, ruled them out as costly and ineffective.
  • According to Alex Norris, chairman of Montreal’s public security committee, what has been described as a tool to increase transparency in the police force and improve relations between officers and citizens is “not ready for prime time.”
  • Officers didn’t have the reflex to turn on the camera in an emergency situation or when they needed to use force, Norris said. In moments of tension or during a physical altercation, the cameras often captured no images or just fragments. Norris doesn’t blame the officers.
  • Cities across Canada, including Toronto and Vancouver, are experimenting with the technology. In December, 2016, the RCMP decided against equipping its officers with the cameras. Halifax, like Montreal, has ruled them out as too costly.
  • The report estimated it would take roughly five years and $17.4-million to equip about 3,000 front-line officers with body cameras. It would cost an additional $24-million a year to maintain the camera program – equal to 4% of the force’s current annual operating budget.
  • Norris said the cost of storing the footage and ensuring video evidence is transferred and edited in accordance with legal principles was another reason the city decided against going ahead with the project.
  • Elsewhere in Canada, Calgary, Victoria and smaller towns such as Amherstburg, Ont., and Kentville, N.S., have decided in favour of cameras for their police officers. Calgary stated in July, 2018, that it was committed to arming all its front-line officers with body cameras by the end of 2019.
  • Police departments across the United States have had mixed results with the technology. The Washington Post reported in January that about half of the 18,000 law enforcement agencies in the U.S. “have some type of body-camera program, with many still in the pilot stage.” The newspaper reported that many smaller forces are having a difficult time paying to maintain the equipment and store the footage.
  • A research study on the use of body cameras within the Washington, D.C. Metropolitan Police Department found the cameras had “no detectable effect” on the use of force by officers or the volume of civilian complaints. The 18-month study published in 2017 analyzed data from 2,000 police officers who wore the devices.
  • Dan Philip, president of the Black Coalition of Quebec, said he thinks body cameras on police are necessary to protect the rights of citizens. Cameras, he said, would help in cases of racial profiling: “It would give victims the evidence that is necessary in order to bring the matter to court.”
  • His organization is trying to get a $4-million class action authorized against the city’s police force on behalf of people of colour allegedly profiled by Montreal officers.
  • “When there are no body cameras, the injustices continue,” Philip added. “And there is no recourse, because it will be the word of the police against the word of the victim – and we know which one will carry.”
  • Norris acknowledged that relations between the police and minority communities need to be improved, but he said cameras are not the answer – at least not yet.
  • “In many big cities, politicians are under pressure to come up with an answer when there is dissatisfaction expressed regarding relations between police and citizens,” he said. “And this technology is seen as a quick fix that will solve the problem. What we discovered is that it’s not a quick fix. It’s very expensive. It’s very cumbersome.” [Globe & Mail]

CA – Toronto Police End Shotspotter Project Over Legal Concerns

Tony Veneziano, the chief administrative officer of the Toronto Police Service, told a budget committee meeting at City Hall that the force is abandoning plans to bring in a high-tech gunshot-detection system [wiki] known as ShotSpotter – championed by Mayor John Tory and approved in the wake of a wave of summer shootings [coverage] – due to legal concerns about the technology. The ShotSpotter system, used in many U.S. cities, uses a network of microphones, usually deployed in troubled neighbourhoods, to pinpoint the exact location of a shooting. Veneziano said: “We are no longer pursuing that technology. There’s legal issues that certainly have to be addressed, so we will no longer be looking at that.” While the firm behind the system, ShotSpotter Inc., based in Newark, Calif., says it cannot eavesdrop on conversations, some councillors raised privacy concerns about the surveillance. “They are not proceeding for the same reason many of us voted against it in the first place … an invasion of privacy, that there were severe risks around data collection and use,” Councillor Joe Cressy said. “Frankly, it was a shiny object in a RoboCop-style of enforcement model that was intended in the midst of the summer of the gun to make us all feel better.” Just last week, Toronto Police Chief Mark Saunders said police were still in the early stages of evaluating ShotSpotter, but confirmed that so far both Ottawa and Queen’s Park had declined to fund it. [The Globe and Mail | Plans stall for Toronto Police Service’s gunshot-detection system]


US – Investigation Shows Bounty Hunters Accessed Location Data Intended for 911 Operators

According to documents reviewed by Motherboard, approximately 250 bounty hunters and related businesses had access to AT&T, T-Mobile and Sprint customer location data. The documents also show that data intended for 911 operators and first responders was sold to data aggregators who, in turn, sold it to bounty hunters. The report notes that one data seller had access to “assisted GPS” (A-GPS) data, which is intended to pinpoint a user’s location to within a few meters. According to those interviewed, this is the first known instance of a telecom selling A-GPS data. [Motherboard]

Online Privacy

CA – Air Canada Records Users’ Interactions with Smartphone App

Air Canada’s app was found to have used an analytics service designed to capture the ways users interact with their phones while they use the product. The “session replay” service records a user’s phone screen in order to capture booked flights, changed passwords and credit card information. TechCrunch reports Air Canada is not the only company to use session replays; Hollister, Expedia and Hotels.com also use the service. “Air Canada uses customer provided information to ensure we can support their travel needs and to ensure we can resolve any issues that may affect their trips,” an Air Canada spokesperson said. “This includes user information entered in, and collected on, the Air Canada mobile app. However, Air Canada does not — and cannot — capture phone screens outside of the Air Canada app.” [Global News]

WW – Investigation Finds Apps Employ ‘Session Replay’ Technology Without Consent

An investigation found that several popular apps record users’ iPhone screens without their knowledge or consent by embedding third-party “session replay” technology. The report notes that several apps employ Glassbox, a customer experience analytics firm, to see a user’s screen, follow and track keyboard entries, and understand how the user interacted with the app. Several other session replay services are available and are often used to understand why apps break, but the report notes that some app developers fail to properly mask their session replay files when they are sent from a device to the company’s servers, potentially exposing sensitive data to attack. [TechCrunch]
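To make the masking issue concrete, here is a toy sketch of the client-side redaction a session-replay SDK would need to apply before uploading captured keyboard events. The event shape and field names are invented for illustration and are not taken from Glassbox or any real SDK:

```python
# Hypothetical set of form-field names whose values must never leave the device
SENSITIVE_FIELDS = {"password", "card_number", "cvv", "ssn"}

def mask_event(event: dict) -> dict:
    """Redact the captured value of a keystroke event when the target
    field is sensitive; leave other events untouched."""
    masked = dict(event)
    if str(masked.get("field", "")).lower() in SENSITIVE_FIELDS:
        masked["value"] = "*" * len(str(masked.get("value", "")))
    return masked

redacted = mask_event({"field": "card_number", "value": "4111111111111111"})
print(redacted["value"])  # ****************
```

The investigation's point is that when a step like this is skipped or misconfigured, the raw values travel to the vendor's servers in the replay files.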

WW – Apple Orders Apps to Disclose Screen-Tracking Technology

Apple has told app developers to either remove or properly disclose the use of analytics tools that allow them to record screen interaction or face removal from Apple’s App Store. The call follows an investigation that found several apps employed “session replay” technology. Meanwhile, Gizmodo reports that Apple has begun removing an option in Safari’s privacy settings called “Do Not Track” after the privacy project ended last month. Previously, Gizmodo reported on the general ineffectiveness of such options. [TechCrunch]

WW – Apple Revokes Facebook and Google Developer Certificates Because They Used them to Collect User Data

Facebook paid adults and teenagers to install a data-slurping iOS app distributed with its enterprise certificate, bypassing the Apple App Store and its requisite security checks. Apple had previously banned the application from the App Store for violating its data privacy rules. The app allows Facebook to see virtually everything a user does on the device. Apple states that distributing the application for consumer research violates the terms of its enterprise development license. Google used a similar application to collect user and device data on iOS devices. Google acknowledged its mistake and disabled the application before Apple revoked its enterprise certificate. Both the Facebook and Google apps are still available on Android.

  • wired.com: Why Facebook’s Banned ‘Research’ App Was So Invasive
  • theregister.co.uk: Furious Apple revokes Facebook’s enty app cert after Zuck’s crew abused it to slurp private data
  • com: Apple revokes Facebook’s developer certificate over data-snooping app—Google could be next
  • cnet.com: Google’s data-gathering app may have also violated Apple’s policies
  • zdnet.com: Google shuts down iPhone data-gathering app: ‘This was a mistake, and we apologize’
  • theverge.com: Apple blocks Google from running its internal iOS apps

WW – Report Finds Apps are Back to Integrating SDKs

A report from SafeDK found that while the number of unused software development kits dropped by 1.2, the total number of SDK integrations averaged 18. This finding comes after an initial slowdown of SDK integrations ahead of the EU General Data Protection Regulation. TabTale CEO and Founder Sagi Schliesser explained, “It’s not difficult to clean up unused SDKs, but it’s also not a high priority for a lot of developers, because it’s more important to them to update their game than think about something like GDPR and how SDKs could make them vulnerable.” The report analyzed 190,000 top-charting apps in the Google Play store. [AdExchanger]

WW – Facebook Warned Over Privacy Risks of Merging Messaging Platforms

Recently the New York Times broke the news that Facebook intends to unify the backend infrastructure of its three separate products: Facebook Messenger, Instagram, and WhatsApp. The Irish Data Protection Commissioner, Facebook’s lead data protection regulator in Europe, has asked the company for an “urgent briefing” regarding plans to integrate the underlying infrastructure of its three social messaging platforms. In a statement the Commission wrote: “Previous proposals to share data between Facebook companies have given rise to significant data protection concerns and the Irish DPC will be seeking early assurances that all such concerns will be fully taken into account by Facebook in further developing this proposal.” When we asked for a response to the NYT report, a Facebook spokesperson confirmed it and said “We want to build the best messaging experiences we can; and people want messaging to be fast, simple, reliable and private. We’re working on making more of our messaging products end-to-end encrypted and considering ways to make it easier to reach friends and family across networks. As you would expect, there is a lot of discussion and debate as we begin the long process of figuring out all the details of how this will work” There certainly would be a lot of detail to be worked out. Not least the feasibility of legally merging user data across distinct products in Europe, where a controversial 2016 privacy u-turn by WhatsApp — when it suddenly announced it would after all share user data with parent company Facebook (despite previously saying it would never do so), including sharing data for marketing purposes — triggered swift regulatory intervention. Facebook was forced to suspend marketing-related data flows in Europe. 
Facebook has continued sharing data between WhatsApp and Facebook for security and business intelligence purposes, leading the French data watchdog to issue a formal notice at the end of 2017 warning that these transfers also lack a legal basis. A court in Hamburg, Germany, also officially banned Facebook from using WhatsApp user data for its own purposes. Early last year, following an investigation into the data-sharing u-turn, the UK’s data watchdog obtained an undertaking from WhatsApp that it would not share personal data with Facebook until the two services could do so in a way that’s compliant with the region’s strict privacy framework, the General Data Protection Regulation (GDPR). The 2016 WhatsApp-Facebook privacy u-turn occurred prior to the GDPR coming into force, and the updated privacy framework includes a regime of substantially larger maximum fines for any violations. TechCrunch says it has reached out to Facebook for comment on the Irish DPC’s statement and will update its report with any response. [TechCrunch]

WW – Social Media Privacy Might Not Be Possible: New Research Study

The conventional wisdom is that the easiest way to stop social media companies like Facebook and Twitter from tracking and profiling you is simply to delete your social media accounts. That, for example, was the basis for the #DeleteFacebook movement that gained momentum around the time of the Facebook Cambridge Analytica scandal in early 2018. But a new study published January 21 by researchers at the University of Adelaide in Australia and the University of Vermont in the United States suggests that even deleting your social media accounts might not be enough to protect your social media privacy [see “Information flow reveals prediction limits in online social activity” by James P. Bagrow, Xipei Liu & Lewis Mitchell in the journal Nature Human Behaviour here]. The study analyzed 30.8 million Twitter messages from 13,905 Twitter accounts to see whether it might be possible to profile an individual simply by examining the profiles of, and interactions with, his or her friends. To test that hypothesis, the researchers sub-divided the 13,905 Twitter accounts into 927 “ego-networks,” each consisting of one Twitter user and the 15 other accounts that interacted with that individual most frequently. They hypothesized that interactions and communication with those 15 accounts might “encode” information about the user and his or her interests, likes and behaviors. This was the first study to analyze how much information about an individual is encoded in interactions with friends. From a social media privacy perspective, the results are concerning: the team didn’t even need 15 accounts to figure out a person’s profile. Tweets from just 8-9 accounts (i.e. the “friends” of the user) were enough to start creating startlingly accurate profiles.
For example, machine learning algorithms could start to predict factors such as “political affiliation” or “leisure interests” simply by studying the tweets of someone’s friends. Often, they were able to do this with up to 95% accuracy. The remainder of this article discusses the study from within the framework of the following subheadings: 1) Friends can put you at risk on social networks; 2) The concept of privacy as an individual choice; 3) Why the Facebook Cambridge Analytica scandal matters; 4) Still looking for a solution to social media privacy  Sources: CPO Magazine See also: #DeleteFacebook? #DeleteTwitter? #FatLotOfGood that will do you | On Facebook and Twitter your privacy is at risk — even if you don’t have an account, study finds
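The ego-network construction the study describes (one user plus the accounts that interact with them most often) reduces to a frequency count over interaction pairs. A toy sketch, using invented interaction data rather than the study's Twitter corpus:

```python
from collections import Counter

def ego_network(user, interactions, k=15):
    """Return the k accounts that interact with `user` most often,
    mirroring the study's ego-network construction. `interactions`
    is a list of (user, other_account) pairs; the data is invented."""
    counts = Counter(other for who, other in interactions if who == user)
    return [account for account, _ in counts.most_common(k)]

interactions = [
    ("alice", "bob"), ("alice", "bob"), ("alice", "bob"),
    ("alice", "carol"), ("alice", "carol"), ("alice", "dave"),
]
print(ego_network("alice", interactions, k=2))  # ['bob', 'carol']
```

The study's predictive step then trains a model on the text produced by these top-k accounts; the point above is only how the network itself is selected.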

Other Jurisdictions

AU – Australia Sees Increase in Reported Data Breaches

In a recent report, the Office of the Australian Information Commissioner stated the number of Australian organizations reporting data breaches hit a new high, with 262 notifications received in the third full quarter of the Notifiable Data Breaches scheme. The majority of breaches stemmed from malicious attacks. The report noted that the share of data breaches caused by human error fell from 37% to 33%. “Preventing data breaches and improving cyber security must be a primary concern for any organisation entrusted with people’s personal information,” Australian Information Commissioner and Privacy Commissioner Angelene Falk said. “Employees need to be made aware of the common tricks used by cyber criminals to steal usernames and passwords.” [iTnews]

Privacy (US)

US – ‘Near-Consensus’ Reached on Data Breach Victim Suits

Alison Frankel looks at the 9th U.S. Circuit Court of Appeals’ 2018 decision in In re Zappos.com to see how the case affects data breach victims. Frankel writes that the federal appellate courts seem to have reached a “near-consensus” that data breach victims need only allege an increased risk of identity theft to establish their constitutional right to sue a company for leaving their personal data vulnerable to hackers. Disputing this, Zappos has asked the U.S. Supreme Court to decide on the constitutional standing of data breach victims, but the justices have decided to hold the case until Frank v. Gaos is resolved. [Reuters]

US – NYT Reporters Land Facebook Privacy Publishing Deal with HarperCollins

A pair of reporters from The New York Times has received a publishing deal with HarperCollins to write a book about the privacy-related stories about Facebook. Sheera Frenkel and Cecilia Kang landed a seven-figure deal to write the book. Frenkel and Kang’s book will be based on a Times report from November that covered Facebook’s responses to several privacy incidents that have taken place over the past couple of years. Vanity Fair

US – Alphabet Warns Investors on the Impact of Data Privacy

In Alphabet’s latest earnings report filed with the U.S. Securities and Exchange Commission, Google’s parent company points to the impact of customers’ growing expectation of privacy. With a reported 83% of revenue stemming from the sale of digital ads, the company wrote, “Changes to our data privacy practices, as well as changes to third-party advertising policies or practices may affect the type of ads and/or manner of advertising that we are able to provide which could have an adverse effect on our business.” Alphabet also highlighted that as attention surrounding data privacy and security increases, the company “will continue to be subject to various and evolving laws.” [Forbes]

US – Report Grades States on Student Data Privacy

Peter Greene looks at how the proliferation of computer-based technology in schools has impacted student data privacy. “Schools are where one thorny modern issue — data privacy — meets our most vulnerable population — students,” Greene writes. Pointing to the 2019 State Student Privacy Report Card, which grades states across seven categories of protecting student data privacy, he writes, “The picture is not pretty; no state earned an A, and 28 states failed with either a D or an F.” Furthermore, 11 states were reported to have no student data privacy laws in existence. [Forbes]

US – CDT Releases Resource for CPOs in Education

In a blog post for the Center for Democracy & Technology, Senior Fellow of Student Privacy Elizabeth Laird announced the release of an issue brief focused on making the case for incorporating chief privacy officers into education. Describing a variety of practices that can help to support the addition of a CPO, the brief calls attention to the role of leadership and ways to broadcast the role as an organizational asset. Divided into two sections, the brief also examines the role of the organization in making a successful CPO, as well as what the role of a CPO should entail. [CDT.org] See also: Washington Post: Facebook Controversy Spurs Momentum in Congress for Privacy Safeguards for Kids and Teens

Privacy Enhancing Technologies (PETs)

WW – Chrome Offers “Password Checkup” Service

A new extension for Google’s Chrome browser checks whether username/password combinations used in login forms have been leaked online. If the credential pair is flagged as leaked, Chrome users see a red warning pop-up suggesting that they change that password. Firefox introduced the Firefox Monitor feature last November; it displays a one-time alert recommending users change their passwords when they visit websites that have been breached within the past 12 months. The Google support site says, “Password Checkup works when you’re signed in to the Chrome browser on a computer,” and elsewhere notes that installing the extension means you agree to Google’s Privacy Policy and Terms of Service, which apparently changed again on 22 January. If you sign in to Chrome, you are automatically signed in to every other service (like Google Search) that Google owns. Do you trust Google? [The Verge] See also:

  • wired.com: A New Google Chrome Extension Will Detect Your Unsafe Passwords
  • scmagazine.com: Google adds Password Checkup Chrome extension
  • zdnet.com: Google releases Chrome extension to check for leaked usernames and passwords
  • cnet.com: How to use Google’s new Password Checkup tool
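The coverage above doesn't detail Chrome's exact protocol, but leaked-credential checks generally avoid sending the raw password to the server. Have I Been Pwned's Pwned Passwords range API, for instance, uses k-anonymity: the client sends only the first five hex characters of the password's SHA-1 hash and matches the remaining suffix locally. A minimal sketch of the client side:

```python
import hashlib

def pwned_range_parts(password: str):
    """Split a password's SHA-1 hex digest into the 5-character prefix
    sent to the range API and the 35-character suffix matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = pwned_range_parts("password")
# A client would fetch https://api.pwnedpasswords.com/range/<prefix>
# and search the response for <suffix>; the service never sees the
# full hash, let alone the password itself.
print(prefix)  # 5BAA6
```

Google's extension reportedly uses a related but more elaborate private-set-intersection scheme; the sketch shows only the general shape of the privacy trade-off.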


CA – Two-Factor Authentication Mandatory for New Ryerson Email Users

Students who enroll at Ryerson from next fall onward will be required to use two-factor authentication when signing into their university email accounts. The university’s cybersecurity team wants all Ryerson email users to sign in by entering their password and an authenticator code by 2022. That means all students will have to refer to a code on an authenticator app linked to their email or buy a U2F key to insert into their computer to complete the second security step. Ryerson’s chief information officer said it’s time to make the extra security login measure mandatory since the university is facing an increasing number of security threats as the tools hackers use become more advanced. Brian Lesser said the university had to lock 22 student accounts last week because hackers accessed their accounts and made their passwords publicly accessible on online databases. “This week we locked another eight accounts for the same reasons,” he said. According to Statistics Canada, universities reported one of the highest levels of cybersecurity incidents of all Canadian businesses that experienced attacks in 2017. About 46% of universities reported attacks against them – second only to the banking sector, where 47% of institutions reported experiencing cybersecurity incidents. “Email accounts are great targets,” Lesser said. “If I can get hold of a lot of email accounts, I can send out spam for free.” Ryerson staff members are already required to use the additional security measure when signing in. [Ryersonian]
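Authenticator apps of the kind described typically implement time-based one-time passwords (TOTP, RFC 6238), which derive a short-lived code from a shared secret and the current time. A minimal sketch; the secret below is the RFC test key, not anything Ryerson-specific:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 over a big-endian counter,
    dynamically truncated to a short decimal code."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """TOTP (RFC 6238): HOTP with the counter derived from wall-clock
    time, so the code rotates every `step` seconds."""
    now = time.time() if for_time is None else for_time
    return hotp(secret, int(now // step), digits)

# RFC 6238 test key; at t=59s the 6-digit code is 287082
print(totp(b"12345678901234567890", for_time=59))
```

Because the server and the app share only the secret, a phished password alone is not enough to log in, which is the property driving mandates like Ryerson's.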

CA – Govt Creates Security and Intelligence Threats to Elections Task Force

The Canadian government announced it has created the Security and Intelligence Threats to Elections Task Force. The group aims to prevent any form of interference with the upcoming federal election. The task force will decide whether to release public warnings about malicious actors’ attempts to influence results; however, the group can only give public warnings after an election has been formally called. Any revelations of suspicious activity will be done by a nonpartisan body of senior lawmakers. The federal government also announced it plans to spend $7 million to educate citizens about where they get their news on social media. [IT World Canada]

Smart Cities / IoT

CA – Majority of Canadian Citizens Concerned About Smart-City Privacy: Survey

A survey conducted by McMaster University found 88% of Canadians expressed some level of concern about their privacy in smart cities. Of the 1,011 individuals surveyed, 23% said they were extremely concerned about smart-city privacy. When asked what they would be willing to have done with their data, 71% would be open to their data being used for traffic and city planning, and 63% said they would be open to police using their information for crime prevention. However, a third of Canadians do not want law enforcement to access their data. Minority and Indigenous participants expressed higher levels of disapproval of the concept. [The Conversation]

US – NIST Calls for Feedback on IoT Cybersecurity Discussion Draft

The U.S. National Institute of Standards and Technology’s Cybersecurity for the Internet of Things Program wants stakeholders to offer feedback on its “Considerations for a Core IoT Cybersecurity Capabilities Baseline” discussion draft. NIST has called for insight on the identification of a minimum set of cybersecurity capabilities for IoT devices, as well as whether the capabilities listed in the discussion draft are reasonable for a core baseline and useful for device manufacturers. All feedback will be considered and is expected to inform NIST’s next paper on IoT cybersecurity. [NIST]

WW – Report Examines Privacy Implications of APIs

Researchers from the University of Michigan and Fordham Law School released a report to help educate internet users on the privacy concerns around application programming interfaces (APIs). The report, titled “APIs and Your Privacy,” examined 11 online services to show the ways APIs gather and send consumer data, including Candy Crush Saga, Netflix, Google Maps and Search, Pandora, Tinder and ESPN. The researchers received support from AT&T for the report and presented their findings at the AT&T Policy Forum’s Symposium on Application Programming Interfaces and Privacy in Washington. “While APIs are an inherent part of how the online ecosystem works, their privacy implications deserve closer scrutiny — for APIs made available to both developers and advertisers,” the report states. [Fordham U | Phys.org | Are APIs the True Privacy Villains? | Survey: Developers Want API Standards | The case for healthcare-specific APIs | Google asks Supreme Court to overrule disastrous ruling on API copyrights – appeal against Oracle has huge stakes for the industry]


US – NY Allows Life Insurers to Use Social Media Data to Help Set Premium Rates

The New York State Department of Financial Services will now allow life insurers to use social media data and other forms of information to help set premium rates. Life insurers are permitted to incorporate this information into their processes as long as they can prove their algorithms are not biased against any marginalized groups. New York Financial Services Superintendent Maria Vullo said the department’s goal is to create a set of ground rules before the use of social media data becomes more widespread. Meanwhile, ride-hailing companies in New York City face new requirements to hand over more data to the city’s Taxi and Limousine Commission. [The Wall Street Journal]

US Government Programs

US – Advocacy Groups Ask Lawmakers to Reject Border Surveillance Proposals

In a letter to U.S. House Speaker Nancy Pelosi, D-Calif., the Democratic leadership and the full membership of the House of Representatives, advocacy groups urged lawmakers to oppose the proposal for “Smart, Effective Border Security,” which, they write, calls for funding of “various invasive surveillance technologies that would intrude on the liberties of travelers, immigrants, and people who live near the border,” The Washington Post reports. The letter expresses concern with risk-based targeting technologies, mass surveillance, license plate readers, and biometric and DNA screening. “The prospect of the U.S. government building a surveillance wall that vacuums up the private information of immigrants and travelers and U.S. citizens alike is a menace to privacy,” Electronic Frontier Foundation Staff Attorney Adam Schwartz said. [Full Story]

US Legislation

US – Congress Should Consider Comprehensive Privacy Legislation: GAO

In a report released February 13, the Government Accountability Office (GAO) recommends that Congress consider comprehensive internet privacy legislation, something both sides of the aisle have been saying needs to happen [see House Energy & Commerce Committee PR here, 56 pg PDF report here, highlights here and recommendations here]. The report, which was requested by Rep. Frank Pallone (D-N.J.) [here & wiki here], chairman of the House Energy & Commerce Committee [here & wiki here], found that while most industry stakeholders favored the current approach, in which the Federal Trade Commission enforces against unfair and deceptive practices rather than issuing new rules, consumer advocates and most former FTC and FCC commissioners the GAO interviewed (from both parties) favored having the FTC issue enforceable regulations. GAO recommended that Congress should at least “consider” coming up with “comprehensive legislation on Internet privacy that would enhance consumer protections and provide flexibility to address a rapidly evolving Internet environment. Issues that should be considered include what authorities agencies should have in order to oversee Internet privacy [including civil penalty authority for first-time offenses], including appropriate rulemaking authority.” Pallone will try to get that ball rolling with the committee’s first privacy hearing under new Democratic control, which he has scheduled for Feb. 26 in the Consumer Protection Subcommittee, chaired by Rep. Jan Schakowsky (D-Ill.). While GAO couches its recommendation with words like “consider” and plenty of conditional language, as is its practice, it clearly suggests legislation would be a good thing. Over the past few months several senators have proposed their own measures, including: a) Ron Wyden, D-Ore.; b) Marco Rubio, R-Fla.; c) Amy Klobuchar, D-Minn.; and d) Brian Schatz, D-Hawaii.
Sources: Broadcasting & Cable | Federal GAO Supports New Privacy Laws, Fines For Violators | Government watchdog finds weak enforcement of US privacy regulations | House panel to hold hearing on data privacy legislation

US – Chamber Releases Model Privacy Legislation, Urges Congress to Pass a Federal Privacy Law

On February 13, the U.S. Chamber of Commerce released model privacy legislation calling for a federal privacy law that would protect consumers and eliminate a confusing patchwork of state laws [see 1 pg overview and model text]. The U.S. Chamber worked with nearly 200 organizations of all sizes and sectors to draft the model legislation, which focuses on transparency, consumer control, and support for innovation. The model legislation would require businesses to be transparent about how personal information is used. Businesses would also have to comply with requests from consumers regarding how personal information is being used and shared. It includes opt-out and data deletion provisions, with common-sense exceptions, which the USCC says are a critical part of ensuring consumers have control of how personal information is used. The USCC says the model legislation would support innovation through regulatory certainty: businesses would comply with one nationwide privacy framework, as opposed to having to navigate 50 unique state laws. The FTC would be tasked with enforcing the legislation and would have the ability to impose civil penalties on businesses that violate transparency, opt-out, or data deletion provisions. [U.S. Chamber of Commerce]

US – Calif. Lawmakers Introduce Data Privacy Omnibus Package

A group of five California State Assembly members has introduced a data privacy omnibus package, expected to include four bills and a resolution designed to bolster the California Consumer Privacy Act. The four bills include rules barring companies from storing voice data collected by smart speakers for marketing, requirements for social media companies to obtain a parent’s permission before any child uses their platforms, a new data breach notification rule, and a push to have Congress and the U.S. Federal Trade Commission update federal antitrust legislation. AB 288, the only part of the omnibus to be formally introduced so far, mandates that social media companies give users who shut down their accounts the choice to have their personally identifiable information permanently removed. [Government Technology | Five Questions You Should Be Asking as Congress Takes on Privacy Legislation | Tech Group Favors Privacy Bill That Preempts Tougher State Laws | How are Businesses Preparing for Proposed Federal Data Privacy Legislation? Part One: Understanding Current Proposals]

US – Takeaways from CCPA Public Forums

When California Governor Jerry Brown signed the California Consumer Privacy Act (CCPA) [see wiki & infographic] into law on June 28, 2018, there was broad agreement that revisions and clarifications were necessary. The CCPA was written and enacted with extraordinary speed, as legislators moved to pre-empt a data privacy ballot initiative. The California legislature has already passed a “clean-up” bill [see SB-1121 here] to address concerns expressed about the CCPA, and heated debates over the meaning and merits of specific provisions continue. Against this backdrop, and with less than a year before the CCPA goes into effect on January 1, 2020, eyes are now increasingly turning to the California Attorney General (AG). The CCPA mandates that the California Attorney General “solicit broad public participation and adopt regulations to further the [CCPA’s] purposes,” including with respect to seven specific focus areas, before July 1, 2020 [see Sec. 13 here]. Given the public interest in, and lingering questions about, the CCPA, this rulemaking is eagerly anticipated, and the AG’s Office has consequently decided to host a series of public forums throughout the state in order to collect stakeholder input [see schedule here & AG’s forum slide deck 6 pg PDF here]. While it’s of course still too early to tell how the AG’s regulations will ultimately shake out, these forums nonetheless are valuable indications of what may be to come as businesses wrestle with several key questions for CCPA compliance: 1) What provisions might the AG regulations address?; 2) What other provisions might they address, such that compliance efforts should be careful not to get too far ahead of the regulatory clarification?; and 3) What are the next steps? The remainder of this blog post addresses these questions in some detail.
Sources: Data Matters Blog | Sacramento CCPA Public Forum Attracts Among the Largest Turnout to Date | California Consumer Privacy Act: The Challenge Ahead – The CCPA’s “Reasonable” Security Requirement | California DoJ Sets March 8 Deadline for CCPA Pre-Rulemaking Comments | California Consumer Privacy Act: Are You Prepared for 2020? — CyberSpeak Podcast | As Businesses Prep for California’s Data Privacy Law, They’re Also Fighting to Change It & Big Tech isn’t the only group concerned | Bill Package Looks to Strengthen Data Protections in California | California AG’s Office Gets Public Input on CCPA | Public Forums on the California Consumer Privacy Act Continue in Los Angeles – Rulemaking to Follow | Data Privacy Day – Special Report – California Consumer Privacy Act FAQs for Employers | California Attorney General’s Office Gathers Public Opinions Regarding the Implementation of the California Consumer Privacy Act

US – Comprehensive Data Privacy Legislation Introduced In Massachusetts

Massachusetts state Senator Cynthia Creem [see here & wiki here] has introduced a consumer data privacy bill, SD 341 [see text here], that would give Massachusetts consumers the right to sue in the event their personal information or biometric data is improperly collected or distributed or for any other potential violation of the new law. Under SD 341, and similar to Illinois’s Biometric Information Privacy Act (BIPA) [text here & wiki here], consumers may not be required to demonstrate or have suffered monetary or property losses in order to seek damages for an alleged violation. Any violation of the proposed new law could be grounds for a valid private action. It states that “a violation of this chapter shall constitute an injury in fact to the consumer who has suffered the violation, and the consumer need not suffer a loss of money or property as a result of the violation in order to bring an action for a violation of this chapter.” A prevailing plaintiff can receive the greater of $750 “per consumer incident” or actual damages and can also receive attorneys’ fees. It would grant Massachusetts consumers certain rights with respect to their personal data, including: 1) A right to notice “at or before the point of collection” of the personal information that will be collected and disclosed and the purpose of such collection or disclosure; 2) A right to request a copy of collected personal information; and 3) A right to request deletion of collected personal information. Additionally, consumers would have the right to demand that covered businesses not disclose their information to third parties – in other words, with limited exceptions, consumers would be able to opt out of any transfers of their personal information by a business to other businesses that are not service providers. If SD 341 is enacted, it would not take effect until January 2023 after related rule-making is conducted by the Massachusetts attorney general. 
Sources: Technology Law Dispatch (ReedSmith) | Massachusetts Considers Bill to Limit Facial Recognition | Massachusetts State Senators Seek to Enact Biometric Data Protection Law | Notable challenges from the updated Massachusetts data breach notification law | Massachusetts Amends Data Breach Notification Law to Require Free Credit Monitoring | Massachusetts Amends Data Breach Notification Law

US – How to Shine a Light on U.S. Government Surveillance of Americans

This year, Congress must vote on reauthorizing provisions of the Patriot Act [see here & wiki here] that are due to expire [December 15] — including Section 215 [see wiki here, also see EFF take here], which the government abused for years to illegally collect Americans’ phone records in bulk. As this debate gets underway, both Congress and the public need some answers. In 2015, Congress passed the USA Freedom Act [see here & wiki here] to reform parts of the Patriot Act and make other much-needed changes to the government’s surveillance activities. Perhaps most notably, the law prohibited the bulk collection of Americans’ call records, internet metadata, and other private information under several statutes. It also sought to enhance transparency, so that illegal surveillance programs under these authorities would never again flourish in secrecy. Four years later, however, serious questions remain about whether these reforms have successfully halted bulk collection and other forms of overbroad surveillance. It’s also unclear whether additional measures are needed to safeguard communities of color and Americans engaged in First Amendment-protected activities. The ACLU has filed a new Freedom of Information Act lawsuit in an effort to shed light on these significant gaps in the public’s understanding. Similarly, in order to inform the coming debate, Congress and the public must demand answers to the following questions. 
[The remainder of this blog post addresses the following questions]: 1) What additional changes are needed to prevent the bulk collection of Americans’ private data under the Patriot Act?; 2) Is the Patriot Act being used to infringe on First Amendment-protected activities?; 3) Is the Patriot Act being used to discriminate on the basis of race, religion, national origin, or other protected factors?; 4) Why has the government not disclosed more opinions from the secret intelligence court?; and 5) How many Americans have had their private information collected under other surveillance authorities? … Answers to these questions are essential as Congress debates expiring provisions of the Patriot Act and the additional safeguards that are needed to protect Americans’ rights.   Speak Freely blog (American Civil Liberties Union) and Three FISA Authorities Sunset in December: Here’s What You Need to Know

US – Oregon Lawmakers Introduce Bill to Let Patients Get Paid for Health Data

As data becomes more valuable and privacy seems increasingly elusive, state lawmakers in Oregon are starting a conversation about a new way to empower consumers. This week, a group of Oregon lawmakers introduced legislation that would let Oregon residents get a cut of the value of their medical data [it’s titled the “Health Information Property Act” – see Oregon Senate Bill 703 here]. The Health Information Property Act effectively treats personal health data like property. The bill has three components. It would:

1) Require HIPAA-covered entities — as well as their business associates or subcontractors — to get signed authorization from consumers before de-identifying their personal health information (PHI) to sell the data to a third party;

2) Allow consumers to elect to receive payment in exchange for authorizing the de-identification of their PHI for the purpose of sale; and

3) Prohibit companies subject to HIPAA from discriminating against a consumer who refuses to sign such an authorization or who wants to get paid for it.

State Rep. David Gomberg [see here & wiki here], one of the sponsors of the Health Information Property Act, heard about the idea to treat personal data as property from Humanity.co [see here], a company that has built a blockchain-based app that lets people sell their personal data. Humanity.co has had similar conversations about introducing this kind of legislation in other states, including New Jersey. [ZDNet]

Workplace Privacy

US – Non-Desk Employees Use Messaging Apps, Even Without HR’s Knowledge

A survey conducted by Speakap found most non-desk employees use messaging applications, such as WhatsApp and Facebook Messenger. Speakap defines non-desk employees as staff members who work at retail stores, hotels and restaurants. Of the 1,000 non-desk employees polled, 53% said they use messaging apps for work-related communications up to six times a day, with 16% believing their company’s human resources department did not know of such use. Speakap states companies in Europe have tried to ban the use of messaging apps in order to avoid issues with the EU General Data Protection Regulation. [Adweek]





24-31 January 2019


US – Illinois Supreme Court Rules Actual Harm Not Required for BIPA Claim

On January 25, 2019, the Illinois Supreme Court published a widely anticipated decision in Rosenbach v. Six Flags Entertainment Corporation et al., addressing the question of what it means to be an “aggrieved” person under the Illinois Biometric Information Privacy Act, 740 ILCS 14/1 et seq. (“BIPA”) [see wiki]. Under BIPA, aggrieved persons are entitled to seek liquidated damages and injunctive relief. In a unanimous decision authored by Chief Justice Karmeier, the court held that individuals seeking relief under BIPA “need not allege some actual injury or adverse effect” to be considered aggrieved persons. The court’s decision reversed a lower court ruling that distinguished between “actual” and “technical” BIPA violations. BIPA requires private entities to (i) inform individuals about the collection and storage of their biometric identifiers or information, (ii) detail the purpose and length of time for which such data will be collected, stored, or used, and (iii) obtain a written release from individuals prior to the collection of such data. The lower court characterized violations of these three requirements alone as “technical” violations of BIPA that would not entitle plaintiffs to relief absent allegations of “injury or adverse effect.” This decision will affect numerous pending federal and state BIPA actions that have struggled to interpret BIPA’s statutory requirements. It may also re-ignite pressure on the Illinois legislature to clarify and limit the scope of the statute—an effort contemplated by an amendment that was introduced, but not passed, last year. Despite developments in the Illinois state courts, a recent Northern District of Illinois decision suggested that BIPA claims brought in federal court may not be able to satisfy Article III standing requirements under the Supreme Court’s decision in Spokeo v. Robins absent an “actual” harm. Thus, the Rosenbach decision, while significant in its substance, may have limited effect in the federal courts.
Inside Privacy Blog (Covington) See also: No Harm, Still a Foul: Illinois Supreme Court Rules on the Collection of Biometric Data | Illinois Supreme Court Says Infringement of Rights Under Biometric Act Is Sufficient for Standing, Even Absent Additional Harm | Rosenbach v. Six Flags Entertainment Corporation – Illinois Supreme Court Holds That a Technical Violation of Statutory Biometric Rights is Sufficient to Bring a Claim | Illinois Supreme Court Empowers Claims Under Biometric Information Privacy Act | Court rules companies can be sued for collecting biometric data without consent

US – Prisons Quietly Building Databases of Incarcerated People’s Voice Prints

In New York and other states across the country, authorities are acquiring technology to extract and digitize the voices of incarcerated people into unique biometric signatures, known as voice prints. Prison authorities have quietly enrolled hundreds of thousands of incarcerated people’s voice prints into large-scale biometric databases. Computer algorithms then draw on these databases to identify the voices taking part in a call and to search for other calls in which the voices of interest are detected. Some programs, like New York’s, even analyze the voices of call recipients outside prisons to track which outsiders speak to multiple prisoners regularly. Authorities and prison technology companies say this mass biometric surveillance supports prison security and fraud prevention efforts. But civil liberties advocates argue that the biometric buildup has been neither transparent nor consensual. Once the data exists, they note, it could potentially be used by other agencies, without any say from the public. The rapid, secretive growth of voice-print databases is “probably not a legal issue, not because it shouldn’t be, but because it’s something laws haven’t entertained yet,” noted Clare Garvie, a senior associate at Georgetown Law’s Center on Privacy and Technology. “It’s not surprising that we’re seeing this around prisons, just because it can be collected easily,” she continued, referring to biometric voice data. “We’re building these databases from the ground up.” The scale of prisons’ emerging voice biometric databases has not been comprehensively documented nationwide, but, at minimum, they already hold more than 200,000 incarcerated people’s voice prints. The databases of recorded calls from which prison authorities could search for outsiders’ voice samples could also potentially include millions of recorded calls for state and countywide systems. 
According to the design requirements New York’s Department of Corrections gave to Securus, for example, the company must be able to record every call, archive all call recordings for a year, and maintain any calls flagged for investigative purposes “indefinitely” through the life of the contract, which ends in 2021. The Intercept

CA – Quebec Court Rules DNA Collected from Coffee Cup Is Admissible Evidence

A man found guilty of brutally assaulting four women in Montreal has lost his attempt to have his convictions overturned. The Quebec Court of Appeal said that DNA evidence collected from a cup of coffee left at a restaurant does not violate the right to privacy. Giovanni D’Amico was found guilty in 2014 on multiple counts of sexual assault, one charge of sexual assault causing bodily harm, and one count of assault. The attacks occurred between 2002 and 2005, but D’Amico was only arrested in 2008 after DNA testing linked him to the crimes. The trial took more than four years and several victims were reluctant to co-operate with police, while one of them died before she could testify. The NDG businessman was convicted in 2014 and sentenced a year later to 12 years in prison. D’Amico appealed the verdict, with his lawyer arguing that the collection of DNA did not take place appropriately. In its ruling, the court said the manner of collection was appropriate and the conviction would stand. However, the court said the process raises another question that society and the legislature should analyze: Should police be able to collect DNA from suspects, and how long should they be allowed to keep samples? [CTV Montreal]

Big Data  / Data Analytics / AI

EU – Convention 108 Committee Releases AI Guidelines

The Committee of the Council of Europe’s Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, or Convention 108, has released new guidelines on artificial intelligence and data protection. The guidelines are designed to help lawmakers, developers and manufacturers ensure AI applications uphold data subjects’ rights. “Artificial intelligence brings benefits to our daily lives,” Council of Europe Chair of the Committee of Ministers Timo Soini said. “It is necessary to look into the ethical and legal questions that it raises. To ponder this, we have invited many high-level experts from all member states to a conference on the impacts of artificial intelligence development on human rights, democracy and the rule of law in Helsinki on [Feb. 26 and 27] that will allow us to exchange thoughts and knowledge.” [Council of Europe]


CA – Canada’s PIPEDA Consent Guidelines Now In Effect

Canada’s new guidelines for obtaining consent under PIPEDA are now in effect. Last year, the federal Office of the Privacy Commissioner and the Alberta and British Columbia Offices of the Information and Privacy Commissioner jointly issued the guidelines, which outline how to obtain “meaningful” consent. The OPC will now apply the guidelines when looking at how companies obtained consent, and the regulators reportedly view the guidelines as having the force of law. Companies are expected to find creative solutions for developing a consent process, and the guidelines provide seven principles for companies to consider. These include transparency: making clear what is being collected and why, including whether the information is shared. Companies should also give people clear options (“yes” or “no”) and be innovative when putting together the consent process. Consents should, similarly, be user-friendly. Finally, the guidelines urge companies to be ready to show how they implemented the principles when designing their consent process. To help companies, the guidelines include “must do” and “should do” checklists. Putting it Into Practice: This Canadian guidance gives helpful insight into what regulators expect from a consent process, which may be useful even for those that operate outside of Canada. [Eye on Privacy Blog (Sheppard Mullin)]

CA – Data Privacy Day Focuses Attention on Canadians’ Privacy Rights: OPC

As people around the world mark Data Privacy Day, Privacy Commissioner of Canada Daniel Therrien is highlighting the importance of protecting, understanding and exercising privacy rights. “Privacy plays an important role in protecting other fundamental rights and values, including freedom and democracy,” says Commissioner Therrien. “It’s therefore critical for Parliamentarians, organizations and individual Canadians to understand, protect and promote privacy rights.” Versions of a Data Privacy Day op-ed by Commissioner Therrien reflecting on why privacy is essential were published by two newspapers. The Office of the Privacy Commissioner of Canada (OPC) recently launched a downloadable “Know your privacy rights!“ poster that can be used by organizations to mark Data Privacy Day and also to highlight the importance of privacy protection all year round. The poster offers tips such as reading up on privacy law basics and learning how to raise a concern with organizations. The OPC’s website offers a broad range of other tips and guidance for individuals, including 10 Tips for Protecting Personal Information. Canada has observed Data Privacy Day on January 28th since 2008. The day commemorates the 1981 signing of the first legally binding international treaty on privacy and data protection. Office of the Privacy Commissioner of Canada News

CA – Data Privacy, Today and Every Day – IPC

Facebook and Cambridge Analytica. General Data Protection Regulation. These were headline news for much of 2018. Both served to highlight how advancements in technology can infringe on privacy rights and the importance of valuing and protecting personal information. While it might be tempting to just close the book on 2018 and move on, issues related to privacy, data and technology aren’t going anywhere soon. It’s a fitting topic given that today is Data Privacy Day, but it’s one we should be thinking and talking about more often. Recognized around the world, Data Privacy Day raises awareness of privacy and data protection among individuals, organizations and government officials. The date coincides with the first legally binding international treaty on the protection of personal data, Convention 108, signed on January 28, 1981. Last week, our office held a privacy day event to help people understand some of the potential privacy challenges of technology-driven smart cities and spark discussion about this very important, and timely, topic. It included a panel with experts from the legal, information management and public service fields, answering questions from people across Ontario. If you missed it, a full replay of the day’s discussion is available on our YouTube channel. Our office has many resources to help you cut through the noise and gain a better understanding of how privacy affects your organization and your day-to-day life (links to a few of them are included below). These resources, and many more on a variety of topics, are available by selecting the guidance tab on our website’s homepage. Use them to expand your knowledge and make privacy a priority in 2019. 
Resources: Smart Cities and Your Privacy Rights ; General Data Protection Regulation ; Big Data and Privacy Rights ; and Your Child’s Privacy in School | Source: IPC Blog See also: ‘Privacy Day’ explores how to yield benefits from smart cities without privacy pains | 11 Expert Takes On Data Privacy Day 2019 You Need To Read | National Data Privacy Day Is Wishful Thinking


WW – People Will Trade Personal Data for Convenience and Security: Study

People are more than happy to share their personal data, just as long as they’re getting something in return. That’s the main takeaway from Experian’s Global Identity and Fraud Report, which found that growing privacy concerns have not soured people on the overall potential of the digital experience. The vast majority (90%) of consumers are aware that businesses are collecting personal information, but 70% would still be willing to hand over more information if it would make their online interactions faster and safer. Additional findings from the third annual fraud report include:

  • 55% of businesses reported an increase in fraud-related losses over the past 12 months, particularly account opening and account takeover attacks.
  • 60% of consumers globally are aware of the risks involved with providing their personal information to banks and retailers online.
  • Banks and insurance companies are the organizations trusted most by consumers across most regions. Online retail sites and social media sites trail considerably on trust.
  • Nearly nine out of 10 consumers report conducting personal banking as their top online activity.
  • Passwords, PIN codes and security questions remain the authentication methods most widely used by businesses, followed by document verification, physical biometrics and CAPTCHA.

Sources: Experian Study | Mobile ID World


CA – 1.6 million Canadian Banking Records Shared with IRS

The Canadian government has shared more than 1.6 million Canadian banking records with the U.S. Internal Revenue Service (IRS) since the start of a controversial information sharing agreement in 2014 [Foreign Account Tax Compliance Act (FATCA) – overview]. In 2016 and again in 2017, the Canada Revenue Agency (CRA) provided the IRS with information on 600,000 Canadian bank accounts each year. That’s a sharp increase from the 300,000 records shared in 2015 and the 150,000 records shared in 2014, the year the sharing began. However, that doesn’t necessarily correspond to the number of people affected. Some people may have more than one bank account, while some joint accounts could have more than one account holder — including people who don’t hold U.S. citizenship. Among the items of Canadian bank account information being shared with the U.S. are the names and addresses of account holders, account numbers, account balances or values, and information about certain payments such as interest, dividends, other income and proceeds of disposition. Under the intergovernmental agreement, Canadian financial institutions transfer information on bank accounts held by people who could be subject to U.S. taxes to the CRA. In return, the IRS is supposed to send the CRA information about U.S. bank accounts held by Canadians. The CRA, however, has repeatedly refused to reveal how many records, if any, it has received from the IRS as a result of the agreement. Nor does the CRA automatically notify Canadian account holders when their information is transferred to the U.S. All this comes as the Federal Court of Canada prepares to hear a constitutional challenge of the information-sharing agreement in Vancouver. Those challenging the agreement argue that it violates sections 7, 8 and 15 of Canada’s Charter of Rights, which protect Canadians from violations of their right to life, liberty and security, unreasonable search and seizure and discrimination against those who hold U.S. as well as Canadian citizenship. In its submission [see here & here] to the court, the plaintiffs argue that some of the people whose banking records have been shared with the IRS may not be subject to U.S. taxes. [CBC News | The Post Millennial: The CRA is failing to notify Canadians that their banking records are being shared with the IRS]

CA – Thousands Affected by CRA Employees Snooping

The information of thousands of Canadians has been accessed inappropriately by Canada Revenue Agency employees. The CRA confirmed that there were 264 privacy breaches between Nov. 4, 2015 and Nov. 27, 2018. A total of 41,361 Canadians were impacted. Of those people, 37,502 were deemed to face a “low risk of injury” and weren’t contacted by CRA. The CRA said that it has notified 1,640 of the affected individuals and is in the process of sending letters to 34 more. “For a number of other reasons, 2,185 individuals were not notified,” the CRA added, pointing out that some individuals were deceased or there was no address available. Conservative national revenue critic Pat Kelly said that it’s unacceptable that information like a person’s income was accessed inappropriately. The CRA said that 182 of the 264 CRA employees who accessed data without authorization have been disciplined, 36 face a pending decision and 46 have “left” the CRA. Tobi Cohen, a spokesperson for the Office of the Privacy Commissioner, noted that the office had conducted an audit of the CRA in 2013 and that the agency claims to have “substantially or fully implemented all measures that we recommended.” “The Agency reported that it made several important improvements to its management of personal information, including introducing new policies, increasing corporate oversight and ensuring more timely assessment of privacy and security risks,” Cohen said. “The fact that unauthorized/inappropriate access by employees is still happening at all, despite the measures CRA has taken, remains an ongoing concern,” Cohen added. Deb Schulte, parliamentary secretary to the minister of National Revenue, said the government takes the matter seriously and has invested $10 million in prevention. “We now have an enterprise fraud management system that reveals every time someone is in where they shouldn’t be,” Schulte said. The software was implemented in 2017. [CTV News]

CA – Therrien Shares Thoughts on Privacy of Digital Government Services Study

Privacy Commissioner of Canada Daniel Therrien appeared before the Standing Committee on Access to Information, Privacy and Ethics to share his views on its study of the privacy implications of implementing digital government services in the country. Therrien cited the Data Strategy Roadmap the government published in November, which stated that data can enable the government to make “better decisions” but that the government needs to “refresh our approach.” “I would ask you to remember that while adjustments may be desirable, any new legislation designed to facilitate digital government services must respect privacy as a fundamental human right,” Therrien said. “Modalities may change but the foundation must be solid, and that foundation must respect the right to privacy, and be underpinned by a strengthened privacy law.” [Priv.gc.ca]

CA – Joint Treasury Board-Digital Government Role Leads to Privacy Concerns

Ken Rubin writes about the conflicts of interest that have emerged from the government’s decision to merge the treasury board president role with the minister of digital government. Rubin writes the treasury board is in place to monitor government spending; however, it has spent public money to implement governmentwide data delivery. “There has not been a privacy impact assessment done by the privacy commissioner on the implications of moving to a more digital government under a combined Treasury Board/Digital Ministry,” Rubin writes. “Treasury Board, which has a lead role in privacy protection, however, can find itself in a conflict because its dual role as a digital ministry means Canadians using its services may be in for more, not fewer, privacy invasions and breaches.” [Ottawa Citizen]

EU Developments

EU – European Data Protection Board Releases Report on the Privacy Shield

On January 24, the European Data Protection Board [EDPB] adopted a report [Press Release] regarding the second annual review of the EU-U.S. Privacy Shield. In a press release accompanying the Report, the EDPB welcomed efforts by EU and U.S. authorities to implement the Privacy Shield, including in particular the recent appointment of a permanent Ombudsperson. But the EDPB also noted that certain concerns remain with respect to the implementation of the Privacy Shield. The Report is not binding on the EU or U.S. authorities directly; instead it will serve to guide regulators considering the implementation of the Privacy Shield. The Report is also likely to influence the EU Commission’s assessment of the Privacy Shield, and to contribute to political pressure in the European Parliament to continue to reform the Shield. The Report focuses on assessment of both the commercial and government access aspects of the Privacy Shield, and presents the EDPB’s findings based on its participation in the second annual review in Brussels. On the commercial aspects, the Report acknowledges that “significant progress has been made” since the first annual review, and highlights a number of improvements (which the European Commission had also called out in its recent report on the second annual review of the Shield) [IPB post here]. The Report also recalls “remaining issues” initially raised in a 2016 Opinion by the Article 29 Working Party [EDPB’s predecessor see wiki here], which “remain valid” [see IPB post here]. Ultimately, while the Report highlights certain successes and concerns with the Privacy Shield that arose during the second annual review, many of the Report’s concerns have been raised before in other forums. And the Report acknowledges that these same concerns will likely be addressed by the European Court of Justice in challenges to the Privacy Shield pending before that Court. 
Inside Privacy Blog (Covington) and Privacy & Information Security Law Blog (Hunton Andrews Kurth)

EU – Google Fined €50 Million Over GDPR Violations

French data regulator CNIL has fined Google €50 million (US$57 million) for violations of the General Data Protection Regulation (GDPR). CNIL says that Google failed to make its data collection policies easily accessible and that it did not obtain sufficient, specific consent for ad personalization across its services. The ruling focuses on how hard Google makes it for users to understand what data is being collected and sold, as well as the basic “opt-out, if you can figure out how” approach, prohibited by the GDPR, under which users automatically give away their data when enrolling in a service. [Sources: CNIL.fr | BBC | Ars Technica | ZDnet | Data Protection Report (Norton Rose) | See also: GDPR Alert: Google Gets Biggest Fine Ever Issued by a European Data Protection Authority and First sanction decision rendered by the CNIL under the GDPR: GDPR awareness 2.0 has begun]

EU – Advocate General Opinion Supports Limiting the “Right to be Forgotten”

On January 10, Advocate General Maciej Szpunar released an opinion [in French] recommending that Google and other search engines should not be forced to apply the EU’s “right to be forgotten” beyond the EU. The opinion is part of a long-running battle over privacy rights in the EU. In May 2014, the CJEU ruled in Google Spain v. Agencia Española de Protección de Datos (AEPD) that Google and similar commercial search providers could be required to remove links to personal information from search results. This principle, commonly referred to as the “right to be forgotten,” now appears in the General Data Protection Regulation as the Article 17 right to request erasure. The issue now before the CJEU arose in May 2015, when the president of the French Commission nationale de l’informatique et des libertés (National Commission for Information Technology and Civil Liberties; the “CNIL”) put Google on notice that it must remove results on all of the search engine’s domain name extensions, because a Google search consists of a single process across all domains. In July 2015, Google filed an informal appeal asking the president of the CNIL to withdraw this formal notice; the CNIL rejected that appeal in September 2015. In last week’s opinion, Advocate General Szpunar argued that the fundamental right to be forgotten must be balanced against other fundamental rights, including the right to data protection and the right to privacy, as well as the legitimate public interest in accessing the information sought. He did acknowledge that some situations may call for worldwide erasure, though he declined to recommend it in this case. The case will test how the CJEU balances globally mobile data against national laws and territorial jurisdiction, and it highlights the practical difficulties of compliance. A decision from the CJEU is expected in 2019, and no further appeal is possible within the EU. 
The advocates general assist the judges of the Court of Justice of the European Union (CJEU), providing independent legal solutions to issues presented to the CJEU. The judges decide whether an official opinion from an advocate general is necessary. The judges are not obligated to follow an advocate general’s recommendation but often do. Sometimes the CJEU will also arrive at the same conclusion as the advocate general but through different legal analysis. Data Privacy Monitor (Baker Hostetler) and Security, Privacy and the Law: Is the Right to be Forgotten National, European or Worldwide? The Advocate General Issues an Opinion in the Google Case (Foley Hoag LLP)

UK – ICO to Investigate Google for Potential GDPR Violations

The U.K. Information Commissioner’s Office has launched an investigation into Google for potential violations of the EU General Data Protection Regulation. The ICO’s inquiry comes after France’s data protection authority, the CNIL, fined the tech company $57 million for GDPR infractions. “Following the notice of the French supervisory authority (CNIL) to fine Google, the ICO is currently reviewing the notice to consider its content and possible next steps,” an ICO spokesperson said. “The ICO is also liaising with other data protection authorities across Europe on this topic.” [ITPro]

WW – ICDPPC Releases Final Report

The final report from the 40th International Conference of Data Protection and Privacy Commissioners has been released. The report runs down both the closed and public sessions from the conference, most of which were centered on the theme of “Debating ethics: Dignity and respect in data driven life.” “We chose ethics therefore as the theme of this year’s conference, because we wanted to interrogate the notions of right and wrong around the world and across different disciplines which underpin law, technology and how people behave,” European Data Protection Supervisor Giovanni Buttarelli wrote. The report covers the keynote speech from Apple CEO Tim Cook and the panel moderated by IAPP President and CEO J. Trevor Hughes.  [EDPS]

EU – Complaints filed with DPAs Over Ad Auction Companies’ Use of Data

Representatives from the Open Rights Group, University College London and Brave have filed new evidence in their complaints on ad auction companies’ illegal use of personal data. The complainants sent their requests to the data protection authorities in Ireland, Poland and the U.K. The ad auction companies have been accused of the illicit use of sensitive data, such as users’ religious beliefs, health histories, ethnicities and sexual orientations. “Ad auction companies can fix this by simply excluding personal data, including their tracking IDs, from bid requests,” Brave Chief Policy & Industry Relations Officer Johnny Ryan said. “If the industry makes some minor changes then ad auctions can safely operate outside the scope of the GDPR. This would protect privacy, but would also protect marketers and publishers from very significant risk.” [Brave]

Facts & Stats

WW – Hackers Expose Megaleak of 2.2 Billion Breached Records

Someone has assembled previously breached databases and many more into a gargantuan, unprecedented collection of 2.2 billion unique usernames and associated passwords and is freely distributing them on hacker forums and torrents, throwing out the private data of a significant fraction of humanity. Earlier this month, security researcher Troy Hunt identified the first tranche of that mega-dump, named Collection #1 by its anonymous creator, a patched-together set of breached databases Hunt said represented 773 million unique usernames and passwords. Other researchers have since obtained and analyzed an additional vast database called Collections #2–5, which amounts to 845 gigabytes of stolen data and 25 billion records in all. After accounting for duplicates, analysts at the Hasso Plattner Institute in Potsdam, Germany, found that the total haul is nearly three times the size of the Collection #1 batch. You can check for your own username in the breach using the Hasso Plattner Institute’s tool here, and you should change the passwords for any breached sites it flags that you haven’t already updated. As always, don’t reuse passwords, and use a password manager. (Troy Hunt’s service HaveIBeenPwned offers another helpful check of whether your passwords have been compromised, though as of this writing it doesn’t yet include Collections #2–5.) [Wired]
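HaveIBeenPwned’s password check is notable for how it avoids creating a new privacy problem: the client sends only the first five characters of the password’s SHA-1 hash, receives every known hash suffix in that range, and compares locally, so the full password (or even its full hash) never leaves the machine. A minimal sketch against HIBP’s documented k-anonymity range endpoint:

```python
import hashlib
import urllib.request

def sha1_range_parts(password):
    """Split the uppercase SHA-1 hex digest into HIBP's 5-char prefix and 35-char suffix."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(password):
    """Return how many times a password appears in HaveIBeenPwned's breach corpus.

    Only the 5-character hash prefix is transmitted (k-anonymity), so the
    service never learns which password is being checked.
    """
    prefix, suffix = sha1_range_parts(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0  # suffix not found in this range: not in the corpus

# SHA-1("password") is the well-known 5BAA61E4...8FD8, so only "5BAA6" is sent.
print(sha1_range_parts("password")[0])  # → 5BAA6
```

Calling `pwned_count("password")` performs one HTTPS request and returns a large hit count; a randomly generated password should return 0.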


CA – City Staff Argue Provincial Law Prevents Sharing Open Data on Deadly Collisions

Edmonton City staff say making data about fatal and serious collisions available to the public would violate provincial privacy laws. A report going to city council’s community and public services committee [see Agenda] argues specific information about collisions cannot be made available on its open data portal because it would violate privacy rules under the Traffic Safety Act. At an October 2018 meeting, Mayor Don Iveson made a motion asking administration to explore adding that information to the city’s data catalogue. The city does share some collision information on its website, but not in a form that can be sorted or used by the public. Conrad Nobert was able to get about 10 years’ worth of data on collisions with pedestrians and bicycles through freedom-of-information requests in 2015 [and 2016]. In 2017, however, the city refused to hand over the data. Nobert said in a recent interview: “I think whatever privacy the city is protecting is less important, frankly, than the public’s right to know where these collisions are happening because a lot of people are getting hurt and killed in our city.” He said the most frustrating aspect of Edmonton citing provincial rules is that the City of Calgary publishes live traffic incident data in its open portal, updated every 10 minutes. That city’s transportation department also tweets out basic details and locations of collisions as they occur. Other traffic safety advocates have argued for Alberta Minister of Transportation Brian Mason to make province-wide collision data available [see here]. [Edmonton Journal | City against releasing details of fatal collisions publicly due to privacy concerns]


US – How at-Home DNA Kits Are Opening Up Family Secrets

An in-depth piece by The Wall Street Journal looks at the rise of at-home DNA testing kits and the effect they can have on family secrets. Sales of the DNA kits “are soaring as people seek to learn more about their roots,” the report states, but the genetic information can lead to the discovery of extramarital affairs, lost siblings and more. “Given the rapid growth of consumer genetic testing,” it notes, “people can often be identified even if they don’t take a test themselves.” A paper published in Science last October found that more than 60% of individuals in the U.S. of European descent have a third cousin or closer in a DNA database. One of the authors of the paper said, “DNA tests can reveal that there is something odd going on. But they don’t tell you the story of what happened.” [Wall Street Journal]

Health / Medical

CA – New App Will Let Albertans See Their Own Health Records

Alberta Health is set to launch a new tool in the coming weeks that will give patients access to some of their personal health information online. The MyHealth Record portal is designed to be used on computers, tablets or smartphones, and the province confirms it is in the final consultation process with physicians and other healthcare providers. According to a web page for health-care professionals, patients will be able to view some lab tests — including results for cholesterol, iron, kidney and liver function — immunization records and their medication history. Lab results are expected to come with links to information about the tests and the results. “This is really the wave of the future,” said Tom Keenan, digital security expert and professor in the faculty of environmental design at the University of Calgary. He points out Alberta Health already keeps personal health information in an electronic health record — Netcare. “All we’re really talking about here is giving you access to some of it.” “The really deep stuff — the stuff that you’ve been in the hospital and had MRIs — those are not going to be available in the public portal and I think that’s a good idea,” he said. One risk, according to Keenan, is that people may panic when they see test results they don’t understand, prompting unnecessary doctors’ appointments. It’s something Alberta Health has considered. It’s promising to provide links to medical information to help explain the tests, and says public-health nurses at Healthlink — the province’s health information phone line — will also be able to answer questions about information released on MyHealth Record. Alberta Health plans to add more features in the months and years after MyHealth Record is launched. [CBC News]

US – Apple and Aetna Teaming Up on a New App to Help Track and Reward Healthy Behavior

Apple and insurance giant Aetna have teamed up on an iPhone and Apple Watch app that provides rewards, including an option to earn a free Apple Watch, to members who engage in healthy behaviors like getting regular exercise and more hours of sleep. The new app, dubbed Attain, also provides Aetna members who sign up with nudges, such as to get an annual flu shot or take their medication on time. The two companies have been working together since 2016 on the Attain app, which will be available in the spring of this year. Apple has also made clear that health care is a key area of future growth. The company has a variety of health-related initiatives in progress, ranging from software to collect medical information to biomedical sensors. Earlier this month, Apple CEO Tim Cook told CNBC’s Jim Cramer: “I believe, if you zoom out into the future, and you look back, and you ask the question, ‘What was Apple’s greatest contribution to mankind?’ It will be about health.” Understanding that some users may be skittish about sharing personal health information, Apple and Aetna are making privacy a priority. Apple executives have often stressed that the company takes user privacy very seriously. In this case, members’ data is encrypted in transit and at rest, and Apple has said it will not access data that uniquely identifies an individual. For its part, Aetna said that it won’t use the data gathered from the Apple Watch to make coverage decisions, including to increase monthly premiums. [CNBC]

US – Companies Selling ‘Risk Scores’ on Patients With Opioid Risks

Information is being sold to doctors, insurers and hospitals to identify patients who may be at risk of opioid addiction. The data is packaged as a “risk score” and is typically compiled without the patient’s consent. Companies have been able to gather information from insurance claims, digital health records, housing records and data from a patient’s friends and family. While the risk scores are used to help doctors make informed decisions when they prescribe opioids, patient advocates are concerned the data will be used to prevent patients from getting the medication they need. [Politico]

US – HHS Opens Public RFI to Improve HIPAA Privacy Rule

The U.S. Department of Health and Human Services’ Office for Civil Rights has released a Request for Information on the best ways to modify the Health Insurance Portability and Accountability Act (HIPAA) Rules to improve care through increased data sharing. The agency will accept public comment on aspects of the HIPAA Privacy Rule, such as the facilitation of parental involvement in care and disclosures of personal health information for treatment and payment. “We are committed to pursuing the changes needed to improve quality of care and eliminate undue burdens on covered entities while maintaining robust privacy and security protections for individuals’ health information,” OCR Director Roger Severino said. All public comments must be submitted by Feb. 12. [HHS]

WW – Women’s Health Apps Fail to Meet Privacy Standards

Over the past few years, the period-tracking app market has grown by at least $350 million. Despite handling personal health data, the article highlights that the apps are not held to the same privacy standard as health care organizations. “Barring some form of regulation, the market is likely to keep sliding toward ever-more-intensive data mining.” [Bloomberg Businessweek]

US – HITRUST Announces New Framework

The HITRUST Alliance announced it is expanding its framework to include the EU General Data Protection Regulation and the Singapore Personal Data Protection Act, describing the new model as “one framework, one assessment.” HITRUST officials also announced the alliance has submitted a formal application with the European Data Protection Board and the Irish Data Protection Commission to have its HITRUST CSF recognized as a standard for GDPR certification. HITRUST is also exploring the process of becoming an Accountability Agent under the Asia-Pacific Economic Cooperation Cross-Border Privacy Rules and Privacy Recognition for Processors programs. [Healthcare IT News]

AU – Insurers’ Access to Medical Data Criticized as Privacy Breach

Insurance companies often require applicants to sign over “full authority” to view their medical records for claims and coverage applications, causing some to argue the consent grants access to far more data than necessary. Josh Mennen of the Australian Lawyers Alliance has called for the “open-ended” access to end, adding, “We believe that it is appropriate to limit the period in which an insurer can go back through medical records to five years.” He also argues that the insurance industry should be required to inform patients of the information obtained under their consent, adding that “the law and insurance practices do not strike the right balance.” [ABC News]

Horror Stories

WW – 126% Increase in Exposed Consumer Data, 1.68 Billion Email-Related Credentials

The Identity Theft Resource Center and CyberScout have released the 2018 End-of-Year Data Breach Report. The number of U.S. data breaches tracked in 2018 (1,244) decreased 31% from 2017’s all-time high of 1,632, but the reported number of consumer records exposed containing sensitive personally identifiable information jumped 126%, from 197,612,748 records exposed in 2017 to 446,515,334 this past year. Another critical finding was the number of non-sensitive records compromised, not included in the above totals: an additional 1.68 billion exposed records. While email-related credentials are not considered sensitive personally identifiable information, a majority of consumers use the same username/email and password combinations across multiple platforms, creating a serious vulnerability. “When it comes to cyber hygiene, email continues to be the Achilles Heel for the average consumer,” said CyberScout founder and chair Adam Levin. “There are many strategies consumers can use to minimize their exposure, but the takeaway from this year’s report is clear: Breaches are the third certainty in life, and constant vigilance is the only solution.” [Identity Theft Resource Center | On Data Privacy Day, here’s a reminder that you have none (or at least very little)]

Identity Issues

US – Louisiana Introduces Digital Driver’s Licenses

Drivers in the US state of Louisiana now have the option of obtaining a digital driver’s license, or DDL. Louisiana’s DDL launched in July 2018. While law enforcement will accept the DDL as a valid identification document, other entities, such as retail stores, are not required to accept it, and the TSA does not currently accept it. Digital driver’s licenses are in various stages of development in several other states, including Iowa, Idaho, Colorado, Maryland and the District of Columbia, but none has had a statewide rollout. Security features being piloted include remote revocation by the DMV, encryption at rest and in transit, and biometric authentication to access the license or transmit its information. Because the states are using different solution providers, including Gemalto and IDEMIA, interoperability and equivalent protections will be key. [Govtech]

WW – Digital Drivers Licenses Expose Citizens to Hackers and Abuse: Critics

The world’s largest biometrics surveillance company wants to add your driver’s license photo to its digital library, which already has collected and processed some 3 billion faces. Idemia, based in Paris, is at the center of a push to create digital driver’s licenses, also known as mobile driver’s licenses, that could allow motorists to flash an app on their smartphones — instead of showing traditional plastic ID cards — to prove they can drive, vote or drink beer. Idemia systems are responsible for issuing traditional licenses in 42 states that account for 80% of all U.S. drivers. Idemia is a multinational company that has partnered with U.S. security and law enforcement agencies for decades to provide multilevel data-gathering, including fingerprinting, airport security and facial recognition technologies. But massive data breaches such as those at Facebook and Equifax have put Idemia under scrutiny, especially among privacy and digital rights groups. Critics say the company is vulnerable to hackers and government abuse as it fosters an “Orwellian vision” of a monitored society in which privacy and civil liberties yield to intrusion in the name of public safety and security. Idemia’s U.S. headquarters is in Reston, Virginia. Information that the company collects every day flows into databases at the Departments of Defense and Homeland Security and the FBI, where millions of personal, biographic and biometric files are kept on Americans and foreigners. Critics of the company say it’s unclear how long Idemia stores data because so many details are categorized as “classified” or too sensitive to national security to be made public. According to the Center on Privacy and Technology at the Georgetown Law Center, most adult Americans are already in a facial recognition database because of how governments format driver’s licenses and passport photos for such use. 
The center notes that 31 states currently allow law enforcement to search driver’s license image databases with facial recognition software. Jennifer Lynch, director of surveillance litigation for the Electronic Frontier Foundation, a San Francisco-based digital rights group, warned that few federal or state regulations sufficiently govern police use of facial recognition technology and that poor data management and “a high rate of misidentifications” have plagued agencies such as Homeland Security. She noted that the department’s inspector general recently criticized the office of biometric identity management for failing to train personnel properly and for relying too heavily on third-party data collectors. In an interview, Ms. Lynch said Idemia poses a threat because of its position at the center of so many government databases. Idemia is now pushing for mobile driver’s licenses as a form of universal digital ID. According to industry reports, Idemia is working with 38 state driver’s license programs, with much of the work focused on mobile versions using facial recognition technology to unlock access to the app. Police say mobile driver’s licenses connected to a central database could make their work safer and easier because updates could provide information about a motorist’s license suspension, change of address or outstanding tickets and warrants. Civil liberties advocates worry that multiple state mobile driver’s license programs could morph into a de facto national ID system without any significant public debate. Source: Washington Times

Internet / WWW

US – Facebook Hires Privacy Policy Managers

In an attempt to improve its less-than-stellar record on privacy, Facebook has hired several people who have been critical of the company’s practices. In December, Facebook hired Nathan White, formerly senior legislative manager for Access Now, to be the company’s privacy policy manager. Earlier this week, Facebook announced that it has hired lawyers Robyn Greene and Nate Cardozo, formerly senior policy counsel at New America’s Open Technology Institute and senior information security counsel at the Electronic Frontier Foundation (EFF), respectively. Cardozo will be the WhatsApp privacy policy manager; Greene will be privacy policy manager for law enforcement and data protection. Facebook has also hired Bijan Madhani, formerly senior policy counsel at the Computer & Communications Industry Association, as its privacy and public policy manager. [Sources: Wired | Cyberscoop | Meritalk]

WW – Google’s Proposed Changes to Chrome Extension APIs Could Break Ad Blockers

Google’s proposed changes to its Chrome extension platform, the draft “Manifest V3,” would break content-blocking extensions, including ad blockers. Most notably, the proposal would replace the blocking capability of the webRequest API, which content blockers use to intercept and cancel network requests, with the more limited, rule-based declarativeNetRequest API. The changes would also affect antivirus browser extensions, parental-control extensions and others. A Google software engineer notes that “this design is still in a draft state, and will likely change.” [ZDnet | Wired | The Register | The Register]

Law Enforcement

CA – Sask. Police Reflect on First Year Under New Access and Privacy Laws

Saskatchewan’s The Local Authority Freedom of Information and Protection of Privacy Amendment Act, 2017 (LAFOIP) came into effect on Jan. 1, 2018. Since then, police agencies have been subject to freedom-of-information requests, and the public is taking advantage: more than 400 requests were made to police in five Saskatchewan cities in 2018. The Saskatoon Police Service received a total of 275 information requests in 2018. Of those, 36 were granted in full, 132 were disclosed in part (with portions redacted), 17 turned up no responsive records and 10 were withdrawn by the applicant. Eighteen of the requests were from media, five from researchers and three from associations, groups or businesses. The rest were made by individuals or individuals represented by legal counsel. Smaller police services received considerably fewer requests, but some of those requests took exceptional resources to fulfill. Saskatchewan Information and Privacy Commissioner Ronald Kruzeniski said the transition to and implementation of the new amendments has gone relatively smoothly. Kruzeniski said one of the biggest challenges has been around the naming — or not naming — of homicide victims in Regina. After the amendments to LAFOIP, the Regina Police Service (RPS) said it would not automatically release the names of homicide victims. The decision triggered backlash from local media and was quickly reversed as the RPS sought insight from Kruzeniski’s office and consulted with media. As it stands now, the RPS will release names on a case-by-case basis. Kruzeniski anticipates similar challenges may arise if the Saskatchewan government follows through on a plan to implement Clare’s Law, which would allow police to disclose records of potentially abusive partners. He also expects more issues to arise as the amendments play out over a second year. 
[Regina Leader-Post | Saskatoon police disclosed personal information to Legal Aid without consent: Privacy Commissioner | Prof taking U of R to court over denied freedom of information request]


WW – Google’s Sidewalk Labs Plans to Package and Sell Location Data on Millions of Cellphones

Replica, a new initiative from Sidewalk Labs, the city-building subsidiary of Google’s parent company Alphabet, offers planning agencies the ability to model an entire city’s patterns of movement. Like “SimCity“ [a city-building video game], Replica’s “user-friendly” tool deploys statistical simulations to give a comprehensive view of how, when, and where people travel in urban areas. In recent months, transportation authorities in Kansas City, Portland, and the Chicago area have signed up to glean its insights. The only catch: They’re not completely sure where the data is coming from. Replica uses real-time mobile location data. As Nick Bowden of Sidewalk Labs has explained, “Replica provides a full set of baseline travel including the total number of people on a highway or local street network, what mode they’re using (car, transit, bike, or foot), and their trip purpose (commuting to work, going shopping, heading to school).” The program gathers and de-identifies the location of cellphone users, which it obtains from unspecified third-party vendors. It then models this anonymized data in simulations, creating a synthetic population that faithfully replicates a city’s real-world patterns but that “obscures the real-world travel habits of individual people,” as Bowden told The Intercept. The program comes at a time of growing unease with how tech companies use and share our personal data, and it raises new questions about Google’s encroachment on the physical world. The New York Times has revealed how sensitive location data is harvested by third parties from our smartphones, often with weak or nonexistent consent provisions. A Motherboard investigation in early January further demonstrated how cell companies sell our locations to stalkers and bounty hunters willing to pay the price. The Google sibling’s plans to gather and commodify real-time location data from millions of cellphones add to these concerns. 
An Associated Press investigation showed that Google’s apps and website track people even after they have disabled the location history on their phones. Quartz found that Google was tracking Android users by collecting the addresses of nearby cellphone towers even if all location services were turned off. The company has also been caught using its Street View vehicles to collect Wi-Fi location data from phones and computers. However, Sidewalk Labs maintains it has instituted significant protections to safeguard privacy before it even begins creating a synthetic population. Any location data that Sidewalk Labs receives is already de-identified (using methods such as aggregation, differential privacy techniques, or outright removal of unique behaviors). Some urban planners and technologists … remain skeptical about these privacy protections. A landmark study uncovered the extent to which people could be re-identified from seemingly anonymous data using just four time-stamped data points of where they’ve previously been. There are also lingering questions about how Sidewalk Labs sets limits on the type and quality of consent obtained. A document from the Illinois Department of Transportation describes Replica’s data sources as “mobile carrier data, location data from third-party aggregators and Google location data, to generate travel data for a region.” This data sample, it adds, “is not limited to Android devices” and “is collected from individuals for months at a time, allowing for a complete picture of individual travel patterns.” [The Intercept | Related: Location tracking is here to help real estate developers get even richer | 15 senators demand FCC, FTC investigate carriers selling location data]
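One of the de-identification methods mentioned above, differential privacy, can be made concrete with a minimal sketch. This is an illustrative example of the Laplace mechanism applied to a trip count, not Sidewalk Labs’ actual implementation; the function names and figures are invented:

```python
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) as the difference of two exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A count query has sensitivity 1 (adding or removing one person
    changes the result by at most 1), so Laplace noise with scale
    1/epsilon is enough for epsilon-differential privacy.
    """
    return true_count + laplace_noise(1.0 / epsilon)

# Example: a noisy count of trips on one road segment.
noisy_trips = dp_count(12_345, epsilon=0.5)
```

Smaller values of epsilon add more noise, trading accuracy for stronger privacy; the published aggregate then bounds how much any single person’s travel can influence the result.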

Online Privacy

WW – Apple Revokes Facebook and Google Developer Certificates Because They Used Them to Collect User Data

Facebook paid adults and teenagers to install a data-slurping iOS app using its enterprise certificate, bypassing the Apple App Store and its requisite security checks. Apple had previously banned the application from the App Store for violating its data privacy rules. The app allows Facebook to see virtually everything a user does on the device. Apple states that distributing the application for consumer research violates the terms of its enterprise development license. Google used a similar application to collect user and device data on iOS devices. Google acknowledged its mistake and disabled the application before Apple revoked its enterprise certificate. Both the Facebook and Google apps are still available on Android. [Wired | The Register | Ars Technica | CNet | ZDnet | The Verge]

US – Report Finds Friends Have a Lot to Do With Personal Privacy

A report from the University of Vermont and University of Adelaide found it is possible to predict what a person would post on social media with 95% accuracy, even if the person did not have an account. The discovery was based solely on what a person’s friends posted and looked at more than 30.8 million tweets from 13,905 accounts. “You alone don’t control your privacy on social media platforms,” University of Vermont Professor Jim Bagrow said. “Your friends have a say too.” [CNET]

Other Jurisdictions

WW – New Map Sheds Light on Global Data Protection

In honor of Data Privacy Day, the United Nations Conference on Trade and Development released an interactive map tracking data protection laws across the globe. The Global Cyberlaw Tracker identifies e-commerce legislation, including e-transactions, consumer protection, data protection/privacy, and cybercrime adoption across the 194 member states. While only 58% of the world’s countries have data protection and privacy legislation in place, an additional 10% have draft legislation in the works. [Fast Company]

Privacy (US)

US – Concerns Raised Over Practice of Student Data Collection

Universities are using student data to help determine “demonstrated interest.” The data helps inform enrollment officers whether an applicant is likely to attend by analyzing information such as email open rates, link clicks, and RSVPs versus actual attendance at online events. Privacy advocates have raised concerns over the practice. “It feels like surveillance and I don’t think it’s a healthy thing for schools to do,” Common Sense Media Founder and CEO James Steyer said. “Universities should not take privacy rights for granted.” [The Wall Street Journal]

US – Judge Disagrees with Facebook’s Harm Argument in Privacy Case

U.S. District Judge Vince Chhabria disagreed with an argument made by Facebook as the tech company attempts to have a multidistrict privacy case dismissed. Facebook lawyers said the company cannot be sued for third parties that accessed users’ private data since no “real world” harm took place from the arrangement due to users allowing third parties to obtain the data through their privacy controls. Chhabria disagreed with Facebook’s position. “The injury is the disclosure of private information,” said Chhabria, who called the wording Facebook used in the privacy controls “quite vague.” Meanwhile, the Irish Data Protection Commission warned Facebook about the privacy issues it faces in its attempt to merge its messaging apps. [Courthouse News Service]

Privacy Enhancing Technologies (PETs)

WW – New Tech Aims to Add Transparency to Privacy Notices

In a news release, PrivacyCheq announced the release of Privacy Facts Interactive, a new privacy service designed to inform users about privacy notices and fulfill transparency requirements established by the EU General Data Protection Regulation. “Realizing that mobile devices are now the dominant method used to access content, we completely redesigned the privacy notice for optimal use on mobile devices, using the ‘Nutrition Facts’ paradigm that consumers already understand,” PrivacyCheq CEO Roy Smith said. Editor’s Note: The IAPP Privacy Tech Vendor Report lists more than 150 vendors. [Press Release]

RFID / Internet of Things

US – FPF Releases Paper on IoT-Device Privacy for People With Disabilities

The Future of Privacy Forum has released a new paper called, “The Internet of Things and People with Disabilities: Exploring the Benefits, Challenges, and Privacy Tensions.” It includes recommendations for approaches to incorporate privacy and accessibility by design. FPF CEO Jules Polonetsky said, “Internet of things devices in homes, cars and on our bodies can improve the quality of life for people with disabilities — if they are designed to be accessible and account for the sensitive nature of the data they collect. We expect this first-of-its-kind paper to inspire collaboration among advocates, academia, government, and industry to ‘bake in’ privacy and accessibility from the start of the design process.” [Full Story]


WW – Report: Data Breaches and Cyber Attacks in Global Risks List Top Five

The World Economic Forum’s (WEF’s) Global Risks Report 2019 places large-scale cyber attacks and mass incidents of data theft at the top of the list of global risks, alongside natural disasters and climate change. The report notes the risk that cyberattacks pose to critical infrastructure, as well as rising concerns about identity theft and the erosion of privacy. [Source: ZDnet | Weforum.org | www3.weforum.org]

Smart Cities

CA – NDP MP Calls Government to ‘Push the Pause Button’ on Sidewalk Labs

In a January 11th, 2019 letter to Infrastructure Minister François-Philippe Champagne, NDP Member of Parliament Charlie Angus addresses the Ontario auditor general’s report, which raised concerns that the project’s coordinators and organizers did not consult anyone from the different levels of government and instead quietly discussed the project with senior political staff [full report see Ch.3 sec. 3.15 “Water Front Toronto“]. In his letter Angus states: “Having a project of this scale pushed through by political staff behind the scenes would be wildly inappropriate. Your government needs to be clear about exactly how this deal was arrived at, particularly the role of several ministerial staffers who have gone on to work directly for Sidewalk Labs.” Angus noted that the lack of transparency surrounding the size of the project is “worrying.” The suggested plan is to construct the smart city within a 12-acre plot of land on the waterfront in downtown Toronto, but Angus said that images of the site make it seem as if the designated project would take over the entirety of Waterfront Toronto. Privacy concerns around the project have also been a highly contentious topic, which Sidewalk Labs has yet to address in detail publicly. Angus, who is also the co-chair of the House of Commons Standing Committee on Access to Information, Privacy and Ethics, also noted that the Liberal government does not make a clear distinction between Google and Sidewalk Labs, especially when it comes to the company’s lobbying. He said that the deal looks more like a “plan cooked up with American lobbyists to benefit Google,” adding that Champagne has failed to show the leadership needed to alleviate growing concerns about the project. Betakit | If done properly, smart cities hold promise in improving quality of life, says information and privacy commissioner | Sidewalk Labs is set to transform Toronto. 
It’s starting with its own office | The CRA is failing to notify Canadians that their banking records are being shared with the IRS



16 – 23 January 2019


US – New York City Bill Requires Biometric Use Transparency

Int. No. 170, a local law requiring businesses to notify customers of the use of biometric identifier technology was introduced in the New York City Council. Commercial establishments that collect, retain, convert, store or share biometrics must disclose such use by placing a clear and conspicuous sign in all entrances (in a form and manner prescribed by the Commissioner), and post online the period of retention, the kind of biometrics collected, any privacy policy governing the use of biometrics, and whether biometrics are shared with third parties. The law takes effect 180 days after it becomes law. [Int. No. 170 – A Local Law Requiring Businesses to Notify Customer of the use of Biometric Identifier Technology – New York City Council]

Big Data / Data Analytics / Artificial Intelligence

EU – EU Working Group Urges Organizations to Assess AI Systems

The EU Commission’s working group on artificial intelligence issued draft ethics guidelines for processing personal data through AI. To comply with the GDPR, businesses must be mindful of having a legal basis for processing personal data; while existing law may warrant application of some AI systems (e.g., for money laundering and terrorist financing), the default assumption should be that consent has not been given to be identified (including re-identification from anonymised data). [EC – Draft Ethics Guidelines For Trustworthy AI]


CA – OIPC AB Asks Government to Protect Citizens’ Data from Political Parties

Alberta Information and Privacy Commissioner Jill Clayton has asked the provincial government to consider alterations to its Personal Information Protection Act to safeguard citizens’ data from political parties. Clayton said even the most basic requirements for political parties’ use of data would be a step in the right direction. Service Alberta Spokesperson Annalise Klingbeil said in an email the government has looked at Clayton’s request. “You would have a right to go to a private sector company and say, ‘What do you have about me, and where did you get it and stop using it and stop disclosing it or, more importantly, safeguard it, and tell me if there is a breach.’ But none of that applies (with political parties),” Clayton said in an interview with Postmedia. [Edmonton Journal]

CA – OIPC NL: More Work on Surveillance Needed by Newfoundland Schools

The OIPC Newfoundland and Labrador reviewed the use of video surveillance in provincial schools and on school buses, pursuant to the Access to Information and Protection of Privacy Act. An audit by the OIPC found that the school district did not demonstrate an overarching authority to collect PI via video surveillance, and that some schools have inadequate signage or none at all; recommendations include completing a template for exterior signage (currently in development) and, in the absence of general authority, discontinuing the installation of new cameras in existing schools. [OIPC NFLD – The Use of Video Surveillance in Schools and on School Buses]

CA – Canadian Association Explores GDPR Impacts on Businesses

The Canadian Chamber of Commerce issued a report on how nations are reacting to the use and abuse of personal information through privacy legislation. Canadian businesses identified several unintended consequences of the GDPR: added job complexity, the cost of ensuring compliance, the risk of losing valuable data through compulsory retention and deletion, data localization restrictions, and a reduced ability to compete; changes to Canada’s regulatory framework must consider both privacy and economic factors. [A Data Deficit – The Risk of Getting it Wrong – Canadian Chamber of Commerce]

CA — Ontario Court Finds Online Post by Law Enforcement Defamatory

The Court considered a request for dismissal by police services and Crime Stoppers of an individual’s claim for defamation and negligence. The police services and Crime Stoppers posted an image of an individual along with a description which suggested that she was guilty of a purse snatching; generally words suggesting that a person is guilty of a criminal act are defamatory as they tend to lower the person’s reputation, and the message was communicated to other persons (on Crime Stoppers’ webpage). [Gabrielle Roy v. Ottawa Capital Area Crime Stoppers et al. – 2018 ONSC 4207 CanLII – Ontario Superior Court of Justice]

CA – OIPC AB Finds Increased Likelihood of Harm

The Office of the Information and Privacy Commissioner of Alberta was notified by a human resources service company of unauthorized access to personal information, pursuant to the Personal Information Protection Act. A company was subject to a targeted email phishing attack that resulted in email accounts of two employees being accessed and used by a third party to send out further phishing emails; there is an increased risk of harm as identity information can be used for harms of identity theft and fraud, and employment information can be used to cause hurt, humiliation and embarrassment. [OIPC AB – Breach Notification Decision – P2018-ND-156 – Morneau Shepell Ltd]


WW – Majority of Facebook Users Unaware of Data Algorithms for Targeted Ads

A survey conducted by the Pew Research Center found the majority of Facebook users do not know the platform uses algorithms to collect their information for targeted ads. Of the 963 U.S. Facebook users polled, 74% said they were unaware Facebook tracked a list of their interests and traits for their “Your ad preferences” page. When asked how they felt about the ad preferences page, 51% said they were not comfortable with the practice. While 59% of users said the findings on the ad preference page were accurate, 27% said the listings were not a proper representation of their interests. [Full Story]


US – Audit Found Illegal Data Sharing at Utah Driver License Division

A state audit found that the Utah Driver License Division is illegally sharing personal data with five government agencies. The data is reported to include Social Security numbers, birthdates, physical characteristics, addresses and license numbers. The audit notes that state law bans the sharing of personally identifiable information “except in the interest of public safety or as specifically authorized in statute.” In response, the Driver License Division claimed the law can be interpreted differently and plans to ask for clarification from the Legislature. [U.S. News & World Report]


CA – CRTC Addresses CASL Consent Exemptions

The Canadian Radio-television and Telecommunications Commission updated its FAQs for compliance with Canada’s Anti-Spam Legislation (CASL). CASL does not apply to commercial electronic messages sent to limited access, secure confidential accounts (where communications are only one way and sent by the entity who provided the account), market research or surveys with no commercial content (do not engage in commercial activity under the guise of a survey) or employment recruitment messages (unless there is an option to subscribe to notification of future opportunities). [CRTC – FAQs about CASL]

Electronic Records

US – National Institutes of Health Expands Data-Collection Efforts

The U.S. Department of Health and Human Services’ National Institutes of Health announced the All of Us Research Program has launched the Fitbit Bring-Your-Own-Device project, which will enable participants to share health information with researchers to help aid discovery and broaden the program’s data-collection efforts. “Collecting real-world, real-time data through digital technologies will become a fundamental part of the program,” All of Us Research Program Director Eric Dishman said. “This information, in combination with many other data types, will give us an unprecedented ability to better understand the impact of lifestyle and environment on health outcomes and, ultimately, develop better strategies for keeping people healthy in a very precise, individualized way.” [AllOfUs]

EU Developments

EU – Political Parties Face Fines for Data Misuse Under New EU Rules

The European Parliament and European Union member states have agreed to a new set of rules to curb the misuse of personal data to influence elections. Any political party found to have used personal information to influence voter behavior can now be fined under the new law. All EU institutions have approved the law, but Parliament and Council must still formally adopt the final text. “We expect European political parties to fully respect the rules, so that Europeans can cast their vote being fully and fairly informed during the campaign,” EU Justice Commissioner Věra Jourová said on Twitter. [Politico]

UK – UK Passes Regulations Requiring Retention of Communications Data

The UK passed The Data Retention and Acquisition Regulations 2018, related to information about communications data. The Regulations amend the current retention regime to conform to EU law, requiring telecoms and postal operators to retain and disclose communications data for purposes of national security, public safety, or to prevent death or injury; an authorisation may relate to data not yet in existence, or used by another telecom in relation to the same telecommunications system. [The Data Retention and Acquisition Regulations 2018 – 2018 No. 1123 – UK Government]

EU – German DPA Advises on Controller Restrictions for DPOs

The Data Protection Authority in the German state of Baden-Württemberg has issued guidance on data protection officers, pursuant to the GDPR. Internally appointed DPOs should not hold other positions that pose conflicts of interest (CEO, IT head, HR director, authorised signatory), should not be instructed to reach specific decisions about processing activities, and cannot be dismissed or disciplined for performing their designated duties; they should focus on the riskier processing activities and can be consulted on DPIA issues (which methodology to follow, whether the DPIA has been carried out correctly). [DPA Baden-Wurttemberg – DPO Practice Guide (in German)]

EU – DPA Belgium Publishes Form for Breach Reporting

The Data Protection Authority in Belgium has published its process for reporting data breaches, pursuant to Article 33 of the GDPR. A comprehensive form has been provided that can be downloaded, electronically completed, and submitted via the DPA’s web portal; form contents include breach details (nature of the breached data, processing affected, number of affected individuals), preventive measures that will be taken (e.g., remote wipe, hashing, password change), and details of the assessment used to determine risks to affected individuals’ rights and freedoms. [DPA Belgium – Notification Form of a Data Breach: Electronic Portal | Form Instructions]


CA – Nova Scotia Transportation Department Will Not Fulfill Tully’s Request on Ferry Operator’s Fees

In a letter to the Nova Scotia Opposition Progressive Conservatives, the Transportation Department said it will not fulfill a request made by Privacy Commissioner Catherine Tully regarding the management fees and bonuses paid to the private operator of a ferry that travels from Yarmouth to Maine, the Vancouver Sun reports. Deputy Minister Paul LaFleche said the department has no plans to honor Tully’s inquiry. “There is a legitimate public interest in protecting the confidential commercial information of third-party businesses,” LaFleche wrote. Tully said she was not surprised by the response. “We’ve seen a pattern as I reported in my annual report over the past year or two, where the government has been quite frequently rejecting recommendations for further disclosure that I make,” Tully said. [Vancouver Sun]

CA – Coalition Calls for Update to Nova Scotia Laws in Response to FOI Breach

The Right to Know Coalition has released a report on the Nova Scotia freedom-of-information data breach. After more information about the breach was released by the privacy commissioner of Nova Scotia, Right to Know Coalition President Michael Karanicolas said the breach had a wider scope than initially reported. Karanicolas added the province’s privacy laws need to be updated. “We have [25-year-old] laws that date from just a couple years after the commercial internet was established and have not been updated since then,” Karanicolas said. “We need a [21st-century] approach to the problem.” [Halifax Today]


US – Researchers Propose Patient-Driven Genetic Data Sharing

Forbes reports on an effort to change the way companies that sell genetic tests and store patient data react to patients’ requests for their personal data. Citing instances when patients had difficulty collecting their own genetic data from companies, the article points to a study addressing a frustration among cancer researchers: despite the abundance of voluntarily shared data, most of it is unusable because of how data sharing is enabled across platforms. In the study, researchers troubled by the difficulty of finding a comprehensive data set have proposed a “patient-driven cancer genome collective to directly address this need and empower data liberation and donation to advance cancer research and patient empowerment.” [Forbes]

Health / Medical

CA – Disclosure of PHI Necessary to Defend Claim

The Ontario Human Rights Tribunal considered whether the transit company can obtain the personal health information of an employee in a discrimination case. The Human Rights Tribunal ordered the disclosure of an employee’s PHI to her employer for the purpose of responding to, and defending against, a discrimination claim regarding the employee’s disability; the PHI can be disclosed to potential witnesses who may be called upon to give evidence (they must be reminded to keep the information in strict confidence), and only PHI necessary to facilitate a response can be used. [Cameron v. Toronto Transit Commission – 2018 HRTO 1862 CanLII – Human Rights Tribunal of Ontario]

CA – Woman Says Sexual Trauma History Leaked in Nova Scotia FOI Data Breach

A woman who was a victim of the Nova Scotia government’s freedom-of-information website data breach has spoken out after she discovered information on the sexual trauma she suffered as a child was leaked in the incident. The woman said she originally received nearly 300 pages of documents in February 2018 about her ordeals, about two months before she was informed her data was compromised in the FOI website breach. “The nature of my request had already been about sexual trauma, so for them to post that for the world to see was further victimization,” the woman said. McInnes Cooper Partner David Fraser said Canadian courts have previously awarded financial compensation for data breach victims who have been affected by psychological harm. [CBC News]

CA – Privacy Concerns Surround Streamed Video of Newborn Seized From Winnipeg Hospital

A video was posted of child welfare officials who seized a newborn child from Winnipeg’s St. Boniface Hospital. The uncle of the mother streamed the incident as Winnipeg Child and Family Services officials took the child. Fearless R2W Coordinator Mary Burton said the video could expose the family to judgment from viewers who do not know the circumstances behind the moment; however, she notes it can also shine a light on these types of seizures. Manitoba’s Child and Family Services Act prohibits the publication of any identifying information about individuals involved with Child and Family Services. [CBC News]

CA – OIPC PEI Releases Report on Health PEI breach

Prince Edward Island Information and Privacy Commissioner Karen Rose released her report on Health PEI’s response to the discovery of a former employee who illicitly accessed the records of 353 patients. Rose noted Health PEI does carry out random audits of employees to determine how they access the system; however, she deemed the audits inadequate after it was revealed the employee continued their inappropriate behavior for three years. “There is room for improvement in their auditing process, for better detection of snooping,” Rose wrote in the report. “I recommend that Health PEI conduct a careful analysis of its auditing process.” [CBC News]

US – New York Enforces Prescription Privacy

Assembly Bill 73, amending the Public Health Law in relation to prescription privacy, has been filed for introduction in the New York Legislature. If passed, healthcare providers (including pharmacists, insurers, or drug manufacturers) would be prohibited from disclosing, selling, transferring, providing or using any patient’s individually identifying information to any entity for marketing purposes; exceptions include payment or reimbursement for health care services (e.g., medical necessity or utilization review) and health research purposes (conducting clinical trials to review the effects of prescribing services). The bill would take effect within 180 days of passage. [AB 73 – An Act to Amend the Public Health Law in Relation to Prescription Privacy]

Horror Stories

US – Data Breach Exposes 7 Years of FBI Information

Three terabytes of unprotected data from the Oklahoma Securities Commission were discovered by a researcher with cybersecurity firm UpGuard. The data included millions of files, much of which contained sensitive U.S. Federal Bureau of Investigation information dating back seven years, as well as emails dating back 17 years and personally identifiable information. A spokesperson for the FBI said, “Adhering to Department of Justice policy, the FBI neither confirms nor denies any investigation.” Charles Kaiser, a spokesperson at the commission, said, “This matter is under investigation and the department has no further comment at this time.” Meanwhile, Mashable reports hackers recently exposed more than 87 gigabytes of passwords and email addresses. [Forbes]


US – House Republicans Press Telecoms for Geolocation Answers

Following the discovery of telecoms’ sale of geolocation data, House Energy and Commerce Committee ranking member Greg Walden, R-Ore., along with three other Republican committee members, has sent letters to T-Mobile, AT&T, Sprint and Verizon, asking the companies to explain the privacy policies concerning location-based information and services. The lawmakers also sent letters to Zumigo and MicroBilt, asking that the companies identify all commercial relationships with both U.S. and foreign wireless carriers. In a statement, the lawmakers said, “We are deeply troubled because it is not the first time we have received reports and information about the sharing of mobile users’ location information involving a number of parties who may have misused personally identifiable information.” [Bloomberg Law]

Online Privacy

US – NYT Changes Its Ad Game Following GDPR

The New York Times has made changes to its advertising practices to comply with the EU General Data Protection Regulation. After the GDPR went into effect, the Times decided to prohibit the purchases of open-exchange advertisements on its European pages, as well as any form of behavioral targeting. New York Times International Senior Vice President for Global Advertising Jean-Christophe Demarta said the organization now places its efforts on contextual and geographical targeting and privacy marketplace deals. “The fact that we are no longer offering behavioral targeting options in Europe does not seem to be in the way of what advertisers want to do with us,” Demarta said. “We have not been impacted from a revenue standpoint, and, on the contrary, our digital advertising business continues to grow nicely.” [Digiday]

Privacy (US)

US – Advocacy Groups Release Proposal for New Data Protection Agency

A coalition of advocacy groups has released a proposal to create a federal data protection agency to regulate businesses’ use of personal data. The new agency would supplant the Federal Trade Commission in terms of enforcement capabilities. “Privacy advocates are fed up with the FTC and with Washington failing to rein in the immense power the big data giants hold,” Center for Digital Democracy Executive Director Jeffrey Chester said. The groups’ proposal clashes with the Information Technology and Innovation Foundation’s plan to give the FTC more enforcement authority, as well as create a single U.S. privacy law. [ABC News]

US – FTC and FCC Called on to Enforce Privacy Regulation

Public Knowledge has called on the U.S. Federal Communications Commission and the U.S. Federal Trade Commission to enforce customer proprietary network information regulations following a report that found a California VoIP provider left millions of text messages and call records on an unsecured database for months. The article states that Public Knowledge Senior Vice President Harold Feld called for the FTC and FCC “to get off the privacy sidelines and into the game.” Due to the government shutdown, a spokesperson for the FCC was unable to comment, saying it was “beyond the scope of allowable activities.” [Multichannel News]

US – Opinion: DeID, Pseudonymization, Aggregation, and the CCPA

The California Consumer Privacy Act is notorious for the haste with which it was drafted. Many provisions of the statute require clarification, and the attorney general’s office is holding a series of public forums before issuing clarifying regulations. Among the concepts not well defined by the CCPA are deidentification, pseudonymization and aggregation. In this piece for Privacy Tracker, IAPP Westin Fellow Mitchell Noordyke takes a look at some of the challenges the CCPA creates with its imprecise language regarding these topics and points out some of the limited benefits the CCPA offers a business for each type of data treatment technique. [IAPP.org | Privacy, trust and My Health Record]
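Two of the techniques the CCPA leaves underdefined can be illustrated with a minimal, hypothetical sketch (the statute prescribes no particular method; the key, field names, and sample records below are invented): a keyed hash pseudonymizes a direct identifier but stays re-linkable by whoever holds the key, while aggregation discards individual records entirely.

```python
import hashlib
import hmac
from collections import Counter

SECRET_KEY = b"rotate-me-regularly"  # hypothetical key, stored apart from the data

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Pseudonymous data can still be linked across records by the key
    holder, which is why it is treated differently from de-identified
    or aggregate data.
    """
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

def aggregate(records: list[dict]) -> dict:
    """Collapse individual records into group-level counts."""
    return dict(Counter(r["zip"] for r in records))

records = [{"email": "a@example.com", "zip": "94103"},
           {"email": "b@example.com", "zip": "94103"},
           {"email": "c@example.com", "zip": "10001"}]
pseudonymous = [{**r, "email": pseudonymize(r["email"])} for r in records]
aggregated = aggregate(records)  # {"94103": 2, "10001": 1}
```

The design point the CCPA’s imprecision obscures is visible here: the pseudonymous records still map one-to-one to people, while the aggregate counts do not.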

US – Google Asks Judge to Dismiss COPPA Lawsuit

Google asked a federal judge to dismiss a lawsuit that accuses the company of violating the federal Children’s Online Privacy Protection Act by including apps developed by Tiny Lab in Google Play’s Designed for Families program. According to papers filed last week, the company argues it is not responsible for any violation of the law, adding, “Google Play is merely a platform for the sale or distribution of apps and the Federal Trade Commission — the agency charged with rulemaking and enforcement under COPPA — has made clear that COPPA does not apply to such platforms.” [MediaPost]

US – SCOTUS Case to Look at Art. III Standing, Identifiable Information, Concrete Harm

A case currently making its way through the Supreme Court’s docket may have far-reaching implications for the future of privacy litigation. While the case, Frank v. Gaos, concerns cy pres class-action settlements and their appropriateness, another issue has captured the court’s attention: Article III standing, and, specifically, whether the plaintiffs in the case pleaded sufficient concrete harm. Mitchell Noordyke writes, “Buried in the nuanced discussions about current standing quandaries … is an important question: Are search terms plus an IP address individually identifying and is their mere disclosure a concrete harm? If the answer is yes, the implications for privacy litigation are immense.” [Privacy Tracker]


US – CEOs Identify Cybersecurity as Top Concern

A new survey from The Conference Board found that U.S. CEOs rank cybersecurity as the biggest external concern for 2019, followed by new competitors and the risk of a recession, Fortune reports. Globally, cybersecurity ranked lower, falling to the sixth largest concern in Europe, seventh in Latin America, eighth in Japan, and 10th in China. Despite cybersecurity ranking high among U.S. CEOs, compliance with privacy regulation only ranked 12th. Worldwide, fears of a recession ranked much higher among CEOs. [Full Story]

US – FINRA Identifies Effective Practices for Mobile Devices

The Financial Industry Regulatory Authority (FINRA) has released 2018 Selected Cybersecurity Practices for Mobile Devices. Effective practices are also available on the following topics:

  • branch control;
  • phishing;
  • insider threats; and
  • penetration testing.

Broker-dealer firms must develop policies and procedures addressing employees’ obligations to protect customer and firm information, along with bring-your-own-device standards for the use of personal devices for firm business. To protect customers, firms must monitor mobile application markets and the dark web for malicious applications that impersonate the firm’s mobile application, and require multi-factor authentication for access to customer accounts and trading applications. [FINRA – Selected Cybersecurity Practices 2018 – Mobile Devices]


US – Advocacy Groups Ask Companies to Stop Sales of Surveillance Tech to Government

The American Civil Liberties Union and more than 85 other advocacy groups have signed letters to Amazon, Google and Microsoft executives asking the tech companies to stop selling surveillance technology to the government. The letters were addressed to Microsoft CEO Satya Nadella and President Brad Smith, Google CEO Sundar Pichai and Senior Vice President of Global Affairs Kent Walker, and Amazon Founder and CEO Jeff Bezos and General Counsel and Senior Vice President David Zapolsky. The groups cite the companies’ recent involvement with surveillance technology, such as Smith’s call for governments and tech organizations to address facial recognition and Amazon’s commitments to its Rekognition tool. [ACLU]





8-15 January 2019


US – NIST Test Shows Facial-Recognition Accuracy ‘Massively’ Improving

Late last year, results from the National Institute of Standards and Technology’s facial-recognition accuracy test demonstrated a marked improvement across the industry. The evaluations “not only provide valuable benchmarks for technologists hoping to gauge the accuracy of their facial recognition algorithms, but they could also inform contract decisions by policymakers and tech vendor evaluators,” AI ethics journalist Kate Kaye writes. Kaye looks into NIST’s facial-recognition accuracy test, talks with the researcher who led the testing, and explores what this means for the facial-recognition industry and companies looking to update or employ such technology. [IAPP.org]

US – Majority Supports Facial Recognition for Public Safety, Airport Screening

A survey conducted by the Center for Data Innovation found only one in four Americans believes the government should strictly limit the use of facial-recognition technology, and fewer than one in five supports such limits if they would come at the expense of public safety. Similarly, 54% said they would disagree with limits on government use of facial recognition during airport screening. CDI Director Daniel Castro said, “Americans have made it clear they do not want regulations that limit the use of facial recognition if it comes at the cost of public safety.” A 2018 study from the Brookings Institution found half of Americans favored limits on the use of facial recognition by law enforcement. [NextGov]

Big Data / Data Analytics / Artificial Intelligence

WW – Study: AI to Become ‘Less Artificial and More Intelligent’

An article for the Harvard Business Review looks at the expected trend of artificial intelligence capabilities over the next several years, noting that for those looking at how to invest, it is important to realize that AI will become “less artificial and more intelligent.” The authors write that this approach will mimic human intelligence and require less big data. They write that the transition to a “top-down reasoning” approach that helps enable a broad application of AI will aid in “creating opportunities for early adopters even in businesses and activities to which it previously seemed unsuited.” [HBR]


CA – OIPC Manitoba Releases Breach Notification Checklist for Public Bodies

The Manitoba Ombudsman has released a privacy breach notification letter checklist and notification form for public bodies and trustees, pursuant to the Freedom of Information and Protection of Privacy Act (FIPPA) and the Personal Health Information Act (PHIA). The breach report form must include mitigation measures taken (e.g., locks changed, computers shut down), identify the type of harm that may result (e.g., identity theft may result from a breach of SIN, PHIN or debit card details), and confirm whether representatives and authorities were notified (e.g., privacy officer, police or IT service providers), along with reasons for not notifying them; however, no personally identifiable information may be included in the form in any manner. [Manitoba Ombudsman – Privacy Breach Notification Letter Checklist and Form: Privacy Breach Notification Letter Checklist | Privacy Breach Notification Form | See also: Manitoba Ombudsman Releases New Privacy Breach Resources for Public Bodies and Trustees: Announcement | Key Steps]


CA – Bell Canada Asks Users to Share Data for Targeted Ads

Bell Canada, the country’s largest telecommunications group, has decided to begin asking customers for permission to track their activity in an effort to deliver tailored marketing. Teresa Scassa, who teaches law at the University of Ottawa and holds the Canada Research Chair in Information Law and Policy, said that while the company did a good job in explaining its intent, those who opt in will be providing valuable information with “little to no compensation for increased risks to their privacy and security.” Scassa also raised privacy concerns, highlighting that the data could become vulnerable and personal advertising could create “friction” for families who have their search history targeted. [The Canadian Press]


WW – Impact of Australia’s Anti-Encryption Law on Privacy Worldwide

Amber Welch reports on Australia’s recently passed Telecommunications and Other Legislation Amendment (Assistance and Access) Bill 2018 and its impact on privacy worldwide. Highlighting the law’s requirements for Australian companies to build backdoors to encrypted data and communications at the request of the government, Welch notes, “European and multinational organizations would be wise to at least identify vendors headquartered or operating in Australia and watch for any further news about the effects of this law.” [The Privacy Advisor]

EU Developments

EU – Advocate-General Urges CJEU to Limit Scope of ‘Right To Be Forgotten’

In his non-binding opinion to the Court of Justice of the European Union, Advocate-General Maciej Szpunar said that in the legal case between Google and the French data protection authority, the CNIL, the court should rule in favor of limiting the scope of the so-called “right to be forgotten.” Szpunar called the company’s “geo-blocking” offer sufficient and warned against complete de-referencing from the search engine. He added that the right to be forgotten must be balanced with other rights, including “legitimate public interest in accessing the information sought.” Thomas Hughes, the executive director of anti-censorship organization Article 19, said he hoped the court would follow Szpunar’s opinion, adding, “The court must limit the scope of the ‘right to be forgotten’ in order to protect global freedom of expression and prevent Europe from setting a precedent for censorship that could be exploited by other countries.” [Full Story | See also: Case-507/17 – Google v. CNIL – Opinion of the EU Advocate General – EU Court of Justice Press Release | Opinion (Available in French)]

EU – EU’s Regulatory Spotlight Focuses on Data Brokers

There is increased scrutiny of the data broker industry by European data protection authorities. Mathias Moulin, of France’s DPA, the CNIL, said, “They are all processing personal data, there is absolutely no doubt about that. They all try to say that it’s anonymous to lower the pressure from the public, but that’s not true. They know that and we know that.” U.K. Information Commissioner Elizabeth Denham said, “[We are] concerned about whether or not their practices are compliant with the laws. … We are looking at how they conduct their business and their general compliance with [the EU General Data Protection Regulation] … certainly there is a dynamic tension between the way the businesses are conducted and the principles in the GDPR.” In response, the report states, many data brokers are making organizational changes, including limiting the number of companies from which they receive data. [Financial Times]

EU – German Regulator Plans to Ban Facebook from Third-Party Data Collection

Germany’s Federal Cartel Office plans to prohibit Facebook from the collection of user data from third parties. The decision would block Facebook from any data it would obtain from “Like” buttons found on outside websites. Data would no longer be shared between WhatsApp and Instagram should the agency follow through with the ban. Concerns have manifested in Germany as to whether users understood Facebook’s data-collection practices when they signed up for the tech company’s platforms. Germany had already started its investigation into Facebook’s relationship with third parties in March 2016, before the Cambridge Analytica situation came to light. [The Washington Post]

EU – Report: Processing Scientific Health Data Under the GDPR

A workshop report is now available for “Can GDPR Work for Health Scientific Research?” Co-hosted by the Future of Privacy Forum, European Federation of Pharmaceutical Industries and Associations, and Centre for Information Policy Leadership in Brussels Oct. 22, the workshop discussed the challenges of processing personal health data in the research sector under the EU General Data Protection Regulation. In the report, participants noted a “lack of legal and regulatory harmonization is preventing the EU from taking the opportunities available from digital health” and a need for an opinion from the European Data Protection Board to help clarify the situation. [FPF]

US – GDPR Compliance May Prove More Difficult Than Under HIPAA

The recent EU General Data Protection Regulation enforcement action against a Portuguese hospital shows that complying with the GDPR might be more complicated than complying with the Health Insurance Portability and Accountability Act. Davis Wright Tremaine Privacy Attorney Adam Greene said the case serves to demonstrate “significant overlap between HIPAA and GDPR,” but noted that “it also shows that GDPR enforcers may expect more stringent privacy and security controls than are typically practiced under HIPAA.” Editor’s Note: Dr. Ana Menezes Monteiro recently provided an analysis of Portugal’s enforcement action for The Privacy Advisor. [BankInfoSecurity]

EU – EDPS Issues Best Practices for Breach Notification for EU Institutions

The European Data Protection Supervisor has issued guidance for EU institutions and bodies on personal data breach notification, pursuant to Regulation (EU) 2018/1725, the data protection regulation for EU institutions that parallels the GDPR. Institutions should promptly inform the DPO of personal data breaches (and ensure the DPO is involved in the management and notification process), assess breach severity on a case-by-case basis (a compromise of sensitive data would be high risk, while first and last name alone would be low risk), and keep track of all facts relating to the breach to demonstrate compliance (effects, remedial actions); notification to the EDPS can be provided in phases (with reasons given for any delay in full reporting). [EDPS – Guidelines on Personal Data Breach Notification for EU Institutions and Bodies]


IN – Lawmaker Introduces DNA Bill Despite Privacy Concerns

India’s Science and Technology Minister Harsh Vardhan introduced the DNA Technology (Use and Application) Regulation Bill, 2018, to the Lower House. The move came despite a demand from Congress that the bill be sent to the Standing Committee due to privacy concerns. Congress Leader Shashi Tharoor said the bill fails to establish “procedural safeguard,” adding, “It will enable creation of a big brother state. It is not a panacea … Enacting this law before bringing in a robust data protection law will have a bearing on the right to privacy.” [BloombergQuint]

Health / Medical

WW – Wearable Device Use Expected to Increase in Health Care Industry

As wearable devices become more popular among consumers, questions surrounding privacy and security will need to be answered. A recent report from Juniper Research shows that within five years, approximately $20 billion will be spent annually on wearable devices, health trackers and remote patient monitoring devices, and 5 million people are forecasted to be monitored by health care providers. The tech industry is taking notice, increasing its growth in wearables by 8.5 percent in the past year. “It is vital that patients are made aware of how their personal data will be used. If not, making wearables ‘must have’ to provide personalised care or receive medical insurance risks a backlash from patients and heightened regulatory scrutiny; stalling the effectiveness of remote monitoring,” Research author Michael Larner said. [ZDNet]

US – New York Bill Prohibits Processing Patient Information Without Consent

Assembly Bill 230, prohibiting emergency service providers from selling patient health information without written consent, has been filed for introduction in the New York State Legislature. If passed, the bill would prohibit emergency service providers from disclosing, selling, transferring, exchanging, providing or using any individually identifying patient information for marketing purposes; exceptions include payment or reimbursement for health care services (e.g., coverage or medical necessity), legally consistent purposes (e.g., training, recruitment of staff), and purposes or transactions with specific patient consent. The bill would take effect 180 days after passage. [AB 230 – An Act Prohibiting Emergency Service Providers from Selling Patient Health Information Without Written Consent – New York State Legislature]

WW – Fewer Privacy Rules, More Data May Give China Advantage in Health Tech

Chinese-based health care startups may have various advantages over their U.S. counterparts “because patient populations are larger and the burden of privacy regulations smaller.” Infervision Chief Scientist Yufeng Deng said, “In the U.S., particularly for big academic hospitals, you have to go through so many processes and it can take a really long time to access data.” Though, according to the report, “Chinese institutions do take steps to protect patient privacy, such as anonymizing records used in research … they are not bound by as many rules and external regulatory processes.” [Wired]

Horror Stories

WW – Google Notifies Users Over 2018 Data Breach

Nearly a month after announcing its 2018 data breach, Google has emailed account users informing them of the two potential breaches. The move has left critics wondering why there was a delay in informing customers, particularly since the EU General Data Protection Regulation requires companies to notify users upon discovery of a data breach. The email sent to customers indicated the breach occurred between Nov. 7 and Nov. 13, adding that the company has removed Google+ “in reaction to the incidents.” [MediaPost]

CN – Unsecured Database Exposed 202M Chinese CVs

An unsecured MongoDB database server, containing highly detailed CVs for more than 202 million Chinese users, was recently discovered by HackenProof Director of Cyber Risk Research Bob Diachenko. According to Diachenko, the data appeared to have been scraped from bj.58.com, a popular Chinese job portal, though he did not rule out that data from other portals was included as well. Unsure of how to report the discovery, Diachenko turned to Twitter to identify the owner of the database. The database has since been secured and the associated GitHub repository removed. [ZDNet]
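Both this exposure and the SenseNets leak reported above involve MongoDB servers reachable from the internet without authentication. The article does not describe the leaked server's actual configuration, but as an illustrative sketch only, a `mongod.conf` hardened against this class of exposure might look like the following (the option names are standard MongoDB settings; the specific values are assumptions for the example):

```yaml
# /etc/mongod.conf -- illustrative hardening sketch, not the leaked server's config
net:
  # Listen only on localhost (or a private interface), not 0.0.0.0,
  # so the database is not directly internet-facing.
  bindIp: 127.0.0.1
  port: 27017

security:
  # Require clients to authenticate; without this, anyone who can
  # reach the port can read and write every collection.
  authorization: enabled
```

With `authorization` enabled, an administrative user must first be created on the `admin` database before remote clients can connect; exposures like those above are typically found by scanners looking for port 27017 answering queries without credentials.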

EU – Austrian Post Office Sold PII of Nearly 3M Customers

The Austrian post office sold personal information belonging to nearly 3 million customers to companies for targeted marketing. The information included people’s names, addresses, ages, genders and, in many cases, assumptions on users’ political affiliation. In a statement, the Austrian Post said, “The specified characteristics are collated in this way and are allowed to be used exclusively for marketing purposes. The use of such data is strictly limited to this purpose only,” but others argue such practices violate the EU General Data Protection Regulation. [Phys.org]

US – Unsecured ElasticSearch Server Leaked Location Data for 11K Buses

Real-time GPS coordinates for more than 11,000 Indian buses were left exposed on the internet for weeks. Security Researcher Justin Paine reported that the data breach resulted from an unprotected ElasticSearch server that contained aggregated data from 27 state-owned transportation agencies. Paine confirmed the server had been accessible since at least 30 Nov. 2018, but said it was unclear how long the server existed prior to that date and noted that he could not determine who owned the data. [ZDNet]

CA – Resort Municipality of Whistler Confirms Data Breach

The Resort Municipality of Whistler announced that its website suffered a data breach Dec. 28 that may have impacted people’s personal information. The municipality said staff took “immediate action” to “identify, contain and resolve the issue.” According to the municipality, the attack appears to be motivated by hackers looking to redirect website visitors to other websites. Those who may have been impacted by the breach are being alerted by phone and email. [Global News]

EU – 20-Year-Old Admits to Recent German Data Breach

According to a statement from federal prosecutors, a 20-year-old German man has claimed responsibility for the recent data breach impacting many German politicians, journalists and celebrities. A statement from the Bundeskriminalamt clarifies that the man is believed to have acted alone and was motivated by his “anger over public statements made by the politicians, journalists and public figures concerned.” It also noted that the leaked data was stored on hosting services, with links published via Twitter and a hacked YouTube account. [ZDNet]

Law Enforcement

US – Court: Police Cannot Force Suspects to Unlock Phones Via Biometrics

A judge from the U.S. District Court for the Northern District of California ruled law enforcement does not have the right to force suspects to unlock their phones through biometric identifiers. Courts had previously determined biometrics were not considered “testimonial” and therefore were not granted Fifth Amendment protection. Magistrate Judge Kandis Westmore wrote in her ruling that fingerprints and face scans are not the same as “physical evidence” when they are needed to unlock a device for an investigation. “If a person cannot be compelled to provide a passcode because it is a testimonial communication, a person cannot be compelled to provide one’s finger, thumb, iris, face, or other biometric feature to unlock that same device,” Westmore wrote. [Forbes | In the Matter of a Search of a Residence in Oakland California – Order Denying Application for a Search Warrant – US District Court for the Northern District of California]


US – Telecoms Plan to Scale Back Location Aggregator Services

AT&T, Sprint and T-Mobile announced they have all scaled back their data-access policies after it was discovered the companies sold location data to inappropriate parties. The decisions come after Motherboard initially reported bounty hunters purchased location data through a service powered by information that originated from the telecoms. An AT&T spokesman said the company has “decided to eliminate all location aggregation services — even those with clear consumer benefits.” T-Mobile CEO John Legere said on Twitter his company will also shut down its location aggregators. Sprint said in a statement to The Verge it does not “knowingly share personally identifiable geo-location information,” unless it is in response to a legal request. [Motherboard]

US – Senators Call for Location Data Investigation

Several Democratic senators are calling for a government investigation into the selling of users’ geolocation data. The calls come after a report by Motherboard on the sale of geolocation data. Sen. Kamala Harris, D-Calif., said, “The American people have an absolute right to the privacy of their data, which is why I’m extraordinarily troubled by reports of this system of repackaging and reselling location data to unregulated third party services for potentially nefarious purposes.” An op-ed for The Washington Post argues that Motherboard’s report is “one more piece of evidence in the case for federal privacy legislation that overhauls today’s ‘notice and consent’ regime.” Wired reports that while telecom companies claim they are “scaling back their relationships with third-party brokers,” evidence is clearly against them and asks how the U.S. Federal Communications Commission will handle the issue. [The Washington Post]

US – Investigation Finds User Location Data is for Sale on The Cheap

Some major telecom companies sell access to customers’ location data. After paying a bounty hunter $300 and providing a phone number, reporter Joseph Cox discovered that he could be tracked to within a few hundred meters. The investigation found that, using data from aggregator Zumigo, credit-reporting company MicroBilt sells access to phone geolocation data “with minimal oversight” for about $12.95. Since the report was published, the article states, MicroBilt has removed documents related to its mobile phone location product from its website. [Motherboard]

US – Is LA City–Weather App Lawsuit A Sign of Things to Come?

Recently, the office of the Los Angeles City Attorney filed a complaint against the company behind the Weather Channel mobile application. The complaint cites California’s Unfair Competition Law and alleges that TWC’s use of the application to collect “users’ private, personal geolocation data … throughout the day and night” is “fraudulent and deceptive” and “unfair.” IAPP Westin Research Fellow Mitchell Noordyke writes about the case for Privacy Tracker, saying, “The complaint is notable because every state has its own version of unfair and deceptive acts and practices statute, and it creates a blueprint for authorities in other states to follow.” [IAPP.org | The People of the State of California v. TWC Product and Technology LLC – Complaint for Injunctive Relief and Civil Penalties – Superior Court of the State of California County of Los Angeles]

Privacy (US)

US – ITIF Releases Proposal for Federal US Privacy Law

The Information Technology and Innovation Foundation has called for a federal U.S. privacy law in its latest report. The ITIF calls for a single privacy rule to improve requirements around transparency, give more enforcement authority to the U.S. Federal Trade Commission, and preempt both existing state laws and federal data privacy laws, such as the Health Insurance Portability and Accountability Act and Gramm-Leach-Bliley Act. Democratic lawmakers have spoken out against the ITIF proposal. “This proposal would protect no one — it is only a grand bargain for the companies who regularly exploit consumer data for private gain and seek to evade transparency and accountability,” said Sen. Richard Blumenthal, D-Conn. [ITIF]

US – Federal Preemption of State Privacy Laws: Issues that May Arise (Part II)

In this second post of a two-part series for Privacy Tracker, Peter Swire looks at the issue of preemption of state laws in potential U.S. federal privacy legislation and tackles issues that are likely to arise from actual suggested legislative texts. “More than many have realized, preemption is a technically complex subject, as well as being politically controversial.” Swire points out 13 specific issues, noting that “Put simply, to avoid unintended consequences on other areas of law, a federal privacy preemption provision will need to contemplate interactions with a much bigger range of laws than many have realized.” The first in this series is available here. [IAPP.org]

US – Lorrie Faith Cranor Named Director of CyLab

Carnegie Mellon University announced Lorrie Faith Cranor as the new director of the school’s security and privacy institute, CyLab. The institute works to bring together university leaders with the goal of “creating a world in which technology can be trusted.” Cranor said, “I look forward to supporting CyLab’s ongoing success and bolstering research aimed at making our increasingly digital world safe and trustworthy.” Currently, Cranor is the FORE Systems professor of computer science and of engineering and public policy, director of the CyLab Usable Privacy and Security Laboratory, and co-director of Carnegie Mellon’s Privacy Engineering master’s program. Previously, she served as chief technologist at the U.S. FTC in 2016 and received the IAPP’s 2018 Leadership Award. [CyLab]

Privacy Enhancing Technologies (PETs)

WW – Perspective: Why Privacy-Risk Analysis Must Not Be Harm-Focused

One of the underpinnings of U.S. privacy law is assessing harm. “For the better part of the last century, the jurisprudential focus in the U.S. has been on cognizable harms, or damages, resulting from statutory or common law privacy invasions,” writes Enterprivacy Consulting Group Principal Consultant R. Jason Cronk. “Courts almost invariably require a showing of damages — mostly financial — before a victim may be due some remedy.” But, he adds, “from a privacy-by-design perspective, harm prevention is not and should not be the primary goal. Rather, the goal should be a reduction in the incidences of privacy violations.” In this post for Privacy Perspectives, Cronk makes his case for why privacy-risk analysis must not be harm-focused. [Full Story]

WW – Report Finds Blockchain Has Yet to Become a Game-Changer

A recent McKinsey & Company report, “Blockchain’s Occam problem,” found that the technology has “yet to become the game-changer some expected.” The report notes that while the technology has the “potential to revolutionize business processes,” there is concern surrounding the amount of money and time invested, with little substance to show. The authors note, “Companies set on taking blockchain forward must adapt their strategic playbooks, honestly review the advantages over more conventional solutions, and embrace a more hard-headed commercial approach. They should be quick to abandon applications where there is no incremental value.” [Forbes]


UK – Commissioner Provides Tools for Assessing CCTV Surveillance

The UK Surveillance Camera Commissioner issued a self-assessment tool for organisations implementing surveillance camera systems. Organisations should identify the need for surveillance, describe how information will flow through the surveillance system (how information will be collected and used, security in place to protect data, sharing and disclosure, retention periods) and implement solutions to address privacy risks; the results of the self-assessment should be published on the organisation’s website. [Surveillance camera guidance, tools and templates]

US Government Programs

US – Petition Filed Over Proposed ‘Technological Wall’

Following House Speaker Nancy Pelosi’s suggestion of a “technological wall,” internet freedom group Fight for the Future circulated an online petition against the proposal. In lieu of a physical border wall with Mexico, Pelosi suggested efforts be made to increase the electronic security of border traffic. Fight for the Future Deputy Director Evan Greer said that while he was against both proposals, a “technological wall” could produce more harm over time. “Ubiquitous electronic monitoring of individuals, using software to determine how ‘risky’ someone is or whether they should be detained, or using flawed facial recognition programs to target people could impact millions or billions of people,” Greer said. [Fast Company]

US Legislation

US – IAPP Releases Guide to US Privacy Law Proposals

IAPP Senior Westin Research Fellow Müge Fazlioglu undertook a study of the most recent bills introduced in Congress, as well as a selection of recommendations made in comments submitted to the National Telecommunications and Information Administration, and identified several areas of broad agreement, as well as pointed disagreement, regarding the nature, shape and scope of a potential U.S. federal data privacy law, complete with handy charts. [IAPP.org]


1-7 January 2019


US – Court Rules Google’s Face Templates Do Not Violate Illinois Privacy Law

A federal judge ruled that Google did not violate the Illinois Biometric Information Privacy Act when the company automatically created a face template for Android users who uploaded photos taken on smartphones to the company’s cloud-based photo service. “The [7th Circuit] has definitively held that retention of an individual’s private information, on its own, is not a concrete injury sufficient,” the 28-page opinion stated. Judge Edmond Chang said, “Plaintiffs cannot show — and do not argue — that Google ‘intruded into a private place’ by receiving photographs of plaintiffs voluntarily uploaded to Google photos.” Chang added the act of creating a face template did not constitute a “highly offensive” intrusion of privacy and that there was no evidence the template was used for commercial purposes. [Courthouse News Service]

US – Facial Recognition Technology Used at Taylor Swift Concert

Taylor Swift’s security team used facial recognition technology at a May 2018 Rose Bowl concert to identify known stalkers. The technology was embedded in a kiosk that was playing clips of Swift’s rehearsals; as concertgoers looked into the screen, a camera looked back at them. The captured images of concertgoers’ faces were sent to a command center to be cross-referenced against a database of known stalkers. It is not known if concertgoers were aware that the technology was in use. Use of facial recognition technology in public places at large events is gaining traction; the 2020 Summer Olympics in Tokyo plans to use the technology for staff and athlete security checks. [The Register | CNET | Rollingstone]

Big Data / Artificial Intelligence

EU – EC Releases Draft Ethics Guidelines for AI

The European Commission’s High-Level Expert Group on Artificial Intelligence released a draft of its ethics guidelines for trustworthy AI. The draft breaks down the ethical purposes for the use of artificial intelligence, how to ensure those purposes are implemented, and the requirements for AI to be considered trustworthy. “This document forms part of a vision that emphasises human-centric artificial intelligence which will enable Europe to become a globally leading innovator in AI, rooted in ethical purpose,” the document states. The final version of the guidelines is expected to be published in March. Anyone who wishes to offer feedback on the guidelines can do so through the European AI Alliance. [Europa]

CA – University of Guelph to Launch AI Ethics Center

The University of Guelph announced it will launch the Centre for Advancing Responsible and Ethical Artificial Intelligence. The center seeks to address various issues related to artificial intelligence, such as the protection of individuals’ privacy rights, the elimination of algorithmic biases, and the creation of regulations and public policy related to AI ethics. “There has to be, with any AI, essentially a tradeoff between what you can do with the technology and how much you’re willing to violate people’s privacy in order to achieve those technological objectives,” Academic Director Graham Taylor said. “We’re still fully in control of these systems, so it’s the right time to drive toward human values … potentially before it’s too late, because there are things that could go wrong.” [The Canadian Press]


CA – Senate Committee Issues Report Calling for Privacy Law Updates

The Senate Committee on Banking, Trade and Commerce urged the federal government to update privacy legislation in response to Statistics Canada’s plan to obtain the financial information of 500,000 citizens. The committee’s report states the Canadian government should modernize the Privacy Act and the Personal Information Protection and Electronic Documents Act in order to bring Canada’s law closer to global legislation, such as the EU General Data Protection Regulation. “Canadians are rightly concerned about what information Statistics Canada is collecting and how it will be used,” Sen. Carolyn Stewart Olsen said in a statement. “The secretive manner in which this agency has gone about its work does not inspire confidence. Canadians need to know their information is being collected in an open, transparent and secure manner.” The committee also recommended that political parties be covered under the country’s privacy law and that the OPC receive additional resources to tackle “modern privacy concerns.” The OPC expects the debate over whether political parties should fall under the Personal Information Protection and Electronic Documents Act to be a vital one in the 2019 elections. “We will continue to press for reform of Canada’s outdated privacy laws and expect this will be an important election issue,” OPC Senior Communications Advisor Tobi Cohen said. [GlobalNews | National Observer]

CA – Supreme Court Rules Citizens Have Right to Privacy Over Shared Devices

The Supreme Court of Canada ruled citizens have the right to privacy over the materials stored on a machine they share with other individuals. The decision stemmed from a case where the common-law spouse of Thomas Reeves consented to a police seizure of a computer they both owned after she discovered child pornography on the device. The court determined the warrantless seizure of the computer violated Section 8 of the Charter of Rights and Freedoms. “We are not required to accept that our friends and family can unilaterally authorize police to take things that we share,” Justice Andromache Karakatsanis wrote in the ruling. “The decision to share with others does not come at such a high price in a free and democratic society.” [The Canadian Press]

CA – Manitoba Proposes Change in PHI Disclosure Threshold

Manitoba has proposed an amendment to the Mental Health Act and Personal Health Information Act. If passed, the amendment would permit a disclosure of PHI to any person where there is a belief the disclosure is necessary to prevent a risk of serious harm (changed from a serious and immediate threat). The amendments come into force on the day of royal assent. [The Mental Health Amendment and Personal Health Information Amendment Act – Legislative Assembly of Manitoba]


US – Federal Judge Sides with Airbnb in NYC Data-Sharing Case

A U.S. federal judge sided with Airbnb in its case against the city of New York over data-sharing requirements. Airbnb filed a lawsuit in opposition to regulations that would have forced the company to deliver host names and addresses to the Mayor’s Office of Special Enforcement. Judge Paul Engelmayer was critical of New York City’s data-sharing plan and added it did not make a proper case for why it needed the information. “An attempt by a municipality in an era before electronic data storage to compel an entire industry monthly to copy and produce its records as to all local customers would have been unthinkable under the Fourth Amendment,” Engelmayer wrote in the ruling. [Politico]

US – GAO Report Finds Agencies Need to Improve Cybersecurity Capabilities

A report from the U.S. Government Accountability Office found that while agencies have improved their ability to prevent and detect network intrusions overall, they have “not effectively implemented the federal government’s approach and strategy for securing information systems.” As such, the GAO recommended that the Department of Homeland Security and the Office of Management and Budget help agencies improve their capabilities. Meanwhile, the National Aeronautics and Space Administration confirmed unauthorized personnel accessed one of the agency servers in October, gaining access to personal data of current and former employees. A spokeswoman for NASA said the agency does not believe any mission data was compromised in the data breach. [GAO]

Electronic Records

US – New York Court Orders Disclosure of EHR Metadata

A New York court considered a request to compel a health center to produce information concerning patient records. In the context of a malpractice suit, the health center must produce metadata identifying each user’s actions in relation to an individual’s patient record; the metadata is relevant to ascertain the identity, source and timing of changes made to the medical record, as multiple versions currently exist without a credible explanation. [Spencer Miller v. Beth Saubermann et al. – 2018 WL 6413541 – Supreme Court of the State of New York, County of New York]


AU – Australia Passes New “Encryption Weakening” Law

Australia’s Telecommunications and Other Legislation Amendment (Assistance and Access) Bill 2018 became law in December. It alters at least 12 different pieces of existing legislation in its efforts to weaken encryption. [What’s actually in Australia’s encryption laws? Everything you need to know | – Telecommunications and Other Legislation Amendment (Assistance and Access) Bill 2018 (PDF) | Telecommunications and Other Legislation Amendment (Assistance and Access) Bill 2018 Explanatory Memorandum (PDF) | Australia’s new ‘decryption’ law and its effect on tech companies worldwide]

EU Developments

EU – Commission Finds Privacy Shield Continues to Function Correctly

The European Commission has completed its second annual review of the EU-U.S. Privacy Shield Framework and found it continues to function correctly. The U.S. Department of Commerce has introduced new oversight procedures for Shield participants (random spot-checks and monitoring of public reports of privacy practices), all members of the Privacy and Civil Liberties Oversight Board have been appointed, and the FTC has issued administrative subpoenas requesting information from participants as part of its proactive monitoring. [EC – Report to the EU Parliament and the Council on the Second Annual Review of the Functioning of the EU-US Privacy Shield]

EU – EDPB Issues Accreditation Best Practices for GDPR Compliance

The EU Data Protection Board (EDPB) issued draft guidance on accreditation of certification bodies under the GDPR. Certification bodies must ensure their agreements require applicants to allow full transparency to DPAs (including contractually confidential matters related to data protection), do not reduce responsibilities for GDPR compliance, and require entities to disclose significant changes in processing operations or legal situations; DPAs must be permitted to obtain evidence of a certification body’s impartiality, and order withdrawal or non-issuance of certificates. Public comments can be submitted until February 1, 2019. [EDPB – Guidelines 4/2018 on the Accreditation of Certification Bodies under Article 43 of the GDPR | Will the GDPR elicit sectoral codes of conduct?]

EU – ENISA Identifies Key Evidence for Security Audits

The European Union Agency for Network and Information Security (ENISA) issued guidelines on assessing compliance with the NIS Directive. To comply with security audits required by the NIS Directive, operators of essential services and digital service providers must formally document a security policy, key performance indicators, and policy and procedures for security assessments and testing, asset management, segregation of critical information systems, monitoring procedures, and processes relating to administrator accounts. [ENISA – Guidelines on Assessing DSP and OES Compliance to the NISD Security Requirements]

UK – Privacy Notice Language Must Be Simple for Children and Minors

The UK’s Children’s Commissioner issued a report on the collection and sharing of children’s data. The Children’s Commissioner for England indicates that the age-appropriate design code will set standards on data minimisation, default privacy settings and the language of privacy notices; it is recommended that companies state their terms and conditions in language that children can understand, explaining what data is collected and how it will be used. [Children’s Commissioner For England – Report on The Collection and Sharing of Children’s Data] See also: [DPC Ireland – Public Consultation on Processing of Children’s Personal Data and the Rights of Children as Data Subjects under the GDPR]

UK – Data Breach Reporting from Whistleblowers Rises in 2018

The number of data breach notifications sent to the U.K. Information Commissioner’s Office has risen over the course of 2018. From June to August, the ICO received 82 reports about undisclosed breaches. The figure is up from the 31 breach reports sent to the ICO from February to April, most likely due to the implementation of the EU General Data Protection Regulation. “In recent years, data protection has become a major concern not just of government and regulators, but also the general public,” RPC Partner Richard Breavington said. “It is not just disgruntled employees who act as whistleblowers, but genuinely concerned individuals.” [The Financial Times]

FR – CNIL Releases Guidance on Data Sharing

France’s data protection authority, the CNIL, has released guidance on the standards organizations must meet to share information with business partners and data brokers, with a focus on compliance under the EU General Data Protection Regulation. The CNIL guidance states companies that share data with these parties must first obtain consent from the data subject before any information is passed along, identify the third parties that will receive the data, and inform data subjects should the list of entities that will obtain the data change. [Privacy & Information Security Law Blog]

EU – EC Releases Report on Automated Decision-Making under Privacy Shield

The European Commission released a report on whether safeguards for automated decision-making are adequate for data transfers under the EU-U.S. Privacy Shield agreement. Although the U.S. legal structure differs from the EU’s, the report found certain U.S. laws, specifically the Equal Credit Opportunity Act and the Fair Credit Reporting Act, offer protections against adverse decisions when companies use automation to process data to make a decision against an individual. The commission states those protections are similar to the ones seen in the EU General Data Protection Regulation; however, the body advises that business models using automated decision-making should be closely monitored. [Europa]

EU – EC Publishes Second Review of EU-US Privacy Shield

In a news release, the European Commission announced it has published its second review of the EU-U.S. Privacy Shield. The annual review found that the U.S. “continues to ensure an adequate level of protection for personal data” under the framework and added that steps taken since last year’s report have served to improve the framework’s functioning. The commission noted it expects a permanent ombudsperson to be nominated by Feb. 28 and that failure to do so would result in the commission “taking appropriate measures, in accordance with the General Data Protection Regulation.” The report states, “The ombudsperson mechanism is an important element of the Privacy Shield framework and, while the acting ombudsperson continues to carry out the relevant functions, the absence of a permanent appointee is highly unsatisfactory and should be remedied as soon as possible.” [Europa]

Facts & Stats

WW – Facebook Had Data-Sharing Agreements with More Than 150 Companies

The New York Times reports Facebook allowed more than 150 companies to access more user data than it had initially disclosed. Documents obtained by the Times showed Facebook had data-sharing agreements with Microsoft, Amazon, Spotify, Netflix and Yahoo. All the data-sharing agreements were active in 2017, with some still in effect this past year. Facebook Director of Privacy and Public Policy Steve Satterfield said none of the partnerships violated the privacy of its users or the tech company’s 2011 Federal Trade Commission consent decree. In response to the report, Sen. Brian Schatz, D-Hawaii, renewed calls for a federal U.S. privacy law. [Full Story]


CA – OIPC BC Issues Best Practices for Proactive Public Disclosures

The Office of the Information and Privacy Commissioner of British Columbia issued guidance on proactive disclosure of information, pursuant to section 25 of the Freedom of Information and Protection of Privacy Act. Public bodies should establish policies to help employees and officers determine when information is clearly in the public interest or provides information about a significant risk of harm; consider the amount, nature and sensitivity of the information; notify the OIPC and any related third parties before the disclosure is made; and ensure disclosures are made as soon as possible (accurate summaries can be given instead of the entire record). [OIPC BC – Guidance Document – Section 25 – The Duty to Warn and Disclose]

WW – Custody and Control: Triggers and Processes for Legal Holds

Working Group 1 of the Sedona Conference drafted principles on the trigger and process for legal holds. The Sedona Conference identifies the duty to preserve as arising from a summons, complaint or formal notice of litigation or, where the organization is the plaintiff, from seeking legal advice, sending a cease-and-desist letter, or taking specific steps to commence litigation; organizations should establish and communicate a policy that governs preservation, identify information to be preserved (including information held by third parties), and periodically monitor compliance with legal holds. Comments are due by February 8, 2019. [The Sedona Conference – Commentary on Legal Holds, Second Edition – The Trigger and The Process (WG1) – Public Comment Version]

Health / Medical

CA – OIPC SK Finds Unlawful Access of PHI

The OIPC SK investigated a privacy breach under the Health Information Protection Act. A doctor accessed many patients’ information without consent or a need to know, exchanged text messages with an individual, and disclosed the PHI of another individual; it is recommended that trustees of PHI revoke access when users should no longer have it, train all users on the software, and track training attendance and software usage. [OIPC SK – Investigation Report 351-2017 031-2018 143-2018 144-2018 145-2018 – eHealth Saskatchewan]

Horror Stories

US – Congressional Report on Equifax Breach Finds it Was “Entirely Preventable”

The US House Oversight and Government Reform Committee has released a report on the massive Equifax data breach that was disclosed in September 2017. The report found that the breach was “entirely preventable” and that the company “failed to implement clear lines of authority within their internal IT management structure, … allowed over 300 security certificates to expire, … [and] were unprepared to identify, alert and support affected consumers.” [Nextgov | SC Magazine | The Register | FCW | oversight.house.gov: Key Findings | Majority Staff Report, 115th Congress, December 2018 (PDF)]

EU – German Politicians’ Information Leaked in Data Breach

A Twitter account was found to have shared the information of German politicians. The account leaked politicians’ phone numbers, addresses, internal party documents and credit card information. The breach affected German Chancellor Angela Merkel and every political party with the exception of the Alternative for Germany group. The Twitter account was suspended after the posts were discovered. The German Federal Office for Information Security has worked with other agencies to investigate the scope of the breach. German celebrities, journalists and artists also had data leaked on the platform. [The Wall Street Journal]

CA – Saint John Parking Payment System Breaches Date Back to May 2017

A private cybersecurity analyst conducted an investigation into a data breach suffered by the city of Saint John’s online parking payment system. The analyst discovered unknown sources had been able to access customer information through the system as far back as May 2017. “This gives reason to believe that the breach could impact anyone who has paid a city-issued parking ticket over the past two years, from early 2017 to [Dec.] 16, 2018,” the city said in a statement. Information compromised in the breaches includes names, mailing addresses and credit card information. [CBC News]

WW – Breaches Affect Photo-Matching Tool, School District, and Password Manager

Popsugar’s popular photo-matching tool, Twinning, was uploading photos to a storage bucket hosted on Amazon Web Services that was downloadable by anyone. While TechCrunch points out that the breach only involved users’ selfies, it states that the breach serves as a good reminder that users don’t always know how secure their information will remain once released. Meanwhile, the San Diego Unified School District reports a hacker stole the personal details of 500,000 staff and students thanks to a successful phishing attempt. It is also reported that Abine, the company behind the Blur password manager and the DeleteMe online privacy protection service, revealed a data breach impacting more than 2.4 million Blur users. [TechCrunch]

AU – Australian Government Worker Data Compromised

Data about 30,000 employees of the Victoria, Australia government were exposed when an unauthorized user accessed and downloaded a local directory. The compromised information includes work email addresses, job titles, and some mobile phone numbers. The data could be used to launch phishing attacks and influence campaigns. [SC Magazine I Infosecurity-magazine]

Identity Issues

US – MIT Researchers Develop Algorithm to Identify People in Anonymized Data Sets

A study published in “IEEE Transactions on Big Data” revealed how researchers were able to identify individuals from anonymized data sets produced within a city. The researchers took two data sets from Singapore that only had the time and place of each data point. The group from the MIT Senseable City Lab used an algorithm to match the users whose data was seen in both sets. Over the course of 11 weeks, the algorithm was able to match data subjects with a 95% accuracy rate. MIT Future Urban Mobility Group Postdoctoral Researcher Daniel Kondor said while it is important to work with large sets of data, people should be aware of the potential of identification in order to know the risks of sharing mobile data. [Fast Company]
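The matching step in such a linkage attack can be sketched as a simple overlap count between pseudonymous traces: index one data set by its (time, place) points, then pair each user in the other data set with whichever pseudonym shares the most points. This is an illustrative simplification, not the MIT group’s actual algorithm; the data shapes and function name are assumptions for the sketch.

```python
from collections import defaultdict


def match_users(set_a, set_b):
    """Link pseudonyms across two anonymized trace sets by shared points.

    set_a, set_b: dict mapping pseudonym -> set of (time_bucket, location_id).
    Returns a dict pairing each pseudonym in set_a with the pseudonym in
    set_b whose trace overlaps it the most (ties broken arbitrarily).
    """
    # Index data set B by point so we only compare users that co-occur.
    point_index = defaultdict(set)
    for b_user, points in set_b.items():
        for p in points:
            point_index[p].add(b_user)

    matches = {}
    for a_user, points in set_a.items():
        overlap = defaultdict(int)  # candidate pseudonym -> shared-point count
        for p in points:
            for b_user in point_index.get(p, ()):
                overlap[b_user] += 1
        if overlap:
            matches[a_user] = max(overlap, key=overlap.get)
    return matches


# Two toy "anonymized" data sets: the same people appear under different
# pseudonyms but share most of their space-time points.
transit = {"t1": {(8, "stnA"), (9, "stnB"), (18, "stnB")},
           "t2": {(8, "stnC"), (12, "stnD")}}
mobile = {"m9": {(8, "stnA"), (9, "stnB"), (20, "stnE")},
          "m7": {(12, "stnD"), (8, "stnC")}}

print(match_users(transit, mobile))  # {'t1': 'm9', 't2': 'm7'}
```

With realistic data the matching would also need time tolerances and noise handling, but the example shows why only two columns of "anonymous" data suffice: distinctive space-time points act as a fingerprint.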


US – Companies Selling More Location Data to Third Parties

The New York Times reports on the growing number of apps that collect smartphone users’ location information in order to sell it to third parties. About 75 companies obtain precise location data from smartphone apps when users enable location services. Some of those companies claim to track up to 200 million users in the U.S. The organizations turn around and sell the location information to advertisers, retailers and hedge funds that want to gain insights into consumers’ behavior. The Times examined one database that tracked the location information of individuals within the New York area. The database contained information tied to more than 1 million phones and, in some cases, updated every two seconds. [Full Story]

Online Privacy

WW – Report Finds App Developers Automatically Share User Data

In a report presented at the 35th Chaos Communication Congress, Privacy International found that Android app developers are sharing data with Facebook, even if the user does not have a Facebook account. Of the 34 Android apps tested, the organization found that 23 sent data to Facebook, including details on app use, device information and the user’s Google advertising ID, as well as language and time-zone settings. The report also examined how such data-sharing practices may put the apps in violation of the EU General Data Protection Regulation. Facebook responded that many companies offer login features and that “most websites and apps send the same information to multiple companies each time you visit them.” [ZDNet]

Privacy (US)

US – FTC Urged to Examine Developer’s Data Practices (Children)

The Federal Trade Commission was asked to investigate the allegedly unfair and deceptive practices of a developer of apps marketed to children. The company allegedly violated the FTC Act and COPPA by misrepresenting that its apps are appropriate for children (the developer states the apps target a mixed audience), accessing location data and disclosing it to third parties without notice to or consent of parents, and failing to safeguard the privacy of personal information by sending persistent identifiers to third parties in unencrypted form. [Center for Digital Democracy and Others Complaint to the FTC – In The Matter Of Google’s Unfair and Deceptive Practices in Marketing Apps for Children]

US – NY A-G Reaches Settlements With 5 Companies Over Mobile App Security Measures

The Office of the New York State Attorney General Barbara Underwood announced it reached settlements with five companies over the security measures within their mobile apps. The attorney general’s office found the apps for Western Union, Priceline, Equifax, Spark Networks and Credit Sesame all had security flaws that gave malicious actors the opportunity to obtain users’ passwords, Social Security numbers, credit card numbers and bank account information. As part of the settlement, the five companies agreed to implement comprehensive security processes. Meanwhile, the Office of the New Jersey Attorney General Gurbir Grewal has sued EmblemHealth over its 2016 data breach. [PYMNTS]

US – FPF Announces Winners of Annual Privacy Papers for Policymakers Award

The Future of Privacy Forum announced the winners of its ninth annual Privacy Papers for Policymakers Award. A group of academics, advocates and privacy professionals from FPF’s Advisory Board selected the five winners of this year’s contest. The winners included a paper on sexual privacy harms faced by women, non-whites and sexual minorities by University of Maryland Carey School of Law Professor Danielle Keats Citron and an analysis of local privacy regulations aimed to fill gaps in federal and state privacy laws related to surveillance by New York University School of Law Adjunct Professor Ira Rubinstein. [Full Story]

US – FTC Criticized for Lack of Privacy Enforcement

The New York Times interviewed more than 40 former and current U.S. Federal Trade Commission officials, lawmakers, Capitol Hill staffers and consumer advocates to assess whether the agency has been effective with its privacy enforcement. “They have been asleep at the switch,” said Sen. Richard Blumenthal, D-Conn., who will be the ranking member of the subcommittee that oversees the agency. “It’s a lack of will even more than paucity of resources.” Criticism of the agency comes as more of Facebook’s data-sharing practices continue to emerge and as momentum for a federal privacy law, which could potentially grant the agency more enforcement authority, grows. Former FTC Consumer Protection Director Jessica Rich said, “The act creating the FTC was not designed with privacy in mind. That they’ve been able to bring hundreds of privacy and data security cases is actually quite a feat … especially recently.” [Full Story]

US – NIST Hosting Second Privacy Framework Public Workshop Feb. 27–28

The U.S. National Institute of Standards and Technology will host a second public workshop on the development of its privacy framework Feb. 27–28 at the Georgia Institute of Technology. The workshop gives attendees the opportunity to participate in breakout discussions to further the development of the framework. NIST recently announced it has extended the deadline for input on the framework until Jan. 14. Before the workshop, the agency plans to release the findings of its call for comments and a draft outline of the framework based on stakeholder feedback. Attendees of the workshop will have the chance to adjust the outline of the framework and guide the progress of a discussion draft. [NIST]


US – Dept. of Health and Human Services Releases Cybersecurity Guidance

The US Department of Health and Human Services (HHS) released cybersecurity guidance for healthcare organizations. The documents were developed to meet a requirement in the 2015 Cybersecurity Act to provide healthcare organizations with consistent cybersecurity practices to protect patient data. The HHS guidelines identify vulnerabilities in the health care industry, including lack of awareness, training and IT resources; in order to manage threats from phishing, ransomware or data loss, health care organizations must have appropriate safeguards, such as training programs, establishing cyber threat information sharing with other health organizations, and deploying anti-malware detection and remediation tools. The Health Industry Cybersecurity Practices: Managing Threats and Protecting Patients “aims to raise awareness, provide vetted cybersecurity practices, and move organizations towards consistency in mitigating the current most pertinent cybersecurity threats to the sector.” It includes a main document, two volumes of technical information, and a resources and templates section. The guidelines are voluntary. Sources: [fcw.com: HHS releases cyber guides for healthcare orgs | Health Industry Cybersecurity Practices: Managing Threats and Protecting Patients (PDF)]

US – NIST Publishes Updates to Risk Management Framework

The U.S. National Institute of Standards and Technology (NIST) published its latest revision to its “Risk Management Framework for Information Systems and Organizations: A System Life Cycle Approach for Security and Privacy.” The new version of NIST SP 800-37 includes privacy risk management processes to help support an organization’s privacy protections. “The RMF includes activities to prepare organizations to execute the framework at appropriate risk management levels,” the NIST abstract states. “The RMF also promotes near real-time risk management and ongoing information system and common control authorization through the implementation of continuous monitoring processes; provides senior leaders and executives with the necessary information to make efficient, cost-effective, risk management decisions about the systems supporting their missions and business functions; and incorporates security and privacy into the system development life cycle.” [NIST]

US – ‘Hacking/IT Incidents’ Lead 2018 Health Care Data Breaches

An analysis of U.S. health data breach incidents in 2018 found that “hacking/IT incidents” led the tally as the most common type of breach, with “unauthorized access/disclosure” following. The article states that a snapshot from the U.S. Department of Health and Human Services’ Office for Civil Rights’ HIPAA Breach Reporting Tool website listed 353 major health data breaches for 2018, impacting more than 13 million individuals. Meanwhile, BlankMediaGames confirmed a hacker stole the personal details of 7.6 million users of the browser-based game “Town of Salem.” BMG has not notified affected individuals yet but has advised all users to update account passwords. [GovInfoSecurity]

WW – Facebook Photos Exposed to App Developers

On December 14, Facebook acknowledged yet another data privacy mistake: for a two-week period in September 2018, more than 850 third-party app developers had access to photos belonging to 6.8 million Facebook users, regardless of the permissions users had granted. Facebook says the data leak problem was fixed in September. [Wired | The Register | ZDNet | Arstechnica | Cyberscoop]

Smart Cars / Cities

WW – As Smart Cities Develop, Will Privacy be Protected?

The New York Times reports on the proliferation of smart-city projects and whether privacy protection will play a role. More than a dozen U.S. cities have recently started competing for federal grant dollars and the attention of tech companies. Kansas City, for example, “has become an unexpected destination for technology companies looking for a place to test ideas,” the report states. Some critics, however, are concerned that cities often lack privacy expertise. Harvard University’s Ben Green said, “We increasingly see every problem as a technology-related problem, so the solution is more technology. And you have cities, which are caught in this devil’s bargain, where they feel they don’t have the resources to provide the services people need, and so they make these deals with tech companies that have money, but which in the long term might not be beneficial to either them or their residents.” [Full Story]

US Legislation

US – Could 2019 Be the Year for a Federal US Privacy Law?

After a busy year for privacy regulation, Yahoo Finance reports on the prospects of a federal U.S. privacy law in 2019. Last year saw the EU General Data Protection Regulation go into effect and the passage of the California Consumer Privacy Act of 2018. In response to those bills, lawmakers have proposals for a federal privacy law; however, those potential rules need to find more common ground on several topics. Wired reports the CCPA could end up as a national standard for the U.S., while also covering tech companies’ attempts to preempt the CCPA with a bill that better suits their interests. Adweek cites Sens. Richard Blumenthal, D-Conn., and Amy Klobuchar, D-Minn., as lawmakers who will shape tech regulation in 2019. Meanwhile, the Interactive Advertising Bureau filed comments with the Federal Trade Commission stating a federal privacy law should be based on the industry’s self-regulatory principles, while the European Commission called for a federal U.S. law within the review of the EU-U.S. Privacy Shield agreement. [Yahoo!]



24–31 October 2018

Big Data / Artificial Intelligence

US – “Information Fiduciaries” Must Protect Your Data Privacy

Legislators across the country are writing new laws to protect data privacy. One tool in the toolbox could be “information fiduciary” rules. The basic idea is this: when you give your personal information to an online company in order to get a service, that company should have a duty to exercise loyalty and care in how it uses that information. Sounds good, right? We agree, subject to one major caveat: any such requirement should not replace other privacy protections.

The law of “fiduciaries” is hundreds of years old. It arises from economic relationships based on asymmetrical power, such as when ordinary people entrust their personal information to skilled professionals (particularly doctors, lawyers and accountants). In exchange for this trust, such professionals owe their customers a duty of loyalty, meaning they cannot use their customers’ information against their customers’ interests. They also owe a duty of care, meaning they must act competently and diligently to avoid harm to their customers. These duties are enforced by government licensing boards and by customer lawsuits against fiduciaries who do wrong.

These long-established skilled professions have much in common with new kinds of online businesses that harvest and monetize their customers’ personal data. First, both have a direct contractual relationship with their customers. Second, both collect a great deal of personal information from their customers, which can be used against those customers. Third, both have one-sided power over their customers: online businesses can monitor their customers’ activities, but those customers don’t have reciprocal power. Accordingly, several law professors [e.g., Jack M. Balkin – PDF, Jonathan Zittrain – here & Neil Richards & Woodrow Hartzog – PDF] have proposed adapting these venerable fiduciary rules to apply to online companies that collect personal data from their customers.
New laws would define such companies as “information fiduciaries.” EFF supports legislation to create “information fiduciary” rules. While the devil is in the details, [the remainder of this post examines what] those rules might look like. [DeepLinks Blog (Electronic Frontier Foundation) | Data Protection Bill Series: Obligations on data fiduciaries and compromises made (India)]


CA – New Data Breach Rules Come Into Effect Nov 1, But Privacy Chief Says They Don’t Go Far Enough

Changes to the Personal Information Protection and Electronic Documents Act (PIPEDA), an amendment and new reporting regulations, come into force Nov. 1, requiring companies to quickly disclose security data breaches. Companies will be required to keep internal records for all breaches and security safeguards for two years, and in cases where there is a risk of significant harm, companies need to report a breach to the Office of the Privacy Commissioner and to the people affected. [see OPC guidance here & Reporting Form here] But critics, including Canada’s privacy commissioner Daniel Therrien, say that the new measures still don’t go far enough to protect citizens’ privacy. As long as companies report their breaches, there are no financial penalties, which is something that Therrien isn’t thrilled about. “The odd nature of this is that there are very hefty fines for failing to report, but there are no fines for failing to have the security safeguards that would have prevented the breach from occurring. There could be actions in the civil courts by individuals whose data was disclosed improperly for any damages incurred, but that of course is very costly for individuals to bring companies to court.” As such, damage to reputation is the main risk for companies that get hacked or suffer other kinds of privacy breaches. In addition, Therrien complains that while he’ll get reports from companies that suffer privacy breaches, his office has yet to be allocated any additional funding to handle those reports. And his office is limited in terms of how it can respond. “What we cannot do is order companies to improve their security posture. So companies are free to accept our recommendations or not,” he said.
[Financial Post | Final Breach Reporting Guidance just released by the Office of the Privacy Commissioner of Canada (OPC) | D.O.Eh: Here’s the new privacy law Canada can’t really enforce | New data breach reporting requirements come into force this week | New security breach reporting guidelines come into force on November 1st | New Data Breach Reporting Requirements in Canada | Canada Privacy Office Issues Data Breach Reporting Guidance | Many companies not ready for new data-breach rules, experts say]

CA – Privacy Commissioner ‘Surprised’ by Liberal Party Arguments Against Privacy Rules for Parties ‘Harvesting Data On People’

Privacy commissioner Daniel Therrien said he was “surprised” and unpersuaded by the Liberal Party’s arguments against making political parties abide by Canada’s existing privacy rules. At an Access to Information, Privacy and Ethics Committee hearing on November 1 [ETHI, hearing 124 notice, watch on ParlVU – Commissioner Therrien’s prepared comments] Liberal Party legal advisor Michael Fenrick [of Paliare Roland] argued that bringing political parties under the jurisdiction of Canada’s privacy laws would create a chilling effect on political involvement, with prospective volunteers fearing harsh punishments in the event of privacy breaches. That argument didn’t hold water for Canada’s privacy commissioner. Therrien said there are a number of jurisdictions, including British Columbia, where parties are regulated on their privacy practices and he sees no chilling effect. In Europe, political parties must abide by the onerous rules created by the new General Data Protection Regulation, which enforces hefty penalties for malfeasance. “So far as we see, I think, democracy continues to thrive in these jurisdictions,” said Therrien. Although the government currently has an election bill making its way through Parliament [Bill C-76 here], the commissioner has said it adds nothing of substance to privacy protection even though parties are “harvesting data on people.” At the very least, Canadians should have the right to access any data the parties hold about them, said Therrien. That’s a rule that exists in Europe, but not in Canada. “It’s something eminently reasonable and that Canadians would expect,” said Therrien.
Parties will also have no independent oversight and no requirement to report any data breaches they suffer, even though revamped privacy rules now require that for Canadian companies. Earlier in the committee hearing, chief electoral officer Stéphane Perrault [here] also said he was disappointed that Bill C-76 didn’t provide privacy rules for political parties. “Canadians increasingly want to understand the nature and source of communications that are reaching them. An important part of understanding that is transparency,” said Perrault. [National Post | Gap in privacy law leaves elections open to ‘misuse’ of personal information: privacy commissioner | Liberals say political parties need privacy rules, but have no immediate plans to impose them | Some MPs worry as election looms without ‘any enforceable rules’ for parties on privacy | Politicians are dragging their feet on privacy rules]

CA – Privacy Commissioner Launches Investigation into Statistics Canada

The Office of the Privacy Commissioner of Canada has received complaints related to Statistics Canada and its collection of personal information from private sector organizations and has opened an investigation. The complaints follow media reports that Statistics Canada requested several banks provide the agency with the financial transaction information of hundreds of thousands of Canadians. Privacy Commissioner Daniel Therrien welcomes the Chief Statistician’s invitation for his office to take a “deeper dive” into StatCan’s collection of data from private sector organizations. The Commissioner’s office will be seeking details regarding the information requests the agency has made to various industry sectors. StatsCan had previously consulted with the Privacy Commissioner’s office on the privacy implications related to data collection from private sector organizations, but outside the context of an investigation. A summary of those consultations is included in the Commissioner’s 2017-18 Annual Report to Parliament. The Commissioner’s office has a legal obligation to investigate the complaints fairly and impartially under the law, and the Privacy Act, Canada’s federal public sector privacy law, includes confidentiality provisions. 
Therefore, no additional details can be provided at this time. [Office of the Privacy Commissioner of Canada | Privacy Commissioner of Canada launches investigation into StatCan over controversial data project | Privacy commissioner launches investigation into Statscan’s efforts to obtain banking records | Privacy commissioner investigating StatCan’s attempt to get banking info | StatCan scooped up 15 years of personal financial data from Canadian credit bureau | Scheer opposed to StatCan plan to collect personal-banking data | StatsCan has already seized reams of our private financial info | Conservatives blast Trudeau government over StatCan collection of personal financial data | Toronto’s no fan of StatsCan info grab | Trudeau defends Statistics Canada move to collect banking info of 500,000 Canadians | Big Brother Liberal government wants your private banking info | ANALYSIS: StatCan’s push to scoop payment data on 500,000 Canadians deserves scrutiny | EXCLUSIVE: Stats Canada requesting banking information of 500,000 Canadians without their knowledge | Globe editorial: StatsCan needs to own up to its data breaches | NDP joins Conservatives, asks Trudeau Liberals to shut down controversial StatCan projects]

CA – NWT Privacy Report Laments Slow Pace of Change, City’s Issues

Elaine Keenan Bengts, Information and Privacy Commissioner of the Northwest Territories, had harsh words for the Department of Justice in an annual report tabled in the Legislative Assembly [66 pg PDF – the report was submitted in August and covers the period of April 1, 2017 – March 31, 2018]. The report stated the government was moving too slowly to update the Access to Information and Protection of Privacy Act. The commissioner’s office completed 18 review reports over the 2017-18 fiscal year and opened 53 files under the Act (down from 61 in the previous year). Fifteen of this year’s files fell under requests for review regarding access to information, while there were nine review requests relating to privacy issues. Keenan Bengts also drew attention to the many privacy issues plaguing the City of Yellowknife over the past year, from a possible email theft shared with local media to allegations that a senior employee was misusing City cameras to watch women. She said she offered assistance to help the City develop a stronger privacy policy but received no response. “This is not the first time I have offered to work with the City on privacy concerns. The non-response has, however, been consistent,” she charged. Keenan Bengts also took aim at the Health Information Act [here], pointing out that many breaches involved misdirected faxes or unencrypted emails. “I simply cannot understand the apparent reluctance of the health sector to adopt the better technology,” she said. She also noted there has been little progress made to ensure the public has a say in who can access their health information. She explained that while system-wide standards, policies, and procedures were issued in May 2017, they do not appear to be publicly available yet. She called for better public consultation regarding policies that directly affect the public. 
On October 29, the same day the annual report was presented to the legislature, Bill 29: “An Act to Amend the Access to Information and Protection of Privacy Act” was read for the first time [see debate in Hansard, October 30, 2018 – at PDF pg 62 (text pg 56) of 84 pg PDF here]. [CABIN Radio]


CA – Senate Banking Committee Advises Greater Privacy Watchdog Powers

The Senate Committee on Banking, Trade and Commerce released a study of cyber fraud and cybersecurity titled “Cyber Assault — It Should Keep You Up at Night” [see here and Executive Summary], which recommends that Parliament give Canada’s privacy commissioner new authority to make binding orders and impose hefty fines on companies that fail to keep the private data of Canadians secure. The importance of the privacy commissioner’s role is growing with the number of massive data breaches and other cybersecurity threats facing Canadians, the senators suggested. The committee makes 10 sweeping recommendations, including calls for new powers for federal officials, including the privacy commissioner; the creation of national cybersecurity standards and guidelines so businesses know what they are supposed to be doing to protect their customers’ data; and co-ordinated sharing within the private sector and with governments of sensitive private information related to cybersecurity and cyberthreats. The Senate committee criticized the Trudeau government for its “timid responses” so far to the “real and rising online threats” that have affected millions of Canadians who have been defrauded online, or who have had their personal data stolen and exploited as a result of corporate data breaches — many of which were not publicly revealed until months or years later. In addition to recommending that governments prioritize and fund cybersecurity education, both for businesses and citizens, the senators recommended the creation of:

  1. a federal cybersecurity task force to propose a national cybersecurity strategy establishing Canada as a global leader on cybersecurity;
  2. a consistent set of leading cybersecurity standards that are harmonized with the highest international standards and that would apply to all entities participating in 10 critical infrastructure sectors (e.g. health, food, water, communications and government);
  3. standards to protect consumers, business and governments from threats emanating from the Internet of Things that connects such smart digital devices as phones, TVs, cars and medical implants; and
  4. tax incentives for investment in cybersecurity (such as accelerated capital cost allowance deductions for businesses). [The Lawyers Daily | Government failing to protect Canadians from cyber threats, says Senate report]

US – FTC Issues Paper on Informational Injury Workshop

The FTC recently issued a paper outlining key takeaways from its December 2017 workshop examining injuries consumers may suffer from privacy and data security incidents. The paper indicates that the FTC convened the workshop to better understand consumer injury for the following two purposes: 1) To allow the FTC to effectively weigh the benefits of governmental intervention against its costs when making policy determinations; and 2) To identify acts or practices that “cause or are likely to cause substantial injury” for purposes of bringing an enforcement action under the FTC Act for an “unfair” act or practice. The paper discusses the examples of informational injuries given by participants. These examples involve injuries that may result from medical identity theft, doxing (i.e. the deliberate and targeted release of private information about an individual with the intent to harass or injure), exposure of personal information, and erosion of trust (i.e. consumers’ loss of trust in the ability of businesses to protect their data). The paper also reports that “there was some discussion of whether the definition of injury should include risk of injury [from certain practices]” and shares opposing arguments made by participants. The issue of whether informational injuries that may result from alleged statutory violations are sufficient to provide a consumer in a private action with Article III standing under the U.S. Supreme Court’s Spokeo standard continues to be litigated.
In Spokeo, the Supreme Court [see 27 pg PDF decision here] indicated that, to satisfy the “injury-in-fact” requirement for Article III standing, a plaintiff must show that he or she suffered “an invasion of a legally protected interest” that is both “concrete” and “particularized.” To be particularized, an injury must affect the plaintiff “in a personal and individual way.” To be concrete, an injury must “actually exist;” it must be “real.” However, the Supreme Court also acknowledged that intangible injuries can satisfy the concrete injury standard and that in some cases an injury-in-fact can exist by virtue of a statutory violation. (The Spokeo standard does not apply to government enforcement actions.) [The National Law Review ]


CA – Statscan Must Justify Request for Personal Banking Data: Former Chief

Former Statistics Canada chief statistician Wayne Smith, who resigned two years ago, said in an interview that banking records are second only to health files in terms of Canadians’ most sensitive personal information and that Statistics Canada needs to clearly explain why it is seeking access to people’s banking records. Statistics Canada should be able to say: ‘Okay, here’s the purpose and here’s why it’s important enough to justify this intrusion,’ he said. “If they don’t have an answer, they should stop now.” In 2016, Smith resigned in protest over the agency’s decision to house its computer servers with Shared Services Canada, a move he said could weaken the agency’s ability to protect its data. “By moving the data into the Shared Services Canada data centres, we moved it into a data centre that does have connections to the Internet and therefore it opens up the potential of hacking, however secure they might make it.” His comments come as Canada’s statistics agency is at the centre of a political controversy after informing Canada’s banks that they will be required to provide consumer records, such as individual credit- and debit-card purchases, starting in January. The move caught Canada’s banking sector by surprise and bank officials are in discussion over how to respond. Royal Bank of Canada, Bank of Montreal, CIBC, National Bank and Scotiabank all confirmed via e-mail that they have not provided any client information to Statistics Canada, echoing a recent statement by the Canadian Bankers Association. TD Bank, meanwhile, told customers via Twitter that it has not agreed to share client data with Statscan. Senior Statscan officials held an online chat and were asked why banking records are now required. “It is becoming increasingly difficult to capture household expenditures by relying on traditional surveys,” the officials wrote. “People do not want to respond to hour-long surveys or keep a diary of their daily expenditures over a two-week period.
Financial transaction data will give us more timely data at a greater level of granularity.” Federal Privacy Commissioner Daniel Therrien announced that his office will launch a formal investigation into Statscan’s plans. He told reporters that his investigation is unlikely to be finished by January. He said Statscan informed him about the agency’s general direction, but did not share specific plans related to banking data. He said he advised the agency to only ask for private data that has been anonymized, meaning no individual names are included. However, he acknowledged it is unclear to him at this stage how such a practice could comply with all relevant laws. For a fourth day in a row, Conservative MPs attacked the Liberal government in the House of Commons over Statscan’s plans. The critics say Statscan should not be receiving data that identifies individual Canadians without their consent. [The Globe and Mail ]

US – ‘Vote with Me’ App Reports Party Affiliation and Vote Record of Contacts

Social media platforms have been harvesting your personal data to fuel hyper-targeted advertising for years, but there hasn’t been a handy way for the average user to check the political gang affiliation in their social circle — until now, for better or worse. An app called “VoteWithMe” claims to be an effort to get people to the polls. But VoteWithMe has a trick up its sleeve that we’re not used to seeing: it can tell you the party affiliation of everyone in your contacts list, and it can tell you what elections your contacts have voted in historically. While voter registration information like party affiliation and voting history is technically public information, there is no handy website where a person can ordinarily look this information up, at least until now. Having this information in an app that’s snooping on your phone contacts is not only potentially invasive but potentially incredibly divisive, given how far apart Democrats and Republicans have become. And as you might imagine, the app’s reach doesn’t stop with your social circle. Someone using VoteWithMe can now look up the affiliations and records related to any phone number that was used to register to vote, for free, with a few taps. The potential for voter intimidation reaches new heights, only days before the election. [The Download Blog (CNET) | A new app called Vote With Me lets you see the voting record and political party of every contact in your phone | Vote With Me is a creepy new app that checks your contacts’ voting history (or should we say, soon-to-be-ex-contacts) | New app reveals your contacts’ voting history]


US – DHS Urged to Apply Encryption to Web Browsing

Senator Wyden has expressed concerns to the Department of Homeland Security (DHS) about web browsing privacy. Metadata revealing specific federal website visits is currently transmitted over the internet without encryption; the senator asks that DHS consider requiring federal agencies to use one of two encryption methods to prevent hackers from learning what particular website a user is visiting. [Letter to DHS Regarding Government Web Browsing – Senator Ron Wyden]
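The exposure described here is visible in the bytes themselves. A minimal sketch (the agency hostname and path below are invented for illustration, not taken from the letter) of what an on-path observer can read from an unencrypted request:

```python
def plaintext_http_request(host: str, path: str) -> bytes:
    """Raw bytes of an HTTP/1.1 GET as sent on the wire without TLS."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Connection: close\r\n\r\n"
    ).encode()

wire_bytes = plaintext_http_request("example.agency.gov", "/benefits/apply")

# Without encryption, an on-path observer reads the exact page requested:
assert b"/benefits/apply" in wire_bytes
assert b"example.agency.gov" in wire_bytes
```

With HTTPS the path and headers are encrypted in transit, but the hostname can still leak through DNS lookups and the TLS SNI field, which is why proposals in this area tend to pair transport encryption with encrypted DNS or encrypted SNI.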

EU Developments

WW – 40th ICDPPC Sets the Way Forward for Ethics, Data Protection and Privacy

As the 40th ICDPPC Annual Meeting in Brussels (October 21-26) came to an end, the Closed Session, which this year gathered 236 delegates from 76 countries, released the outcome of its two-day discussions, paving the way for the future of data protection and privacy at the global level [see 9 pg PDF road map]. The Closed Session also adopted a landmark text, the Declaration on Ethics and Data Protection in Artificial Intelligence [6 pg PDF], in order to contribute to the global discussion on this matter. The declaration endorses six guiding principles as core values to preserve human rights in the development of artificial intelligence. These principles build first of all upon data protection elements, but also expand to ethical considerations that are inextricably linked to the development of artificial intelligence. The Closed Session also adopted three other resolutions: on e-learning platforms [31 pg PDF], on the Conference Census [2 pg PDF] and on collaboration between Data Protection Authorities and Consumer Protection Authorities [3 pg PDF]. Two new members of the ICDPPC Executive Committee [here] have been elected: the Philippines’ National Privacy Commission (NPC) and the Office of the Australian Information Commissioner (OAIC). With the mandate of Isabelle Falque-Pierrotin [President/Commissioner of the CNIL] coming to an end, the ICDPPC elected Elizabeth Denham, the UK Information Commissioner (ICO), as the new Chair of the Executive Committee. [CNIL News (Commission Nationale de l’Informatique et des Libertés – France)]

UK – ICO Fines Facebook Over Data Privacy Scandal, EU Seeks Audit

The Information Commissioner’s Office slapped Facebook with a fine of 500,000 pounds ($644,000) [see ICO PR & Penalty Notice & Commissioner’s video statement] — the maximum possible — for failing to protect the privacy of its users in the Cambridge Analytica scandal. The ICO found that between 2007 and 2014, Facebook processed the personal information of users unfairly by giving app developers access to their information without informed consent. The failings meant the data of some 87 million people was used without their knowledge. The fine amounts to a speck on Facebook’s finances: it will take less than seven minutes for Facebook to bring in enough money to pay it. But it is the maximum penalty allowed under the law at the time the breach occurred [the UK’s Data Protection Act 1998]. Had the scandal taken place after the new EU data protection rules [General Data Protection Regulation (GDPR)] went into effect this year, the amount would have been far higher — with maximum fines of 17 million pounds or 4% of global revenue, whichever is higher. Under that standard, Facebook would have been required to pay at least $1.6 billion, which is 4% of its revenue last year. Also, European Union lawmakers demanded an audit of Facebook to better understand how it handles information, reinforcing how regulators in the region are taking a tougher stance on data privacy compared with U.S. authorities. They said Facebook should agree to a full audit by Europe’s cyber security agency and data protection authority “to assess data protection and security of users’ personal data.” The EU lawmakers also called for new electoral safeguards online, a ban on profiling for electoral purposes and moves to make it easier to recognize paid political advertisements and their financial backers. [CTV News]
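The fine arithmetic in the story is easy to sanity-check. A rough sketch, assuming Facebook's 2017 global revenue of roughly $40.65 billion (its publicly reported figure) and mixing pounds and dollars only for an order-of-magnitude comparison; the function name is ours, not anything official:

```python
def gdpr_max_fine(global_revenue: float, fixed_cap: float = 17_000_000) -> float:
    """Greater-of cap described in the article: a fixed amount (the 17M-pound
    UK figure quoted) or 4% of global annual revenue, whichever is higher."""
    return max(fixed_cap, 0.04 * global_revenue)

fb_revenue_2017 = 40_650_000_000  # USD, approximate; assumption for illustration

# 4% of ~$40.65B is about $1.6B, dwarfing the 500,000-pound maximum that
# actually applied under the Data Protection Act 1998.
assert gdpr_max_fine(fb_revenue_2017) > 1_600_000_000
```

For a small company, the fixed cap dominates instead, which is the point of the "whichever is higher" construction.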

EU – Top ENISA-Reported Incidents Involved E-Signature and E-Seal Creation

The EU Agency for Network and Information Security (“ENISA”) has published a report on security incidents reported by trust service providers (“TSPs”), pursuant to Article 19 of Regulation 910/2014. Other affected services included certificate revocation lists, verification of e-signatures and seals, and e-signature devices; root causes were third-party and system failures, human error and malicious actions (incidents varied in severity from significant (54%) and severe (23%) to disastrous (23%)). [ENISA – Annual Report Trust Services Security Incidents 2017]

Facts & Stats

AU – 245 Breach Notifications to Australian DPA from July to September

The Office of the Australian Information Commissioner (“OAIC”) received 245 breach notifications from July 1 to September 30, 2018. Malicious or criminal attacks caused 57% of reported breaches (phishing, credential theft, brute force, insider threats, social engineering), 37% resulted from human error (PI sent to the wrong recipient, loss of device or paperwork, insecure disposal, failure to redact), and system faults caused 6%; PI compromised included contact, identity and health information, tax file numbers and financial details. [OAIC – Notifiable Data Breaches Quarterly Statistics Report – 1 July – 30 September 2018]

EU – CNIL Publishes Statistical Review of Data Breaches Since GDPR

On October 16, the French Data Protection Authority (the “CNIL”) [here & wiki here] published a statistical review of personal data breaches during the first four months of the EU General Data Protection Regulation’s (“GDPR”) entry into application [in French here & Google English text translation here]. Between May 25 and October 1, 2018, the CNIL received 742 notifications of personal data breaches that affected 33,727,384 individuals located in France or elsewhere. Of those, 695 notifications were related to confidentiality breaches. The accommodation and food services sector is the sector in which the highest number of breaches were observed, with 185 notifications. This is due to a specific case, where a booking service provider was affected by a data breach. More than half of the notified breaches (421 notifications) were due to hacking via malicious software or phishing. 62 notified breaches were related to data sent to the wrong recipients, 47 notified breaches were due to lost or stolen devices, and 41 notified breaches were due to the unintentional publication of information. The CNIL also reported that it will adopt an aggressive approach when the data controller does not comply with its obligation to notify the breach within 72 hours after having become aware of it. Failure to comply with that obligation may lead to a fine of up to €10 million or 2 percent of the total worldwide annual revenues. Conversely, if the CNIL receives the notification in a timely manner, the CNIL will adopt an approach that aims at helping the professionals involved take all the necessary measures to limit the consequences of a breach. When necessary, the CNIL will contact organizations for the purposes of: 1) Verifying that adequate measures have been taken before or after the breach; and 2) Assessing the necessity to notify affected data subjects. [Privacy & Information Security Law Blog (Hunton Andrews Kurth)]


CA – Credit Card Purchases for Cannabis May Not Be Private

Federal legislation makes it legal for Canadians to enjoy cannabis in the privacy of their homes. That legislation, however, does not necessarily offer privacy protection for cannabis purchasers. Legal experts have raised concerns that credit card data may not be stored in this country and may be accessible to prying eyes in other countries. Credit card and other purchasing information stored outside of Canada, particularly in the United States, may be accessible by law enforcement there. This issue was recently raised in a new [October 16] guidance document released by the Office of the Information and Privacy Commissioner for British Columbia [see PR & PDF]. According to the guidance document, access to the personal information of cannabis users may be used by some countries to deny entry. It also points out that, in a digital age, the privacy issue is not moot. “Keep in mind,” it notes, “that storing data in the Cloud or in proprietary software means there is likely disclosure of that personal information outside of Canada. It is much more privacy protective to store personal information on a server located in Canada to prevent access by unauthorized third parties.” Mark Hayes, founder of Hayes eLaw LLP, an IP and technology firm in Toronto, says: “Until questions about the potential risks of using credit cards for cannabis purchases are resolved, purchasers may want to pay cash for cannabis purchases in provinces which allow in-person shopping and consider using only anonymous prepaid credit cards or gift certificates for online purchases.” [Canadian Lawyer | Warning: This Is Why You Should Never Buy Marijuana With A Credit Card In Canada | Cannabis IQ: Everything you need to know about pot and the border | Province pulls electronic ID scanners from cannabis stores | Privacy commissioner investigating personal data collection at cannabis stores | Think about your privacy before you purchase pot: federal watchdog | Can we Implement Random Cannabis Drug Testing? 
– (5 pg PDF here) | Cannabis Is Legal: Top Tips for Employers | Marijuana in the workplace: What your boss can and can’t do | Understanding Cannabis Rules for Employees who Travel to Canada or the United States for Business – (5 pg PDF here)]


WW – Treating ‘Genetic Privacy’ Like It’s Just One Thing Keeps Us from Understanding People’s Concerns

“Genetic privacy” is a complicated concept, and a new study published today in the journal PLOS One finds that decoding how people feel about the idea is equally complex. Researchers analyzed 53 studies (covering over 47,000 participants) that looked at how the general public, professionals, and patients viewed genetic privacy. The results paint a complex picture, says study author Ellen Clayton, a professor of law and health policy at Vanderbilt University. If you ask people “are you worried about genetic privacy?” most will say yes. But if you ask a patient whose genetic data was collected for medical testing about a more specific situation, like “are you concerned about sharing data with third parties?” the answers can vary widely. It’s simplistic to claim either that people “are” or “aren’t” concerned about genetic privacy when it’s a multifaceted term that can cover different (and often conflated) concepts like confidentiality, security, and control. “To get insight into how people actually feel, it’s important to ask them what particular outcomes they’re worried about,” says Clayton. For future studies and surveys, she recommends that instead of asking about “genetic privacy,” researchers should break the issue into different parts. That way, we’ll all have a better idea of what people want, so we can better respect their wishes. [The Verge | Genetic Privacy, Data Use by Employers, Insurers, Government Concern Study Participants | You Should Be Worried About Your DNA Privacy | How 23andMe thinks about genetic privacy in the age of forensic genealogy and Facebook’s woes | You don’t have to sequence your DNA to be identifiable by your DNA | New File Type Improves Genomic Data Sharing While Maintaining Participant Privacy | How your third cousin’s ancestry DNA test could jeopardize your privacy]

Health / Medical

US – FDA Issues Draft Guidelines for Medical Device Manufacturers

The US Food and Drug Administration provides draft nonbinding recommendations regarding the security of medical devices. The US FDA recommends devices be designed to protect critical functionality (even when security has been compromised), “deny by default” (i.e., generally reject all unauthorized connections), and detect, log and notify users of a potential cybersecurity breach; risk assessments must include a description of testing conducted on controls, and evidence of security effectiveness. When final, the guidance will supersede recommendations issued in 2014. [FDA – Content of Premarket Submissions for Management of Cybersecurity in Medical Devices – Draft Guidance for Industry and FDA Staff]
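The "deny by default" recommendation can be pictured as an explicit allowlist check. A minimal sketch under assumed names (the peer IDs and allowlist below are hypothetical, not drawn from the FDA draft):

```python
# Hypothetical illustration of a deny-by-default connection policy: anything
# not both authenticated and explicitly allowlisted is rejected.

ALLOWED_PEERS = {"infusion-console-01", "hospital-gateway"}  # invented IDs

def accept_connection(peer_id: str, authenticated: bool) -> bool:
    """Default outcome is rejection; acceptance requires passing every check."""
    if not authenticated:
        return False
    return peer_id in ALLOWED_PEERS

assert accept_connection("hospital-gateway", authenticated=True)
assert not accept_connection("hospital-gateway", authenticated=False)
assert not accept_connection("unknown-device", authenticated=True)
```

The design point is that an unknown or unauthenticated peer fails closed: there is no code path that accepts a connection the policy has not explicitly named.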

US – Risk Assessment: HHS Improves Security Analysis Tool

The US Department of Health and Human Services (HHS) announced changes to its security risk assessment (SRA) tool. Covered entities and business associates can use the tool to meet their obligation to conduct a thorough and accurate assessment of potential risks and vulnerabilities to the confidentiality, integrity and availability of ePHI; the assessment should cover all lines of business, and facilities and locations of the organization. [HHS – ONC and OCR Bolster the Security Risk Assessment (SRA) Tool with New Features and Improved Functionality]

US – New Guidance on Preparation and Response to Medical Device Cybersecurity Incidents

Recently, the MITRE Corporation, in collaboration with the U.S. Food and Drug Administration (FDA) announced the release of the Medical Device Cybersecurity Regional Incident Preparedness and Response Playbook [38 pg PDF]. The Playbook was designed to provide “tools, references, and resources” for Healthcare Delivery Organizations (HDOs) to better prepare for and respond to medical device cybersecurity incidents. The Playbook provides detailed insight and guidance for HDOs on, among other topics, how to prepare for, detect, analyze, contain, eradicate, and recover from “threats or vulnerabilities that have the potential for large-scale, multi-patient impact and raise patient safety concerns.” It is not an official FDA rule, regulation, or guidance. However, HDOs and medical device companies would do well to familiarize themselves with the Playbook and, wherever feasible, incorporate its recommendations into existing cyber incident response plans, since it is a real possibility that the agency may in the future consider failure to abide by the Playbook an aggravating factor warranting a less favorable administrative outcome. [DBR on Data Blog (DrinkerBiddle)]

Horror Stories

US – ERS Online Coding Error Exposes 1.25M Users to Health Data Breach

Recently reported health data breaches include those from: The Employee Retirement System of Texas, Yale University, North Carolina’s Catawba Valley Medical Center, The Children’s Hospital of Philadelphia and Texas-based FirstCare Health Plans: 1) In a statement on its website, The Employee Retirement System (ERS) of Texas [here] explained that a coding error on its password-protected ERS Online portal allowed certain members who logged in with their username and password to view other members’ information. ERS said that members would have to use a specific function to input search criteria in order to view other members’ information. ERS reported to OCR [here] that information on potentially 1.25 million people may have been exposed. Information that might have been exposed included first and last names, Social Security numbers, and ERS member identification numbers; 2) Yale University reported to OCR [here] on Oct. 17 an unauthorized paper disclosure that exposed PHI on 1,102 individuals. No additional information was provided. This comes after Yale admitted in July that a data breach occurred between 2008 and 2009 affecting 119,000 faculty, staff, and alumni. In a release, Yale said that attackers gained access to a database stored on a Yale server; 3) North Carolina-based Catawba Valley Medical Center (CVMC) announced Oct. 12 it suffered a phishing attack. CVMC told OCR [here] that 20,000 individuals may have been impacted by the breach between July 4 and Aug. 17. Information that might have been compromised included patient names, dates of birth, health information about services, health insurance information, and, for some, Social Security numbers; 4) The Children’s Hospital of Philadelphia (CHOP) reported to OCR [here] on Oct. 23 an email hacking incident that put the PHI of 5,368 individuals at risk.
In a press release, the hospital said that it discovered two email breaches that exposed PHI, including patient name, date of birth, and clinic information related to neonatal and/or fetal care provided at CHOP or at the Hospital of the University of Pennsylvania. The first breach, discovered on Aug. 24, occurred when an unauthorized user gained access to a CHOP physician’s email account. A second breach, discovered on Sept. 6, identified unauthorized access to an additional email account on Aug. 29. CHOP sent letters to potential victims on Oct. 23; and 5) Texas-based FirstCare Health Plans reported to OCR on Oct. 12 an email error exposing e-PHI on 8,056 individuals. In a press release, FirstCare said the breach may have compromised member name, identification number, treatment description, procedure costs, authorization number, and treating provider name. [Health IT Security]

US – Yahoo Agrees to $50M Settlement Package for Users Hit by Massive Security Breach

One of the largest consumer internet hacks has bred one of the largest class action settlements after Yahoo agreed to pay $50 million to victims of a security breach that’s said to have affected up to 200 million U.S. consumers and some three billion email accounts worldwide. [see wiki here] In what appears to be the closing move to the two-year-old lawsuit, Yahoo — which is now part of Verizon’s Oath business — has proposed to pay $50 million in compensation to an estimated 200 million users in the U.S. and Israel, according to a court filing [35 pg PDF here]. In addition, the company will cover up to $35 million in lawyer fees related to the case and provide affected users in the U.S. with credit monitoring services for two years via AllClear, a package that would retail for around $350. There are also compensation options for small business and individuals to claim costs for losses associated with the hacks. That could include identity theft, delayed tax refunds and any other issues related to data lost at the hands of the breaches. Finally, those who paid for premium Yahoo email services are eligible for a 25 percent refund. The deal is subject to final approval from U.S. District Judge Lucy Koh [wiki here] of the Northern District of California at a hearing slated for November 29. [TechCrunch | Yahoo Agrees to Pay $85M to Settle Consumer Data Breach Class Actions | Yahoo to pay $50M, other costs for massive security breach | Yahoo agrees to pay $50M in damages over biggest security breach in history | Yahoo must pay $50 million in damages for data breaches]

Internet / WWW

EU – Commission Provides Guidance on New Geo-Blocking Regulation

As part of the European Commission’s plan to create a unified “Digital Single Market” [see here & HL overview here & wiki here], in late September the Commission updated its detailed guidance [download 45 pg PDF Q&A here] on the Geo-blocking Regulation 2018/302 [goes into effect December 3, 2018 – see here, 15 pg PDF here & overview here]. The underlying aim of the regulation is to ban unjustified geo-blocking [wiki here] and other forms of discrimination based on customers’ nationality, place of residence or place of establishment within the digital internal market. Geo-blocking refers to the practice of redirecting Internet users to another website, or offering them different terms and conditions, on the basis of their IP address or other identifiers. Accordingly, the Geo-Blocking Regulation includes provisions addressing online discrimination occurring when one accesses a website (Article 3), buys goods or services online (Article 4), or wants to shop abroad using a means of payment denominated in euros (Article 5). The Q&A list provided by the Commission covers a comprehensive range of aspects. It is intended for traders seeking to comply with the new rules, as well as for consumers seeking more information about their new rights, and Member State authorities who will have to apply and enforce the dispositions. The questions cover both the substantive provisions and the enforcement tools put in place to achieve the goal of the Regulation. The guidance also features a section putting the regulation in context by exploring how the new piece of legislation fits in the broader e-commerce framework, including its interaction with the Regulation (EU) 2018/644 on cross-border parcel delivery services, the Consumer Rights Directive 2011/83/EU and specific VAT rules. [Global Media and Communications Watch (Hogan Lovells) | The Geo-blocking Regulation]

Online Privacy

WW – Google Will Now Take You Through Your Privacy Settings Step-by-Step

On October 31, Google introduced a handful of new security measures, starting with a risk assessment feature that requires JavaScript [wiki here] to be enabled in order to run. It has also leveled up its Security Checkup feature: once you’ve signed in, it will ask you to delete any apps it thinks are harmful, to cut off any devices you don’t use anymore, and it will now also let you know whenever you share any of your Google data with third-party apps. In addition, if the tech giant believes that your account has been compromised, it will automatically trigger a process that prompts you to perform a series of verifications. You’ll need to verify your settings and make sure nobody can access your account via a recovery phone number or email address, which means you have to secure your other accounts, as well. Google will then ask you to check your financial activities to make sure nobody made unauthorized charges to your credit card or Google Pay account. Finally, the company will ask you to review your Gmail and Drive data to check if anybody accessed or misused it. The process could save even those who aren’t that tech-savvy from getting their identities stolen. And by putting together a step-by-step process, Google is making it easier for even those who are tech-savvy to ensure that all aspects of their accounts are secure. [engadget]

WW – Privacy by Proxy? VPN Extensions Aren’t As Secure As Users Think

New research published this week by an ethical hacker named “File Descriptor” [Twitter] examined some of the most popular VPN extensions [difference between VPN & VPN extensions here] available to download, including ZenMate [here], uVPN [here], and DotVPN [here]. File Descriptor claims that “After several pentests and personal researches on VPN extensions almost all VPN extensions are vulnerable to different levels of IP leaks and DNS leaks. Ironically, although most of them are results of extensions’ misconfigurations, browsers are also responsible as there are a lot of pitfalls and misleading documentations on proxy configurations.” The researcher also noted that a number of VPN extensions were vulnerable to IP and DNS leaks through issues with misusing helper functions, whitelisting hostnames, unencrypted proxy protocols, and Chrome’s DNS prefetching. Ariel Hochstadt of VPNMentor echoed File Descriptor’s findings, telling The Daily Swig that extensions are “not safe as standalone software. Many times what VPN companies call ‘VPN extension’ is merely a limited proxy, and users should be concerned with that. I would say that if you are looking for a quick, one-click solution to change your IP to watch blocked content, for example, you can use an extension. But if it is privacy that you are worried about, it is not suffice.” [The Daily Swig (PortSwigger)]
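The DNS-leak mechanism described above comes down to where hostname resolution happens. A minimal conceptual sketch (function names and the tuple format are illustrative, not any real extension's API) of the difference between a leaky and a tunneled lookup:

```python
# Conceptual sketch of why a browser "VPN" extension that is really a
# limited proxy can leak DNS: if the client resolves the hostname locally
# before handing the request to the proxy, the local resolver (and the
# ISP) still sees the DNS query even though the traffic is proxied.
import socket

def connect_leaky(host: str, port: int):
    # DNS leak: resolution happens on the local resolver, outside the tunnel.
    ip = socket.getaddrinfo(host, port)[0][4][0]
    return ("CONNECT", ip, port)  # the proxy only ever sees a bare IP

def connect_tunneled(host: str, port: int):
    # No leak: the unresolved hostname is passed through to the proxy
    # (as in SOCKS5 with remote DNS), so resolution happens proxy-side.
    return ("CONNECT", host, port)
```

Chrome's DNS prefetching, mentioned in the research, has the same effect as the leaky variant: the browser resolves hostnames ahead of time, outside the extension's proxy path.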

WW – Study of Google Data Collection Comes amid Increased Scrutiny Over Digital Privacy

According to research by Douglas C. Schmidt, Cornelius Vanderbilt Professor of Engineering at Vanderbilt University, if you use an Android device with the Chrome browser running, Google knows whether you are traveling by foot or car, where you shop, how often you use your Starbucks app and when you’ve made a doctor’s appointment [and a whole lot more]. His study, commissioned by Digital Content Next, looked at Google’s data collection practices under a “day in the life” scenario of an Android phone user [see overview here & 55 pg PDF here]. It detailed data mining over a 24-hour period from an idle Android phone with Chrome running in the background. In the study’s scenario, a researcher created a new Google account as “Jane” and carried a factory-reset Android mobile phone with a new SIM card throughout a normal day. The smartphone running Google’s Android operating system and Chrome sent data to the company’s servers an average of 14 times an hour, 24 hours a day. “These products are able to collect user data through a variety of techniques that may not be easily graspable by a general user,” Schmidt concluded in the paper. “A major part of Google’s data collection occurs while a user is not directly engaged with any of its products,” Schmidt wrote. “Google collected or inferred over two-thirds of the information through passive means. At the end of the day, Google identified user interests with remarkable accuracy.” What qualifies as passive data? With Chrome running and location enabled, an Android phone is “pinged” throughout the day by other wireless networks, hot spots, cell towers and Bluetooth beacons. During a short 15-minute walk around a residential neighborhood, for example, Jane’s phone sent nine location requests to Google. The requests collected 100 unique identifiers from public and private Wi-Fi access points.
Schmidt also studied data gathering from all Google platforms and products, such as Android mobile devices, the Chrome browser, YouTube and Google Photos, plus the company’s publishing and advertising services, such as DoubleClick and AdWords. The study also compared data collection from an idle Android phone running Chrome with an idle iPhone running Apple’s operating system and the Safari browser. “I found that an idle Android phone running the Chrome browser sends back to Google nearly 50 times as many data requests per hour as an idle iOS phone running Safari,” Schmidt said. “I also found that idle Android devices communicate with Google nearly 10 times more frequently as Apple devices communicate with Apple servers. These results highlight the fact that Android and Chrome platforms are critical vehicles for Google’s passive data collection.” After the study’s release, Google questioned its credibility. “This report is commissioned by a professional lobbyist group, and written by a witness for Oracle in their ongoing copyright litigation with Google. So, it’s no surprise that it contains wildly misleading information,” the company said in a statement. Schmidt replied: “Google has not been able to identify any specific aspects of my report’s methods or conclusions as erroneous.” [Vanderbilt News (Vanderbilt University)]

Other Jurisdictions

US – The Rise of Digital Authoritarianism: Fake news, data collection and the challenge to democracy

Freedom House today released Freedom on the Net 2018: The Rise of Digital Authoritarianism [see overview & interactive map], the latest edition of its annual country-by-country assessment of online freedom. The report assesses internet freedom in 65 countries that account for 87% of internet users worldwide. The report focuses on developments that occurred between June 2017 and May 2018, though some more recent events are included. Online propaganda and disinformation have increasingly poisoned the digital sphere, while the unbridled collection of personal data is breaking down traditional notions of privacy. At the same time, China has become more brazen and adept at controlling the internet at home and exporting its techniques to other countries. These trends led global internet freedom to decline for the eighth consecutive year in 2018. Beijing took steps during the year to remake the world in its techno-dystopian image. Chinese officials have held trainings and seminars on new media or information management with representatives from 36 out of the 65 countries assessed by Freedom on the Net. China also provided telecommunications and surveillance equipment to foreign governments and demanded that international companies abide by its content regulations even when operating abroad. A proliferation of data leaks has underscored a pressing need to improve protections for users’ information and privacy. Both democracies and authoritarian regimes are instituting changes in the name of data security, but some initiatives actually undermine internet freedom and user privacy by mandating data localization and weakening encryption. In India, a massive data breach affecting 1.1 billion citizens [see coverage at ZDNet] reiterated the need for reforms to the country’s data protection framework, beyond an ineffective government proposal to require that data be stored locally.
Over the past 12 months, false claims and hateful propaganda helped to incite jarring outbreaks of violence against ethnic and religious minorities in Myanmar, Sri Lanka, India, and Bangladesh. One of the steepest declines in internet freedom occurred in Sri Lanka [see coverage at engadget], where authorities shut down social media platforms after rumors and disinformation sparked vigilante violence that predominantly targeted the Muslim minority. In India, internet users experienced an unprecedented number of shutdowns due in part to the spread of rumors on WhatsApp. In Egypt, a Lebanese tourist was sentenced to eight years in prison for “deliberately broadcasting false rumors” after she posted a Facebook video describing the sexual harassment she experienced while visiting Cairo [see coverage at Reuters]. In Rwanda, blogger Joseph Nkusi was sentenced to 10 years in prison for inciting civil disobedience and spreading rumors, having questioned the state’s narrative of the 1994 genocide and criticized the lack of political freedom in the country [see Human Rights Watch coverage here]. Key findings include:

  1. Declines outnumber gains for the eighth consecutive year;
  2. Internet freedom declines in the United States;
  3. Citing fake news, governments curb online dissent;
  4. Authorities demand control over personal data;
  5. More governments manipulate social media content;
  6. Internet freedom declines coincided with elections;
  7. Governments disrupted internet services for political and security reasons; and
  8. Digital activism fuels political, economic, and social change.

[Press Releases (Freedom House) | The global threat of China’s digital authoritarianism | China’s Web Surveillance Model Expands Abroad | Chinese-style ‘digital authoritarianism’ grows globally: study | China’s Internet Censorship Is Influencing Digital Repression Around the World, Report Warns]

Privacy (US)

US – FTC Announces Hearings on Consumer Privacy and Data Security

As part of its Hearings Initiative, the Federal Trade Commission will hold four days of hearings in December and February to examine the FTC’s authority to deter unfair and deceptive conduct in data security and privacy matters. The December hearings will focus on data security and will take place December 11-12, 2018 [Notice]. They will include five panel discussions and additional discussion of research related to data breaches and data security threats. The first day’s panel discussions will examine incentives to invest in data security and consumer demand for data security. Discussions on the second day will focus on data security assessments, the U.S. framework related to consumer data security, and the FTC’s data security enforcement program. Staff has already begun developing the agenda for the December data security hearing. The hearings on consumer privacy will take place in the same venue on February 12-13, 2019 [Notice]. They will provide the first comprehensive re-examination of the FTC’s approach to consumer privacy since 2012. The FTC is seeking comments from the public on what the agenda should include. To be included for consideration, the FTC is seeking comment by December 21 on specific questions to be discussed at the February event. In addition, FTC staff welcomes comments on both the data security and privacy hearings until March 13, 2019. [FTC Press Release | FTC Announces PrivacyCon 2019 and Calls for Presentations | Advocates push to beef up privacy regulator | FTC Commissioner Chopra Calls for Greater (and More Expensive) Enforcement]

Privacy Enhancing Technologies (PETs)

WW – New Signal Privacy Feature Removes Sender ID from Metadata

Signal app [here] is testing a new technique called “sealed sender” that’s designed to minimize the metadata that’s accessible to its servers. A beta release announced Monday will send messages that remove most of the plain-text sender information from message headers. It’s as if the Signal app were sending a traditional letter through the postal service that still included the “to” address but has left almost all of the “from” address blank. Like most messaging services, Signal has relied on the “from” address in message headers to prevent the spoofing of user identities and to limit spam and other types of abuse on the platform. Sealed sender, which puts most user information inside the encrypted message, uses two new mechanisms to get around this potential privacy risk: 1) Senders periodically retrieve short-lived sender certificates that store the sender’s phone number, public key, and expiration timestamp; and 2) Delivery tokens derived from the sender’s profile key are used to prevent abuse. Users who want to receive sealed-sender messages from non-contacts can choose an optional setting that doesn’t require the sender to present a delivery token. This setting opens a user up to the possibility of increased abuse, but for journalists or others who rely on Signal to communicate with strangers, the risk may be acceptable. Even with sealed sender, observers said, Signal will continue to see senders’ IP addresses. That information, combined with recipient IDs and message times, means that Signal continues to leave a wake of potentially sensitive metadata. Still, by removing the “from” information from the outside of Signal messages, the service is incrementally raising the bar.
[Ars Technica | Signal rolls out a new privacy feature making it tougher to know a sender’s identity | Apple’s 2018 MacBooks come a chip that protects against eavesdropping | Apple’s new T2 security chip will prevent hackers from eavesdropping on your microphone | Apple’s T2 security chip disconnects a MacBook’s microphone when users close the lid | Apple says its T2 chip can prevent hackers from eavesdropping through your MacBook mic]
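The two mechanisms described above (short-lived sender certificates and delivery tokens) can be sketched in miniature. This is not Signal's actual protocol or cryptography; the toy XOR-keystream cipher, the JSON envelope, and all field names are stand-in assumptions used only to show where the "from" information lives:

```python
# Toy sketch of the sealed-sender idea: the envelope carries only the
# recipient; the sender's identity and a short-lived certificate travel
# inside the encrypted body, and a delivery token derived from the
# sender's profile key lets the server accept the message without
# learning who sent it. Not real Signal code or crypto.
import hashlib, hmac, json, os, time

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR keystream cipher standing in for authenticated encryption."""
    stream, counter = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))  # XOR is its own inverse

def delivery_token(profile_key: bytes) -> str:
    # Hypothetical derivation: HMAC of a fixed label under the profile key.
    return hmac.new(profile_key, b"delivery", hashlib.sha256).hexdigest()

def seal(sender_id, cert, recipient_id, recipient_key, profile_key, text):
    body = json.dumps({"from": sender_id, "cert": cert, "msg": text}).encode()
    return {
        "to": recipient_id,                    # only routing info in the clear
        "token": delivery_token(profile_key),  # proves authorization, not identity
        "body": keystream_xor(recipient_key, body).hex(),
    }

def server_accepts(envelope, known_tokens):
    # The server checks the token against tokens its recipient's contacts
    # hold; there is no "from" field for it to read.
    return "from" not in envelope and envelope["token"] in known_tokens

def open_sealed(envelope, recipient_key):
    return json.loads(keystream_xor(recipient_key, bytes.fromhex(envelope["body"])))

# Demo: only the recipient can learn who the sender was.
recipient_key, profile_key = os.urandom(32), os.urandom(32)
cert = {"phone": "+15550100", "expires": int(time.time()) + 86400}
env = seal("alice", cert, "bob", recipient_key, profile_key, "hi")
```

The optional setting mentioned in the article corresponds to the server skipping the token check, which is why it increases exposure to abuse.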

WW – Accelerating the Future of Privacy through SmartData Agents

Imagine a future where you can communicate with your smartphone – or whatever digital extension of you exists at that time – through an evolved smart digital agent that readily understands you, your needs, and exists on your behalf to procure the things and experiences you want. What if it could do all this while protecting and securing your personal information, putting you firmly in control of your data? George Tomko Ph.D, Expert-in-Residence at the Privacy, Security and Identity Institute at the University of Toronto [here], Adjunct Professor in Computer Science at Ryerson University, and neuroscientist, believes the time is ripe to address the privacy and ethical challenges we face today, and to put into place a system that will work for individuals, while delivering effective business performance and minimizing harms to society at large. I had the privilege of meeting George to discuss his brainchild, SmartData: the development of intelligent agents and the solution to data protection. SmartData represents the evolution of Privacy by Design, shifting control from the organization and placing it directly in the hands of the individual (the data subject) [see here]. These ideas were published in SmartData: Privacy Meets Evolutionary Robotics, co-authored with Ann Cavoukian, former 3-term Privacy Commissioner in Ontario and inventor of Privacy by Design [wiki here]. This led to his current work, Smart Data Intelligent Agents, the subject of this article. [Forbes]


CA – Gov’t Failing to Protect Canadians from Cyber Threats: Senate Report

The federal government is failing to protect Canadians from increasingly sophisticated cyber attacks that have already victimized millions, according to a scathing Senate Committee on Banking, Trade and Commerce report released October 29 that calls for the creation of a minister of cyber security [see here: Exec Summary & 34 pg report]. In 2017 alone, over 10 million Canadians had their personal information compromised through targeted attacks and — more often — through cyber operations directed against businesses that hold Canadians’ private information, says the executive summary. Among the committee’s recommendations: 1) all levels of government must prioritize cyber security education as part of the national cyber security strategy. There should be a national cyber literacy program, led by the newly-created Canadian Centre for Cyber Security, to educate consumers and businesses about how to protect themselves; 2) Ottawa should create a new national centre of excellence in cyber security and expand two existing centres to promote university-level research and encourage Canadians to pursue careers in cyber security-related fields. The centres of excellence should be the Canadian Institute for Cybersecurity at the University of New Brunswick, the Cybersecurity and Privacy Institute at the University of Waterloo, and a third yet to be chosen in Western Canada.
They would join the Montreal-based Smart Cybersecurity Network (SERENE-RISC), which already receives funding as a centre of excellence; 3) the federal government should modernize PIPEDA, including empowering the Office of the Privacy Commissioner to make orders and impose fines against companies that fail to protect their customers’ information, and to allow information sharing about cyber threats within the private sector and between the private sector, government and relevant international organizations; 4) businesses should be given incentives to invest in cyber security improvements, for example, by making these investments tax deductible; 5) a new federal minister of cyber security should be created to co-ordinate cyber security efforts across all levels of government. The minister would have responsibility for the new Canadian Centre for Cyber Security — now overseen by the Defence department — and the RCMP’s National Cybercrime Co-ordination Unit; 6) Ottawa should create a federal expert task force on cyber security to provide recommendations regarding the national cyber security strategy that would establish Canada as a global leader in cyber security. The government released an update to its national security strategy in June; 7) the federal government should develop standards to protect consumers, businesses and governments from threats related to the Internet of Things devices; and 8) it should also develop a consistent set of leading cyber security standards that are harmonized with the highest international standards and would apply to all entities participating in critical infrastructure sectors. “Governments, businesses and individual Canadians each have a role to play in protecting the country from this cyber scourge,” says the report. “It should keep you up at night.” [IT World Canada]

US – Best Practices for Preventing and Responding to Incidents

The US Department of Justice, Cybersecurity Unit updates existing best practices on how to prevent falling victim to a cyber incident. The US DoJ Cybersecurity Unit recommends prior to an incident, organizations educate senior management on risks, identify company assets, and have appropriate authorizations in place; following an incident, organizations should enact their incident response plan, notify stakeholders (internal personnel, regulators and potential victims), and avoid using a compromised system for further communication. [Best Practices for Victim Response and Reporting of Cyber Incidents – Department of Justice, Cybersecurity Unit]

US Legislation

US – Sen. Ron Wyden Proposes Bill That Could Jail Executives Who Mishandle Consumer Data

Sen. Ron Wyden (D-OR) released an early draft of legislation today that would create substantially stiffer guidelines for the misuse of consumers’ data. Among other provisions, the bill suggests creating a penalty of 10 to 20 years imprisonment for senior executives who fail to follow new rules around data use. Called the “Consumer Data Protection Act” [see PR, 1 pg PDF summary, 2 pg PDF section overview & full 38 pg PDF text], it would give the FTC more authority and resources to police the use of data by adding a total of 175 new staff. Under the proposal, the FTC would also be allowed to fine companies up to 4% of revenue for a first offense. It would create a centralized Do Not Track list meant to let consumers stop companies from sharing their data with third parties, or from using it for targeted advertising. It would allow companies to block users who opt out and offer a paid version of the service in place of the tracking. Consumers could also ask to review and challenge the information collected on them. Companies that make more than $1 billion in revenue and that handle information on more than 1 million people, or smaller companies that handle information on more than 50 million people, would also be required to submit regular reports to the FTC that describe any privacy lapses. Failure to comply with the measure could lead to jail time. Wyden is accepting feedback on the bill at PrivacyBillComments@wyden.senate.gov. [The Verge | Senator Wyden wants to jail execs who don’t protect consumer data | Senator’s data privacy law draft could put CEOs in jail for lying | Sen. Ron Wyden Introduces Bill That Would Send CEOs to Jail for Violating Consumer Privacy | Oregon senator proposes privacy regulations with prison and revenue penalties for data misuse | California Consumer Privacy Act of 2018 – Full Text]

Workplace Privacy

UK – Court Confirms Vicarious Liability for Rogue Employee

The UK Court of Appeal considers the liability of Morrisons Supermarket PLC (“Morrisons”) for the actions of an employee. The employee’s actions (stealing employee data and disclosing it to third parties) were within the field of activities assigned by the company (accessing payroll information, copying data to a USB stick), and the company had an obligation to reasonably ensure the reliability of any employee with access to personal data. [WM Morrison Supermarkets PLC and Various Claimants – 2018 EWCA Civ 2339 – England and Wales Court of Appeal]



16-23 October 2018


WW – Israel’s Fingerprint Identification System Failing Border and Police Checks

A committee reviewing Israel’s Biometric Database Law says that the fingerprint identification system is suffering high rates of failure when used both at the country’s borders and by police. The committee produced a report which includes findings from a review of data provided by the Interior Ministry’s Population Registry, the National Biometric Database Authority, and police from mid-2017 to mid-2018. The Population and Immigration Authority responded that fingerprint identification failures are not causing problems, as face matching is working at the predicted rates, and passport holders are identified face to face by a border control employee in the event that biometric identification fails. It also noted that some of the failures are caused by incorrect finger placement. The report also found that the Biometric Database Authority has been deleting data when required to do so by law. The management of the database has been contested in court after it was allegedly performed by a private contractor for two years, in violation of the laws establishing it. Meanwhile, police were blocked from direct access to the database by a court ruling earlier this year. [Biometric Update]

US – TSA Proposes Increased Use of Biometrics for Security

A 23-page report released by the U.S. Transportation Security Administration outlines proposals to how passengers are screened before boarding and states that, among the changes, biometric technology will eventually replace passports and other forms of identifications. In the report, TSA Administrator David Pekoske said, “In addition to addressing key operational needs, implementing the Biometrics Roadmap will secure TSA’s position as a global leader in aviation security and advance global transportation security standards.” [The New York Times]

CN – Shanghai Airport Introduces Fully Automated Facial-Recognition Kiosks

Shanghai Hongqiao International Airport unveiled self-service kiosks fueled by facial-recognition technology for flight and baggage check-in, security clearance and boarding. The system is being touted as the first fully automated operation in China but has raised privacy concerns for some. Maya Wang, senior China researcher for Human Rights Watch, said, “Authorities are using biometric and artificial intelligence to record and track people for social control purposes,” adding, “We are concerned about the increasing integration and use of facial recognition technologies throughout the country because it provides more and more data points for the authorities to track people.” The system is currently available to those with a Chinese identity card, and it is expected that the system will be introduced to Beijing and Nanyang city. The Province

Big Data / Data Analytics

CA – StatsCan Promises More Detailed Portrait of Canadians with Fewer Surveys

Canadians are increasingly shunning phone surveys, but they could still be providing Statistics Canada with valuable data each time they flush the toilet or flash their debit card. The national statistics agency laid out an ambitious plan to overhaul the way it collects and reports on issues ranging from cannabis and opioid use to market-moving information on unemployment and economic growth. Statscan is reaching agreements with other government departments and private companies in order to gain access to their raw data, such as point-of-sale information. According to agency officials, such arrangements reduce the reporting paperwork faced by businesses while creating the potential for Statscan to produce faster and more reliable information. Other examples of how Statscan is focusing on the databases of other organizations include a partnership with the Canada Border Services Agency, where border-crossing photos of vehicle licence plates and traveller declarations of items that have been purchased now inform Statscan’s tourism statistics. The officials said Statscan works closely with Canada’s Privacy Commissioner as it seeks new sources of data, and they said the agency has always gone to great lengths to ensure that no information is released that could identify individual Canadians. However, some companies have expressed concern about Statscan’s requests for customer data such as phone records, credit bureau reports and electricity bills, according to Tobi Cohen, a spokesperson for the Privacy Commissioner. Ms. Cohen said the office is in ongoing discussions with Statscan about this direction. [The Globe and Mail]

WW – ICDPPC Establishes Working Group on Ethics and Data Protection in AI

At the 40th International Conference of Data Protection and Privacy Commissioners in Brussels this week, the French data protection authority, the CNIL, the European Data Protection Supervisor and Italian DPA, the Garante, co-authored a new declaration on ethics and data protection in artificial intelligence. Along with the declaration’s six principles, the ICDPPC, “in order to further elaborate guidance to accompany the principles,” will establish “a permanent working group addressing the challenges of artificial intelligence development,” an ICDPPC release states. The working group “will be in charge of promoting understanding of and respect for the principles of the present resolution, by all relevant parties involved in the development of [AI] systems, including governments and public authorities, standardization bodies, [AI] systems designers, providers and researchers, companies, citizens and end users” of AI systems. [ICDPPC Resolution]

WW – IAF, Hong Kong DPA Release Ethical Accountability Report, Framework

The Information Accountability Foundation, together with Hong Kong Privacy Commissioner for Personal Data Stephen Kai-yi Wong, has released an “Ethical Accountability Framework for Hong Kong China,” as well as “a model assessment and oversight process framework for the cascading of ethics from shared values to workable business process,” according to a blog post from the IAF’s Martin Abrams. Wong commissioned the IAF to work with nearly two dozen Hong Kong–based businesses to develop the report and framework. “The challenge,” Abrams writes, “was to create a compelling and implementable framework for doing the right thing, for all stakeholders, in a legal system with an ombudsman structure for data protection.” Though the project was conducted in Hong Kong, Abrams says the “framework has practical use and implications for all privacy regimes.” The IAF is also looking for feedback on the documents. Full Story

WW – IAPP, UN Release Joint Report on Building Ethics into Privacy Frameworks

This week, privacy and data protection commissioners from more than 100 countries congregated in Brussels for the 40th annual International Conference of Data Protection and Privacy Commissioners. The debate focused on digital ethics, including topics that exceed the traditional remit of privacy professionals. Indeed, privacy officers and data protection regulators are increasingly called upon to serve as a moral compass for their organizations and societies. Today, the IAPP releases a joint report with the United Nations Global Pulse titled “Building Ethics Into Privacy Frameworks For Big Data & AI.” IAPP.org

WW – Intel Releases Paper on Privacy and AI

Intel released a paper titled “Protecting Individuals’ Privacy and Data in the Artificial Intelligence World” during the 40th International Conference of Data Protection and Privacy Commissioners. As tech companies use automation to make decisions in real time, Intel released its observations about what that could mean for privacy. The company states that increased automation should not result in fewer privacy protections and that companies should place a focus on transparency around their algorithms. Intel offered six policy recommendations for privacy and artificial intelligence, including comprehensive legislative and regulatory initiatives that are technology neutral, support for the free flow of data, and an emphasis on risk-based accountability approaches. Intel Blog

WW – Google Releases Training Module on Machine-Learning Fairness

Google has released a 60-minute training module designed to help machine-learning practitioners consider fairness as they develop machine-learning models. The module was created by Google’s engineering-education and machine-learning fairness teams and is part of the tech company’s Machine Learning Crash Course. The course teaches users about the different types of human biases that can surface in machine-learning models through data, the best ways to spot those biases in data before a model is trained, and methods to evaluate a model’s predictions for both overall performance and bias. Google Blog

WW – Analyst Names Digital Ethics and Privacy Among Top Trends for 2019

A Gartner analyst has named digital ethics and privacy as one of the top 10 strategic technology trends for 2019. While familiar technology trends also topped the list, including artificial intelligence–driven development, blockchain and autonomous things, digital ethics and privacy cut across trends. Gartner noted digital ethics and privacy have become a “growing concern for individuals, organisations and governments,” and wrote, “People are increasingly concerned about how their personal information is being used by organisations in both the public and private sector, and the backlash will only increase for organisations that are not proactively addressing these concerns.” TechCrunch

WW – 2018 IAPP-EY Privacy Governance Report

The IAPP and EY released the fourth annual IAPP-EY Privacy Governance Report, the authoritative look at how the job of privacy is done, with documentation of average budgets, staff sizes, program priorities, and much more. This year, the responses include a much greater proportion from the EU, and the report focuses on the response to the EU General Data Protection Regulation, which has had every bit the impact many predicted. The data shows organizations expect to spend an average of $3 million in building compliance programming and adapting products and services. Further, the average privacy team has grown to 10 full-time staffers. In total, the data and analysis stretch to 132 pages, presented in an easy-to-digest format and brimming with benchmarking data you can use to guide your own privacy program. IAPP.org

US – ITI Releases New Privacy Framework

The Information Technology Industry Council has released its “Framework to Advance Interoperable Rules (FAIR) on Privacy.” The framework offers recommendations to give data subjects more control and a stronger understanding of the ways their information is used. Measures are also included to ensure companies focus on responsible data use and transparency. “This framework moves us toward that goal by enhancing transparency, increasing individual control, establishing company accountability, promoting security, and fostering innovation. We expect this framework will continue to take shape as we work alongside lawmakers and consumers to develop meaningful privacy legislation in the United States and across the world,” ITI President and CEO Dean Garfield said. ITIC.org


CA – Supreme Court Asked to Tell Toronto Police to Respect Rights of Minorities

The Supreme Court of Canada is being asked to tell police to respect the privacy rights of minorities in poor neighbourhoods in a case in which three Toronto officers entered a public-housing backyard without permission and found a man, Tom Le, who is Asian-Canadian, with a gun and cocaine. Mr. Le was convicted of gun and drug offences at trial, and the Ontario Court of Appeal upheld his conviction in a 2-1 ruling, saying that even if the police had no legal right to enter the backyard, Mr. Le had no “reasonable expectation of privacy” as a guest, and his rights therefore were not violated. He appealed to the Supreme Court, which heard the case on Friday morning. To Mr. Le’s lawyers, and several intervenors, the case raises questions about police conduct toward minorities, while to the Ontario government’s prosecutors, it is about protecting communities. [The Globe and Mail]

CA – OPC Goes to Court to Determine if Canada Can Force Google to Delete History

Following public consultations, the OPC has taken the view that PIPEDA provides for a right to de-indexing – which removes links from search results without deleting the content itself – on request in certain cases. This would generally refer to web pages that contain inaccurate, incomplete or outdated information. However, there is some uncertainty in the interpretation of the law. In the circumstances, the most prudent approach is to ask the Federal Court to clarify the law before the OPC investigates other complaints into issues over which the office may not have jurisdiction if the court were to disagree with the OPC’s interpretation of the legislation. A Notice of Application, filed in Federal Court, seeks a determination on the preliminary issue of whether PIPEDA applies to the operation of Google’s search engine. In particular, the reference asks whether Google’s search engine service collects, uses or discloses personal information in the course of commercial activities and is therefore subject to PIPEDA. It also asks whether Google is exempt from PIPEDA because its purposes are exclusively journalistic or literary. [Free Speech/Techdirt]

CA – OPC Shares Views on Bill C-58, An Act to amend the Access to Information Act and the Privacy Act

The Privacy Commissioner of Canada, Daniel Therrien, appeared before the Standing Senate Committee on Legal and Constitutional Affairs [notice here, watch here] to discuss Bill C-58 [here], An Act to amend the Access to Information Act and the Privacy Act and to make consequential amendments to other Acts. In his remarks, he shares his concerns about the bill in its current form and how it disrupts the current balance between access and privacy. He claims that “by granting order-making powers to the Information Commissioner [here], including in respect of personal information, Bill C-58 risks giving access pre-eminence over privacy.” He concludes his remarks by saying: “In my view, the best way to ensure a balance between access to information and privacy rights would be to grant me order-making powers, as my colleague will have. However, in the absence of equal powers, the solutions we have jointly proposed represent a step towards maintaining this balance.” [Office of the Privacy Commissioner of Canada]

CA – OPC Wants “Under the Hood” of the Communications Industry

The Standing Senate Committee on Transport and Communications [here] met [October 16] for the fifth time [here & watch here] to continue its examination of how the three federal communications statutes (the Telecommunications Act [here], the Broadcasting Act [here], and the Radiocommunication Act [here]) can be modernized to account for the evolution of the broadcasting and telecommunications sectors in recent decades. With the testimony of the Privacy Commissioner, Daniel Therrien [remarks here], it became obvious, to us anyway, that the many consultations undertaken by government lately, including the Broadcasting and Telecommunications Legislative Review (BTLR) [here], will require significant co-ordination when the recommendations leading to the various pieces of legislation are written. With the end of the National Digital and Data Consultations on October 12, 2018, it is likely that one outcome will be changes to the Personal Information Protection and Electronic Documents Act (PIPEDA) [here & wiki here]. These changes could impact the Telecommunications Act. Of course, the Telecommunications Act predates PIPEDA, but as the Privacy Commissioner mentioned, PIPEDA is a law of general application across all sectors, and it allows the CRTC to go further in its application, for example requiring express consent as opposed to implied consent. Therrien also said: “Under the current laws, all regulatory agencies are prohibited from sharing information with others, including our sister regulatory agencies, which somewhat impedes the completeness of the studies that we make. We can have discussions at the broad policy level with the CRTC and the Competition Bureau, but when we investigate specific complaints we cannot share with them—although it would be very productive—the product of our investigations because we are legally prohibited … So, the information for which I would like more flexibility—and I think the sister agencies are in agreement with that—would be information that we collect in the course of our work.” Finally, the Commissioner recommended that he be given more powers that would apply to the communications industry. [CARTT]

CA – Democratic Institutions Minister Rejects Call to Subject Political Parties to Privacy Laws

The federal government is rejecting opposition calls urging it to accept amendments to the electoral reform bill to subject political parties to federal privacy laws. Democratic Institutions Minister Karina Gould [here] appeared Monday as the last scheduled witness [here, watch here] before the Procedure and House Affairs Committee [here] dives into a marathon effort this week to review and vote on more than 300 amendments to Bill C-76, which makes changes to the Canada Elections Act. While Ms. Gould did not state categorically that she opposes such a change to the bill, it was strongly implied in her comments. “I would like to see a broader study of privacy and political parties. I think that it’s something that is really important,” she said. “I think it does require a deeper dive.” Both the federal Privacy Commissioner [here, PR here] and the head of Elections Canada [here] have called for privacy laws to apply to political parties. The Commons access to information, privacy and ethics committee [here] made an all-party recommendation [here & 56 pg PDF here] in June that extending privacy laws to political parties was “urgently” required. [The Globe and Mail | MPs begin deliberating hundreds of amendments to key elections bill ] See also: Ireland’s Data Protection Commission has published guidance on elections and canvassing activities.

CA – Nova Scotia Accepts Some of Ombudsman’s Recommendations Over Minister’s Use of Private Email

Nova Scotia’s Health Department has accepted five of the six recommendations made by the province’s information and privacy commissioner [here] in a report [Review Report 10-05] that blasted the government for violating its own Freedom of Information and Protection of Privacy (FOIPOP) Act after it failed to assist then-Global News reporter Marieke Walsh with her multiple FOIPOP requests for emails sent by Leo Glavine during his time as Minister of Health and Wellness. The report also criticized the office of Premier Stephen McNeil for appearing to interfere in Commissioner Tully’s attempts to interview Glavine’s former executive assistant. A letter filed with the Office of the Information and Privacy Commissioner by the Health Department and the Department of Internal Services reads in part: “The Departments appreciate the opportunity to clarify the efforts taken in this case and, through the response to the recommendations, demonstrate its continued commitment to transparency.” The province agrees with the recommendation that emails sent to or from personal email accounts that reside on government servers are within the custody of the province, but disputes that this means staff should be able to request a minister search his personal email for records relating to government business. [Global News]


US – U.S. to Help Define New Int’l Standard for Consumer Privacy by Design

A coalition of U.S. tech companies and government agencies is joining forces with 11 other countries to develop international consumer privacy-by-design standards as part of ISO Project Committee 317. The U.S. will work with the U.K., China, Canada and other countries to create the global standard and will be represented by its Technical Advisory Group. OASIS See also: the National Telecommunications and Information Administration has extended its comment period for feedback on an approach to consumer privacy. The deadline for comment is now Nov. 9. More | The European Data Protection Supervisor published an opinion on a recent legislative package, “A New Deal for Consumers.” EDPS Giovanni Buttarelli said the EU needs to adopt a big-picture approach to addressing harms and called for cooperation between consumer law and data protection rules. Europa.eu

WW – Apple Lets You Download All Your Data

Apple fulfilled its promise to offer a data download service for its users in the U.S., and now you can find out what Apple’s got on you with a few simple clicks [Apple privacy portal here – Apple ID required]. Here’s how you take a peek at what Apple retains about you: 1) Go to the Apple privacy portal and sign in. You’ll have to enter an authentication code if you’ve enabled 2FA. If you don’t have two-factor authentication, you should; 2) Once you’re in, you’ll see a few options; you want to click “Obtain a copy of your data.” You can choose which services you want to request the data from, but you might as well just grab it all; and 3) It’s possible that you might have to do some extra verification and answer some questions, but mostly you’ll just have to wait. It can take up to seven days for the information to be compiled and sent in a zip file to your email address. Along with the EU and the U.S., the tool should now be available in Australia, Canada, Iceland, Liechtenstein, New Zealand, Norway, and Switzerland. Otherwise, you can submit a form request for your data here. Apple uses encryption to anonymize your data for its own product analysis. What you’re requesting is the data that can specifically be tied to your account and device. The new portal accompanies some routine changes to Apple’s Privacy Policy that get everything in line with the improvements to iOS 12 and macOS Mojave. It’s a good policy and worth perusing if you want to see how it should be done. Just always remember Apple is not your friend. [GIZMODO | Apple Launches Portal for U.S. Users to Download Their Data | How to download your data from Apple | Apple enables data downloads for US customers]

WW – Apple Launches Privacy Website that Lets You Find All the Data the Company Has On You

Apple is moving forward several privacy upgrades, including launching a portal that allows customers to search and see what kind of data the company has kept on them. The privacy portal was already tested in the European Union in May, coinciding with the EU’s launch of its privacy law, the General Data Protection Regulation (GDPR). The information collected may include data such as calendar entries, photos, reminders, documents, website bookmarks, App Store purchases or support history of repairs to your devices, among other items. The search function, which provides customers a report on their tracked data, fits into a broader narrative as Apple seeks to differentiate itself as a company that makes its money from selling hardware, rather than targeted ads based on the data of its customers. In addition to the search portal, Apple has launched several enhanced privacy initiatives with its new website and new iOS 12 operating system for iPhones and iPads. The company is touting its “Intelligent Tracking Prevention” technology, essentially a way to stop the kind of data collection that causes consumers to see ads for products related to their recent purchases or web searches. Apple has also made changes standardizing certain settings to prevent so-called “machine fingerprinting” or “browser fingerprinting,” a way that a person’s individual device can be identified using its unique settings and preferences, like special fonts, even if the customer has blocked other forms of data tracking. There are future plans for privacy as well, according to the company, including end-to-end encryption for its Group FaceTime video chat product, which will launch soon and will allow up to 32 people to join a group conversation. Encryption will also protect the new “Screentime” feature, so users will be able to keep information about how often they use their devices private.
[CNBC] See also: Apple says ‘dangerous’ Australian encryption laws put ‘everyone at risk’
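The “browser fingerprinting” that Apple’s countermeasures target can be sketched in a few lines. This is an illustrative toy, not any real tracking library’s code: the attribute names below are hypothetical stand-ins for the dozens of signals real fingerprinting scripts collect (canvas rendering, audio stack, plugin lists, and so on). The key idea is that each attribute alone is common, but the hashed combination is often unique enough to re-identify a device across sites without cookies.

```python
import hashlib
import json

def browser_fingerprint(attributes: dict) -> str:
    """Combine device/browser attributes into a stable identifier.

    Canonical JSON serialization (sorted keys) ensures the same
    attributes always produce the same hash, so the fingerprint
    survives across visits and across sites.
    """
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical attribute set a tracking script might read
device = {
    "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14)",
    "timezone": "America/New_York",
    "screen": "2560x1600",
    "installed_fonts": ["Avenir", "Helvetica", "SF Pro"],
    "language": "en-US",
}

print(browser_fingerprint(device))  # same inputs -> same fingerprint
```

Apple’s defense is to standardize the values these scripts can read (e.g., reporting only the built-in font list), so that millions of devices collapse onto the same fingerprint and the identifier loses its discriminating power.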

WW – Apple Introduces Privacy Portal to Give Users Access to Their Data

Apple rolled out several privacy upgrades, including a privacy portal that will allow customers to understand what personal data the company has stored. Apple also introduced enhanced privacy initiatives, including the new iOS 12 operating system for iPhones and iPads and Intelligent Tracking Prevention technology. The company also announced plans to add encryption to its group FaceTime video chat and the new Screentime feature. Apple CEO Tim Cook is scheduled to keynote this year’s International Conference of Data Protection and Privacy Commissioners, which takes place next week in Brussels. CNBC

WW – Google, Mozilla Each Roll Out New Privacy Features

Google announced it has been working on a new tool that will allow users to understand what data the company collects and the options available to control it. The new feature, called Your Data, will allow users to better understand why and what Google collects. To start, Google will introduce privacy controls within its search, allowing users to directly review and delete the activity log. Meanwhile, Mozilla is experimenting with a new privacy offering. While the company generates money through search-ad deals, Mozilla is offering a virtual private network service to users who want more privacy, offsetting lost revenue with a $10 monthly fee. (Registration may be required to access this story.) WIRED

US – Cook Endorses US Federal Privacy Law at ICDPPC

While the theme of this year’s 40th annual International Conference of Data Protection and Privacy Commissioners may be “Debating Ethics,” Apple CEO Tim Cook centered his keynote address firmly around privacy law: “We at Apple are in full support of a comprehensive federal privacy law in the United States.” It was part of a pointed and definitive endorsement of privacy by the world’s largest company, which drew a round of applause from the collected data protection authorities and observers. In his Brussels speech, Cook, a long-time data privacy advocate, said that modern technology has resulted in a “data-industrial complex” where personal information is “weaponized against us with military efficiency,” in a way that affects not just individuals but whole sections of society. “Platforms and algorithms that promised to improve our lives can actually magnify our worst human tendencies,” said Cook. “Rogue actors and even governments have taken advantage of user trust to deepen divisions, incite violence, and even undermine our shared sense of what is true and what is false. This crisis is real. It is not imagined, or exaggerated, or crazy.” Cook praised Europe’s “successful implementation” of the GDPR and said that “it is time for the rest of the world … to follow your lead.” He outlined four key areas that he believes should be turned into legislation: the right to have personal data minimized; the right for users to know what data is collected on them; the right to access that data; and the right for that data to be kept securely.
“Technology’s potential is and always must be rooted in the faith people have in it.” He then followed up his speech with a tweet that asked, “It all boils down to a fundamental question: What kind of world do we want to live in?” [Source]


CA – Cannabis IQ: Everything You Need To Know About Pot and the Border

Recreational marijuana is now legal in Canada, but what does that mean for our relationship with our closest neighbour and biggest trading partner? Issues linked to the Canada-United States border were top of mind for many officials as Canada prepared for legalization. How would it affect screening at the crossings? What about pot tourists coming here from the states? And how would American officials react to Canadians who admitted they’d consumed the drug in the past? In most of the U.S., the recreational use and possession of marijuana remains illegal. Here’s a rundown of everything Canadians need to know about marijuana and the border, drawn from nearly two years of in-depth reporting by Global News. Border Security Minister Bill Blair, who until recently served as the government’s point-person on pot, advised Canadians to always be truthful with border guards, even if it means being turned away and banned for life. But that’s “dangerous” advice, according to U.S. lawyer Len Saunders. He recommended that Canadians refuse to answer questions about past marijuana use. That will probably lead to them being turned back to Canada on that one occasion, but won’t lead to more serious consequences. There are also concerns surrounding what could happen if consumer data (credit card purchase records, online ordering to a home address, etc.) ends up on servers in the United States. There’s little to stop that data from making its way to U.S. border officials, privacy experts say, which could lead to some uncomfortable questions at the border crossing. In fact, how a credit card marijuana purchase will appear on your bank statement could put you in an impossible position — admit to marijuana use and be banned for that, or deny it and be banned for lying. And, as always, trying to bring the drug over either side of the border is a big no-no. 
[Global News] See also: Think about your privacy before you purchase pot: federal watchdog | Can we Implement Random Cannabis Drug Testing? – (5 pg PDF here) | Cannabis Is Legal: Top Tips for Employers | Marijuana in the workplace: What your boss can and can’t do | Understanding Cannabis Rules for Employees who Travel to Canada or the United States for Business – (5 pg PDF here)


US – BTA & PCSP Introduce the Educator Toolkit for Teacher and Student Privacy

Schools offer a rich opportunity to trade educational programs and assistance for a deep deposit of data from students (and teachers) who may not even be aware that Big Data is gathering data points by the bushel from their simplest activities. Industry-leading Summit Learning, a provider of computer-run education programs, has admitted that it shares the data it gathers with 18 “partners.” The practice is not new; generations of test takers filled out the personal information pages of the PSAT [wiki], and in 2013 the College Board and ACT were sued over the practice of selling student information [see 12 pg PDF]. One of the items that sent West Virginia teachers out on strike last year was a rule that all teachers would carry a device [Humana Go365 App here] that monitored their movement and activity [the requirement was abolished in April 2018 – here]. Last year saw incidents of a hacker group holding school district data hostage and backing up its demands by sending threatening emails to parents. In response to all of this, the Badass Teachers Association [here – really, that’s what they’re called] and the Parent Coalition for Student Privacy [here] have issued the Educator Toolkit for Teacher and Student Privacy [see PR here & 55 pg PDF here]. There is a full chapter laying out the pertinent privacy laws as they currently stand (if you think you know FERPA [here & wiki here], you may be unaware of the loopholes that have been added over the past decade). Is that data wall in your child’s classroom legal? Probably not. If the teacher wants to use a free app to monitor student behavior and communicate with families, is that okay? The answer turns out to be complicated. And what are the rules for that survey the school just handed out to all students? The laws have become complex, and most parents and teachers did not go to law school. The toolkit provides some simple guidance.
The toolkit offers ten teacher rules for using social media (or not) [and] also provides practical tips for protecting privacy, and for advocating for better protections for all. An appendix shows the results of a survey given to teachers about technology in their schools. Almost half of those responding said their school uses an online app or program to track student behavior. And well over half reported that their school requires them to use certain computer based programs and materials. The toolkit certainly doesn’t have all the answers, but if you are a teacher or a parent, particularly one who’s just starting to realize there’s something to worry about in our new data era, this is a good place to start. The toolkit was supported by grants from the Rose Foundation for Communities and the Environment, the American Federation of Teachers, and the NEA Foundation. [Forbes | Stronger Data-Privacy Protections for Students and Teachers Needed, Report Argues | Teachers And Privacy And Telling Tales Out Of School | Stephen Miller’s Old Teacher Suspended for Calling Him a ‘Loner’ Who Ate Glue

EU – Irish DPC Launches Data Privacy Education Pilot Program for Children

The Irish Data Protection Commission has launched pilot data privacy education modules in three schools within the country. The modules are targeted toward three different age groups: 9 to 10, 14 to 15, and 16 and older. Data Protection Commissioner Helen Dixon said she plans to host a public consultation later this year to discuss the benefits of data privacy education in schools and that initial feedback to the pilot program could inform a national lesson plan. “We want to see if children understand the concept of personal data; how they engage with online services, the notices they are given and what they understand the risks to be; whether they understand the rights they have and can they exercise them by themselves,” Dixon said. Siliconrepublic.com


WW – The Threat of Quantum Computers for the Internet

An article in The Economist examines how quantum computers will affect the internet and when they will become available. While some venture to guess such a computer will arrive sometime between 2030 and 2040, the National Institute of Standards and Technology has already begun a competition to select quantum-resistant cryptographic algorithms, with conclusions expected in 2024. The article states, “All this means that quantum-proofing the internet is shaping up to be an expensive, protracted and probably incomplete job.” (Registration may be required to access this story.) The Economist

EU Developments

EU – Jourová, Ross Release Joint Statement on Privacy Shield Review

EU Justice Commissioner Věra Jourová and U.S. Secretary of Commerce Wilbur Ross released a joint statement on the second annual review of the EU-U.S. Privacy Shield agreement. The EU and U.S. noted three new members have been confirmed to the Privacy and Civil Liberties Oversight Board and Manisha Singh’s designation as Privacy Shield ombudsperson. “In the wake of recent privacy incidents involving the personal data of Europeans and Americans, the U.S. and EU reaffirm the need for strong privacy enforcement to protect our citizens and ensure trust in the digital economy,” the statement reads. “The Commerce Department will revoke the certification of companies that do not comply with Privacy Shield’s vigorous data protection requirements.” The European Commission is expected to publish a report on Privacy Shield by the end of the year. Europa.eu Coverage: US taking privacy shield deal seriously, EU officials say | EU-U.S Privacy Shield: U.S making serious efforts to comply with data pact | Commerce chief Ross, after review with EU, says U.S. to focus on appointment of privacy ‘ombudsperson’ (subscribers only) | FTC’s Chopra Seeks Privacy Shield Probes in Data Enforcement | Privacy Shield review: Prepare for the worst | U.S. making serious efforts to comply with EU data rules: EU officials | Ding ding! Round Two: Second annual review for transatlantic data flow deal Privacy Shield | Outcome of Privacy Shield Review Uncertain, Despite U.S. Steps Toward Compliance | Europe and US lock horns on transatlantic privacy

EU – CNIL Publishes Initial Analysis on Blockchain and GDPR

Many questions surround blockchain’s [wiki & beginners guide] compatibility with the EU General Data Protection Regulation (GDPR). The French data protection supervisory authority (the CNIL) has recently published its initial thoughts on this topic [PDF in French], providing some responses and practical recommendations on how the use of a blockchain may be made compatible with the GDPR and, more generally, data protection law, taking into account the “constraints” imposed by the technology. The guidance covers the following four topics: 1) What are the solutions for a responsible use of blockchain involving personal data?; 2) How can risks to data subjects be minimized when the processing of their personal data relies on a blockchain?; 3) How can the effective exercise of data subjects’ rights be ensured?; and 4) What are the security requirements? Although this is only a preliminary analysis, it is certainly interesting to know the CNIL’s position on this topic, and to see that its approach is rather pragmatic and takes into account the constraints imposed by blockchain technology. The CNIL will continue its reflection on blockchain and is likely to publish additional guidelines in the future. It has already announced that it will work on this topic with the other authorities in order to adopt a solid and common approach. It will also liaise with other national regulators, such as the AMF, to lay the foundations of an inter-regulatory approach that will give the various stakeholders a better understanding of the requirements applicable to blockchain. [Privacy Matters Blog (DLA Piper) | CNIL Publishes Initial Assessment on Blockchain and GDPR – Privacy & Information Security Law Blog (Hunton Andrews Kurth)]

WW – Denham Named Chair of ICDPPC

U.K. Information Commissioner Elizabeth Denham has been named the new chair of the International Conference of Data Protection and Privacy Commissioners. Denham said the current age of borderless data flows has made this a critical time for global unification on data protection and privacy. “My vision for the ICDPPC is to lead a decade of global data protection,” Denham said. “A decade when data protection and privacy by design become mainstream aspects of the digital economy, safeguarding democratic governance and ensuring protection for society’s vulnerable groups, including young people.” ICO.uk


CA – Nova Scotia Set to Replace Compromised FOIPOP Website

Nova Scotia is set to replace the program behind a data breach that exposed personal information such as social insurance numbers, birth dates and personal addresses to the general public. The Freedom of Information and Protection of Privacy Portal (FOIPOP) website, which was originally breached between March 3 and March 5, was taken down on April 5 when officials with the Department of Internal Services — which is responsible for the FOIPOP website — were first informed by a provincial employee that it was possible to inadvertently access documents through the portal. More than 7,000 documents were inappropriately downloaded as a result of the breach, and 369 of them contained “highly sensitive” personal information. While a portion of Nova Scotia’s FOIPOP system was brought back online on Sept. 5 — 152 days after being taken offline — it only included the ability to download previously completed FOI requests [see PR here & coverage here]. The provincial government says it expects to issue a Request for Proposals (RFP) in the first quarter of 2019 for an AMANDA 7 [see here & here] replacement. Internal emails between the government and Unisys, the company tasked with maintaining the government’s online services, indicated that extensive changes were needed to fix the core code of AMANDA 7 and remove the possibility of another security breach. [Global News]

Health / Medical

US – OCR to Propose Rulemaking on ‘Good Faith’ Disclosure for Patient Care

The U.S. Department of Health & Human Services’ Office for Civil Rights is drafting a notice of proposed rulemaking on “good faith” disclosures of patient data by health care providers in patient emergencies. Delivering the keynote at the Safeguarding Health Information: Building Assurance Through HIPAA Security conference, OCR Director Roger Severino explained that such disclosures could be made without patient consent. Meanwhile, the Centers for Medicare & Medicaid Services announced that Healthcare.gov suffered a data breach that may have impacted as many as 75,000 people who were receiving help with getting health insurance coverage. [HealthITSecurity] Separately, Aetna has reached settlements with several state attorneys general for disclosing the HIV statuses of 12,000 patients in violation of HIPAA. [HealthITSecurity]

US – Health Care CISO: Start Protecting Patient Privacy at Home

Health care professionals should think beyond merely protecting the organization and start protecting patients’ privacy at home. “At some point, I’m going to have [to] start thinking about how to protect patients in their home,” Christiana Care Health System Chief Information Security Officer Anahi Santiago said. “My information security program is not going to just be about the data center or the cloud but an extension into the patients’ homes. So, we can be responsible for protecting them wherever they use technology.” She added, “The patients are going to be driving the decision when it comes to their care, how they communicate, and the technology they want to use.” [HealthITSecurity]

CA – Former Employee Snooped on Health Records of More than 1,400 People

Alberta may need new ways of preventing information in electronic health records from falling into the wrong hands, the province’s privacy commissioner says in a new report written by Chris Stinner, a manager of special projects and investigations at the Office of the Information and Privacy Commissioner [PR & 26 pg report]. The report concluded that Alberta Health Services (AHS) failed to ensure privacy training and proper oversight of a former typist and medical secretary at a psychiatric hospital who improperly looked at the medical records of 1,418 patients over 12 years. “The findings from this investigation suggest it is well past time to consider whether the current approach to safeguarding health information made available through Netcare, as implemented by AHS in co-operation with Alberta Health, is adequate,” Information and Privacy Commissioner Jill Clayton wrote in a preamble to the report. Clayton is now considering whether she should initiate a wider review of Alberta Netcare, an electronic medical record system that gives 48,946 health-care workers access to diagnoses, treatment, and medical images for patients’ physical and mental health. In August 2015, AHS terminated the Alberta Hospital employee who broke the privacy rules. However, Stinner’s report said her coworkers reported her suspected misuse of the Netcare system to AHS managers four times in the 17 months before she lost her job. The first three times, managers neglected to check Netcare data logs to see how the worker was using the system, Stinner said. Stinner recommended AHS review privacy training for all staff, review how it investigates privacy breaches, and revisit how it audits employees’ use of Netcare. The law requires AHS to monitor how employees use the electronic records. 
[Edmonton Journal | AHS blamed for breach that saw thousands of patient files improperly accessed | AHS failed to protect health information, privacy commissioner finds | Alberta Health Services rebuked for failing to protect health records]
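The failure described above was procedural: the access logs existed, but managers repeatedly skipped reviewing them. A minimal sketch of the kind of routine log review the report recommends — flagging employees whose chart views far exceed what their role would explain — might look like this (field names and the threshold are illustrative, not Netcare’s):

```python
def flag_unusual_access(access_log, threshold=50):
    """Toy audit check over an EHR access log.

    access_log: list of (employee_id, patient_id) view events.
    Returns employees who viewed more distinct patient records than
    the threshold. Field names and the threshold are illustrative.
    """
    seen = {}
    for employee, patient in access_log:
        seen.setdefault(employee, set()).add(patient)
    return [emp for emp, patients in seen.items() if len(patients) > threshold]

# A clerk who opened 120 distinct charts stands out against a nurse
# who opened two.
log = [("clerk-7", f"patient-{i}") for i in range(120)]
log += [("nurse-3", "patient-1"), ("nurse-3", "patient-2")]
print(flag_unusual_access(log))  # ['clerk-7']
```

Even a check this crude, run routinely, would have surfaced the pattern the coworkers reported years before the termination.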

US – FDA Issues Draft Guidance for Cybersecurity Management in Medical Devices

A draft of updated premarket guidance from the U.S. Food and Drug Administration shows that manufacturers should prepare a “cybersecurity bill of materials” before marketing medical devices. The guidance would require manufacturers to produce a list of the components that could be susceptible to vulnerabilities. FDA Commissioner Scott Gottlieb said, “Because of the rapidly evolving nature of cyber threats, we’re updating our premarket guidance to make sure it reflects the current threat landscape so that manufacturers can be in the best position to proactively address cybersecurity concerns when they are designing and developing their devices.” GovInfoSecurity

AU – My Health Record Privacy Amendments ‘Woefully Inadequate’: Labor

After carefully reading and considering 31 public submissions on the My Health Record privacy amendments, as well as three further documents from community health organisations, the Senate Community Affairs Legislation Committee’s report on its inquiry has made just one solitary recommendation. “The committee recommends the Bill be passed,” it reads. The committee didn’t come up with a single suggested improvement to the hastily written My Health Records Amendment (Strengthening Privacy) Bill 2018, which is intended to allay the privacy concerns that have led to 900,000 Australians opting out of the centralised digital health records system. The Bill only addresses the two most prominent concerns, however. If passed, it would grant individuals the ability to delete their records completely rather than merely making them inactive, and tighten the restrictions on law enforcement agencies’ access to an individual’s health records. While the Labor senators on the committee supported the recommendation to pass the Bill, they also called it “woefully inadequate,” writing that “the inquiry has revealed a range of serious flaws in the current legislation that are not addressed by the government’s Bill.” The Labor senators said they would move further amendments, including to ensure that My Health Record can never be privatised or commercialised, or used by private health insurers; that employees’ right to privacy is protected in the context of employer-directed healthcare; and that vulnerable children and parents, such as those fleeing domestic violence, are protected. Originally scheduled to have ended in mid-October, the opt-out period for My Health Record has been extended to November 15. Labor has called for it to be extended indefinitely. The comprehensive Senate inquiry into the My Health Record system is now due to report this Wednesday, October 17. [ZDNet | With one month left to opt out of My Health Record, privacy concerns remain]

Horror Stories

WW – Facebook Says Hackers Accessed Sensitive PII on 29 Million Users

Last month, Facebook disclosed a massive security vulnerability that it claimed affected some 50 million login tokens. Facebook now believes the number of accounts impacted to be closer to 30 million [blog notice]. For 400,000 of the accounts, which the attackers used to seed the process of gathering login tokens, personal information such as “posts on their timelines, their lists of friends, Groups they are members of, and the names of recent Messenger conversations” (and, in one instance, actual message content) was compromised. Of the 30 million ensnared in the attack, Facebook believes that for around half, names and contact information—meaning phone numbers, email addresses, or both—were visible to the attackers; 14 million of that pool had that same information scraped as well as myriad other personal details, which Facebook believes could include any of the following: “[U]sername, gender, locale/language, relationship status, religion, hometown, self-reported current city, birthdate, device types used to access Facebook, education, work, the last 10 places they checked into or were tagged in, website, people or Pages they follow, and the 15 most recent searches.” Facebook believes only 1 million of the total compromised accounts had no personal information accessed whatsoever. [Gizmodo | Facebook hackers accessed more private information than previously revealed | Chilling new details reveal intimate personal data stolen by Facebook hackers | How Facebook Hackers Compromised 30 Million Accounts | How to find out if yours was one of the unlucky hacked Facebook accounts]

US – Yahoo Agrees to Pay $85M to Settle Data Breach Lawsuits

Yahoo has agreed to pay $85 million to settle class-action lawsuits related to its 2013 and 2014 data breaches. Yahoo users will receive $50 million, and $35 million will go toward legal fees. U.S. and Israeli Yahoo users who had an account between Jan. 1, 2012, and Dec. 31, 2016, are eligible for the settlement. Small-business owners with Yahoo accounts can also claim money from proceedings. “We are pleased that we were able to reach a settlement with Yahoo, which would provide relief to impacted users and ensure that Yahoo improves its security practices going forward,” Morgan & Morgan Lead Counsel John Yanchunis said in a statement. U.S. District Court Judge Lucy Koh is expected to rule on the settlement Nov. 29. The Mercury News

US – Millions of Phone Numbers, Strategy Documents Exposed by Data Leak Affecting Tea Party Super PAC

Internal documents belonging to the Tea Party Patriots Citizens Fund, a Republican super PAC, were publicly exposed online as a result of a misconfigured database, including material involving the 2016 U.S. presidential race and call lists containing the names and phone numbers of more than half a million people, a cybersecurity firm said. UpGuard, a Silicon Valley-based cybersecurity firm that made the discovery, said its researchers came across a publicly available database in August containing over 2 gigabytes of files belonging to the Tea Party Patriots Citizens Fund, or TPPCF, a federal super PAC that has raised and spent millions for conservative causes since its founding in 2013. Among the trove of exposed files were call lists containing the names and phone numbers of more than 527,000 individuals, as well as “strategy documents, call scripts, marketing assets and other files revealing a focused effort to politically mobilize U.S. voters,” according to UpGuard. The files were found on a misconfigured Amazon Web Services S3 “bucket,” or cloud server, and were eventually made private after being brought to the super PAC’s attention. [The Washington Times]

WW – Dating App Leaks Users’ Data

The entire user database of Donald Daters, a new online dating app for supporters of U.S. President Donald Trump, has leaked online. The app touts that it wants to help “make America date again,” but its database, which included usernames, profile pictures, device types, and private messages, as well as access tokens, was accessible from a public data repository. After being alerted to the issue, Emily Moreno, founder of the app and a former aide to Sen. Marco Rubio, R-Fla., said, “We have taken swift and decisive action to remedy the mistake and make all possible efforts to prevent this from happening again.” TechCrunch

Identity Issues

CA – No Privacy Concerns with ID Scanners at P.E.I. Cannabis Stores, Says Government

The P.E.I. government is defending its use of ID scanners at its new cannabis retail stores, insisting they’re not being used to collect private information from customers and are an important tool to flag fake IDs. P.E.I. Information and Privacy Commissioner Karen Rose said the commission had received information from a member of the public “concerned about the collection of personal information by the recently opened cannabis outlets.” Rose said she will ask the liquor commission what personal information is being collected by these outlets and their authority for collecting it. She will also inquire “about their compliance with the FOIPP Act, including the security measures which are in place to protect personal information.” The P.E.I. Cannabis Management Corporation said that it does not retain any data and that the ID scanners are not connected to the internet, operating essentially as standalone devices. The scanner is an industry standard used in other jurisdictions to validate a wide variety of national and international identification cards. The practice of confirming valid ID cards for everyone, even people who appear to be much older than the legal age for purchasing cannabis, will continue as well. [CBC News | Privacy commissioner investigating personal data collection at cannabis stores]

CA – Update: P.E.I. Pulls Electronic ID Scanners from Cannabis Stores

Cannabis stores across the province of Prince Edward Island will no longer use electronic ID scanners. The decision comes after some customers questioned what information was being collected, and how it was being used. The concerns prompted an ongoing investigation from P.E.I.’s information and privacy commissioner. Cannabis Management Corporation said the scanners were meant to safeguard against underage purchases and fake IDs. “They were not meant to retain or track any data, but an IT specialist examined the scanners today and found some data was being kept for 24 hours inside the device,” it said. “This data was immediately wiped and settings were changed so as not to keep data in the future.” Finance Minister Heath MacDonald said the scanners will be gone for good, unless there’s some other need or reason to bring them back. [CBC News]

Internet / WWW

WW – Gartner Picks Digital Ethics and Privacy as a Strategic Trend for 2019

Gartner has put businesses on watch that, as well as dabbling in the usual crop of nascent technologies, organizations need to be thinking about wider impacts next year — on both individuals and society. Digital ethics and privacy has been named one of Gartner’s top ten strategic technology trends for 2019 [PR here, blog post here & infographic here]. It writes: “Any discussion on privacy must be grounded in the broader topic of digital ethics and the trust of your customers, constituents and employees. While privacy and security are foundational components in building trust, trust is actually about more than just these components. Trust is the acceptance of the truth of a statement without evidence or investigation. Ultimately an organisation’s position on privacy must be driven by its broader position on ethics and trust. Shifting from privacy to ethics moves the conversation beyond ‘are we compliant’ toward ‘are we doing the right thing?’” [TechCrunch]

Law Enforcement

US – Police Officers Rely on Social Media to Conduct Covert Investigations

Kashmir Hill reports on the growing use of undercover police officers to comb through the social media profiles of unsuspecting users to discover illegal activity. Hill writes that little has been done to curtail law enforcement’s use of undercover online surveillance. A Freedom of Information Act request to more than 50 police departments across the U.S. showed that while many have social media policies, most fail to address covert investigations. Hill writes that “the unregulated nature of undercover social media police work leads to disturbing outcomes,” adding that while it may be a useful tool, officers “shouldn’t be secretly friending people for indefinite amounts of time, keeping them under permanent suspicion.” The Root

US – Law Enforcement Robots Begin Patrolling New York Neighborhoods

Law enforcement robots have started patrolling several neighborhoods in New York, as well as at LaGuardia Airport. Each robot contains five cameras, one of which uses thermal-imaging technology. The data gathered by the patrol is communicated to an internet-based portal accessed by local security and law enforcement. The robots can also observe pedestrians on sidewalks, record license-plate numbers, and detect cellphone serial numbers. Knightscope CEO William Santana Li said, “This is a crazy combination of artificial intelligence, self-driving autonomous technology, robotics, and analytics in something that’s actually useful for society.” CBS Local

CA – K9 Drug Units Must Adapt to New Pot Laws

As legalization looms, K9 units across the country face a problem: their dogs are outdated. Drug-sniffing dogs undergo training from a very young age to be able to detect a wide variety of drugs, including cannabis, which will be legal in Canada on Oct. 17. And while some dogs have been forced into early retirement, many will remain in their jobs, raising questions for legal experts concerned that law-abiding citizens might be stopped and searched by police based on an alert for a perfectly legal substance. Some organizations said they’ll be totally unaffected by legalization. Since crossing the border with cannabis will remain illegal without a permit, the Canada Border Services Agency said all of its drug-sniffing dogs will remain in the same role. When cannabis was illegal, police had reasonable grounds to search a person if a dog smelled cannabis on them. Now, Lewin said, though cannabis-related offences will still exist, the waters are muddied. Since dogs don’t distinguish their alerts based on specific drugs, police won’t know whether a dog is alerting them to the presence of fentanyl or a joint. Toronto cannabis lawyer Harrison Jordan said he expects to see court challenges where a dog alerts its handler to the presence of a drug that turns out to be legal cannabis and the officer finds a different illegal item, such as a handgun. Will that charge hold up in court, since the initial search was for a legal substance? “It really depends on the reasonable grounds that they have,” Jordan said. Toronto Star


US – Facebook, Google Hit With Lawsuits for ‘Secret’ Location Tracking

Facebook and Google have both been hit with lawsuits claiming that the Silicon Valley giants secretly track their users’ locations against their will and use the information to pad their advertising businesses. The class-action complaint against Facebook, which was filed by Brett Heeger in San Francisco federal court, said the social network tracks its users even after they’ve opted out of its “Location History” feature. “Facebook secretly tracks, logs, and stores location data for all of its users–including those who have sought to limit the information about their locations that Facebook may store in its servers by choosing to turn Location History off,” the suit said. “Because Facebook misleads users and engages in this deceptive practice, collecting and storing private location data against users’ expressed choice, Plaintiff brings this class action on behalf of himself and similarly situated Facebook users.” Facebook pushed back against the lawsuit, saying its location-tracking policy has always been transparent. The lawsuit follows a similar complaint against Google, which was filed Oct. 12 in San Francisco federal court. The suit claims that Google “intentionally provided inaccurate instructions” for its users to turn off its own “Location History” feature. “Google explicitly represented that its users could prevent Google from tracking their location data by disabling a feature called ‘Location History’ on their devices. Google stated: ‘With Location History off, the places you go are no longer stored.’ This statement is false,” the lawsuit claimed. “Turning off the ‘Location History’ setting merely stops Google from adding new locations to the ‘timeline’ accessible by users. In secret, Google was still tracking, storing, and monetizing all the same information.” Instead, users have to navigate a labyrinth to reach the correct “Web & App Activity” page to turn off location tracking — a page “Google’s instructions intentionally omit all references to,” according to the class-action complaint. The suit points to an Aug. 13 report from the Associated Press that brought Google’s tracking policies into question. Following the AP’s report, Google updated its location-tracking policy to “make it more consistent and clear,” the company told TheWrap in August. [The Wrap]

Online Privacy

EU – Twitter Faces Investigation by Privacy Watchdog Over User Tracking

Ireland’s Data Protection Commission has launched an investigation into Twitter after the company refused a data access request concerning its collection of location data from users. The complaint was brought by privacy researcher Michael Veale, who alleges the company turned down his request for information about the data Twitter collects through shortened Twitter links. When users tweet a link to a web address on the platform, Twitter applies its own technology to shorten the URL into a t.co format. Mr Veale believes the company could be collecting information such as timestamps and the devices being used, from which it could be possible to figure out a person’s location. Ireland’s privacy watchdog received the complaint in August. Twitter’s refusal of Mr Veale’s request to access information could count as a violation of the European Union’s General Data Protection Regulation, the privacy framework that came into effect in May. Twitter says it uses the t.co domain to allow the sharing of long URLs without exceeding the 280-character limit of a tweet, to measure information such as how many times a link has been clicked, and “as a quality signal for surfacing relevant, interesting tweets.” It is also part of the company’s service to protect users from malicious sites, with the URL-shortening application used to assess each link against a list of potentially dangerous web pages. The Telegraph
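The privacy concern here is mechanical: any link-shortening service sits between the reader and the destination, so every click necessarily passes through the shortener’s servers and can be logged. A minimal sketch of what such a redirect endpoint could record (names and structure are hypothetical, not Twitter’s actual implementation):

```python
import time

# Hypothetical mapping of short codes to destination URLs.
SHORT_LINKS = {"abc123": "https://example.com/article"}

click_log = []  # in a real service this would be a database


def resolve(short_code, client_ip, user_agent):
    """Resolve a shortened link, logging click metadata as a side effect.

    Illustrates why a URL shortener can double as a tracker: the
    redirect request reveals who clicked, when, and on what device,
    even though the user only asked for the destination page.
    """
    destination = SHORT_LINKS[short_code]
    click_log.append({
        "short_code": short_code,
        "timestamp": time.time(),  # when the link was clicked
        "ip": client_ip,           # coarse location can be inferred
        "user_agent": user_agent,  # device/browser fingerprint
    })
    return destination  # the HTTP layer would issue a 301/302 here


print(resolve("abc123", "203.0.113.7", "Mozilla/5.0 (iPhone)"))
```

This is the crux of Veale’s complaint: the metadata is collected as a by-product of the redirect, whether or not the user is aware of it, which is why access requests about it matter under the GDPR.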

Other Jurisdictions

WW – ICDPPC Releases Road Map on Future of Conference

The International Conference of Data Protection and Privacy Commissioners has released the “Resolution on a Roadmap on the Future of the International Conference.” The ICDPPC launched a survey in 2017 on the future size and membership of the conference. After roundtables and public consultations were held, the ICDPPC released the road map to highlight the key trends and demands in order to transform the conference from “an annual meeting to an effective network of privacy and data protection authorities.” The road map is co-authored by the Office of the Privacy Commissioner of Canada and France’s data protection authority, the CNIL. Co-sponsors of the road map include DPAs from Italy, Albania, Argentina, Germany and Poland. PrivacyConference2018

Privacy (US)

US – FTC Looks Back on 20 Years of COPPA

In a blog post marking the occasion, the U.S. Federal Trade Commission looks back at 20 years of the Children’s Online Privacy Protection Act, noting, “Many of the kids the law was originally designed to protect are now parents themselves.” The FTC’s Peder Magee notes the agency “continues to be committed to rigorous COPPA enforcement” and that the law has “responded to developments in technology.” Magee says the FTC also “seeks new ways to ensure verifiable parental consent” and that self-regulation plays an important role in the ecosystem. “So far,” he points out, “the FTC has approved seven Safe Harbor Programs that companies can work with to ensure their practices are up to scratch.” FTC.gov

Privacy Enhancing Technologies (PETs)

WW – Privacy by Design Guidance Book Published

This year’s IAPP Privacy Engineering Section Forum was sold out. As part of a dedicated response to covering the topic, the IAPP released “Strategic Privacy by Design,” a new book by R. Jason Cronk. Written from a practitioner’s perspective, this is the first IAPP book to get into the details of how privacy by design works, with dozens of sample scenarios, workflows, charts, and tables. While this book is not written specifically for the GDPR, it can be used as a process for data protection by design and default, and it is invaluable for building better processes, products, and services that consider privacy as a design requirement. IAPP Store See the Privacy Engineering Program website and Kicking off the NIST Privacy Framework: Workshop #1 – video | commentary: A Framework for Online Privacy and

WW – Tim Berners-Lee Launches New Project on Data Privacy

Tim Berners-Lee, the inventor of the World Wide Web, is trying to figure out how to keep your private information from advertisers’ prying eyes. He teamed up with a group of experts, including folks from MIT, and started Inrupt, a start-up whose open-source project, Solid [here], aims to achieve that lofty goal. Solid accumulates all your data into what its creators call a “Solid POD,” a repository of all the personal information you want to share with advertisers or apps, with a clear and understandable permission system. You decide which apps get your data and which do not. Furthermore, when using apps that support Solid (say, your fitness app), you won’t need to enter any data — just allow or disallow access to the Solid POD, and the app will do the rest on its own. While this is helpful, and it’s really cool that it simplifies personal-data management, the truth is that another, much more potent solution already exists. Every day this solution gains traction in the developer community, and many of its features are already being embedded in financial and other institutions worldwide. It’s called distributed ledger technology (DLT) [wiki here]. DLT-based apps (also called “dApps”) are superior to Solid for three reasons: 1) dApp data is spread across hundreds, if not thousands, of different nodes (users, servers, etc.); 2) dApps enable users not only to decide with whom to share data, but also to earn value by doing so, thanks to a special utility token system that rewards users with tokens for specific actions; and 3) since your data on DLT is encrypted, you can share segments of it with advertisers and service providers while still remaining anonymous if you so choose. My biggest complaint about Solid remains its centralization. DLT circumvents that by providing a robust, nearly unhackable system, where personal data can truly remain hidden and untouched by those without the necessary permissions — be they hackers or greedy advertisers. 
I may sound like I’m bashing Berners-Lee’s brainchild, but it’s quite the opposite: I want both to succeed. In fact, I propose a Solid POD based on DLT. That way it would be truly decentralized, with all the capabilities distributed ledger technology has to offer, and the potential to upgrade the internet to 2.0. [MarketWatch]


CA – StatsCan Survey: One in Five Businesses Hit by Cyberattacks Last Year

More than one in five Canadian companies say they were impacted by a cyberattack last year, with businesses spending $14 billion on cybersecurity — $8 billion on cybersecurity staff and contractors, $4 billion on related software and hardware and $2 billion on other prevention and recovery measures — as they confront greater risks in the digital world, according to a new Statistics Canada survey [see blog post here, report here & Infographic here]. The most common suspected motive was an attempt to steal money or demand a ransom payment, according to the survey. Theft of personal or financial information was less typical — less than one-quarter of the cyberattacks — though it was the most cited reason for investing in cybersecurity, StatCan said. Only 10% of businesses affected by a cyberattack reported it to law enforcement agencies last year, StatCan said. Large businesses — those with 250 or more employees — were more than twice as likely as small ones — between 10 and 49 employees — to be apparent targets, according to the report. It said the attacks resulted in an average of 23 hours of “downtime” per company in 2017. Data for the survey — the first of its kind in Canada — were collected between January and April 2018, with a sample size of 12,597 businesses and a response rate of 86%. [The Toronto Star]

US – Security Leaders Will Need to Protect Patient Privacy at Home

Healthcare security leaders need to think beyond protecting the organization to protecting patient privacy and data security at home in the coming years, observed Christiana Care Health System [here] CISO Anahi Santiago [here]. “At some point, I’m going to have to start thinking about how to protect patients in their home. My information security program is not going to just be about the data center or the cloud but an extension into the patients’ homes. So, we can be responsible for protecting them wherever they use technology,” Santiago told a panel at the HIMSS Healthcare Security Forum. “The patients are going to be driving the decision when it comes to their care, how they communicate, and the technology they want to use,” she said. Healthcare information security is a patient safety issue, she stressed. “As we think about the next generation of security, we need to bake security into the fabric of the organization, as opposed to putting it in after the fact,” she added. Santiago said that providers need to automate more of their security tasks to keep up with threats. Organizations should automate menial tasks that take up a lot of time, such as researching phishing attacks. [HealthIT Security]

CA – Three Quarters of Canadian SMBs Don’t Have Patching Policy: Survey

Small and mid-sized Canadian companies still have a long way to go to beef up their defences against cyber attacks, if a newly released Canadian Internet Registration Authority (CIRA) survey of people with responsibility over IT security decisions is representative [see PR here & survey report here]. Of the 500 business owners and employees who manage information technology questioned, 71% said their firm did not have a formal patching policy. In addition, only 54% of small businesses said their firm provides cybersecurity training for employees. CIRA staff noted that 82% of respondents from mid-sized firms (over 250 employees) said their company has a training program. Meanwhile, 78% of respondents were confident in their level of cyber threat preparedness. 40% of respondents said their firm experienced a cyber attack that staff had to respond to in the last 12 months; 10% experienced 20 or more attacks. The survey also showed 67% of respondents outsource at least part of their cybersecurity footprint to external vendors. Almost 90% (88%) of respondents were concerned with the prospect of future cyber attacks, and 28% suggested they will add cybersecurity staff in the next year. Among respondents, 24% said no one in their firm has primary responsibility over cyber security. Another 18% said their firm has one person responsible for those functions. [ITWorld Canada]

CA – 38% of Canadian Businesses Unaware of PIPEDA: CIRA

Within its Fall 2018 Cybersecurity Survey, the Canadian Internet Registration Authority revealed 38 percent of Canadian businesses said they are unfamiliar with the Personal Information Protection and Electronic Documents Act. The lack of PIPEDA awareness comes as 59 percent of businesses state they store customers’ personal information, and 40 percent claim they have suffered a cyberattack within the past 12 months. The CIRA survey also found 54% of small businesses provide cybersecurity training to their employees, and 88% of respondents were concerned about future cyberattacks. “Training and awareness are critical to ensuring your business is cyber-secure,” CIRA Chief Security Officer Jacques Latour said in a statement. “No matter how great your IT team is, anyone with a network-connected device can be the weak point that brings your business down.” [MobileSyrup]

WW – Study Expects 247% Increase in Third-Party Attacks Over Next Two Years

Opus, a provider of global compliance and risk management solutions, partnered with research firm ESI ThoughtLab, WSJ Pro Cybersecurity, and other cybersecurity organizations to launch The Cybersecurity Imperative, a study benchmarking the cybersecurity practices and performance of more than 1,300 organizations globally. The study found attacks on and through third-party partners, customers and vendors to be the fastest-growing threat, and predicts attacks on partners and vendors will grow 284% over the next two years, while attacks through vendors will increase 247%. Opus Vice President of Innovation and Alliances Dov Goldman said that as companies increasingly rely on vendors, they expose themselves to greater cybersecurity risk, adding, “Companies must support digital innovation with the tools and business practices to manage rising information security and privacy risks, especially those from third parties.” [The Associated Press]

US – Convinced a Cyberattack Is Looming, Many Firms Still Don’t Prepare

Many organizations think experiencing a cyberattack is inevitable, but a majority are not taking adequate steps to protect themselves, according to a report from insurance firm The Travelers Companies Inc. [here] The firm commissioned Hart Research [here] to conduct a national online survey of 1,201 business decision makers in June 2018, and found more than half (52 percent) of respondents think suffering an attack is inevitable [see PR here & 1 pg PDF infographic here]. Despite this, 55 percent have not completed a cyber risk assessment for their businesses; 62 percent have not developed a business continuity plan; 63 percent haven’t completed a cyber risk assessment on vendors who have access to their data; and 50 percent do not purchase cyber insurance. The percentage of respondents who think the current business environment is riskier continues to decrease, according to the report. In 2018 it was 36%, compared with 41% in 2016 and 48% in 2014. Overall, the concern about cyber security risk is second only to medical cost inflation. But it is the top concern of large businesses and in sectors such as technology, banking and professional services. [Information Management]

US – FTC Publishes New Materials on Cybersecurity for Small Business

The FTC launched new cybersecurity resources for small businesses – you’ll find them at FTC.gov/SmallBusiness. This new national cybersecurity education campaign grew out of discussions we had last year with small business owners across the country about cybersecurity challenges. The campaign is co-branded with the National Institute of Standards and Technology (NIST), the Department of Homeland Security (DHS), and the Small Business Administration (SBA). The new materials include fact sheets, videos and quizzes on these topics:

  • Cybersecurity Basics – [here];
  • Understanding the NIST Cybersecurity Framework – [here];
  • Physical Security – [here];
  • Ransomware – [here];
  • Phishing – [here];
  • Business Email Imposters – [here];
  • Tech Support Scams – [here];
  • Vendor Security – [here];
  • Cyber Insurance [here];
  • Email Authentication – [here];
  • Hiring a Web Host – [here]; and
  • Secure Remote Access – [here].

The simple format delivers information in a way that will make it easy for you to talk about cybersecurity with your employees, vendors, and others involved in your business. [FTC Blog]

CA – Canadian Businesses Spent $14B on Cybersecurity in 2017: Survey

A survey from Statistics Canada finds Canadian businesses spent $14 billion on cybersecurity in 2017. The survey also reveals more than one in five businesses suffered a cyberattack last year, but only 10% of those businesses reported the incident to law enforcement. Of the $14 billion Canadian businesses devoted to cybersecurity, $8 billion went to the addition of staff and contractors, $4 billion went to software and hardware, and $2 billion was spent on recovery and prevention measures. Statistics Canada polled 12,597 businesses for the report and received responses from 86 percent of their targets. [Financial Post]

Smart Devices / IoT

US – IoT Security Bill Requires Devices to Ship with Unique Passwords

On September 28, 2018, the Information Privacy: Connected Devices bill was signed into law. Effective January 1, 2020, the law requires that Internet of Things (IoT) devices ship with unique passwords instead of a common default password.

  • nextgov: In California, It’s Going to Be Illegal to Make Routers With Weak Passwords
  • engadget: California bans default passwords on any internet-connected device
  • scmagazine: Weak passwords outlawed out West, California law aims to secure IoT devices
  • legislature.ca.gov: SB-327 Information privacy: connected devices.
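The law’s core requirement (each device ships with a unique preprogrammed password, or forces the user to generate new credentials on first setup) can be sketched at the factory-provisioning stage. A minimal Python sketch; the function name and password format are illustrative assumptions, not from SB-327 or any vendor toolchain:

```python
import secrets
import string

# Characters drawn from for generated credentials.
ALPHABET = string.ascii_letters + string.digits

def provision_password(length: int = 16) -> str:
    """Generate a cryptographically random, per-device password,
    rather than stamping every unit with a shared factory default."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Each device coming off the line receives its own credential.
batch = [provision_password() for _ in range(1000)]
assert len(set(batch)) == len(batch)  # no two devices share a password
```

The `secrets` module is used rather than `random` because the latter is not suitable for security-sensitive values.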

CA – Privacy Expert Steps Down From Advisory Role With Sidewalk Labs

Ann Cavoukian, a leading privacy expert and former Information and Privacy Commissioner of Ontario, has quit her advisory role with Sidewalk Labs, Google’s sister company, which is preparing to build a data-driven neighbourhood at Quayside on Toronto’s waterfront [here], saying in her resignation letter that the proposed protection of personal data [see blog post update here & 41 pg PDF here] “is not acceptable.” Cavoukian believes the plan for the Quayside smart-city development does not adequately protect individual privacy, and that data collected from sensors, surveillance cameras and smartphones must be de-identified at source, writing: “Just think of the consequences: If personally identifiable data are not de-identified at source, we will be creating another central database of personal information (controlled by whom?), that may be used without data subjects’ consent, that will be exposed to the risks of hacking and unauthorized access. As we all know, existing methods of encryption are not infallible and may be broken, potentially exposing the personal data of Waterfront Toronto residents! Why take such risks?” Cavoukian’s resignation (she is the fourth adviser to resign from the project citing privacy concerns [here]) came less than a week after Sidewalk Labs published its digital governance proposals, a 41-page document that sought to put people’s privacy fears to rest by detailing how data collected in Quayside would be managed by an independent civic data trust, and not owned or controlled by Google. While Sidewalk Labs said it would de-identify data, it could not guarantee what third parties would do. When the plan was recently introduced to Quayside’s digital advisory panel, Cavoukian realized “de-identification at source” was not a guarantee.
“When Sidewalk Labs was making their presentation, they said they were creating this new civic data trust which will consist of a number of players — Sidewalk, Quayside, Waterfront Toronto and others — and that Sidewalk Labs would encourage them to de-identify the data involved that was collected but it would be up to the group to decide.” “That’s where I just said no.” David Fraser, a privacy lawyer advising Sidewalk Labs, was surprised Cavoukian’s resignation came when it did. “Her resignation seems to me a little premature because she would be very influential with (the civic data trust) [see here starting at pg 12] once it’s established,” he said. Fraser said the proposal to establish a civic data trust is “revolutionary.” Chantal Bernier [here & wiki here], legal adviser to Waterfront Toronto (and former interim Privacy Commissioner of Canada), said the project is sparing no effort to identify and address privacy issues. “We are still identifying every privacy risk to which we will apply every privacy protection available to us,” Bernier said in an email. On the other hand, Fenwick McKelvey, an associate professor in Communication Studies at Concordia University, said: “Sidewalk Labs is at the centre of a debate about data and data protection.
The resignation of Cavoukian is clear evidence that we don’t have proper regulatory infrastructure to deal with these new smart city initiatives. Her resignation, especially because she was participating in good faith, is a major blow to the legitimacy of the project.” [The Toronto Star | Ontario’s former privacy commissioner resigns from Sidewalk Labs | ‘Not good enough’: Toronto privacy expert resigns from Sidewalk Labs over data concerns | Privacy expert Ann Cavoukian resigns from Sidewalk Toronto smart-city project: ‘I had no other choice’ – (subscribers only) | Privacy expert Ann Cavoukian resigns as adviser to Sidewalk Labs | Waterfront Toronto, advisory panel want Quayside master plan delayed and see also: Facing privacy backlash, Sidewalk Labs proposes giving data to a public trust | An Update on Data Governance for Sidewalk Toronto | Sidewalk Labs unveils draft data and privacy plans for high-tech Toronto project | Sidewalk Labs promises not to control data collected in Quayside’s public spaces | Waterfront Toronto ‘not shying away’ from Sidewalk Toronto data privacy questions, senior official says | Sidewalk Labs use of cellphone data in proposed U.S. deal raises concern in Toronto | Sidewalk Labs requires detailed safeguards for its own employees’ data | ‘Public good’ not ‘properly represented’ by Sidewalk Labs: former RIM CEO]


US – Secret Government Report Shows Gaping Holes in Privacy Protections from U.S. Surveillance

In response to Freedom of Information Act requests, a federal privacy watchdog [Privacy and Civil Liberties Oversight Board] released an important report [finalized in December 2016 – here & 28 pg PDF here] about [the 2014 Obama Presidential Policy Directive “PPD-28” – here] on how the U.S. government handles people’s personal information that it sweeps up in its surveillance. The report addresses government agencies’ implementation of the policy directive on government spying and the treatment of “personal information,” which includes communications like emails, chats, and text messages. The report makes clear that PPD-28’s protections are weak in practice and rife with exceptions. And it will likely only add to concerns European regulators already have about the ways in which U.S. surveillance harms the privacy rights of Europeans — jeopardizing an important transatlantic data-sharing agreement. Here are three key takeaways: 1) The report confirms just how modest the directive’s privacy protections are; 2) There has been significant uncertainty — and inconsistency — among agencies about what spying activities the directive covers; and 3) There are reasons to be concerned about the NSA’s information-sharing practices and other agencies’ exploitation of intercepted communications. … In short, the U.S. government is exploiting the personal information it gathers using these spying activities more broadly than ever, but the report reveals just how anemic PPD-28’s protections are in practice. It also raises serious questions about whether the directive has been implemented fully and consistently across the intelligence community. [Speak Freely Blog (ACLU) | US Intelligence Privacy Policies Inadequate – Federal Report]

WW – Study Finds 88% of Android Apps Share Info with Alphabet

A study conducted by University of Oxford researchers analyzed nearly 1 million Android apps to determine how smartphone data is collected and shared. The researchers found the average app transferred data to 10 third parties; for one in five apps, that number exceeded 20. Of all the apps examined, 88% were designed to send information back to Google’s parent company, Alphabet, and 43% could send information to businesses owned by Facebook. University of Oxford computer scientist and project lead Reuben Binns said the surge in data sharing is a result of app developers’ reliance on advertisements rather than sales. (Registration may be required to access this story.) [FT.com]

CA – Digital Displays in Condos Used to Target Ads

Residents at a Liberty Village condo have learned that video screens placed in their condo elevators are equipped with cameras that are collecting data for advertisers. The screens, installed by Visio Media, scan the faces of residents as they watch ads, and share the information with advertisers, providing statistics on gender and age. The cameras can detect the presence of people in the elevator and display ads that are catered to their demographic. According to Visio Media’s website, that even includes ads that appeal to children, such as one for a root beer flavoured tooth polish. [680 News]

Telecom / TV

CA – U.S. Lawmakers Warn Canada to Keep Huawei Out of Its 5G Plans

In a letter dated October 11 addressed to Canadian Prime Minister Justin Trudeau, U.S. Senators Mark Warner and Marco Rubio make a very public case that Canada should leave Chinese tech and telecom giant Huawei [here & wiki here] out of its plans to build a next-generation mobile [5G] network. The outcry comes after the head of the Canadian Centre for Cyber Security, Scott Jones, dismissed security concerns regarding Huawei in comments last month [here & text here – in his September 20 testimony to the Standing Committee on Public Safety and National Security – here]. As part of the Defense Authorization Act, passed in August, the U.S. government signed off on a law that forbids domestic agencies from using services or hardware made by Huawei and ZTE. A week later, Australia moved to block Huawei and ZTE from its own 5G buildout. Next-generation 5G networks already pose a number of unique security challenges. Lawmakers caution that by allowing companies linked to the Chinese government to build 5G infrastructure, the U.S. and its close allies (Canada, Australia, New Zealand and the U.K.) [the Five Eyes alliance – wiki here] would be inviting the fox to guard the henhouse. [TechCrunch | Additional coverage at: engadget and see also: Ottawa probes Huawei equipment for security threats | Ottawa launches probe of cyber security | U.S. intelligence officials question Canada’s ability to test China’s Huawei for security breaches | New cybersecurity chief defends Canadian approach to Huawei security rumours]

Workplace Privacy

WW – 3 Out of 4 Employees Pose a Security Risk

According to MediaPRO’s [here] third annual State of Privacy and Security Awareness Report [see PR here, blog posts here & here, 1 pg PDF infographic here and flip book here], some 75% of employees today pose a moderate or severe risk to their company’s data, and 85% of finance workers show some lack of data security and privacy knowledge. MediaPRO surveyed more than 1,000 employees across the United States to quantify the state of privacy and security awareness in 2018. More people fell into the risk category this year than in 2017, and that number has nearly doubled since the inaugural survey. MediaPRO based its study on a variety of questions that focus on real-world scenarios, such as correctly identifying personal information, logging on to public Wi-Fi networks, and spotting phishing emails. Based on the percentage of privacy- and security-aware behaviors, respondents were assigned to one of three risk profiles: risk, novice, or hero. Here is a thumbnail of some other notable findings: 1) Employee performance was worse this year across all industry verticals measured. Respondents did much worse at identifying malware warning signs, spotting a phishing email and practising social media safety; 2) Managers showed riskier behaviors than lower-level employees. Management performed worse than their entry- and mid-level counterparts when asked how to respond to a suspected phishing email. Only 69% of managers chose the correct answer vs. 86% of lower-level employees. And nearly one in six management-level respondents – 17% – chose to open an unexpected attachment connected to a suspected phishing email; 3) Finance sector employees performed the worst. Of the industry sectors examined, financial employees got the lowest scores: 85% showed some lack of cybersecurity and data privacy knowledge.
And, 19% of finance workers thought opening an attachment was an appropriate response to a suspected phishing email; and 4) Too many employees could not identify phishing emails. 14% of employees could not identify a phish, a notable increase from 8% in 2017. And, 58% could not define business email compromise. Only 81% say they would report the suspected phishing email to their IT department. [DARKReading | Why 75% of your employees could end up costing you millions | It’s Everyone’s Job to Ensure Online Safety at Work]

US – Survey: 75% of Employees Display Insufficient Cyber Knowledge

A survey performed by MediaPro found 75% of U.S. employees put their companies at risk of a cyberattack. For the survey, more than 1,000 staff members were asked about their cybersecurity knowledge and awareness. Financial organizations have had the most difficulty with their staff members in this area, as 85% of employees in the industry displayed an insufficient understanding of data privacy and security. Management positions displayed riskier behavior compared to their employees. Of the respondents polled, 77% of executives showed a lack of privacy and security recognition compared to 74% of other employees. [TechRepublic]




1-15 October 2018


US – Feds Force Suspect to Unlock an Apple iPhone X With Their Face

A child abuse investigation unearthed by Forbes [PDF] includes the first known case in which law enforcement used Apple Face ID facial recognition technology to open a suspect’s iPhone. That’s by any police agency anywhere in the world, not just in America. It happened on August 10, when the FBI searched the house of 28-year-old Grant Michalski, a Columbus, Ohio, resident who would later that month be charged with receiving and possessing child pornography [see August 24 DoJ PR]. With a search warrant in hand, a federal investigator told Michalski to put his face in front of the phone, which he duly did. That allowed the agent to pick through the suspect’s online chats, photos and whatever else he deemed worthy of investigation. Whilst the feds obtained a warrant, and appeared to have done everything within the bounds of the law, concerns remain about the use of such tactics. “Traditionally, using a person’s face as evidence or to obtain evidence would be considered lawful,” said Jerome Greco, staff attorney at the Legal Aid Society. “But never before have we had so many people’s own faces be the key to unlock so much of their private information.” Thus far, there’s been no challenge to the use of Face ID in this case or others. But Fred Jennings, a senior associate at Tor Ekeland Law, said they could come thanks to the Fifth Amendment, which promises to protect individuals from incriminating themselves in cases. [Forbes | Additional coverage at: Naked Security (Sophos), The Verge and Ars Technica]


CA – Draft Guidance Released Regarding Mandatory Breach Reporting Under PIPEDA

On September 17, 2018, the Office of the Privacy Commissioner of Canada (OPC) released draft guidance regarding PIPEDA’s new mandatory security and privacy breach notification requirements, which come into force on November 1, 2018. This guidance contains helpful information regarding how and when to report breaches of security safeguards to the OPC, the corresponding notice that must be provided to individuals, and record-keeping obligations associated with such breaches. Of particular note, this guidance provides the following key pieces of information and clarification:

  • Not all breaches must be reported to the OPC. Only those breaches that create a “real risk of significant harm” to an individual are the subject of mandatory reporting obligations;
  • Reporting should commence as soon as possible once the organization determines that a breach creates a real risk of significant harm;
  • The obligation to report resides with the organization in control of the personal information that is the subject of the breach;
  • A report made to the OPC must contain information regarding the date of the breach, the circumstances of the breach, personal information involved, number of individuals affected;
  • When a breach creates a real risk of significant harm, the individuals whose personal information was the subject of the breach must also be notified of the breach;
  • If a breach may also be mitigated or the risk of harm reduced via notification of other government institutions or organizations, then notification of these bodies must also occur; and
  • The obligation to maintain records regarding breaches is not limited to only those breaches that are reportable to the OPC.

The draft guidance includes a PIPEDA breach report form, which can be used by organizations to report security and privacy breaches to the OPC following the effective date of the breach notification requirements. The draft guidance and breach report form are consultation documents, and as such, the OPC invited stakeholders to provide feedback on both documents by October 2, 2018. The final versions of both documents will be published in time for November 1, 2018. [Mondaq]
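For illustration, the reporting rule and required report fields described above could be modeled as a simple record. This is a hypothetical sketch: the field and method names are my own shorthand, not the OPC’s official breach report form.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class BreachReport:
    """Fields the draft guidance says a report to the OPC must cover."""
    breach_date: date
    circumstances: str
    personal_info_involved: str
    individuals_affected: int
    real_risk_of_significant_harm: bool

    def must_report_to_opc(self) -> bool:
        # Only breaches creating a "real risk of significant harm"
        # trigger the mandatory reporting (and individual notification)
        # obligation; records must be kept for ALL breaches regardless.
        return self.real_risk_of_significant_harm

incident = BreachReport(date(2018, 11, 5), "stolen unencrypted laptop",
                        "names, addresses, SINs", 250, True)
assert incident.must_report_to_opc()
```

Note that the record-keeping obligation applies even when `must_report_to_opc()` is false, which is why the record captures every breach rather than only reportable ones.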

CA – OPC Seeks Federal Court Determination on Key Issue for Canadians’ Online Reputation

The Office of the Privacy Commissioner of Canada (OPC) is turning to the Federal Court to seek clarity on whether Google’s search engine is subject to federal privacy law when it indexes web pages and presents search results in response to queries of a person’s name. The OPC has asked the court to consider the issue in the context of a complaint involving an individual who alleges Google is contravening PIPEDA [OPC guidance] by prominently displaying links to online news articles about him when his name is searched. The complainant alleges the articles are outdated, inaccurate and disclose sensitive information about his sexual orientation and a serious medical condition. By prominently linking the articles to his name, he argues Google has caused him direct harm. Google asserts that PIPEDA does not apply in this context and that, if it does apply and requires the articles to be de-indexed, it would be unconstitutional. Following public consultations, the OPC took the view [see position paper] that PIPEDA provides for a right to de-indexing – which removes links from search results without deleting the content itself – on request in certain cases. This would generally refer to web pages that contain inaccurate, incomplete or outdated information. However, there is some uncertainty in the interpretation of the law. In the circumstances, the most prudent approach is to ask the Federal Court to clarify the law before the OPC investigates other complaints into issues over which the office may not have jurisdiction if the court were to disagree with the OPC’s interpretation of the legislation. A Notice of Application [see here], filed today in Federal Court, seeks a determination on the preliminary issue of whether PIPEDA applies to the operation of Google’s search engine. In particular, the reference asks whether Google’s search engine service collects, uses or discloses personal information in the course of commercial activities and is therefore subject to PIPEDA. 
It also asks whether Google is exempt from PIPEDA because its purposes are exclusively journalistic or literary. While Google has also raised the issue of whether a requirement to de-index under PIPEDA would be compliant with s. 2(b) of the Charter, the OPC has decided not to refer this issue to the Court at this stage. The Charter issue may not need to be addressed depending on how the reference questions are answered. The Charter issue is also highly fact based and would require an assessment of the facts of the complaint, making it inappropriate for a reference. Investigations into complaints related to de-indexing requests will be stayed pending the results of the reference. The Privacy Commissioner’s office will also wait until this process is complete before finalizing its position on online reputation. [Office of the Privacy Commissioner of Canada | Coverage at: Will Canadians soon have the ‘right to be forgotten’ online? Here’s what you need to know | Privacy czar asks Federal Court to settle ‘right to be forgotten’ issue | Privacy watchdog asks Federal Court to rule on Google de-indexing question]

CA – B.C. Political Parties Face Personal Data Collection Investigation

How British Columbia’s political parties harvest and use personal information from social media will be subject to an Office of the Information and Privacy Commissioner investigation within the next month, Commissioner Michael McEvoy said Sept. 28 in his comments in Vancouver to B.C. Information Summit 2018 delegates. McEvoy said reviews of how parties use information has already led to auditing in the United Kingdom, where he has assisted the work of that country’s information commissioner, his B.C. predecessor. “That is something we are going to be doing in British Columbia,” he said. “Politicians realize that uses, misuses and abuses of data in a personal context can change elections,” University of Victoria political science professor Colin Bennett [here] said. “Political affiliation is something that should only be captured with individual consent.” He said political parties “are the major organizations that fall between the cracks of a privacy regime that is either federal or provincial or is corporate or government.” Political parties identifying their voter bases can vacuum up personal information shared on social media. And that can start with something as simple as an election voters’ list readily available to political parties. Bennett said use of the list is excluded from no-phone-call regulations of the Canadian Radio-television and Telecommunications Commission designed to prevent nuisance calls. As well, Bennett explained, parties are not covered by federal anti-spam legislation. He said the proposed federal Election Modernization Act [Bill C-76 here] sections supposed to deal with privacy are “basic and incomplete.” Further, Bennett said, parties do have privacy policies but those are vague and don’t necessarily mesh with each other. Other speakers said greater oversight is needed over how Canadian political parties collect and use voters’ personal information. [Kamloops Matters]

US – U.S.-Mexico-Canada Pact Covers Data Privacy, Local Storage Rules

The U.S., Canada, and Mexico would have to adopt data protection measures under a deal aimed at replacing the North American Free Trade Agreement. Those measures should include provisions on data quality, collection restrictions, and transparency, according to the text of the U.S.-Mexico-Canada Agreement released by the U.S. Trade Representative’s Office. Under the deal, governments would have to publish information on how businesses can comply with the rules and the remedies that individuals can pursue. The agreement reflects an increased awareness of data protection issues following the EU’s adoption of new privacy rules and the Cambridge Analytica scandal involving Facebook Inc. data. It would direct the three countries’ governments to exchange information on data protection policies and work together to promote digital trade. The agreement also would ban rules requiring data to be stored locally and prohibit restrictions on data flows for business purposes. Lawmakers in all three countries must approve the deal for it to take effect. Tech industry groups supported the pact’s digital trade and data privacy provisions. [Bloomberg BNA | See also: Key takeaways from the new U.S.-Mexico-Canada Agreement]

CA – USMCA Falls Short on Digital Trade, Data Protection and Privacy: Geist

The United States-Mexico-Canada Agreement (USMCA) is more than just an updated version of the North American Free Trade Agreement. With the inclusion of a digital trade chapter, the deal sets a new standard for e-commerce that seems likely to proliferate in similar agreements around the world. The chapter raises many concerns, locking in rules that will hamstring online policies for decades by restricting privacy safeguards and hampering efforts to establish new regulation in the digital environment. For example, the USMCA includes rules that restrict data localization policies that can be used to require companies to store personal information within the local jurisdiction. Jurisdictions concerned about lost privacy in the online environment have increasingly turned to data localization to ensure their local laws apply. These include the Canadian provinces of British Columbia and Nova Scotia, which have data localization requirements to keep sensitive health information at home that may be jeopardized by the agreement. It also bans restrictions on data transfers across borders. That means countries cannot follow the European model of data protection that uses data transfer restrictions as a way to ensure that the information enjoys adequate legal protections. In fact, countries could find themselves caught in a global privacy battle in which Europe demands limits on data transfers while the USMCA prohibits them. The chapter fails to reflect many global e-commerce norms, and its restrictions on policy flexibility for key privacy issues may quietly become established as the go-to international approach. [The Washington Post | Experts say USMCA frees Canadian data — but with unknown risks]


WW – Privacy Advocates Face Negative Stereotyping Online

New research from HideMyAss! has revealed that people around the world perceive privacy advocates as untrustworthy, paranoid, male loners with something to hide, despite their own stated concerns about privacy. [PR, blog post & report] The security software firm partnered with Censuswide to survey 8,102 people from the UK, US, France and Germany to compile its new report. Even though two fifths of those surveyed (41%) agreed that privacy is an indispensable human right, 80% believed their online history could be accessed without their knowledge by governments, hackers, police and partners. The research also highlighted a general apathy towards protecting privacy, as more than one in five admitted they take no action to protect it. Of those who do take action, 78% rely on some form of password protection as their main privacy measure. More than half (56%) of respondents claim to never share their password with anyone and 22% do not save passwords on their browsers or devices. HideMyAss! also found that while there is overwhelming support for people using the Internet privately for legal activities (74%), 26% of respondents believe that people who aren’t willing to divulge what they do online have something to hide, with 24% considering them untrustworthy and more than a fifth (22%) of the opinion they are more likely to have a criminal record. When it comes to the particular traits of privacy advocates, respondents said they could be paranoid (52%), loners (37%) or people partial to spying on their neighbours (36%). [TechRadar]


US – DOJ Releases “Best Practices for Victim Response and Reporting of Cyber Incidents,” Version 2.0

On September 27, 2018, the U.S. Department of Justice Computer Crime and Intellectual Property Section (CCIPS) Cybersecurity Unit released Version 2.0 of its “Best Practices for Victim Response and Reporting of Cyber Incidents” [PDF]. Originally issued in 2015, the updated guidance seeks to help organizations better equip themselves to respond effectively and lawfully to cyber incidents. The updated version distills insights from private and public sector experts, incorporating new incident response considerations in light of technical and legal developments over the past three years. While the guidance is designed mostly for small- and medium-sized businesses, it may be useful to larger organizations as well. Similar to Version 1.0 [PDF] (see previous analysis here), the updated guidance is divided into several parts, advising companies on steps to take before, during, and after a cybersecurity incident. While the document is not intended to have any regulatory effect, the guidance is a useful tool for organizations seeking to make sure their data security policies align with today’s best practices. [Privacy & Data Security Blog (Alston & Bird)]

Electronic Records

CA – Clinical Trial Data Not Quite Confidential: Federal Court

On July 9, 2018, the Federal Court released its decision ordering Health Canada to provide the results of certain clinical trials, including participant-level datasets, to an American researcher: Doshi v Canada (Attorney General), 2018 FC 710 [PDF]. Health Canada requires researchers to sign a standard confidentiality agreement in order to release clinical trial data for the purpose of research. On the basis of the researcher’s refusal to sign the standard confidentiality agreement, Health Canada unsuccessfully attempted to keep the requested reams of clinical trial data confidential. At issue was the interpretation of subsection 21.1(3) of the Protecting Canadians from Unsafe Drugs Act (“Vanessa’s Law”) [Overview & FAQ]. The case is interesting not only because it was the first time the court was called upon to apply Vanessa’s Law, but also because the court was required to decide other important ancillary issues, such as the confidential nature of clinical trial data and the bearing such nature may have on freedom of expression under section 2(b) of the Canadian Charter of Rights and Freedoms. In light of administrative law principles concerning the exercise of discretionary powers, Justice Grammond held that it was unreasonable for Health Canada to impose a confidentiality requirement as a condition for the disclosure of the data requested by Dr. Doshi (para 87). Following the Federal Court decision, Health Canada indicated that it is working on regulations to publicly release a large amount of information in clinical trial reports for a wide range of medications. Stakeholders should watch for new developments on this front. [CyberLex Blog (McCarthy Tetrault)]

EU Developments

EU – CNIL Publishes Initial Assessment on Blockchain and GDPR

Recently, the French Data Protection Authority (“CNIL”) published its initial assessment of the compatibility of blockchain technology with the EU General Data Protection Regulation (GDPR) and proposed concrete solutions for organizations wishing to use blockchain technology when implementing data processing activities [see 11 pg PDF in French]. The CNIL made it clear that its assessment does not apply to (1) distributed ledger technology (DLT) solutions and (2) private blockchains. In its assessment, the CNIL first examined the role of the actors in a blockchain network as a data controller or data processor. The CNIL then issued recommendations to minimize privacy risks to individuals (data subjects) when their personal data is processed using blockchain technology. In addition, the CNIL examined solutions to enable data subjects to exercise their data protection rights. Lastly, the CNIL discussed the security requirements that apply to blockchain. The CNIL made a distinction between the participants who have permission to write on the chain (called “participants”) and those who validate a transaction and create blocks by applying the blockchain’s rules so that the blocks are “accepted” by the community (called “miners”). According to the CNIL, the participants, who decide to submit data for validation by miners, act as data controllers when (1) the participant is an individual and the data processing is not purely personal but is linked to a professional or commercial activity; and (2) the participant is a legal person and enters data into the blockchain. According to the CNIL, the exercise of the right to information, the right of access and the right to data portability does not raise any particular difficulties in the context of blockchain technology (i.e., data controllers may provide notice of the data processing and may respond to data subjects’ requests for access to their personal data or data portability requests).
However, the CNIL recognized that it is technically impossible for data controllers to meet data subjects’ requests for erasure of their personal data when the data is entered into the blockchain: once in the blockchain system, the data can no longer be rectified or erased. The CNIL considered that the security requirements under the GDPR remain fully applicable in the blockchain.  In the CNIL’s view, the challenges posed by blockchain technology call for a response at the European level. The CNIL announced that it will cooperate with other EU supervisory authorities to propose a robust and harmonized approach to blockchain technology. [Privacy & Information Security Law Blog (Hunton Andrews Kurth) with coverage at: JDSUPRA and PaymentsCompliance]
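One mitigation often proposed for this erasure problem, in line with the general idea of keeping personal data off the immutable ledger, is to record only a salted hash (a commitment) on-chain and store the personal data off-chain where it can be deleted. The sketch below is illustrative only and is not drawn from the CNIL’s report; the `OffChainStore` class and its method names are invented for the example.

```python
import hashlib
import secrets

class OffChainStore:
    """Holds personal data (and per-record salts) off-chain, where erasure is possible."""

    def __init__(self):
        self._records = {}  # commitment -> (salt, personal_data)

    def add(self, personal_data: bytes) -> str:
        # Only this opaque commitment is written to the chain, never the raw data.
        salt = secrets.token_bytes(32)
        commitment = hashlib.sha256(salt + personal_data).hexdigest()
        self._records[commitment] = (salt, personal_data)
        return commitment

    def erase(self, commitment: str) -> None:
        # Deleting the data and its salt leaves the on-chain hash practically
        # impossible to link back to the individual.
        self._records.pop(commitment, None)

    def verify(self, commitment: str, personal_data: bytes) -> bool:
        entry = self._records.get(commitment)
        if entry is None:
            return False
        salt, _ = entry
        return hashlib.sha256(salt + personal_data).hexdigest() == commitment

# The immutable "chain" only ever sees commitments
chain = []
store = OffChainStore()
c = store.add(b"Alice, 1 Rue Example, Paris")
chain.append(c)
store.erase(c)  # honours an erasure request without rewriting the chain
```

Because the salt is destroyed along with the data, the commitment left on the chain can no longer be matched against guessed inputs, which is why this pattern is discussed as a practical answer to the right to erasure.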

Facts & Stats

WW – Data Breaches Compromised 4.5 Billion Records in the First Half of 2018

According to the latest figures from the Gemalto Breach Level Index, 4.5 billion records were compromised in just the first six months of this year [PR, infographic & download report]. The US comes out the worst, with 3.25 billion records affected and 540 breaches — an increase of 356% in the last month and 98% over the same period in 2017. A total of six social media breaches accounted for over 56% of total records compromised. Of the 945 data breaches, 189 (20% of all breaches) had an unknown or unaccounted number of compromised data records. Europe was well behind America, seeing 36% fewer incidents, but there was a 28% rise in the number of records breached, indicating the growing severity of attacks. The United Kingdom was the worst hit in its region, suffering 22 data incidents. [Information Age | Disclosure laws lead to spike in reported data breaches: Gemalto | A Massive Bump In Data Breaches Is Stoking Bot-Driven Attacks On Retailers | What Drives Tech Internet Giants To Hide Data Breaches Like The Google+ Breach]


CA – More Than a Dozen Federal Departments Flunked Credit Card Security Test

The Canada Revenue Agency, the RCMP, Statistics Canada and more than a dozen other federal departments and agencies have failed an international test of the security of their credit card payment systems. Altogether, half of the 34 federal institutions authorized by the banking system to accept credit-card payments from citizens and others have flunked the test — risking fines and even the revocation of their ability to accept credit and debit payments. Those 17 departments and agencies continue to process payments on Visa, MasterCard, Amex, the Tokyo-based JCB and China UnionPay cards, and federal officials say there have been no known breaches to date. These institutions all fell short of a global data-security standard known as PCI DSS (“Payment Card Industry Data Security Standard”). Established by five of the big credit-card firms, the standard is meant to foil fraud artists and criminal hackers bent on stealing names, numbers and codes for credit and debit cards. Federal departments must self-assess against the standard annually. CBC News obtained the briefing note, to the deputy minister of Public Services and Procurement Canada (PSPC), under the Access to Information Act. The document suggests the main culprit is Shared Services Canada (SSC), the federal IT agency created in 2011 that operates and maintains data systems for 13 of the 17 non-compliant institutions. Eleven of the 13 SSC clients who fell short of the credit card security standard say the agency itself has not fixed the security problems. The institutions that failed the credit card security checks are: Health Canada, RCMP, Industry Canada, Transport Canada, National Research Council, Canada Border Services Agency, Natural Resources Canada, Immigration Refugees and Citizenship, Statistics Canada, Fisheries and Oceans, Canada Revenue Agency, Canadian Food Inspection Agency and Library and Archives Canada, all of which depend on SSC for their IT.
The Library of Parliament, National Defence, the National Film Board of Canada and the Canadian Centre for Occupational Health and Safety are also non-compliant, but are responsible for the security of their own IT systems. [CBC News]


CA – Bowing to Pressure, Feds Urge Senate to Change Access to Information Bill

After pushback from Indigenous groups and the information commissioner, the federal government is backing down on a number of changes proposed to the Access to Information Act that critics have called “regressive” — including the part of Bill C-58 that required access-to-information requesters to describe a document’s time period, subject, and type. Witnesses had warned that level of detail, particularly with First Nations attempts to get land-claim records, would limit access to records where such detail is not known and almost certainly lead to departments denying requests. Information commissioner Caroline Maynard also successfully convinced the government to give her order-making power when the bill reaches royal assent and is formally approved, rather than a year after the bill becomes law, as it’s currently written. Critics have also raised alarms about adding the ability for government departments and agencies to decline “vexatious,” or overly broad, requests. At a Senate committee Oct. 3, Treasury Board President Scott Brison closed the door on removing that power from the bill, noting the government had already accepted changes from the House Ethics Committee to address fears it would limit access and “address any concerns” of “inappropriate” use. The House passed the changed bill in December 2017. Now, agencies won’t be able to give a request that label unless they have approval from the information commissioner at the beginning of the process. The Access to Information Act lets Canadians pay $5 to request government documents, but critics for years have said it’s dysfunctional, too slow, and allows for big loopholes that limit the information released. [The Hill Times]

CA – Privileged Records and Access to Information Reviews: When to Produce?

Solicitor-client privilege is intended to foster candid conversation between a client and legal counsel in order to ensure that the client receives appropriate legal advice and can make informed decisions. It protects the solicitor-client relationship. By comparison, litigation privilege attaches to records that are created for the dominant purpose of preparing for litigation. It offers protection for clients to investigate and prepare their case. Both privileges are vital to an effective legal system. Enter access to information legislation. Legislation in each Atlantic province provides some form of exception to disclosure for privileged records. In New Brunswick, see The Right to Information and Protection of Privacy Act, SNB 2009, c R-10.6 at s 27 [here]; in Newfoundland and Labrador, see Access to Information and Protection of Privacy Act, 2015, SNL 2015 c A-1.2 at s 30 [here]; in Nova Scotia, see Freedom of Information and Protection of Privacy Act, SNS 1993, c 5 at s 16 [here]; and in Prince Edward Island, see Freedom of Information and Protection of Privacy Act, RSPEI 1988, c 15.01 at s 25 [here]. But a public body’s application of access to information legislation is overseen by a statutory office in every jurisdiction. What happens when the public body’s application of the exception for privileged records is challenged? That question gave rise to the Supreme Court of Canada’s well-known decision in Alberta (Information and Privacy Commissioner) v University of Calgary [here]. In that case, a delegate of the Alberta Information and Privacy Commissioner issued a notice to the University to produce records over which the University had claimed solicitor-client privilege. The majority of the Court agreed with the University and determined that the University was not obligated to produce solicitor-client privileged records to the delegate for review. The University of Calgary decision received a great deal of attention when it was released.
But little attention has been paid to the Majority’s closing comments regarding the appropriateness of the Alberta OIPC’s decision to seek production of records over which solicitor-client privilege was claimed. The Supreme Court emphasized that “even courts will decline to review solicitor-client documents to ensure that privilege is properly asserted unless there is evidence or argument establishing the necessity of doing so to fairly decide the issue” [see note 2 at para 68 here]. The Court was mindful of the fact that the University had identified the records in accordance with the practice in civil litigation in the province, and found that in the absence of evidence to suggest that the University had improperly claimed privilege, the delegate erred in determining that the documents had to be reviewed. While civil litigation practice can – and does – vary from province to province, should you find yourself in a position where the Commissioner is seeking review of records over which you have claimed solicitor-client or litigation privilege, the Supreme Court’s commentary and the Alberta approach may provide a means by which to have the Commissioner resolve the claim without risking privilege through production of the records in issue. [Mondaq]


WW – How Researchers Are Using DNA to Create Images of People’s Faces

Advancements in facial recognition and DNA sequencing technology have allowed scientists to create a portrait of a person based on their genetic information [a process called DNA phenotyping – wiki]. A study published last year and co-authored by biologist Craig Venter [wiki], CEO of San Diego-based company Human Longevity, showed how the technology works. The research team took an ethnically diverse sample of more than 1,000 people of different ages and sequenced their genomes. They also took high-resolution, 3D images of their faces and measured their eye and skin color, age, height and weight. This information was used to develop an algorithm capable of working out what people would look like on the basis of their genes. Applying this algorithm to unknown genomes, the team was able to generate images that could be matched to real photos for eight out of ten people. The success rate fell to five out of ten when the test was restricted to those of a single race, which narrows facial differences. The authors of the paper said the research has ‘significant ethical and legal implications on personal privacy, the adequacy of informed consent, the potential for police profiling and more’. Researchers have already produced images of faces based on genetic material alone. For example, earlier this year, investigators in Washington State unveiled an image of a suspect created from DNA in the 30-year-old murder case of young Victoria (BC)-area couple Tanya Van Cuylenborg, 18, and Jay Cook, 20. [coverage here] And in Calgary in February police released a high-tech image they said was a likeness of the mother of a baby girl found dead in a dumpster on Christmas Eve. [CTV News]

Health / Medical

US – Fitbit Data Leads to Arrest of 90-Year-Old in Stepdaughter’s Murder

On Saturday, 8 September, at 3:20 pm, Karen Navarra’s Fitbit recorded her heart rate spiking. Within 8 minutes, the 67-year-old California woman’s heart beat rapidly slowed. At 3:28 pm, her heart rate ceased to register at all. She was, in fact, dead. Two pieces of technology have led the San Jose police to charge Ms. Navarra’s stepfather, Anthony Aiello, with her murder. Besides the Fitbit records, there are also surveillance videos that undercut Aiello’s version of the events. When police compared the dead woman’s Fitbit data with video surveillance from her home, they discovered that Aiello’s car was still there at the point when her Fitbit lost any traces of her heartbeat. Later, police found bloodstained clothing in Aiello’s home. If Aiello turns out to be guilty, he certainly won’t be the first to learn a harsh lesson in how much of the quotidian technology that surrounds us these days can be used to contradict our version of events. One example was in April 2017, when a murder victim’s Fitbit contradicted her husband’s version of events. In another case, we’ve seen pacemaker data used in court against a suspect accused of burning down his house. The title of a paper by Nicole Chauriye says it all: Wearable devices as admissible evidence: Technology is killing our opportunity to lie. [Naked Security (Sophos) coverage at: The Mercury News, The New York Times, The Independent and Los Angeles Times]
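The investigative step described here — pinpointing the last moment a wearable registered a heartbeat so it can be compared against surveillance timestamps — can be sketched in a few lines. The samples below are entirely invented for illustration; real Fitbit exports use different formats and much finer granularity.

```python
from datetime import datetime

# Invented heart-rate samples: (timestamp, beats per minute)
samples = [
    ("2018-09-08 15:12", 72),
    ("2018-09-08 15:20", 120),  # sudden spike
    ("2018-09-08 15:24", 45),   # rapid slowing
    ("2018-09-08 15:28", 0),    # no reading registered
    ("2018-09-08 15:36", 0),
]

def last_activity(samples):
    """Return the timestamp of the last sample with a detectable heart rate."""
    live = [t for t, bpm in samples if bpm > 0]
    return max(live, key=lambda t: datetime.strptime(t, "%Y-%m-%d %H:%M"))

print(last_activity(samples))  # → 2018-09-08 15:24
```

Investigators would then check whether a suspect's comings and goings on surveillance video bracket that cutoff time, which is essentially what the San Jose police did with the home's camera footage.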

US – Despite Patient Privacy Risks, More People Use Wearables for Health

Despite the patient privacy risks that collecting health data on insecure wearable devices could pose, the number of US consumers tracking their health data with wearables has more than doubled since 2013, according to the Deloitte 2018 Survey of US Health Care Consumers [PR – also see blog post]. The use of wearables and other tools for measuring fitness and health improvement goals jumped from 17% in 2013 to 42% in 2018. Of those who used wearables in the past year, 73% said they used them consistently. Sixty percent of the 4,530 respondents said they are willing to share PHI generated from wearable devices with their doctor to improve their health. 51% of respondents are comfortable using an at-home test to diagnose infections before seeing a doctor. More than one-third (35%) of respondents said they are interested in using a virtual assistant to identify symptoms and direct them to a caregiver. Close to one-third (31%) are interested in connecting with a live health coach that offers text messaging for nutrition, exercise, sleep, and stress management. “For health systems that are collecting this information, it is important that they safeguard the privacy of that information,” Sarah Thomas, managing director of Deloitte’s Center for Health Solutions, told HealthITSecurity.com. “If it is about their personal health, then it is clear that the information needs to be safeguarded and subject to HIPAA” [wiki here] she added. [HealthIT Security | Additional coverage at: Health Populi, For The Record and Patient Engagement HIT]

WW – Study Finds Medical Records Are Breached Worryingly Often

A new study by two physicians from Massachusetts General Hospital has concluded that breaches of people’s health data are alarmingly frequent and large scale. Writing in the Journal of the American Medical Association [Temporal Trends and Characteristics of Reportable Health Data Breaches, 2010-2017], Dr Thomas McCoy Jr and Dr Roy Perlis state that 2,149 breaches comprising a total of 176.4 million records occurred between 2010 and 2017. Their data was drawn from the US Health and Human Services Office for Civil Rights breach database [last 24 months here & archive of earlier breaches], where all breaches of American patient records must be reported under US law. With the exception of 2015, the number of breach events increased every year during that period. Paper and film-based information was the most commonly compromised type of medical record, with 510 breaches involving 3.4 million records, though the frequency of this type of breach went down across the study period. The largest share of breached records – 139.9 million – came from infiltration of network servers storing electronic health records (EHRs), and the frequency of hacking-based breaches went up during the study period. The majority of breaches occurred due to the actions of health care providers, though compromised systems in health plan companies accounted for more total records infiltrated. The authors write that “Although networked digital health records have the potential to improve clinical care and facilitate learning [in] health systems, they also have the potential for harm to vast numbers of patients at once if data security is not improved.” [IFLScience! Additional coverage at: Reuters and Healthcare Informatics]

US – Eight Healthcare Privacy Incidents in September

Eight privacy incidents at healthcare organizations captured public attention last month. While media outlets reported on the following breaches in September, healthcare organizations experienced breaches as early as 2014. Here are the eight incidents presented in order of number of patients affected: 1) The Fetal Diagnostic Institute of the Pacific in Honolulu notified 40,800 patients about a potential data breach after it fell victim to a ransomware attack in June; 2) Blue Cross Blue Shield of Rhode Island notified 1,567 members that an unnamed vendor responsible for sending members’ benefits explanations breached their personal health information; 3) An employee at Kings County Hospital’s emergency room stole nearly 100 patients’ private information and sold it through an encrypted app on his phone; 4) Claxton-Hepburn Medical Center in Ogdensburg, N.Y., terminated an undisclosed number of employees after hospital officials identified breaches of patient health information during a recent internal investigation; 5) Reliable Respiratory in Norwood, Mass., discovered unusual activity on an employee’s email account in July, which may have allowed hackers to access an undisclosed number of patients’ protected health information; 6) Independence Blue Cross in Pennsylvania notified an undisclosed number of plan members about a potential compromise of their protected health information after an employee uploaded a file containing personal data to a website that was publicly accessible for three months; 7) Nashville, Tenn.-based Aspire Health lost some patient information to an unknown cyberattacker who gained access to its internal email system in September, federal court records filed Sept. 25 show; and 8) Lutheran Hospital in Fort Wayne, Ind., canceled all remaining elective surgeries Sept. 18 after its IT team discovered a computer virus on its systems. [Becker’s Hospital Review]

Horror Stories

WW – Google Exposed User Data, Feared Repercussions of Disclosing to Public

Google exposed the private data of hundreds of thousands of users of the Google+ social network and then opted not to disclose the issue this past spring, in part because of fears that doing so would draw regulatory scrutiny and cause reputational damage, according to people briefed on the incident and documents reviewed by The Wall Street Journal. As part of its response to the incident, the Alphabet Inc. unit on Monday announced [see blog post] a sweeping set of data privacy measures that include permanently shutting down all consumer functionality of Google+. A software glitch in the social site gave outside developers potential access to private Google+ profile data including: full names, email addresses, birth dates, gender, profile photos, places lived, occupation and relationship status between 2015 and March 2018, when internal investigators discovered and fixed the issue. A memo prepared by Google’s legal and policy staff and shared with senior executives warned that disclosing the incident would likely trigger “immediate regulatory interest” and invite comparisons to Facebook’s leak of user information to data firm Cambridge Analytica. Chief Executive Sundar Pichai was briefed on the plan not to notify users after an internal committee had reached that decision. The question of whether to notify users went before Google’s Privacy and Data Protection Office, a council of top product executives who oversee key decisions relating to privacy. In weighing whether to disclose the incident, the company considered “whether we could accurately identify the users to inform, whether there was any evidence of misuse, and whether there were any actions a developer or user could take in response. None of these thresholds were met here,” a Google spokesman said in a statement. During a two-week period in late March, Google ran tests to determine the impact of the bug, one of the people said.
It found 496,951 users who had shared private profile data with a friend could have had that data accessed by an outside developer. Some of the individuals whose data was exposed to potential misuse included paying users of G Suite, a set of productivity tools including Google Docs and Drive. G Suite customers include businesses, schools and governments. In its contracts with paid users of G Suite apps, Google tells customers it will notify them about any incidents involving their data “promptly and without undue delay” and will “promptly take reasonable steps to minimize harm.” That requirement may not apply to Google+ profile data, however, even if it belonged to a G Suite customer. [The Wall Street Journal | Google exposed data for hundreds of thousands of users | Google+ shutting down after data leak affecting 500,000 users | Google+ Is Shutting Down After a Security Bug Exposed User Info | Google did not disclose security bug because it feared regulation, says report | Laughing at the Google+ bug? You’re making a big mistake | Here’s how to quickly check if you have a Google+ account — and delete it]

Online Privacy

WW – Instagram Prototypes Handing Your Location History to Facebook

Instagram has been spotted prototyping a new privacy setting that would allow it to share your location history with Facebook. That means your exact GPS coordinates collected by Instagram, even when you’re not using the app, would help Facebook to target you with ads and recommend you relevant content. The geo-tagged data would appear to users in their Facebook Profile’s Activity Log, which includes creepy daily maps of the places you’ve been. This commingling of data could upset users who want to limit Facebook’s surveillance of their lives. A Facebook spokesperson tells TechCrunch that “To confirm, we haven’t introduced updates to our location settings. As you know, we often work on ideas that may evolve over time or ultimately not be tested or released. Instagram does not currently store Location History; we’ll keep people updated with any changes to our location settings in the future.” That effectively confirms Location History sharing is something Instagram has prototyped, and that it’s considering launching but hasn’t yet. Delivering the exact history of where Instagram users went could assist Facebook with targeting them with local ads across its family of apps. If users are found to visit certain businesses, countries, neighborhoods, or schools, Facebook could use that data to infer which products they might want to buy and promote them. It could even show ads for restaurants or shops close to where users spend their days. Just yesterday, we reported that Facebook was testing a redesign of its Nearby Friends feature that replaces the list view of friends’ locations with a map. Pulling in Location History from Instagram could help keep that map up to date. [TechCrunch | Facebook tests Snapchat-like map for Nearby Friends]

WW – Google’s New Chrome Extension Rules Improve Privacy and Security

Google has announced several rules aimed at making Chrome extensions safer and more trustworthy. Many extensions request blanket access to your browsing data, but you’ll soon have the option to whitelist the sites they can view and manipulate, or opt to grant an extension access to your current page with a click. That feature is included in Chrome 70, which is scheduled to arrive later this month and includes other privacy-focused updates.  Developers can no longer submit extensions that include obfuscated code. Google says 70% of malicious and policy-violating extensions use such code. More easily accessible code should speed up the review process too. Developers have until January 1st to strip obfuscated code from their extensions and make them compliant with the updated rules. Additionally, there will be a more in-depth review process for extensions that ask you for “powerful permissions”, Google says. The company is also more closely monitoring those with remotely hosted code. Next year, developers will need to enable two-step verification on their Chrome Web Store accounts. Google also plans to introduce an updated version of the extensions platform manifest, with the aim of enabling “stronger security, privacy and performance guarantees.” Google says half of Chrome users actively employ extensions, so the changes could make browsing the web more secure for millions of people. [engadget – additional coverage at: TechCrunch, CNET News and VentureBeat]

US – Tim Cook Chides Tech Companies for Collecting Personal Data -But Apple Does it Too (Opinion)

Apple CEO Tim Cook took aim at the tech industry’s privacy practices. In an interview with Vice News, he said, “The narrative that some companies will try to get you to believe is, ‘I’ve got to take all your data to make my service better.’ Well, don’t believe that. Whoever’s telling you that, it’s a bunch of bunk.” Is this a case of the pot calling the kettle black? Apple has cultivated and established a reputation for concern over privacy. There’s a privacy webpage that lists the steps the company takes to safeguard user information and what it refrains from doing. And then there’s the legal privacy policy page that lists the things Apple can and does do with your information. Reading it is enlightening. The page, updated May 22, 2018, “covers how we collect, use, disclose, transfer, and store your personal information.” The details are important. The main one is the first definition: “Personal information is data that can be used to identify or contact a single person.” Is information about a person, such as activities on a website, personal in the sense of being able to identify an individual? No, but it becomes useful once associated with personal information. According to Goldman Sachs analyst Rod Hall, Google pays Apple $9 billion a year to remain Safari’s default search engine [coverage]. At the very least, there is a financial incentive for Apple to allow Google access to all the search information. 
Here is a partial list of “non-personal information” that Apple collects, according to its posted terms: a) Occupation, language, ZIP code, area code, unique device identifier, the URL where your browser was previously, your location and time zone when you used the Apple product; b) product name and device ID; c) details of how you use Apple services, including search queries; d) data stored in Apple log files includes “Internet protocol (IP) addresses, browser type and language, internet service provider (ISP), referring and exit websites and applications, operating system, date/time stamp, and clickstream data”; and e) Apple and its partners “may collect, use, and share precise location data, including the real-time geographic location of your Apple computer or device.”  Perhaps Apple is more concerned with privacy than other companies. Certainly, there’s been no news of a Facebook-style fiasco. Don’t necessarily assume that means you get real privacy. [Inc.com] Coverage at: Apple’s Tim Cook: ‘Don’t believe’ tech companies that say they need your data  | ‘It’s a Bunch of Bunk.’ Apple CEO Tim Cook on Why Tech Firms Don’t Need All Your Data—and Why Apple Expelled Alex Jones | Apple’s Tim Cook is sending a privacy bat-signal to US lawmakers | Apple chief says firm guards data privacy in China | Tim Cook: Don’t Get Hung Up on Where Apple Stores iCloud Data | Tim Cook to talk consumer privacy and data ethics at European data protection conference later this month

WW – Privacy Search Engine DuckDuckGo Up 50% in Searches in a Year

Privacy-focused search engine DuckDuckGo [wiki] has just announced it’s hit 30 million daily searches a year after reaching 20M — a year-on-year increase of 50% [see traffic stats]. Hitting the first 10M daily searches took the search engine a full seven years, and it was another two years to get to 20M. DDG’s search engine offers a pro-privacy alternative to Google search that does not track and profile users in order to target them with ads. Instead it displays ads based on the keyword being searched for at the point of each search — dispensing with the need to follow people around the web, harvesting data on everything they do to feed a sophisticated adtech business, as Google does. By comparison, Google handles at least 3 billion searches daily. This year DuckDuckGo expanded from its core search product to launch a tracker blocker that addresses wider consumer privacy concerns by helping web users keep more of their online activity away from companies trying to spy on them for profit. [TechCrunch | Privacy: A Business Imperative and Pillar of Corporate Responsibility | DuckDuckGo, the privacy-focused search engine, grows daily searches by 50% to 30 million]

Other Jurisdictions

CA – APEC Cross-Border Privacy Rules Enshrined in U.S.-Mexico-Canada Trade Agreement

On September 30, 2018, the U.S., Mexico and Canada announced a new trade agreement (the “USMCA”) aimed at replacing the North American Free Trade Agreement. Notably, the USMCA’s chapter on digital trade will require the U.S., Canada and Mexico to each “adopt or maintain a legal framework that provides for the protection of the personal information of the users,” including key principles such as: limitations on collection, choice, data quality, purpose specification, use limitation, security safeguards, transparency, individual participation and accountability. Article 19.8(2) directs the Parties to consider the principles and guidelines of relevant international bodies, such as the APEC Privacy Framework [overview here] and the OECD Recommendation of the Council concerning Guidelines Governing the Protection of Privacy and Transborder Flows of Personal Data, and Article 19.8(6) formally recognizes the APEC Cross-Border Privacy Rules (the “APEC CBPRs”) [here] within the Parties’ respective legal systems. In addition, Article 19.14(1)(b) provides that “the Parties shall endeavor to cooperate and maintain a dialogue on the promotion and development of mechanisms, including the APEC Cross-Border Privacy Rules, that further global interoperability of privacy regimes.” The USMCA must still pass the U.S. Congress, the Canadian Parliament, and the Mexican Senate. [Privacy & Information Security Law Blog (Hunton Andrews Kurth) | Additional coverage at: Womble Bond Dickinson via National Law Review, The Washington Post, Michael Geist Blog, Private Internet Access blog | Data localization concerns in USMCA may be overblown]

Privacy (US)

US – FTC Continues to Enforce EU-U.S. Privacy Shield

The U.S. Federal Trade Commission (FTC) recently settled enforcement actions [PR] against four companies accused of misleading consumers about their participation in the European Union-United States Privacy Shield framework [see here, here & wiki here], which allows companies to transfer consumer data from EU member states to the United States in compliance with EU law. These collective actions demonstrate the FTC’s ongoing commitment under new Chairman Joseph Simons to enforce U.S. companies’ filing obligations with the U.S. Department of Commerce as part of their efforts to comply with the Privacy Shield. These actions are also consistent with a recent statement [coverage here] by Gordon Sondland, U.S. Ambassador to the European Union, that the U.S. is complying with EU data protection rules. Key Takeaways:

  • The FTC will continue to hold companies accountable for the promises they make to consumers regarding their privacy policies, including participation in the Privacy Shield;
  • Companies participating in the Privacy Shield should re-evaluate their privacy procedures and policies regularly to ensure compliance with the various requirements of the Privacy Shield framework;
  • Once a company initiates the Privacy Shield certification process, it must complete that process to claim participation in the Privacy Shield framework; and
  • Companies looking to participate in the Privacy Shield or a similar privacy program should consult counsel to ensure the program is the best option for their particular business needs.

[Dechert LLP Blog | FTC continues aggressive enforcement of Privacy Shield | Additional coverage at: Privacy & Information Security Law Blog (Hunton Andrews Kurth), Privacy and Cybersecurity Perspectives (Murtha Cullina), Legal News Line]

US – Google Faces Mounting Pressure from Congress Over Google+ Privacy Flaw

In March, Google discovered a flaw in its Google+ API that had the potential to expose the private information of hundreds of thousands of users. Officials at Google opted not to disclose the vulnerability to its users or the public for fear of bad press and potential regulatory action [in an internal memo first reported here]. Now, lawmakers are asking to see those communications firsthand. Republican leaders from the Senate Commerce Committee are demanding answers from Google CEO Sundar Pichai about a recently unveiled Google+ vulnerability, requesting the company’s internal communications regarding the issue in a letter [PR & PDF]. Some of the senators’ Democratic counterparts on the committee reached out to the Federal Trade Commission to demand that the agency investigate the Google+ security flaw, saying in a letter [3 pg PDF here] that if agency officials discover “problematic conduct, we encourage you to act decisively to end this pattern of behavior through substantial financial penalties and strong legal remedies.” Google has until October 30th to respond to the senators’ inquiries, just weeks before Pichai is scheduled to testify in front of the House Judiciary Committee following the November midterm elections. An exact date for that hearing has yet to be announced. [The Verge | Senators demand Google hand over internal memo urging Google+ cover-up | Senators Demand Memo Behind Google+ Privacy Debacle Cover-Up | Google Draws Bipartisan Criticism Over Data Leak Coverup | Senator Blumenthal Wants FTC To Investigate Google Over Data Leak | Google+ vulnerability comes under fire in Senate hearing | Google facing scrutiny from Australian regulator over Google+ data breach | Google+ Glitch Revelation Sparks German Probe | U.S., European regulators investigating Google glitch]

US – Privacy Advocates Tell Senators What They Want in a Data Protection Law

Privacy advocates and tech giants like Google, Amazon and Apple all want a federal privacy law. But while tech companies essentially want a federal privacy bill to be a ceiling that would limit how far states could go with their own privacy rules, privacy advocates want it to be more of a floor that states can build on. During the Oct. 10 hearing before the Senate Committee on Commerce, Science and Transportation, privacy advocates stressed the need for a federal privacy law that could work in tandem with state laws instead of overriding them. Representatives included Andrea Jelinek, the chair of the European Data Protection Board [statement]; Alastair Mactaggart, the advocate behind California’s Consumer Privacy Act [statement]; Laura Moy, executive director of the Georgetown Law Center on Privacy and Technology [statement]; and Nuala O’Connor, president of the Center for Democracy and Technology [statement]. [CNET News | Privacy Groups Urge Congress To Create New National Privacy Law | CDD to Senate: Privacy Legislation Should Be Tough, Comprehensive, Enforceable | Lawmakers Push to Rein In Tech Firms After Google+ Disclosure | Senator calls for FTC investigation into Google+ data exposure]

US – Facebook Accused of Violating Children’s Privacy Law

Several US groups advocating public and children’s health have urged the FTC to take action against social media giant Facebook for allegedly violating children’s privacy law. The 18-member group, led by the Campaign for a Commercial-Free Childhood (CCFC), has filed a complaint asserting that Facebook’s Messenger Kids, a controversial messaging application for children as young as five, collects kids’ personal information without obtaining verifiable parental consent [PR & Complaint]. Messenger Kids is the first major social platform designed specifically for young children, but the complaint argues that Facebook’s parental consent mechanism does not meet the requirements of the Children’s Online Privacy Protection Act (COPPA) because any adult user can approve any account created in the app and “even a fictional ‘parent’ holding a brand-new Facebook account could immediately approve a child’s account without proof of identity.” The complaint further accused Facebook of disclosing data to unnamed third parties for “broad, undefined business purposes.” In January, the CCFC, on behalf of the advocacy groups, sent Facebook CEO Mark Zuckerberg a letter signed by over 100 experts and advocates asking him to remove Messenger Kids from its platform. Critics have been skeptical about Messenger Kids’ security measures for protecting children’s privacy and have been pushing for its closure since its debut last year [see CCFC petition]. [Financial Express]

Privacy Enhancing Technologies (PETs)

WW – Blockchain’s Role as a Privacy Enhancing Technology

Many of us hear the word “blockchain” [wiki & beginner’s guide], mentally file it under “something to do with Bitcoin,” and then swiftly move on. But there is more to this new technology than cryptocurrencies. Top of mind is blockchain’s potential to enable greater data privacy and data security, says Florian Martin-Bariteau, who runs the University of Ottawa’s Blockchain Legal Lab [here], a research team investigating the practical uses of the technology — and the legal issues those uses raise. He’s also on a panel at the forthcoming CBA Access to Information and Privacy Law Symposium in Ottawa (Oct. 19 and 20) that will compare uses of blockchain across industries. “The blockchain technology is actually a protocol for information or asset exchange, and an infrastructure for data storage and management,” he says. “It is literally a chain of blocks of information which are interlinked in a secure way.” It was conceived as a kind of secure spreadsheet — a way to timestamp documents in a ledger that could not be edited or tampered with. Martin-Bariteau describes it as a digital notary system. The technology has since developed to become “a secure, immutable database shared by all parties in a distributed network.” Its utility where privacy is an issue is plain to see. But part of the attraction of blockchain — the notion that data can’t be edited, altered or erased — is also part of the challenge it creates. For example, in the European Union and elsewhere, GDPR compliance includes the right to erasure. This has enormous implications for any system that requires registered users as part of its design. Martin-Bariteau is clear about the risks involved. “You need to be very careful about the information you register on an immutable ledger,” he notes. 
“You want to avoid including any personal information, so you need to design your implementation, or advise your clients to design it, in a way that it can use personal information without storing it.” [CBA National; see also: CNIL Publishes Initial Assessment on Blockchain and GDPR]
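Martin-Bariteau’s description of a “chain of blocks of information which are interlinked in a secure way”, together with his advice to use personal information without storing it, can be illustrated with a minimal Python sketch. This is a toy hash chain for intuition only, not a production blockchain: it has no network, consensus, or signature layer, and all names are invented for the example.

```python
import hashlib
import json
import time

def fingerprint(payload: dict) -> str:
    """Hash a block's contents so any later edit is detectable."""
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(data: str, prev_hash: str) -> dict:
    # Per the advice above, register only a digest of the sensitive content
    # on the immutable ledger, never the personal data itself.
    payload = {"timestamp": time.time(),
               "data_digest": hashlib.sha256(data.encode()).hexdigest(),
               "prev_hash": prev_hash}
    return {**payload, "hash": fingerprint(payload)}

def verify_chain(chain: list) -> bool:
    """Recompute every hash; editing any earlier block breaks all later links."""
    for i, block in enumerate(chain):
        payload = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != fingerprint(payload):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("notarized document #1", prev_hash="0" * 64)]
chain.append(make_block("notarized document #2", chain[-1]["hash"]))
assert verify_chain(chain)

chain[0]["data_digest"] = "tampered"   # alter an early block...
assert not verify_chain(chain)         # ...and verification fails
```

Because each block embeds the previous block’s hash, the chain behaves like the “digital notary” described above: timestamps and fingerprints can be proven, but the raw documents never touch the ledger.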

RFID / Internet of Things

US – NIST Seeks Public Comment on Managing Internet of Things Cybersecurity and Privacy Risks

The U.S. Department of Commerce’s National Institute of Standards and Technology recently announced that it is seeking public comment on Draft NISTIR 8228, Considerations for Managing Internet of Things (“IoT”) Cybersecurity and Privacy Risks (the “Draft Report”). The document is to be the first in a planned series of publications that will examine specific aspects of the IoT topic. The Draft Report identifies three high-level considerations with respect to the management of cybersecurity and privacy risks for IoT devices as compared to conventional IT devices: (1) many IoT devices interact with the physical world in ways conventional IT devices usually do not; (2) many IoT devices cannot be accessed, managed or monitored in the same ways conventional IT devices can; and (3) the availability, efficiency and effectiveness of cybersecurity and privacy capabilities are often different for IoT devices than conventional IT devices. The Draft Report also identifies three high-level risk mitigation goals: (1) protect device security; (2) protect data security; and (3) protect individuals’ privacy. Comments are due by October 24, 2018 [download the NIST Comment Template for submitting your comments]. [Privacy & Information Security Law Blog (Hunton Andrews Kurth)]


WW – Two-Thirds of Data Security Pros Looking to Change Jobs

Nearly two-thirds of security pros are looking to leave their current jobs. That is one of the findings of a new study on IT security trends by staffing firm Mondo [PR & report], which says that 60% of these workers can be easily hired away. Lack of growth opportunities and job satisfaction are tied as the top reasons to leave a job, according to the survey. The study found several other top reasons why IT security experts leave a job. They include: 1) Unhealthy work environment (cited by 53%); 2) Lack of IT security prioritization from C-level or upper management (cited by 46%); 3) Unclear job expectations (cited by 37%); and 4) Lack of mentorship (cited by 30%). To help retain IT security experts, the study recommends that organizations offer the following benefits, based on responses from security pros: 1) Promoting work-life balance; 2) Taking worker security concerns seriously; 3) Sponsorship of certifications or courses; 4) Increased investment in emerging tech; and 5) CISO leadership/defined ownership of security needs. Mondo gathered this data by surveying more than 9,000 IT security professionals and decision-makers. [Information Management]

Smart Cars / Cities

WW – Google’s Plans for First Wired Urban Community Raise Data-Privacy Concerns

A unit of Google’s parent company Alphabet is proposing to turn a rundown part of Toronto’s waterfront into what may be the most wired community in history — to “fundamentally refine what urban life can be.” [see overview here] Sidewalk Labs [here] has partnered with a government agency known as Waterfront Toronto [here] with plans to erect mid-rise apartments, offices, shops and a school on a 12-acre site — a first step toward what it hopes will eventually be an 800-acre development. But some Canadians are rethinking the privacy implications of giving one of the most data-hungry companies on the planet the means to wire up everything from streetlights to pavement. And some want the public to get a cut of the revenue from products developed using Canada’s largest city as an urban laboratory. “The Waterfront Toronto executives and board are too dumb to realize they are getting played,” said former BlackBerry Chief Executive Jim Balsillie, who also said the federal government is pushing the board to approve it. “Google knew what they wanted. And the politicians wanted a PR splash and the Waterfront board didn’t know what they are doing. And the citizens of Toronto and Canada are going to pay the price,” Balsillie said. Julie Di Lorenzo, a prominent Toronto developer who resigned from the Waterfront Toronto board over the project [see coverage], said data and what Google wants to do with it should be front and center in the discussions. She also believes the government agency has given the Google affiliate too much power over how the project develops. “How can (Waterfront Toronto), a corporation established by three levels of democratically elected government, have shared values with a limited, for-profit company whose premise is embedded data collection?” Di Lorenzo asked. Bianca Wylie, an advocate of open government, said it remains deeply troubling that Sidewalk Labs still hasn’t said who will own data produced by the project or how it will be monetized. 
Google is here to make money, she said, and Canadians should benefit from any data or products developed from it. “We are not here to be someone’s research and development lab,” she said, “to be a loss leader for products they want to sell globally.” Ottawa patent lawyer Natalie Raffoul said the fact that the current agreement leaves ownership of data issues for later shows that it wasn’t properly drafted and means patents derived from the data will default to Google. [The Seattle Times]


US – That Sign Telling You How Fast You’re Driving May Be Spying on You

According to recently released US federal contracting data, the Drug Enforcement Administration will be expanding the footprint of its nationwide surveillance network with the purchase of “multiple” trailer-mounted speed displays “to be retrofitted as mobile License Plate Reader (LPR) platforms.” The DEA is buying them from RU2 Systems Inc., a private Mesa, Arizona company. [For overviews of LPRs see EFF] Two other, apparently related contracts show that the DEA has hired a small machine shop in California, and another in Virginia, to conceal the readers within the signs. An RU2 representative said the company providing the LPR devices themselves is a Canadian firm called Genetec. The DEA expects to take delivery of its new license plate-reading speed signs by October 15. The DEA launched its National License Plate Reader Program in 2008; it was publicly revealed for the first time during a congressional hearing four years after that. The DEA’s most recent budget describes the program as “a federation of independent federal, state, local, and tribal law enforcement license plate readers linked into a cooperative system, designed to enhance the ability of law enforcement agencies to interdict drug traffickers, money launderers or other criminal activities on high drug and money trafficking corridors and other public roadways throughout the U.S.” What is a game-changing crime-fighting tool to some is a privacy overreach of near-existential proportions to others. License plate readers, which can capture somewhere in the neighborhood of 2,000 plates a minute, cast an astonishingly wide net that has made it far easier for cops to catch serious criminals. On the other hand, the indiscriminate nature of the real-time collection, along with the fact that it is then stored by authorities for later data mining, is highly alarming to privacy advocates. [QUARTZ | How roadside speed signs in the U.S. could be tracking you using Canadian-made tech]





16–30 September 2018


US – Use of Facial-Recognition Technology Fuels Debate at Seattle School

RealNetworks is offering schools a new, free security tool: Secure, Accurate Facial Recognition — or SAFR, pronounced “safer” — a technology that the company began offering free to K-12 schools this summer. It took three years, 8 million faces and more than 8 billion data points to develop the technology, which can identify a face with near-perfect accuracy. The software is already in use at one Seattle school, and RealNetworks is in talks to expand it to several others across the country. But as the technology moves further into public spaces, it’s raising privacy concerns and calls for regulation — even from the technology companies that are inventing the biometric software. Privacy advocates wonder if people fully realize how often their faces are being scanned, and advocates and the industry alike question where the line is between the benefits to the public and the cost to privacy. “There’s a general habituation of people to be tolerant of this kind of tracking of their face,” said Adam Schwartz, a lawyer with digital privacy group Electronic Frontier Foundation. “This is especially troubling when it comes to schoolchildren. It’s getting them used to it.” School security is a serious issue, he agreed, but he said the benefits of facial recognition in this case are largely unknown, and the damage to privacy could be “exceedingly high.” Clare Garvie, an associate at the Center on Privacy and Technology at Georgetown Law Center, finds the lack of transparency into how the technology is being used and the lack of federal laws troubling. Garvie was on a team that conducted a widespread study that found 54% of U.S. residents are in a facial-recognition database accessible by law enforcement [see PR here & study report here] — usually in the form of a driver’s license photo. “It is unprecedented to have a biometric database that is composed primarily of law-abiding citizens,” Garvie said. 
“The current trajectory might fundamentally change the relationship between police and the public,” she said. “It could change the degree to which we feel comfortable going about our daily lives in public spaces.” Alessandro Acquisti [here & here], a professor of information technology and public policy at Carnegie Mellon University, pointed out that facial recognition can be used for good — to combat child trafficking — and for bad — to track law-abiding citizens anywhere they go. That doesn’t mean it’s neutral, he said. Anonymity is becoming more scarce with the proliferation of photos on social media and the technology that can recognize faces. [Seattle Times; see also: Are You on Board with Using Facial Recognition in Schools? | Is Facial Recognition in Schools Worth the High Price?]

Big Data / Analytics

WW – ‘Predictive Policing’: Law Enforcement Revolution or Spin on Old Biases?

Los Angeles has been put on edge by the LAPD’s use of an elaborate data collection centre, a shadowy data analysis firm called Palantir, and predictive algorithms to try to get a jump on crime. Los Angeles isn’t the only place where concerns are flaring over how citizens’ data is collected and used by law-enforcement authorities. Police forces across the U.S. are increasingly adopting the same approach as the LAPD: employing sophisticated algorithms to predict crime in the hope they can prevent it. Chicago, New York City and Philadelphia use similar predictive programs and face similar questions from the communities they are policing, and even legal challenges over where the information is coming from and how police are using it. A sophisticated program called PredPol, short for predictive policing, is used to varying degrees by 50 police forces across the United States. The genesis of the program came from a collaboration between LAPD deputy chief Sean Malinowski and Canadian Jeff Brantingham, an anthropology professor at UCLA. Canadian police forces are very aware of what their U.S. counterparts are doing, but they are wary of jumping in with both feet due to concerns over civil liberties issues. Sarah Brayne, a Canadian sociologist, spent two years inside the LAPD studying its use of predictive policing. She says the LAPD has been using predictive policing since 2012, and crunching data on a wide range of activities — from “where to allocate your resources, where to put your cars, where to put your personnel, to helping investigators solve a crime. And even for some risk management, like tracking police themselves, for performance reviews and different accountability reasons.” But PredPol is just one of the police systems that community watchdogs are concerned about. The Rampart division of the LAPD uses another program to pinpoint individuals who are at risk of committing crimes in the future. This is known as person-based predictive policing. 
… The program is called Los Angeles Strategic Extraction and Restoration (LASER). At the moment it generates a list of approximately 20 “chronic offenders” that is updated monthly. LAPD documents show how LASER gives people specific scores, which increase with each police encounter. You get five points if you are a gang member. Five points if you are on parole or probation. Five points for an arrest with a handgun. And one point for every “quality” police contact in the past two years, which includes what the LAPD calls “Field Interviews.” In Canada, field interviews are called “carding,” referring to the cards police use to record information about the people they have stopped — even when there are no grounds to think they’ve committed an offence. On the chronic offender bulletin there are names, addresses, scores ranging from six to 28, dates of birth and gang affiliations (Crazy Riders, Wanderers, 18th Street, and so on). The police try to track down the people on the bulletin and hand-deliver an “At Risk Behaviour” letter to each one — if they can find them. Officers are given instructions to contact the offenders on the list every month “to check their status” and to remind them to use the community services. They are also encouraged to door-knock on adjacent residences to “spark interest and gather info.” [CBC News]
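The point scheme the LAPD documents describe can be written out as a simple scoring function, which makes the mechanics of the “chronic offender” bulletin easier to see. This is a rough illustration only: the field names are invented for the sketch, the real LASER system is not public code, and the coverage does not say whether handgun points accrue per arrest, so the toy treats them per arrest.

```python
def chronic_offender_score(person: dict) -> int:
    """Toy recreation of the LASER point scheme described above (illustrative only)."""
    score = 0
    if person.get("gang_member"):
        score += 5                               # five points for gang membership
    if person.get("on_parole_or_probation"):
        score += 5                               # five points for parole/probation
    score += 5 * person.get("handgun_arrests", 0)  # five points per handgun arrest (assumed per-arrest)
    # One point per "quality" police contact (field interview) in the past two years
    score += person.get("quality_contacts_2yr", 0)
    return score

# A person on parole with one handgun arrest and six field interviews:
assert chronic_offender_score(
    {"on_parole_or_probation": True, "handgun_arrests": 1, "quality_contacts_2yr": 6}
) == 16
```

Written this way, the critics’ concern is visible in the code itself: every police contact, including a stop with no grounds for suspecting an offence, mechanically pushes a person’s score higher.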

CA – Q&A: Data Ownership Conundrum in the Data Driven World

Modern society is increasingly reliant upon data and driven by data gathering and data analytics. This leads to many questions that need to be unraveled relating to privacy, data rights and smart cities. One person well-placed to tackle these issues is Teresa Scassa [University of Ottawa law professor & fellow at the Waterloo-based Centre for International Governance Innovation]. In her latest research paper, Data Ownership, Scassa describes how in most jurisdictions the ownership of data is often based in copyright law or protected as confidential information. In Europe, database protection laws also play a role. However, there are limitations and major areas where laws fall short. For example, “Copyright protection requires a human author. Works that are created by automated processes in which human authorship is lacking cannot, therefore, be copyright protected. This has raised concerns that the output of artificial intelligence processes will not be capable of copyright protection,” warns Scassa. To discuss these important issues further, Digital Journal recently asked Teresa Scassa the following questions: 1) How important has data become for businesses?; 2) Are consumers too willing to provide personal data?; 3) How concerned should people be about what is done with personal data?; 4) How about data security issues. How secure is most personal data that is held by companies?; and 5) How are new technologies, like artificial intelligence, affecting data privacy? [Digital Journal] In a follow up interview, Teresa Scassa discusses data privacy laws, considering the recent changes affecting Europe and the possible implications for the U.S. [here]


CA – OPC Publishes Draft Guidelines for Mandatory Breach Reporting

On September 17, 2018, the Office of the Privacy Commissioner of Canada (OPC) published draft guidelines on mandatory breach reporting under the “Personal Information Protection and Electronic Documents Act” (PIPEDA). The guidelines are intended to assist organizations in meeting their breach reporting and record-keeping obligations under PIPEDA’s mandatory breach reporting regime, which comes into force on November 1, 2018. Organizations have until October 2, 2018 to provide feedback on these draft guidelines. In April 2018, the federal government published the Breach of Security Safeguards Regulations setting out the requirements of the new regime, and announced that the Regulations would come into force on November 1, 2018. … Organizations will be required to notify the OPC and affected individuals of “a breach of security safeguards” involving personal information under the organization’s control where it is reasonable in the circumstances to believe that the breach creates a “real risk of significant harm” to affected individuals. Other organizations and government institutions must also be notified where such organization or institution may be able to mitigate or reduce the risk of harm to affected individuals. Organizations must also keep and maintain records of all breaches of security safeguards regardless of whether they meet the harm threshold for reporting. Failure to report a breach or to maintain records as required is an offence under PIPEDA, punishable by a fine of up to C$100,000. Unfortunately for stakeholders, much of the information in the draft guidelines is simply a reiteration of the legal requirements as set out in PIPEDA and the Regulations. 
However, the draft guidelines provide additional guidance in certain areas, including: 1) Who Is Responsible for Reporting a Breach?; 2) When Does a Breach Create a Real Risk of Significant Harm?; 3) Form of Report; and 4) What Information Must Be Included in a Breach Record? [Business Class (Blakes) Additional coverage at: BankInfo Security]

CA – Upcoming Canadian Breach Notification Requirements Still in Flux

Canada’s national breach notification requirements are coming online November 1st, meaning companies experiencing a data breach will soon have new reporting obligations. These requirements were created in 2015 by the Digital Privacy Act, which amended the Personal Information Protection and Electronic Documents Act (PIPEDA), Canada’s main privacy statute. In April 2018, in preparation for the national implementation of the new law, the federal government issued Regulations that establish detailed requirements regarding the content and methodology of breach notifications to the Office of the Privacy Commissioner of Canada (OPC) and affected individuals. After those Regulations were issued, the OPC continued to receive requests for further clarity and guidance regarding the breach notification requirements under PIPEDA and the Regulations. In response to those requests, the OPC announced that it would issue further guidance (“What You Need To Know About Mandatory Reporting Of Breaches Of Security Safeguards”) on breach notification and reporting. On September 17th, the OPC invited public feedback on the draft guidance. The OPC will accept feedback until October 2, 2018. Comments can be sent to OPC-CPVPconsult2@priv.gc.ca and must be either in the body of the email or attached as a Word or PDF document. The OPC will publish the final guidance soon after the October 2nd deadline to ensure guidance is in place when the amendment becomes effective in November. … the OPC’s September 17th announcement indicates there is still uncertainty around what exactly will be required of companies that experience a breach. Companies that hold or control information on Canadian residents have one more opportunity to impact the final requirements or pose questions for clarity in the OPC’s guidance, and should submit their views before the October 2nd deadline. 
[Eye on Privacy (SheppardMullin) and at: BankInfo Security]

CA – OPC Denounces Slow Progress on Fixing Outdated Privacy Laws

Federal Privacy Commissioner Daniel Therrien’s annual report to Parliament was tabled. [see here, Commissioner’s Message here & 103 pg PDF here] It outlines the work of the Office of the Privacy Commissioner of Canada (OPC) as it relates to both the Personal Information Protection and Electronic Documents Act (PIPEDA), Canada’s federal private sector privacy law, and the Privacy Act, which applies to the federal public sector. It covers important initiatives over the last year, including key investigations, work on reputation and privacy, new consent guidance as well as work on national security and Bill C-59 [here]. In his report, Therrien also reiterated calls for the government to increase his office’s resources. “My office needs a substantial budget increase to keep up our knowledge of the technological environment and improve our capacity to inform Canadians of their rights and guide organizations on how to comply with their obligations,” he says. “Additional resources are also needed to meet our obligations under the new breach reporting regulations that come into force in November.” [see here] Under the regulations, companies will be required to report all privacy breaches presenting a real risk of significant harm. While imperfect, Therrien calls the regulations “a step in the right direction.” As breach notification regulations come into force on the private sector side, serious concerns have also emerged about the federal government’s ability to prevent, detect and manage privacy breaches within its own institutions. An OPC review of privacy breach reporting by federal government institutions found thousands of breaches occur annually, and while some go unreported, others likely go entirely unnoticed at many institutions. Therrien [also] warns privacy concerns are reaching crisis levels and is calling on the federal government to take immediate action by giving his office new powers to more effectively hold organizations to account. 
“Unfortunately, progress from government has been slow to non-existent … There’s no need to further debate whether to give my office new powers to make orders, issue fines and conduct inspections to ensure businesses respect the law. It’s not enough for the government to ask companies to do more to live up to their responsibilities. To increase trust in the digital economy, we must ensure Canadians can count on an independent regulator with the necessary tools to verify compliance with privacy law. If my Office had order making powers, our guidelines would be more than advice that companies can choose to ignore. They would become real standards that ensure real protection for Canadians.” Therrien says. [Office of the Privacy Commissioner of Canada Also see the OPC’s “Alert” Key lessons for public servants from the 2017-18 Annual Report Coverage: Canada’s privacy laws ‘sadly falling behind’ other countries: Privacy commissioner | Privacy commissioner slams ‘slow to non-existent’ federal action in light of major data breaches | Watchdog says Ottawa moving too slowly on privacy threats | Watchdog slams government’s ‘slow to non-existent’ action to protect Canadians’ privacy | Time of ‘self-regulation’ is over, privacy czar says in push for stronger laws]

CA – ‘Right to Be Forgotten’ Could Trigger Battle Over Free Speech in Canada

A push by some for a “right to be forgotten” for Canadians is setting up what could be a landmark battle over the conflict between privacy and freedom of expression on the internet. In his annual report issued September 27 [PR, Report, Commissioner’s Message & 103 pg PDF], Privacy Commissioner Daniel Therrien served notice he intends to seek clarity from the Federal Court on whether existing laws already give Canadians the right to demand that search engines remove links to material that is outdated, incomplete or incorrect, a process called “de-indexing.” Following a round of consultations he launched in 2016, Therrien concluded in a draft report earlier this year that Canadians do have that right under PIPEDA. Google disagrees — and warns that a fundamental charter right is being threatened. [Section 2 (b) — expression & press freedom, wiki here, Charter here, guidance here] “The right to be forgotten impinges on our ability to deliver on our mission, which is to provide relevant search results to our users,” said Peter Fleischer [here], Google’s global privacy counsel. “What’s more, it limits our users’ ability to discover lawful and legitimate information.” University of Ottawa law professor Michael Geist [also blog posts here & here], who specializes in internet and e-commerce law, said “Given the complexity, given the freedom of expression issues that arise out of this, I think the appropriate place is within Parliament to explicitly go through the policy process and decide what’s right for Canada on this.” Internet lawyer Allen Mendelsohn [blog posts here & here] worries about the “slippery slope” implied in a right to be forgotten. With no easy answers on how to move forward, he said it’s Parliament’s duty to debate the concept and decide on appropriate standards. “Parliament represents the people, and if the will of the people think this is a good thing to do, then there’s no good reason why they shouldn’t go ahead and do it,” he said. 
Google argues that freedom of expression is a fundamental human right. While the European court upheld the right to be forgotten, Chile, Colombia and the U.S. have all rejected it. According to Peter Fleischer “As the privacy commissioner considers translating the European model to Canada, it will also have to confront the challenges of how to balance one person’s right to privacy with another’s right to know, and whether the European right to be forgotten would be consistent with the rights outlined in Canada’s Charter of Rights and Freedoms, which assures Canadians ‘freedom of thought, belief, opinion and expression, including freedom of the press and other media of communication.’“ [CBC News | Privacy watchdog to seek ruling on ‘right to be forgotten’]

CA – Liberals Won’t Put Political Parties Under Privacy Laws

The Liberal government will not accept a recommendation — endorsed by MPs from the three major parties on the Access to Information, Privacy and Ethics Committee [see here & report here also 56 pg PDF] — to develop a set of privacy rules for political parties or bring them under existing laws. Instead, under the Liberals’ electoral rule changes, parties will simply have to post a privacy policy online. Bill C-76 [here] does not allow for any independent oversight, however, to ensure parties are actually following their policies. Because they’re specifically exempted from federal privacy laws, parties are also not required to report if they’ve been hacked or suffered a data breach involving sensitive information about Canadians. The decision means federal political parties can continue to collect, store and use the personal information of Canadian citizens without limitations, laws or independent oversight. Federal Privacy Commissioner Daniel Therrien — along with his counterparts at the provincial and territorial levels — issued a joint statement calling on all levels of government to put some form of restrictions on parties’ data operations — an increasingly crucial aspect of electioneering in Canadian politics [see PR here & Joint Resolution here]. In exempting political parties from privacy laws, Canada is largely an outlier: the United Kingdom, New Zealand, and much of the European Union subject parties to privacy rules. [Toronto Star coverage at: Toronto Star Editorial | Political parties excused from privacy laws: Why Albertans’ personal information is at risk]

CA – Buyers’ Privacy Top Priority, Says Ontario’s Online Pot Retailer

Ontario’s government-run cannabis retailer is assuring its future customers that their privacy is the top priority, an issue a recent report ranked among the top demands of Canadian marijuana consumers, with one in five listing privacy and data security as the most important feature [see Deloitte’s 2018 cannabis report, PR]. Critics have raised concerns about how Ontario Cannabis Store (OCS) [here] customers’ data will be used and stored after the online delivery service launches on Oct. 17. There are worries the data may be stored in the United States, where American border agents could access it and ban travellers from entering the U.S. for using a drug that’s illegal there under federal law. The OCS this week announced it’s taking steps to safeguard customers’ privacy and keep their buying history confidential. Ensuring data is stored within Canada and other privacy considerations were key factors in deciding to partner with Shopify, the Ottawa-based e-commerce platform. All information collected will be held for a minimum time and then deleted, and no information will be sold to third parties, the company says. While dispensaries across the country are getting ready to open their doors on Oct. 17 — when Canada becomes the second country in the world to legalize recreational marijuana — Ontario residents will be able to legally buy pot only through a government-run delivery service. However, new Ontario Premier Doug Ford has rejected the government monopoly on cannabis sales — a model set up under the previous Liberal government — and storefront pot sales are to begin on April 1. [The London Free Press]

CA – TREB CEO Concerned About Homeowner Privacy, Security

The Toronto Real Estate Board is “pressing ahead” with the Competition Bureau’s demand to make home sales data available on realtors’ password-protected websites, but that doesn’t mean the board’s concerns around privacy are gone. In his first interview since the Supreme Court of Canada refused in August to hear TREB’s seven-year fight [read Competition Bureau PR here & TREB PR here] to keep the numbers under wraps – effectively forcing them to be made public – the board’s chief executive officer John DiMichele told The Canadian Press, “the element of privacy in our opinion hasn’t been settled completely yet.” DiMichele is particularly concerned because he claims to have seen evidence of brokers’ remarks about homeowners being posted online, information that is not included in the home sales data feed TREB had to make available to realtors. DiMichele wouldn’t reveal how he discovered such violations [and he did not] discuss in detail what kind of action will be taken against anyone who is caught posting unauthorized information or home sales data without password protections – conditions mandated in a Competition Tribunal ruling [5 pg PDF here] that came into effect recently, after the Competition Bureau argued that TREB’s refusal to release the data was anti-competitive and stifled innovation. In early September, the board sent cease-and-desist letters to real estate companies warning it will revoke data access and TREB memberships or bring legal action against members it believes are violating its user agreement by posting sales numbers online “in an open and unrestricted fashion.” [The Globe & Mail Additional coverage at: The Toronto Star]


WW – Yes, Facebook Is Using Your 2FA Phone Number to Target You With Ads

Facebook has confirmed it does in fact use phone numbers that users provided it for security purposes to also target them with ads. Specifically, a phone number handed over for two-factor authentication (2FA) — a security technique that adds a second layer of authentication to help keep accounts secure. Facebook’s confession follows a story Gizmodo ran related to research work carried out by academics at two U.S. universities [Northeastern University and Princeton University] who ran a study [see Investigating sources of PII used in Facebook’s targeted advertising – 18 pg PDF here] in which they say they were able to demonstrate that the company uses pieces of personal information, which individuals did not explicitly provide for advertising, to nonetheless target them with ads. Some months ago Facebook did say that the spamming of users with Facebook notifications to the number they provided for 2FA was a bug. “The last thing we want is for people to avoid helpful security features because they fear they will receive unrelated notifications,” Facebook then-CSO Alex Stamos wrote in a blog post at the time. He apparently did not think to mention the rather pertinent additional side-detail that the company is nonetheless happy to repurpose the same security feature for ad targeting. [TechCrunch coverage at: DeepLinks Blog (EFF), The Mercury News and Tom’s Hardware]
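The practice the researchers describe — data collected for one purpose being silently reused for another — is what "purpose limitation" controls are meant to prevent. The following is a minimal, hypothetical sketch (not Facebook's implementation; all names and the design are illustrative assumptions) of storing a purpose tag alongside each data field and refusing reads made for any other purpose:

```python
# Illustrative sketch of purpose limitation: every stored field carries the
# set of purposes it may be used for, and reads for any other purpose fail.
class PurposeLimitedStore:
    def __init__(self):
        self._data = {}  # key -> (value, frozenset of allowed purposes)

    def put(self, key, value, purposes):
        # Record the value together with the purposes disclosed at collection.
        self._data[key] = (value, frozenset(purposes))

    def get(self, key, purpose):
        # Deny any access whose stated purpose was not disclosed at collection.
        value, allowed = self._data[key]
        if purpose not in allowed:
            raise PermissionError(f"{key} may not be used for {purpose!r}")
        return value


store = PurposeLimitedStore()
# A 2FA phone number is collected strictly for authentication...
store.put("phone_2fa", "+15551234567", purposes={"authentication"})
store.get("phone_2fa", "authentication")   # allowed
# store.get("phone_2fa", "ad_targeting")   # would raise PermissionError
```

Under a scheme like this, repurposing a 2FA number for ad targeting would fail loudly at the access layer rather than happening invisibly.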

Facts & Stats

CA – Federal Workers Cited 3,075 Times for Lapses in Document Security

Office workers at Public Services and Procurement Canada were cited 3,075 times last year for failing to lock up documents, USB keys and other storage devices containing sensitive information, says a new security report. And six of those employees were found to be chronic offenders during a “security sweep” at the department in 2017-2018, with each of them leaving confidential material unsecured at least six times over the 12-month period, according to a June 2018 briefing note obtained by CBC News under the Access to Information Act. [CBC News]

WW – Cyber Crime’s Toll: $1.1 Million in Losses and 1,861 Victims per Minute

Every minute more than $1.1 million is lost to cyber crime and 1,861 people fall victim to such attacks, according to a new report [Evil Internet Minute 2018] from threat management company RiskIQ [see PR, Blog Post & Infographic]. Despite the best efforts of organizations to guard against external cyber threats, spending up to $171,000 every 60 seconds, attackers continue to proliferate and launch successful campaigns online, the study said. Attacker methods range from malware to phishing to supply chain attacks aimed at third parties. Their motives include monetary gain, large-scale reputational damage, politics and espionage. One of the biggest security threats is ransomware. The report said 1.5 organizations fall victim to ransomware attacks every minute, with an average cost to businesses of $15,221. [Information Management]


CA – N.S. Premier Calls Election Promise to Increase OIPC Powers ‘a Mistake’

In 2013, Stephen McNeil said that if he became premier, he would “expand the powers and mandate of the Office of the Information and Privacy Commissioner, particularly through granting her order-making power.” At the time he responded to a report by the Centre of Law and Democracy [12 pg PDF] that recommended a complete overhaul of the province’s freedom-of-information policy, writing “If elected Premier, I will expand the powers and mandate of the Review Officer, particularly through granting her order-making power.” Nearly five years later and with no follow-through on that commitment, he says the pledge was a “mistake.” He said that he thinks the office is functioning “properly” the way it is and that it has all the power it needs. But experts say that McNeil’s failure to institute meaningful reforms in government transparency five years after taking office indicates a larger failure to take government transparency seriously. Catherine Tully, the province’s current privacy commissioner, has issued her own calls to update the legislation, including giving her order-making power. She has said that legislation written in 1993 is outdated for the current digital world. [Global News]

US – Privacy Group Sues Archives for Kavanaugh Surveillance Records

The Electronic Privacy Information Center [EPIC] has filed a federal Freedom of Information Act lawsuit seeking records related to U.S. Supreme Court nominee Brett Kavanaugh’s involvement in the George W. Bush administration’s government surveillance programs between 2001 and 2006 during enactment of the Patriot Act and while the administration was conducting warrantless surveillance for counter-terrorism purposes. [see announcement here & 21 pg PDF claim here] The group alleged that Kavanaugh said in 2006 Senate testimony on his nomination to the U.S. Court of Appeals for the District of Columbia Circuit that he didn’t know anything about the warrantless wiretapping program, which was carried out in secret until 2005. His White House email communications and records related to the program have not been made available to the public, the group alleged. [Bloomberg BNA]


WW – Please Don’t Give Your Genetic Data to AncestryDNA as Part of Their Spotify Playlist Partnership

Ancestry, the world’s largest for-profit genealogy company, has announced a new partnership with Spotify to create playlists based on your DNA. The partnership combines Spotify’s personalized recommendations with Ancestry’s patented DNA home kit data to give users recommendations based on both their Spotify habits and their ancestral place of origin. A ThinkProgress investigation last year found that buried in their terms of service, Ancestry claims ownership of a “perpetual, royalty-free, worldwide license” that may be used against “you or a genetic relative” as the company and its researchers see fit. Upon agreeing to the company’s terms of service, you and any genetic relatives appearing in the data surrender partial legal rights to the DNA, including any damages that Ancestry may cause unintentionally or purposefully. At the same time, maybe their mission isn’t all that different from Spotify’s, who’ve spent the last few years preaching the Big Data gospel in their aim to deliver the most highly-personalized experience to users through data collection. However you feel about data privacy, the Ancestry partnership feels like another big move for Spotify, who have continued to partner with auto manufacturers, telecom behemoths, video providers and more in recent months. [SPIN coverage at: Jezebel, Quartzy, Complex and Campaign]

Health / Medical

US – Congress Urged To Align 42 CFR Part 2 with HIPAA Privacy Rule

The Partnership to Amend 42 CFR Part 2 is urging Congress to include the Overdose Prevention and Patient Safety Act (HR 6082), which would align 42 CFR Part 2 with the HIPAA Privacy Rule, in compromise opioid legislation that the House and Senate are considering. HR 6082 would allow the sharing of information about a substance abuse patient without the patient’s consent. The House passed its comprehensive opioid crisis legislation (HR 6) [here & 9 pg PDF overview here] in June, while the Senate just passed its legislation (S 2680). The two chambers are working on compromise legislation that they hope to pass before the mid-term elections. Currently, 42 CFR Part 2 prevents providers from sharing any information on a patient’s substance abuse history unless the patient gives explicit consent. The Partnership to Amend 42 CFR Part 2 wants current law to be amended because, it argues, the stricter confidentiality requirements have a negative effect on medical treatment of individuals undergoing treatment for addiction. They emphasized their case in a Sept. 18 letter to the Senate and House majority and minority leaders. Not everyone in healthcare favors changing 42 CFR Part 2. The American Medical Association (AMA) has come out against the effort to change current law [arguing in a letter sent to Congress – coverage here] that amending 42 CFR Part 2 would discourage addicted individuals from seeking treatment out of concern that their addiction treatment information will be shared without their permission. [HealthIT Security]

Horror Stories

CA – Proposed Class Action Lawsuit Launched After Alleged NCIX Data Breach

Kipling Warner, a Vancouver software engineer, has launched a proposed class action lawsuit in the wake of an alleged data breach involving personal information belonging to former customers of bankrupt computer retailer NCIX. [The issue is being investigated by the RCMP and the BC OIPC see here]. The notice of civil claim filed in B.C. Supreme Court [here] says he gave the company his name and address along with his debit and credit card details in the course of purchasing computer products. He’s seeking to certify a lawsuit against NCIX and the company tasked with auctioning off the computer firm’s old equipment. Warner claims NCIX failed to properly encrypt the information of at least 258,000 people. And he claims the auctioneer failed to take “appropriate steps to protect the private information on its premises.” Warner is suing for losses including damage to credit reputation, mental distress, “wasted time, frustration and anxiety” and time lost “engaging in precautionary communication” with banks, credit agencies and credit card companies. His lawyer, David Klein [here], told CBC that customers dealing with a technology company would expect anyone who comes into contact with their information to take steps to ensure confidentiality. The provincial privacy act says organizations doing business in British Columbia have a duty to protect the personal information entrusted to them. The federal regulation says personal information that is “no longer required to fulfil the identified purposes should be destroyed, erased or made anonymous.” The proposed class action lawsuit says millions of customers could be affected. [CBC News]

US – Uber Agrees to $148M Settlement With States Over Data Breach

Uber will pay $148 million and tighten data security after the ride-hailing company failed for a year to notify drivers that hackers had stolen their personal information, according to a settlement announced Wednesday with all 50 states and the District of Columbia over a massive 2016 data breach [here]. [see California AG PR here, Illinois AG PR here, Alaska AG PR here, New York AG PR here & New Mexico AG PR here] Instead of reporting [the breach], Uber hid evidence of the theft and paid ransom to ensure the data wouldn’t be misused. “This is one of the most egregious cases we’ve ever seen in terms of notification; a yearlong delay is just inexcusable,” Illinois Attorney General Lisa Madigan [wiki here] told The Associated Press. “And we’re not going to put up with companies, Uber or any other company, completely ignoring our laws that require notification of data breaches.” Uber, whose GPS-tracked drivers pick up riders who summon them from cellphone apps, learned in November 2016 that hackers had accessed personal data, including driver’s license information, for roughly 600,000 Uber drivers in the U.S. The company acknowledged the breach in November 2017, saying it paid $100,000 in ransom for the stolen information to be destroyed. The hack also took the names, email addresses and cellphone numbers of 57 million riders around the world. The settlement requires Uber to comply with state consumer protection laws safeguarding personal information and to immediately notify authorities in case of a breach; to establish methods to protect user data stored on third-party platforms and create strong password-protection policies. The company also will hire an outside firm to conduct an assessment of Uber’s data security and implement its recommendations. The settlement payout will be divided among the states based on the number of drivers each has. [The Washington Post coverage at: TechCrunch, PYMNTS, The Wall Street Journal and engadget]

US – Wendy’s Faces Lawsuit for Unlawfully Collecting Employee Fingerprints

A class-action lawsuit has been filed in Illinois against fast food restaurant chain Wendy’s accusing the company of breaking state laws with regard to the way it stores and handles employee fingerprints. The lawsuit was filed on September 11, in a Cook County court [here], according to a copy of the complaint obtained by ZDNet. [The case is: Martinique Owens and Amelia Garcia v. Wendy’s International LLC, et al., Case No. 2018­-ch-­11423, in the Circuit Court of Cook County — complaint here.] The complaint is centered around Wendy’s practice of using biometric clocks that scan employees’ fingerprints when they arrive at work, when they leave, and when they use the Point-Of-Sale and cash register systems. Plaintiffs, represented by former Wendy’s employees Martinique Owens and Amelia Garcia, claim that Wendy’s breaks state law — the Illinois Biometric Information Privacy Act (BIPA) [here] — because the company does not make employees aware of how the company handles their data. Wendy’s does not inform employees in writing of the specific purpose and length of time for which their fingerprints were being collected, stored, and used, as required by the BIPA, nor does it obtain a written release from employees with explicit consent to obtain and handle the fingerprints in the first place. Nor does it provide a publicly available retention schedule and guidelines for permanently destroying employees’ fingerprints after they leave the company, plaintiffs said. The class-action also names Discovery NCR Corporation [here], which is the software provider that supplies Wendy’s with the biometric clocks and POS and cash register access systems used in restaurants. Plaintiffs said they believe NCR may hold fingerprint information on other Wendy’s employees. [ZDNet coverage at: Top Class Actions, The Daily Dot, Human Capital (HRD), Gizmodo and Biometric Update]

WW – Facebook Forces Mass Logout After Breach

Facebook logged 90 million users out of their accounts after the company discovered that hackers had been exploiting a flaw in Facebook code that allowed them to steal Facebook access tokens and take over other people’s accounts. The stolen tokens could also be used to access apps and websites linked to the Facebook accounts. The hackers exploited a trio of flaws that affected the “View As” feature, which lets users see how their profiles appear to other people. Facebook has fixed the security issue; it has also reset the access tokens for 90 million accounts. Facebook became aware of the issue on September 16, when it noticed an unusual spike in people accessing Facebook. [newsroom.fb.com: Security Update | Wired: The Facebook Security Meltdown Exposes Way More Sites Than Facebook | Wired: Everything We Know About Facebook’s Massive Security Breach | eWeek: Facebook Data Breach Extended to Third-Party Applications | ZDNet: Facebook discloses network breach affecting 50 million user accounts | KrebsOnSecurity: Facebook Security Bug Affects 90M Users | The Register: Facebook: Up to 90 million addicts’ accounts slurped by hackers, no thanks to crappy code]

WW – Facebook Says Big Breach Exposed 50 Million Accounts to Full Takeover

Facebook Inc said [notice & details here] that hackers stole digital login codes allowing them to take over nearly 50 million user accounts in its worst security breach ever given the unprecedented level of potential access, adding to what has been a difficult year for the company’s reputation. It has yet to determine whether the attacker misused any accounts or stole private information. It also has not identified the attacker’s location or whether specific victims were targeted. Its initial review suggests the attack was broad in nature. Chief Executive Mark Zuckerberg described the incident as “really serious” in a conference call with reporters [see transcript]. His account was affected along with that of Chief Operating Officer Sheryl Sandberg, a spokeswoman said. The vulnerability had existed since July 2017, but the company first identified it on Tuesday after spotting a “fairly large” increase in use of its “view as” [here] privacy feature on Sept. 16, executives said. “View as” allows users to verify their privacy settings by seeing what their own profile looks like to someone else. The flaw inadvertently gave the devices of “view as” users the wrong digital code, which, like a browser cookie, keeps users signed in to a service across multiple visits. That code could allow the person using “view as” to post and browse from someone else’s Facebook account, potentially exposing private messages, photos and posts. The attacker also could have gained full access to victims’ accounts on any third-party app or website where they had logged in with Facebook credentials. Facebook fixed the issue. It also notified the U.S. Federal Bureau of Investigation, Department of Homeland Security, Congressional aides and the Data Protection Commission in Ireland, where the company has European headquarters. 
Facebook reset the digital keys of the 50 million affected accounts, and as a precaution temporarily disabled “view as” and reset those keys for another 40 million that have been looked up through “view as” over the last year. About 90 million people will have to log back into Facebook or any of their apps that use a Facebook login, the company said. [Reuters See also: Facebook Security Bug Affects 90M Users | Facebook’s spam filter blocked the most popular articles about its 50m user breach | Here’s what to do if you were affected by the Facebook hack | Facebook Says Three Different Bugs Are Responsible For The Massive Account Hacks | Facebook warns that recent hack could have exposed other apps, including Instagram, Tinder, and Spotify | Facebook Faces Class Action Over Security Breach That Affected 50 Million Users | Facebook Could Face Up to $1.63 Billion Fine for Latest Hack Under the GDPR | Facebook could be fined up to $1.63 billion for a massive breach which may have violated EU privacy laws | Until data is misused, Facebook’s breach will be forgotten]
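The mechanics described above — a preview feature minting a session credential scoped to the wrong account, remediated by a mass token reset — can be illustrated with a minimal sketch. This is hypothetical Python, not Facebook's actual code; all function and variable names are illustrative assumptions:

```python
import secrets

# In-memory session table: token -> the account that token grants access to.
SESSIONS = {}

def issue_token(account_id):
    # Mint a random bearer token; like a browser cookie, whoever holds it
    # is treated as the account it maps to.
    token = secrets.token_hex(16)
    SESSIONS[token] = account_id
    return token

def render_profile_preview(viewer_id, viewed_id):
    # BUG (the class of flaw described in the article): the preview embeds
    # a token scoped to the *viewed* account rather than the viewer, so
    # anyone using the preview obtains a credential for someone else.
    return {"html": "...", "embedded_token": issue_token(viewed_id)}

def revoke_all_tokens_for(account_id):
    # The remediation described: resetting (revoking) an account's tokens
    # invalidates any stolen credentials and forces a fresh login.
    for tok in [t for t, acct in SESSIONS.items() if acct == account_id]:
        del SESSIONS[tok]
```

The sketch also shows why the fix forced 90 million re-logins: revocation is server-side, so every holder of an affected token, legitimate or not, loses access at once.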

Internet / WWW

EU – Report Warns of Smart Home Tech Impact on Children’s Privacy

Dr. Veronica Barassi of Goldsmiths, University of London, leads the Child Data Citizen research project, and submitted a report on “Home Life Data and Children’s Privacy” to the Information Commissioner’s Office (ICO), arguing that data collected from children by home automation devices is both personal data and “home life data,” which is made up of family, household, biometric and highly contextual data. She calls for the ICO to launch a review of the impact of home life data on children’s privacy, and to include the concept in future considerations. [Biometric Update | additional coverage at: TechCrunch]

Law Enforcement

CA – RCMP’s Ability to Police Digital Realm ‘Rapidly Declining’

Privacy watchdogs have warned against any new encryption legislation. A note tucked into the briefing binder prepared for RCMP Commissioner Brenda Lucki when she took over the top job earlier this year, obtained by CBC News, may launch a renewed battle between the national police service and privacy advocates. “Increasingly, criminality is conducted on the internet and investigations are international in nature, yet investigative tools and RCMP capacity have not kept pace. Growing expectations of policing responsibilities and accountability, as well as complexities of the criminal justice system, continue to overwhelm the administrative demands within policing” [says the memo]. Encryption of online data has been a persistent thorn in the RCMP’s side. “Approximately 70% of all communications intercepted by CSIS and the RCMP are now encrypted. 80 organized crime groups were identified as using encryption in 2016 alone,” according to the 274-page [briefing binder]. Lucki’s predecessor lobbied the government for new powers to bypass digital roadblocks, including tools to get around encryption and warrantless access to internet subscriber information. Some critics have noted that non-criminals — journalists, protesters and academics, among others — also use encryption tools online and have warned any new encryption legislation could undermine the security of financial transactions and daily online communication. Ann Cavoukian …called the RCMP’s push for more online policing power “appalling.” … “I guess we should remind them that we still live in a free and democratic society where people have privacy rights, which means that they should be in control of their personal information … If you’re a law abiding citizen, you get to decide how your information is used and to whom it’s disclosed. The police have no right to access your personal information online, unless of course they have a warrant” she said. [CBC News]

Online Privacy

US – Facebook Scolds Police for Using Fake Accounts to Snoop on Citizens

In a September 19 letter, addressed to Memphis Police Department Director Michael Rallings, Facebook’s Andrea Kirkpatrick, director and associate general counsel for security, scolded the police for creating multiple fake Facebook accounts and impersonating legitimate Facebook users as part of its investigations into “alleged criminal conduct unrelated to Facebook.” Facebook’s letter was sent following a civil rights lawsuit filed by the American Civil Liberties Union (ACLU) of Tennessee that accused the MPD of illegally monitoring activists to stifle their free speech and protests. The lawsuit claimed that Memphis police violated a 1978 consent decree that prohibits infiltration of citizen groups to gather intelligence about their activities. After two years of litigation, the city of Memphis had entered into a consent decree prohibiting the government from “gathering, indexing, filing, maintenance, storage or dissemination of information, or any other investigative activity, relating to any person’s beliefs, opinions, associations or other exercise of First Amendment rights.” Before the trial even began over the ACLU’s lawsuit last month, US District Judge Jon McCalla issued a 35-page order agreeing with the plaintiffs, but he also ruled that police can use social media to look for specific threats: a ruling that, one imagines, would condone the use of fake profiles during undercover police work… but not the illegal surveillance of legal, Constitutionally protected activism. The ACLU lawsuit uncovered evidence that Memphis police used a fake “Bob Smith” account to befriend and gather intelligence on Black Lives Matter activists. According to the Electronic Frontier Foundation (EFF), Facebook deactivated “Bob Smith” after the organization gave it a heads-up. Then, Facebook went on to identify and deactivate six other fake accounts managed by Memphis police. [Naked Security (Sophos)]

WW – Google Promises Chrome Changes After Privacy Complaints

Google, on the defensive from concerns raised about how Chrome tracks its users, has promised changes to its web browser. Complaints in recent days involve how Google stores data about browsing activity in files called cookies and how it syncs personal data across different devices. Google representatives said there’s nothing to be worried about but that they’ll be changing Chrome nevertheless. In a recent blog post [here], Zach Koch, Chrome product manager, said that Google will add new options and explanations to the browser’s interface and reverse one Chrome cookie-hoarding policy that undermined people’s attempts to clear those cookies. [CNET News | coverage of complaints at: Bloomberg (video), CNBC, WIRED, TechCrunch, Forbes and Popular Mechanics]

WW – Privacy and Anonymity in the Modern World — CyberSpeak Podcast

On this episode of the CyberSpeak with InfoSec Institute podcast [YouTube here], Lance Cottrell, chief scientist at Ntrepid, talks about the evolution of privacy and anonymity on the Internet, the impact of new regulations and laws, and a variety of other privacy-related topics. In the podcast, Cottrell and host Chris Sienko discuss:

  • What about the early Internet drove you to focus on online anonymity and security? (1:45)
  • Do the early privacy tools and concepts hold up in today’s environment? (3:50)
  • When did it become apparent that fraudsters and phishers were taking over the Internet? (5:00)
  • What are some of the most effective social engineering attacks being used? (8:10)
  • Have you ever been scammed or phished? (11:35)
  • Why is online anonymity important? (14:50)
  • What are some examples of privacy and security issues while traveling? (20:50)
  • How will GDPR and California’s new privacy law affect anonymity and privacy? (23:25)
  • What would be your dream privacy regulation or law? (24:55)
  • What are your thoughts on privacy certifications? (28:50)
  • What’s the future of online privacy and anonymity? (29:40)

[Security Boulevard]

Privacy (US)

US – In Senate Hearing, Tech Giants Push Lawmakers for Federal Privacy Rules

A recent hearing at the Senate Commerce Committee [here] with Apple, Amazon, Google and Twitter, alongside AT&T and Charter, marked the latest in a string of hearings in the past few months. This time, privacy was at the top of the agenda. The problem, lawmakers say, is that consumers have little of it. Lawmakers at the hearing said that the U.S. was lagging behind Europe’s new GDPR privacy rules and California’s recently passed privacy law, which goes into effect in 2020, and they were edging toward introducing their own federal privacy law. Here are the key takeaways: 1) Tech giants want new federal legislation, if only to preempt California’s privacy law; 2) Google made “mistakes” on privacy, but evades China search questioning; and 3) Startups might struggle under GDPR-ported rules, companies claim …Committee chairman, Sen. John Thune (R-SD) said [here] that the committee won’t “rush through” legislation, and will ask privacy advocates for their input in a coming hearing. [Watch the full hearing here and read witness statements: Len Cali of AT&T – 6 pg PDF here; Andrew DeVore of Amazon – 5 pg PDF here; Keith Enright of Google – 6 pg PDF here & 3 pg PDF here; Damien Kieran of Twitter – 5 pg PDF here; Guy (Bud) Tribble of Apple – 2 pg PDF here; and Rachel Welch of Charter Communications – 5 pg PDF here | TechCrunch coverage: During Senate Hearing, Tech Companies Push for Lax Federal Privacy Rules | Tech Execs Offer Senate Help Writing a Toothless National Privacy Law | US privacy law is on the horizon. Here’s how tech companies want to shape it | Here’s why tech companies are in favor of *federal* regulation | Google confirms Dragonfly project in Senate hearing, dodges questions on China plans | Google confirms secret Dragonfly project, but won’t say what it is]

US – EFF Opposes Industry Efforts to Have Congress Roll Back State Privacy Protections

The Senate Commerce Committee is holding a hearing on consumer privacy [here & PR here], but consumer privacy groups like EFF were not invited. Instead, only voices from big tech and Internet access corporations will have a seat at the table. In the lead-up to this hearing, two industry groups (the Chamber of Commerce and the Internet Association) have suggested that Congress wipe the slate clean of state privacy laws in exchange for weaker federal protections. EFF opposes such preemption, and has submitted a letter to the Senate Commerce Committee to detail the dangers it poses to user privacy. Current state laws across the country have already created strong protections for user privacy. Our letter identifies three particularly strong examples from California’s Consumer Privacy Act, Illinois’ Biometric Information Privacy Act, and Vermont’s Data Broker Act. If Congress enacts weaker federal data privacy legislation that preempts such stronger state laws, the result will be a massive step backward for user privacy. … The companies represented at Wednesday’s hearing rely on the ability to monetize information about everything we do, online and elsewhere. They are not likely to ask for laws that restrain their business plans. [DeepLinks Blog (Electronic Frontier Foundation)]

US – NTIA Seeks Comment on New Approach to Consumer Data Privacy

The U.S. Department of Commerce’s National Telecommunications and Information Administration (NTIA) issued a Request for Comments on a proposed approach to consumer data privacy designed to provide high levels of protection for individuals, while giving organizations legal clarity and the flexibility to innovate [see PDF]. The Request for Comments is part of a transparent process to modernize U.S. data privacy policy for the 21st century. In parallel efforts, the Commerce Department’s National Institute of Standards and Technology is developing a voluntary privacy framework [here & here] to help organizations manage risk; and the International Trade Administration is working to increase global regulatory harmony. The proposed approach focuses on the desired outcomes of organizational practices, rather than dictating what those practices should be. With the goal of building better privacy protections, NTIA is seeking comment on the following outcomes: 1) Organizations should be transparent about how they collect, use, share, and store users’ personal information; 2) Users should be able to exercise control over the personal information they provide to organizations; 3) The collection, use, storage and sharing of personal data should be reasonably minimized in a manner proportional to the scope of privacy risks; 4) Organizations should employ security safeguards to protect the data that they collect, store, use, or share; 5) Users should be able to reasonably access and correct personal data they have provided; 6) Organizations should take steps to manage the risk of disclosure or harmful uses of personal data; and 7) Organizations should be accountable for the use of personal data that has been collected, maintained or used by its systems. Comments are due by October 26, 2018. [Newsroom (National Telecommunications and Information Administration) | coverage at: Multichannel News, Reuters, CBS News and engadget]

US – NTIA Seeks Comment on New, Outcome-Based Privacy Approach

The U.S. Department of Commerce’s National Telecommunications and Information Administration (NTIA) [here] issued a Request for Comments [4 pg PDF Federal Register post — also here & PR here] on a new consumer privacy approach that is designed to focus on outcomes instead of prescriptive mandates. The RFC presents an important opportunity for organizations to provide legal and policy input to the administration, and comments are due October 26. The RFC proposes seven desired outcomes that should underpin privacy protections: 1) Transparency, 2) control, 3) reasonable minimization (of data collection, storage length, use, and sharing), 4) security, 5) access and correction, 6) risk management, and 7) accountability. According to the RFC, the outcome-based approach will provide greater flexibility, consumer protection, and legal clarity. Additionally, the RFC describes eight overarching goals for federal action on privacy: 1) Regulatory harmonization; 2) Legal clarity while maintaining the flexibility to innovate; 3) Comprehensive application; 4) Risk and outcome-based approach; 5) Interoperability; 6) Incentivize privacy research; 7) FTC enforcement; and 8) Scalability. The NTIA is seeking comments on the listed outcomes and goals, as well as other issues such as whether the FTC needs additional resources to achieve the goals. [Chronicle of Data Protection (Hogan Lovells) | coverage at: Multichannel News, Reuters, CBS News and engadget]

US – SEC Brings First Enforcement Action for Violation of ID Theft Rule

On September 26, 2018, the SEC brought its first ever enforcement action [PR] for violations of Regulation S-ID (the “Identity Theft Red Flags Rule”), 17 C.F.R. § 248.201 [here & here also guidance here], in addition to violations of Regulation S-P, 17 C.F.R. § 248.30(a) (the “Safeguards Rule”) [see here & here]. Regulation S-ID and Regulation S-P apply to SEC-registered broker-dealers, investment companies, and investment advisers, and require those entities to maintain written policies and procedures to detect, prevent and mitigate identity theft, and to safeguard customer records and information, respectively. The SEC’s action against Voya Financial Advisors (“Voya”) cements the SEC’s focus on investment adviser and broker-dealer cybersecurity compliance, both in terms of its examination program—which referred the matter to Enforcement—as well as the Division of Enforcement’s Cyber Unit, which investigated and resolved the matter with Voya. The SEC’s enforcement action against Voya arose out of an April 2016 “vishing” (voice phishing) intrusion that allowed one or more persons impersonating Voya representatives to gain access to personal identifying information of approximately 5,600 Voya customers. The SEC’s action against Voya was resolved through a settled administrative order, in which Voya neither admitted nor denied the SEC’s findings, but agreed to engage and follow the recommendations of an independent compliance consultant for two years, certify its compliance with the consultant’s recommendations, and pay a $1 million fine. Voya was also enjoined from future violations of Regulation S-P or Regulation S-ID and was censured by the SEC. The SEC noted that, in reaching the settlement, it considered the remedial actions that Voya promptly undertook following the attack. [Privacy & Data Security (Alston & Bird) and at: Reuters, Infosecurity Magazine, Business Record, InvestmentNews and Law 360]

US – Google Releases Framework to Guide Data Privacy Legislation

Google released a set of privacy principles [3 pg PDF & blog post here] to guide Congress as it prepares to write legislation aimed at governing how websites collect and monetize user data. The framework largely consists of privacy principles that Google already abides by or could easily bring itself into compliance with. It calls for allowing users to easily access and control the data that’s collected about them and requiring companies to be transparent about their data practices. The set of proposals is designed to be a baseline for federal rules regarding data collection. Google appears to be the first internet giant to release such a framework, but numerous trade associations have published their own in recent weeks. The industry has gotten on board with the idea of a national privacy law in the weeks since California passed its own strict regulations aimed at cracking down on data collection and increasing user control. Internet companies have universally opposed the measure and have begun pushing Congress to establish a national law that would block states from implementing their own. [The Hill | additional coverage at: AdWeek | Charter: Parity Is Key to Online Privacy Protection | In Reversal, IAB Says Congress Should Consider Privacy Legislation]


US – Revealed: DoJ Secret Rules for Targeting Journalists With FISA Court Orders

Revealed for the first time are the Justice Department’s rules for targeting journalists with secret FISA court orders. The documents [PDF] were obtained as part of a Freedom of Information Act lawsuit brought by Freedom of the Press Foundation and Knight First Amendment Institute at Columbia University. While civil liberties advocates have long suspected secret FISA court orders may be used (and abused) to conduct surveillance on journalists, the government—to our knowledge—has never acknowledged they have ever even contemplated doing so before the release of these documents today. [These DOJ] FISA court rules are entirely separate from—and much less stringent than—the rules for obtaining subpoenas, court orders, and warrants against journalists as laid out in the Justice Department’s “media guidelines,” which were strengthened in 2015 after scandals involving surveillance of journalists during the Obama era. The DOJ only must follow its regular FISA court procedures (which can be less strict than getting a warrant in a criminal case) and get additional approval from the Attorney General or Assistant Attorney General. FISA court orders are also inherently secret, and targets are almost never informed that they exist. The documents raise several concerning questions: 1) How many times have FISA court orders been used to target journalists?; 2) Why did the Justice Department keep these rules secret — even their very existence — when the Justice Department updated its “media guidelines” in 2015 with great fanfare? and 3) If these rules can now be released to the public, why are the FBI’s very similar rules for targeting journalists with due process-free National Security Letters still considered classified? And is the Justice Department targeting journalists with NSLs and FISA court orders to get around the stricter “media guidelines”? [Freedom of the Press Foundation | coverage at: The Intercept]

CA – Cameras on School Buses Are an Option, Says N.L. Privacy Commissioner

The privacy commissioner of Newfoundland and Labrador says the English School District has the right to put cameras on school buses. The issue came up last week when CBC News reported on allegations of sexual assault on a school bus in Western Newfoundland … [where] a teenaged boy has been charged and faces three counts in relation to incidents involving two alleged victims. The family of one of the alleged victims — an eight-year-old girl — is calling on the school district to install cameras on school buses. … “The school district has the ability to put cameras on school buses. They have lots of cameras in many schools across the province,” information and privacy commissioner Donovan Molloy told CBC’s Corner Brook Morning Show [listen here]. School board CEO Tony Stack has said cameras would only be considered as a “last resort” due to privacy reasons. But Privacy Commissioner Molloy says there’s nothing in the law that says cameras are not allowed. He did say, however, that other measures should be attempted first, such as assigned seating to separate younger and older students, and the use of student monitors, which is permitted under the law. Molloy emphasized that the Office of the Information and Privacy Commissioner has not forbidden the use of cameras on school buses. At the same time he cautioned that he is not advocating for such a change, because constant surveillance may do more harm than good, taking away children’s sense of independence. [CBC News | see also: Teenage boy charged with sexual assaults after incidents on school bus | Renewed Calls for Cameras After Alleged School Bus Sexual Assault | North Shore parent starts petition over safety concerns for children riding school buses | School Bus Cameras Not a Cure-All, says Privacy Commissioner]

CA – Maps Show All Secret Surveillance Cameras Spying on Canadians

Canadian police agencies have taken part in the increasingly intense law enforcement protocols that have become common across North America and Europe. The most controversial of these efforts, of course, is public surveillance. While Canada’s public surveillance system is less famous than those in the United States and United Kingdom, it does exist. Road cameras are the most well-known and there are potentially thousands of them across the country, all of which are regularly if not constantly monitored. The cameras are designed to catch traffic violations, but they can also be used as a method of public surveillance more broadly, according to Wired. The cameras, of course, also capture activity on sidewalks and public open spaces. According to the Office of the Privacy Commissioner of Canada, Canadian law enforcement agencies “increasingly view it as a legitimate tool to combat crime and ward off criminal activity—including terrorism … however, they present a challenge to privacy, to freedom of movement and freedom of association.” [see here] While the locations of the cameras are (now) public information, most Canadians are unaware that authorities have placed them so extensively in every Canadian city. To give you a sense of the scope of road surveillance in Canada, we’ve compiled these maps, which depict the exact locations of road cameras in every major Canadian city, including Vancouver, Calgary, Edmonton, Winnipeg, Toronto, Ottawa and Montreal. [MTL Blog | coverage at: CBC News]

US Legislation

US – California Approves Bills Tightening Security, Privacy of IoT Devices

Gov. Jerry Brown’s office announced on September 28 that Brown had signed two bills, Assembly Bill 1906 and Senate Bill 327, that could make manufacturers of Internet-connected devices more responsible for ensuring the privacy and security of California residents. Both pieces of legislation specified they must be signed by the governor and can only become law if the other bill is also signed. Both bills will become law in about 15 months, on Jan. 1, 2020. Senate Bill 327 is the older of the two and was introduced in Feb. 2017 by state Sen. Hannah-Beth Jackson [wiki here], but as currently amended, the senator told Government Technology, is “pretty much a mirror” of AB 1906, introduced in January by Assemblywoman Jacqui Irwin [wiki here] … Both require manufacturers of connected devices to equip them with a “reasonable security feature or features” that are appropriate to their nature and function, and the information they may collect, contain or transmit — and are designed to protect the device and its information from “unauthorized access, destruction, use, modification or disclosure.” The bills also specify that if such a device has a “means for authentication outside a local area network,” that will be considered a reasonable security feature if either the preprogrammed password is unique to each device made; or the device requires a user to create a new “means of authentication” before initial access is granted. The question of what defines a “reasonable security feature or features” is one of several that industry groups cited in their opposition to AB 1906. 
In a statement provided to GT, the CMTA [California Manufacturers and Technology Association] said the bills are an attempt to “create a cybersecurity framework by imposing undefined rules on California manufacturers,” but instead create a loophole allowing imported devices to “avoid implementing any security features.” This, it said, makes the state less attractive to manufacturers, less competitive and increases the risk of cyberattacks. The Entertainment Software Association, in opposition to SB 327, said existing law already requires manufacturers to set up “reasonable privacy protections appropriate to the nature of the information they collect.” [Government Technology | see also: California governor signs country’s first IoT security law | Hey, Alexa, California’s New IoT Law Requires Data Protections]
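The authentication safe harbor described above has a simple logical shape: for devices that can be authenticated from outside a local area network, either a per-device unique default password or a forced credential change at first use counts as a "reasonable security feature." A minimal sketch, with hypothetical field names (these are not statutory terms):

```python
# Illustrative check of the SB 327 / AB 1906 authentication safe harbor.
# Field names ("remote_authentication", "unique_default_password",
# "forces_credential_setup") are invented for this sketch.
def meets_authentication_safe_harbor(device):
    """Return True/False for devices with remote authentication,
    or None when the provision does not apply (LAN-only devices)."""
    if not device.get("remote_authentication"):
        return None  # provision applies only outside a local area network
    return bool(
        device.get("unique_default_password")   # per-device factory password
        or device.get("forces_credential_setup")  # user must set new credentials
    )
```

The industry objection quoted above is precisely that, outside this one concrete safe harbor, "reasonable security feature or features" is left undefined.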

US – Amendments to the California Consumer Privacy Act of 2018

Amendments to California’s expansive Consumer Privacy Act of 2018 [AB – 375 here] include new provisions that may significantly impact the timing of enforcement and provide exemptions for large amounts of personal data regulated by other laws. Because the Act was hastily passed [in June, 2018] … it was expected that the Act would undergo significant amendments before it enters into effect on January 1, 2020. The first amendments were passed by the California State Legislature on August 31, 2018, in the form of SB-1121, and Governor Brown [signed it into law September 23, 2018 – see here]. While SB-1121 is labeled as a “technical corrections” bill designed to address drafting errors, ambiguities, and inconsistencies in the Act, in fact, it creates new provisions in addition to those already contained within the Act. One notable provision of the Bill is that it grants a six-month grace period from the date the California AG issues regulations or July 1, 2020, whichever is earlier, before enforcement actions can be brought. Another key effect of the Bill is that it fully exempts data that is regulated by the Gramm-Leach-Bliley Act, the California Financial Information Privacy Act, HIPAA, the California Confidentiality of Medical Information Act, the clinical trials Common Rule, and the Driver’s Privacy Protection Act from the privacy requirements of the Act. However, these industries are still subject to the privacy provisions of the Act if they engage in activities falling outside of their applicable privacy regulations (except for the health care industry, if it treats all data as PHI, then it remains exempt as to all data). As we previously predicted, the Act will continue to evolve prior to its January 1, 2020 effective date. While the current Bill attempts to clarify the Act, it does not address all of the ambiguities and uncertainties. We anticipate further changes and guidance regarding the Act and will continue to monitor the latest developments. 
[Security & Privacy Bytes (Squire Patton Boggs) | additional coverage at: Privacy and Cybersecurity Perspectives (Murtha Cullina), Workplace Privacy Report (Jackson Lewis), Privacy & Data Security (Alston & Bird) and Data Privacy Monitor (BakerHostetler)]

US – California Consumer Privacy Act: What to Expect

This is the fourth installment in Hogan Lovells’ series [here] on the California Consumer Privacy Act [see installment 1 here, installment 2 here and installment 3 here]. It discusses litigation exposure that businesses collecting personal information about California consumers should consider in the wake of the California Legislature’s passage of the California Consumer Privacy Act of 2018 (CCPA). [AB – 375 here] For several years, the plaintiffs’ bar increasingly has relied on statutes like the Confidentiality of Medical Information Act, Cal. Civ. Code § 56 et seq. [here], and the Customer Records Act, Cal. Civ. Code § 1798.81, et seq. [here], to support individual and classwide actions for purported data security and privacy violations. The CCPA creates a limited private right of action for suits arising out of data breaches. At the same time, it also precludes individuals from using it as a basis for a private right of action under any other statute. Both features of the law have potentially far-reaching implications and will garner the attention of an already relentless plaintiffs’ bar when it goes into effect January 1, 2020. [This post covers] what you need to know [under two headings]: 1) The CCPA Provides a Limited Private Right of Action for Data Breach Suits; and 2) Plaintiffs Likely Will Argue the CCPA Provides a Basis for Unfair Competition Law Claims. [Chronicle of Data Protection (Hogan Lovells)]

Workplace Privacy

WW – Many Employee Work Habits Seem Innocent but Invite Security Threats

While most employees are generally risk averse, many engage in behaviors that could lead to security incidents, according to a new report from Spanning Cloud Apps LLC [here], a provider of cloud-based data protection. [see Trends in U.S. Worker Cyber Risk-Aversion and Threat Preparedness here] The company surveyed more than 400 full-time U.S. employees, and found that more than half (55%) admitted to clicking links they didn’t recognize, while 45% said they would allow a colleague to use their work computer and 34% were unable to identify an unsecure ecommerce site. The results paint a picture of a workforce that has a general understanding of security risks, but is underprepared for the increasing sophistication and incidence of ransomware and phishing attacks, the report said. Employees would rather be “nice” than safe, the study said. Of workers with administrative access, only 35% responded that they would refuse to allow a colleague to access their device. And they like to shop from work, with more than 52% saying they shop online from their work computer. Workers are underprepared for sophisticated phishing emails. When presented with a visual example, only 36% correctly identified a suspicious link as being the key indicator of a phishing email, the study said. [Information Management | coverage at: BetaNews]
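The "suspicious link" indicator the survey tested for is commonly a mismatch between the domain a link's visible text claims and the domain its underlying href actually points to. A minimal heuristic sketch (illustrative only, not a production phishing detector):

```python
# Sketch of the classic phishing-link indicator: the text shown to the
# user names one host while the real destination is another.
from urllib.parse import urlparse


def link_mismatch(display_text, href):
    """Return True when the displayed link text names a different host
    than the actual destination URL."""
    # Add a scheme so urlparse treats bare "www.example.com" as a host.
    shown_url = display_text if "://" in display_text else "http://" + display_text
    shown = urlparse(shown_url).hostname
    actual = urlparse(href).hostname
    return shown is not None and actual is not None and shown != actual
```

For example, an email link displaying `www.mybank.com` but pointing at an unrelated host would trip this check, which is exactly the cue only 36% of surveyed workers recognized.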