24–31 October 2018

Big Data / Artificial Intelligence

US – “Information Fiduciaries” Must Protect Your Data Privacy

Legislators across the country are writing new laws to protect data privacy. One tool in the toolbox could be “information fiduciary” rules. The basic idea is this: When you give your personal information to an online company in order to get a service, that company should have a duty to exercise loyalty and care in how it uses that information. Sounds good, right? We agree, subject to one major caveat: any such requirement should not replace other privacy protections. The law of “fiduciaries” is hundreds of years old. It arises from economic relationships based on asymmetrical power, such as when ordinary people entrust their personal information to skilled professionals (particularly doctors, lawyers, and accountants). In exchange for this trust, such professionals owe their customers a duty of loyalty, meaning they cannot use their customers’ information against their customers’ interests. They also owe a duty of care, meaning they must act competently and diligently to avoid harm to their customers. These duties are enforced by government licensing boards, and by customer lawsuits against fiduciaries who do wrong. These long-established skilled professions have much in common with new kinds of online businesses that harvest and monetize their customers’ personal data. First, both have a direct contractual relationship with their customers. Second, both collect a great deal of personal information from their customers, which can be used against these customers. Third, both have one-sided power over their customers: online businesses can monitor their customers’ activities, but those customers don’t have reciprocal power. Accordingly, several law professors [e.g., Jack M. Balkin – PDF, Jonathan Zittrain – here & Neil Richards & Woodrow Hartzog – PDF] have proposed adapting these venerable fiduciary rules to apply to online companies that collect personal data from their customers.
New laws would define such companies as “information fiduciaries.” EFF supports legislation to create “information fiduciary” rules. While the devil is in the details, the remainder of this post examines what those rules might look like. [DeepLinks Blog (Electronic Frontier Foundation) | Data Protection Bill Series: Obligations on data fiduciaries and compromises made (India)]


CA – New Data Breach Rules Come Into Effect Nov 1, But Privacy Chief Says They Don’t Go Far Enough

Changes to the Personal Information Protection and Electronic Documents Act (PIPEDA), including an amendment and new reporting regulations, come into force Nov. 1, requiring companies to quickly disclose data security breaches. Companies will be required to keep internal records of all breaches and security safeguards for two years, and in cases where there is a risk of significant harm, companies need to report a breach to the Office of the Privacy Commissioner and to the people affected. [see OPC guidance here & Reporting Form here] But critics, including Canada’s privacy commissioner Daniel Therrien, say that the new measures still don’t go far enough to protect citizens’ privacy. As long as companies report their breaches, there are no financial penalties, which is something that Therrien isn’t thrilled about. “The odd nature of this is that there are very hefty fines for failing to report, but there are no fines for failing to have the security safeguards that would have prevented the breach from occurring. There could be actions in the civil courts by individuals whose data was disclosed improperly for any damages incurred, but that of course is very costly for individuals to bring companies to court.” As such, damage to reputation is the main risk for companies that get hacked or suffer other kinds of privacy breaches. In addition, Therrien complains that while he’ll get reports from companies that suffer privacy breaches, his office has yet to be allocated any additional funding to handle those reports. And his office is limited in terms of how it can respond. “What we cannot do is order companies to improve their security posture. So companies are free to accept our recommendations or not,” he said.
[Financial Post | Final Breach Reporting Guidance just released by the Office of the Privacy Commissioner of Canada (OPC) | D.O.Eh: Here’s the new privacy law Canada can’t really enforce | New data breach reporting requirements come into force this week | New security breach reporting guidelines come into force on November 1st | New Data Breach Reporting Requirements in Canada | Canada Privacy Office Issues Data Breach Reporting Guidance | Many companies not ready for new data-breach rules, experts say]

CA – Privacy Commissioner ‘Surprised’ by Liberal Party Arguments Against Privacy Rules for Parties ‘Harvesting Data On People’

Privacy commissioner Daniel Therrien said he was “surprised” and unpersuaded by the Liberal Party’s arguments against making political parties abide by Canada’s existing privacy rules. At an Access to Information, Privacy and Ethics Committee hearing November 1 [ETHI, hearing 124 notice, watch on ParlVU – Commissioner Therrien’s prepared comments], Liberal Party legal advisor Michael Fenrick [of Paliare Roland] argued that bringing political parties under the jurisdiction of Canada’s privacy laws would create a chilling effect on political involvement, with prospective volunteers fearing harsh punishments in the event of privacy breaches. That argument didn’t hold water for Canada’s privacy commissioner. Therrien said there are a number of jurisdictions, including British Columbia, where parties are regulated on their privacy practices and he sees no chilling effect. In Europe, political parties must abide by the onerous new rules created by the General Data Protection Regulation, which enforces hefty penalties for malfeasance. “So far as we see, I think, democracy continues to thrive in these jurisdictions,” said Therrien. Although the government currently has an election bill making its way through Parliament [Bill C-76 here], the commissioner has said it adds nothing of substance to privacy protection even though parties are “harvesting data on people.” At the very least, Canadians should have the right to access any data the parties hold about them, said Therrien. That’s a rule that exists in Europe, but not in Canada. “It’s something eminently reasonable and that Canadians would expect,” said Therrien.
Parties will also have no independent oversight and no requirement to report any data breaches they suffer, even though revamped privacy rules now require that of Canadian companies. Earlier in the committee hearing, chief electoral officer Stéphane Perrault [here] also said he was disappointed that Bill C-76 didn’t provide privacy rules for political parties. “Canadians increasingly want to understand the nature and source of communications that are reaching them. An important part of understanding that is transparency,” said Perrault. [National Post | Gap in privacy law leaves elections open to ‘misuse’ of personal information: privacy commissioner | Liberals say political parties need privacy rules, but have no immediate plans to impose them | Some MPs worry as election looms without ‘any enforceable rules’ for parties on privacy | Politicians are dragging their feet on privacy rules]

CA – Privacy Commissioner Launches Investigation into Statistics Canada

The Office of the Privacy Commissioner of Canada has received complaints related to Statistics Canada and its collection of personal information from private sector organizations and has opened an investigation. The complaints follow media reports that Statistics Canada requested several banks provide the agency with the financial transaction information of hundreds of thousands of Canadians. Privacy Commissioner Daniel Therrien welcomes the Chief Statistician’s invitation for his office to take a “deeper dive” into StatCan’s collection of data from private sector organizations. The Commissioner’s office will be seeking details regarding the information requests the agency has made to various industry sectors. StatsCan had previously consulted with the Privacy Commissioner’s office on the privacy implications related to data collection from private sector organizations, but outside the context of an investigation. A summary of those consultations is included in the Commissioner’s 2017-18 Annual Report to Parliament. The Commissioner’s office has a legal obligation to investigate the complaints fairly and impartially under the law, and the Privacy Act, Canada’s federal public sector privacy law, includes confidentiality provisions. 
Therefore, no additional details can be provided at this time. [Office of the Privacy Commissioner of Canada | Privacy Commissioner of Canada launches investigation into StatCan over controversial data project | Privacy commissioner launches investigation into Statscan’s efforts to obtain banking records | Privacy commissioner investigating StatCan’s attempt to get banking info | StatCan scooped up 15 years of personal financial data from Canadian credit bureau | Scheer opposed to StatCan plan to collect personal-banking data | StatsCan has already seized reams of our private financial info | Conservatives blast Trudeau government over StatCan collection of personal financial data | Toronto’s no fan of StatsCan info grab | Trudeau defends Statistics Canada move to collect banking info of 500,000 Canadians | Big Brother Liberal government wants your private banking info | ANALYSIS: StatCan’s push to scoop payment data on 500,000 Canadians deserves scrutiny | EXCLUSIVE: Stats Canada requesting banking information of 500,000 Canadians without their knowledge | Globe editorial: StatsCan needs to own up to its data breaches | NDP joins Conservatives, asks Trudeau Liberals to shut down controversial StatCan projects]

CA – NWT Privacy Report Laments Slow Pace of Change, City’s Issues

Elaine Keenan Bengts, Information and Privacy Commissioner of the Northwest Territories, had harsh words for the Department of Justice in an annual report tabled in the Legislative Assembly [66 pg PDF – the report was submitted in August and covers the period of April 1, 2017 – March 31, 2018]. The report stated the government was moving too slowly to update the Access to Information and Protection of Privacy Act. The commissioner’s office completed 18 review reports over the 2017-18 fiscal year and opened 53 files under the Act (down from 61 in the previous year). Fifteen of this year’s files fell under requests for review regarding access to information, while there were nine review requests relating to privacy issues. Keenan Bengts also drew attention to the many privacy issues plaguing the City of Yellowknife over the past year, from a possible email theft shared with local media to allegations that a senior employee was misusing City cameras to watch women. She said she offered assistance to help the City develop a stronger privacy policy but received no response. “This is not the first time I have offered to work with the City on privacy concerns. The non-response has, however, been consistent,” she charged. Keenan Bengts also took aim at the Health Information Act [here], pointing out that many breaches involved misdirected faxes or unencrypted emails. “I simply cannot understand the apparent reluctance of the health sector to adopt the better technology,” she said. She also noted there has been little progress made to ensure the public has a say in who can access their health information. She explained that while system-wide standards, policies, and procedures were issued in May 2017, they do not appear to be publicly available yet. She called for better public consultation regarding policies that directly affect the public. 
On October 29, the same day the annual report was presented to the legislature, Bill 29: “An Act to Amend the Access to Information and Protection of Privacy Act” was read for the first time [see debate in Hansard, October 30, 2018 – at PDF pg 62 (text pg 56) of 84 pg PDF here]. [CABIN Radio]


CA – Senate Banking Committee Advises Greater Privacy Watchdog Powers

The Senate Committee on Banking, Trade and Commerce released a study of cyber fraud and cyber security titled “Cyber Assault — It Should Keep You Up at Night” [see here and Executive Summary], which recommends that Parliament give Canada’s privacy commissioner new authority to make binding orders and impose hefty fines on companies that fail to keep the private data of Canadians secure. The importance of the privacy commissioner’s role is growing with the number of massive data breaches and other cybersecurity threats facing Canadians, the senators suggested. The committee makes 10 sweeping recommendations, including calls for new powers for federal officials, including the privacy commissioner; the creation of national cybersecurity standards and guidelines so businesses know what they are supposed to be doing to protect their customers’ data; as well as co-ordinated sharing within the private sector and with governments of sensitive private information related to cybersecurity and cyberthreats. The Senate committee criticized the Trudeau government for its “timid responses” so far to the “real and rising online threats” that have affected millions of Canadians who have been defrauded online, or who have had their personal data stolen and exploited as a result of corporate data breaches — many of which were not publicly revealed until months or years later. In addition to recommending that governments prioritize and fund cybersecurity education, both for businesses and citizens, the senators recommended the creation of:

  1. a federal cybersecurity task force to propose a national cybersecurity strategy establishing Canada as a global leader on cybersecurity;
  2. a consistent set of leading cybersecurity standards that are harmonized with the highest international standards and that would apply to all entities participating in 10 critical infrastructure sectors (e.g. health, food, water, communications and government);
  3. standards to protect consumers, business and governments from threats emanating from the Internet of Things that connects such smart digital devices as phones, TVs, cars and medical implants; and
  4. tax incentives for investment in cybersecurity (such as accelerated capital cost allowance deductions for businesses). [The Lawyers Daily | Government failing to protect Canadians from cyber threats, says Senate report]

US – FTC Issues Paper on Informational Injury Workshop

The FTC recently issued a paper outlining key takeaways from its December 2017 workshop examining injuries consumers may suffer from privacy and data security incidents. The paper indicates that the FTC convened the workshop to better understand consumer injury for the following two purposes: 1) To allow the FTC to effectively weigh the benefits of governmental intervention against its costs when making policy determinations; and 2) To identify acts or practices that “cause or are likely to cause substantial injury” for purposes of bringing an enforcement action under the FTC Act for an “unfair” act or practice. The paper discusses the examples of informational injuries given by participants. These examples involve injuries that may result from medical identity theft, doxing (i.e. the deliberate and targeted release of private information about an individual with the intent to harass or injure), exposure of personal information, and erosion of trust (i.e. consumers’ loss of trust in the ability of businesses to protect their data). The paper also reports that “there was some discussion of whether the definition of injury should include risk of injury [from certain practices]” and shares opposing arguments made by participants. The issue of whether informational injuries that may result from alleged statutory violations are sufficient to provide a consumer in a private action with Article III standing under the U.S. Supreme Court’s Spokeo standard continues to be litigated.
In Spokeo, the Supreme Court [see 27 pg PDF decision here] indicated that, to satisfy the “injury-in-fact” requirement for Article III standing, a plaintiff must show that he or she suffered “an invasion of a legally protected interest” that is both “concrete” and “particularized.” To be particularized, an injury must affect the plaintiff “in a personal and individual way.” To be concrete, an injury must “actually exist;” it must be “real.” However, the Supreme Court also acknowledged that intangible injuries can satisfy the concrete injury standard and that in some cases an injury-in-fact can exist by virtue of a statutory violation. (The Spokeo standard does not apply to government enforcement actions.) [The National Law Review ]


CA – Statscan Must Justify Request for Personal Banking Data: Former Chief

Former Statistics Canada chief statistician Wayne Smith, who resigned two years ago, said in an interview that banking records are second only to health files in terms of Canadians’ most sensitive personal information and that Statistics Canada needs to clearly explain why it is seeking access to people’s banking records. Statistics Canada should be able to say: ‘Okay, here’s the purpose and here’s why it’s important enough to justify this intrusion,’ he said. “If they don’t have an answer, they should stop now.” In 2016 Smith resigned in protest over the agency’s decision to house its computer servers with Shared Services Canada, a move he said could weaken the agency’s ability to protect its data. “By moving the data into the Shared Services Canada data centres, we moved it into a data centre that does have connections to the Internet and therefore it opens up the potential of hacking, however secure they might make it.” His comments come as Canada’s statistics agency is at the centre of a political controversy after informing Canada’s banks that they will be required to provide consumer records, such as individual credit- and debit-card purchases, starting in January. The move caught Canada’s banking sector by surprise and bank officials are in discussion over how to respond. Royal Bank of Canada, Bank of Montreal, CIBC, National Bank and Scotiabank all confirmed via e-mail that they have not provided any client information to Statistics Canada, echoing a recent statement by the Canadian Bankers Association. TD Bank, meanwhile, told customers via Twitter that it has not agreed to share client data with Statscan. Senior Statscan officials held an online chat and were asked why banking records are now required. “It is becoming increasingly difficult to capture household expenditures by relying on traditional surveys,” the officials wrote. “People do not want to respond to hour-long surveys or keep a diary of their daily expenditures over a two-week period.
Financial transaction data will give us more timely data at a greater level of granularity.” Federal Privacy Commissioner Daniel Therrien announced that his office will launch a formal investigation into Statscan’s plans. He told reporters that his investigation is unlikely to be finished by January. He said Statscan informed him about the agency’s general direction, but did not share specific plans related to banking data. He said he advised the agency to only ask for private data that has been anonymized, meaning no individual names are included. However, he acknowledged it is unclear to him at this stage how such a practice could comply with all relevant laws. For a fourth day in a row, Conservative MPs attacked the Liberal government in the House of Commons over Statscan’s plans. The critics say Statscan should not be receiving data that identifies individual Canadians without their consent. [The Globe and Mail ]

US – ‘Vote with Me’ App Reports Party Affiliation and Vote Record of Contacts

Social media platforms have been harvesting your personal data to fuel hyper-targeted advertising for years, but there hasn’t been a handy way for the average user to check the political gang affiliation in their social circle — until now, for better or worse. An app called “VoteWithMe” claims to be an effort to get people to the polls. But VoteWithMe has a trick up its sleeve that we’re not used to seeing. It can tell you the party affiliation of everyone in your contacts list, and it can tell you what elections your contacts have voted in historically. While voter registration information like party affiliation and voting history is technically public information, there is no handy website where a person can ordinarily look this information up, at least until now. Having this information in an app that’s snooping on your phone contacts is not only potentially invasive but potentially incredibly divisive, given how far apart Democrats and Republicans have become. And as you might imagine, the app’s reach doesn’t stop with your social circle. Someone using VoteWithMe can now look up the affiliations and records related to any phone number that was used to register to vote, for free, with a few taps. The potential for voter intimidation reaches new heights, only days before the election. [The Download Blog (CNET) | A new app called Vote With Me lets you see the voting record and political party of every contact in your phone | Vote With Me is a creepy new app that checks your contacts’ voting history: Or should we say, soon-to-be-ex-contacts | New app reveals your contacts’ voting history]


US – DHS Urged to Apply Encryption to Web Browsing

Senator Ron Wyden has expressed concerns to the Department of Homeland Security (DHS) about web browsing privacy. Metadata revealing which specific federal websites a user visits is currently transmitted over the internet without encryption; the senator asks that DHS consider requiring federal agencies to use one of two encryption methods to prevent hackers from learning what particular website a user is visiting. [Letter to DHS Regarding Government Web Browsing – Senator Ron Wyden]
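The underlying concern is easier to see with a toy model of what an on-path observer learns. The sketch below is illustrative only: the hostnames and the two-field model are simplifications of my own, not the specific methods named in the letter. The general point it encodes is that plain HTTP exposes the full URL, while HTTPS encrypts the path and query but can still leak the hostname through DNS lookups and the TLS SNI field unless those, too, are protected.

```python
from urllib.parse import urlparse

def visible_to_observer(url: str, encrypted: bool) -> dict:
    """Simplified view of the request metadata an on-path observer can read.

    Over plain HTTP the full request is visible. Over HTTPS the path and
    query are encrypted, but the hostname still leaks via DNS and the TLS
    SNI field unless those are also encrypted.
    """
    parts = urlparse(url)
    if encrypted:
        return {"hostname": parts.hostname, "path": "(encrypted)"}
    return {"hostname": parts.hostname, "path": parts.path or "/"}

print(visible_to_observer("http://example.gov/benefits/apply", encrypted=False))
print(visible_to_observer("https://example.gov/benefits/apply", encrypted=True))
```

Even in the encrypted case, the hostname alone can reveal which agency or program a citizen is consulting, which is the category of metadata the letter is concerned with.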

EU Developments

WW – 40th ICDPPC Sets the Way Forward for Ethics, Data Protection and Privacy

As the 40th ICDPPC Annual Meeting in Brussels, October 21-26, came to an end, the Closed Session, which this year gathered 236 delegates from 76 countries, released the outcome of its two-day discussions, paving the way for the future of data protection and privacy at the global level [see 9 pg PDF road map]. The Closed Session also adopted a landmark text, the ICDPPC Declaration on Ethics and Data Protection in Artificial Intelligence [6 pg PDF], in order to contribute to the global discussion on this matter. The declaration endorses six guiding principles as core values to preserve human rights in the development of artificial intelligence. These principles first of all build upon data protection elements, but also expand to ethical considerations which are inextricably linked to the development of artificial intelligence. The Closed Session also adopted three other resolutions, on e-learning platforms [31 pg PDF], on the Conference Census [2 pg PDF] and on collaboration between Data Protection Authorities and Consumer Protection Authorities [3 pg PDF]. Two new members of the ICDPPC Executive Committee [here] have been elected: the Philippines’ National Privacy Commission (NPC) and the Office of the Australian Information Commissioner (OAIC). With the mandate of Isabelle Falque-Pierrotin [President/Commissioner of the CNIL] coming to an end, the ICDPPC has elected Elizabeth Denham, the UK Information Commissioner (ICO), as new Chair of the Executive Committee. [CNIL News (Commission Nationale de l’Informatique et des Libertés – France)]

UK – ICO Fines Facebook Over Data Privacy Scandal, EU Seeks Audit

The Information Commissioner’s Office (ICO) slapped Facebook with a fine of 500,000 pounds ($644,000) [see ICO PR & Penalty Notice & Commissioner’s video statement] — the maximum possible — for failing to protect the privacy of its users in the Cambridge Analytica scandal. The ICO found that between 2007 and 2014, Facebook processed the personal information of users unfairly by giving app developers access to their information without informed consent. The failings meant the data of some 87 million people was used without their knowledge. The fine amounts to a speck on Facebook’s finances: it will take less than seven minutes for Facebook to bring in enough money to pay it. But it’s the maximum penalty allowed under the law at the time the breach occurred [the UK’s Data Protection Act 1998]. Had the scandal taken place after the new EU data protection rules [General Data Protection Regulation (GDPR)] went into effect this year, the amount would have been far higher, with maximum fines of 17 million pounds or 4% of global revenue, whichever is higher. Under that standard, Facebook would have been required to pay at least $1.6 billion, which is 4% of its revenue last year. Also, European Union lawmakers demanded an audit of Facebook to better understand how it handles information, reinforcing how regulators in the region are taking a tougher stance on data privacy compared with U.S. authorities. They said Facebook should agree to a full audit by Europe’s cyber security agency and data protection authority “to assess data protection and security of users’ personal data.” The EU lawmakers also call for new electoral safeguards online, a ban on profiling for electoral purposes and moves to make it easier to recognize paid political advertisements and their financial backers. [CTV News]
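The gap between the two penalty regimes is simple arithmetic. The sketch below uses only the figures quoted above; the roughly $40 billion revenue number is an assumption inferred from the article's "$1.6 billion is 4%" statement, and currencies are mixed for illustration only.

```python
def max_penalty(annual_revenue: float, fixed_cap: float, pct: float = 0.04) -> float:
    """GDPR-style cap: the greater of a fixed amount or a percentage
    of worldwide annual revenue (figures as quoted in the article)."""
    return max(fixed_cap, pct * annual_revenue)

# Pre-GDPR UK ceiling, as applied to Facebook: a flat 500,000 pounds.
old_cap = 500_000
# GDPR-style ceiling, assuming ~$40 billion in annual revenue (inferred
# from the article's "$1.6 billion" figure, not an official number).
new_cap = max_penalty(40e9, 17e6)
print(new_cap / old_cap)  # the new ceiling is thousands of times larger
```

Because the percentage prong dominates for any large platform, the fixed cap only matters for smaller controllers.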

EU – Top ENISA-Reported Incidents Involved E-Signature and E-Seal Creation

The EU Agency for Network and Information Security (“ENISA”) has published a report on security incidents reported by trust service providers (“TSPs”), pursuant to Article 19 of Regulation 910/2014. The top reported incidents involved e-signature and e-seal creation; other affected services included certificate revocation lists, verification of e-signatures and seals, and e-signature devices. Root causes were third-party and system failures, human error and malicious actions, with incidents ranging in severity from significant (54%) to severe (23%) and disastrous (23%). [ENISA – Annual Report Trust Services Security Incidents 2017]

Facts & Stats

AU – 245 Breach Notifications to Australian DPA from July to September

The Office of the Australian Information Commissioner (“OAIC”) received 245 breach notifications from July 1 to September 30, 2018. Malicious or criminal attacks caused 57% of reported breaches (phishing, credential theft, brute force, insider threats, social engineering), 37% resulted from human error (PI sent to the wrong recipient, loss of a device or paperwork, insecure disposal, failure to redact), and system faults caused 6%; PI compromised included contact, identity and health information, tax file numbers and financial details. [OAIC – Notifiable Data Breaches Quarterly Statistics Report – 1 July – 30 September 2018]

EU – CNIL Publishes Statistical Review of Data Breaches Since GDPR

On October 16, the French Data Protection Authority (the “CNIL”) [here & wiki here] published a statistical review of personal data breaches during the first four months of the EU General Data Protection Regulation’s (“GDPR”) application [in French here & Google English text translation here]. Between May 25 and October 1, 2018, the CNIL received 742 notifications of personal data breaches that affected 33,727,384 individuals located in France or elsewhere. Of those, 695 notifications were related to confidentiality breaches. The accommodation and food services sector is the sector in which the highest number of breaches were observed, with 185 notifications; this is due to a specific case in which a booking service provider was affected by a data breach. More than half of the notified breaches (421 notifications) were due to hacking via malicious software or phishing. 62 notified breaches were related to data sent to the wrong recipients, 47 notified breaches were due to lost or stolen devices, and 41 notified breaches were due to the unintentional publication of information. The CNIL also reported that it will adopt an aggressive approach when the data controller does not comply with its obligation to notify the breach within 72 hours after having become aware of it. Failure to comply with that obligation may lead to a fine of up to €10 million or 2 percent of total worldwide annual revenues. Conversely, if the CNIL receives the notification in a timely manner, the CNIL will adopt an approach that aims at helping the professionals involved take all the necessary measures to limit the consequences of a breach. When necessary, the CNIL will contact organizations for the purposes of: 1) Verifying that adequate measures have been taken before or after the breach; and 2) Assessing the necessity to notify affected data subjects. [Privacy & Information Security Law Blog (Hunton Andrews Kurth)]
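The cause breakdown in the CNIL's figures can be tabulated directly from the counts quoted above (the category labels are paraphrases of the report's wording, not official translations):

```python
total = 742  # breach notifications received by the CNIL, 25 May - 1 Oct 2018
causes = {
    "hacking / phishing": 421,
    "sent to wrong recipient": 62,
    "lost or stolen device": 47,
    "unintentional publication": 41,
}
# Print each cause with its share of all notifications, largest first.
for cause, n in sorted(causes.items(), key=lambda kv: -kv[1]):
    print(f"{cause:<26}{n:4d}  ({n / total:5.1%})")
# The remaining notifications fall outside these four categories.
```

Hacking and phishing alone account for about 57% of notifications, which matches the report's "more than half" characterization.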


CA – Credit Card Purchases for Cannabis May Not Be Private

Federal legislation makes it legal for Canadians to enjoy cannabis in the privacy of their homes. That legislation, however, does not necessarily offer privacy protection for cannabis purchasers. Legal experts have raised concerns that credit card data may not be stored in this country and may be accessible to prying eyes in other countries. Credit card and other purchasing information stored outside of Canada, particularly in the United States, may be accessible by law enforcement there. This issue was recently raised in a new [October 16] guidance document released by the Office of the Information and Privacy Commissioner for British Columbia [see PR & PDF]. According to the guidance document, access to the personal information of cannabis users may be used by some countries to deny entry. It also points out that, in a digital age, the privacy issue is not moot. “Keep in mind,” it notes, “that storing data in the Cloud or in proprietary software means there is likely disclosure of that personal information outside of Canada. It is much more privacy protective to store personal information on a server located in Canada to prevent access by unauthorized third parties.” Mark Hayes, founder of Hayes eLaw LLP, an IP and technology firm in Toronto, says: “Until questions about the potential risks of using credit cards for cannabis purchases are resolved, purchasers may want to pay cash for cannabis purchases in provinces which allow in-person shopping and consider using only anonymous prepaid credit cards or gift certificates for online purchases.” [Canadian Lawyer | Warning: This Is Why You Should Never Buy Marijuana With A Credit Card In Canada | Cannabis IQ: Everything you need to know about pot and the border | Province pulls electronic ID scanners from cannabis stores | Privacy commissioner investigating personal data collection at cannabis stores | Think about your privacy before you purchase pot: federal watchdog | Can we Implement Random Cannabis Drug Testing? – (5 pg PDF here) | Cannabis Is Legal: Top Tips for Employers | Marijuana in the workplace: What your boss can and can’t do | Understanding Cannabis Rules for Employees who Travel to Canada or the United States for Business – (5 pg PDF here)]


WW – Treating ‘Genetic Privacy’ Like It’s Just One Thing Keeps Us from Understanding People’s Concerns

“Genetic privacy” is a complicated concept, and a new study published today in the journal PLOS One finds that decoding how people feel about the idea is equally complex. Researchers analyzed 53 studies (covering over 47,000 participants) that looked at how the general public, professionals, and patients viewed genetic privacy. The results paint a complex picture, says study author Ellen Clayton, a professor of law and health policy at Vanderbilt University. If you ask people “are you worried about genetic privacy?” most will say yes. But if you ask a patient whose genetic data was collected for medical testing about a more specific situation, like “are you concerned about sharing data with third parties?” the answers can vary widely. It’s simplistic to claim either that people “are” or “aren’t” concerned about genetic privacy when it’s a multifaceted term that can cover different (and often conflated) concepts like confidentiality, security, and control. “To get insight into how people actually feel, it’s important to ask them what particular outcomes they’re worried about,” says Clayton. For future studies and surveys, she recommends that instead of asking about “genetic privacy,” researchers should break the issue into different parts. That way, we’ll all have a better idea of what people want, so we can better respect their wishes. [The Verge | Genetic Privacy, Data Use by Employers, Insurers, Government Concern Study Participants | You Should Be Worried About Your DNA Privacy | How 23andMe thinks about genetic privacy in the age of forensic genealogy and Facebook’s woes | You don’t have to sequence your DNA to be identifiable by your DNA | New File Type Improves Genomic Data Sharing While Maintaining Participant Privacy | How your third cousin’s ancestry DNA test could jeopardize your privacy]

Health / Medical

US – FDA Issues Draft Guidelines for Medical Device Manufacturers

The US Food and Drug Administration provides draft nonbinding recommendations regarding the security of medical devices. The US FDA recommends devices be designed to protect critical functionality (even when security has been compromised), “deny by default” (i.e., generally reject all unauthorized connections), and detect, log and notify users of a potential cybersecurity breach; risk assessments must include a description of testing conducted on controls, and evidence of security effectiveness. When final, the guidance will supersede recommendations issued in 2014 [FDA – Content of Premarket Submissions for Management of Cybersecurity in Medical Devices – Draft Guidance for Industry and FDA Staff]
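The “deny by default” posture the draft guidance recommends can be illustrated with a short sketch; the client identifiers and logging setup below are hypothetical, not drawn from the FDA document:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("medical-device")

# Hypothetical allow-list of authorized clients (illustrative only).
AUTHORIZED_CLIENTS = {"pump-console-01", "clinician-tablet-07"}

def handle_connection(client_id: str) -> bool:
    """Deny by default: reject any connection not explicitly authorized,
    and log every attempt so a potential cybersecurity breach can be
    detected and users notified."""
    if client_id in AUTHORIZED_CLIENTS:
        log.info("accepted connection from %s", client_id)
        return True
    log.warning("rejected unauthorized connection from %s", client_id)
    return False
```

The point of the design is that critical functionality stays behind the allow-list even if other controls fail: anything not explicitly permitted is rejected and the attempt is logged as evidence for later review.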

US – Risk Assessment: HHS Improves Security Analysis Tool

The US Department of Health and Human Services (HHS) announced changes to its security risk assessment (SRA) tool. Covered entities and business associates can use the tool to meet their obligation to conduct a thorough and accurate assessment of potential risks and vulnerabilities to the confidentiality, integrity and availability of ePHI; the assessment should cover all lines of business, and facilities and locations of the organization. [HHS – ONC and OCR Bolster the Security Risk Assessment (SRA) Tool with New Features and Improved Functionality]

US – New Guidance on Preparation and Response to Medical Device Cybersecurity Incidents

Recently, the MITRE Corporation, in collaboration with the U.S. Food and Drug Administration (FDA), announced the release of the Medical Device Cybersecurity Regional Incident Preparedness and Response Playbook [38 pg PDF]. The Playbook was designed to provide “tools, references, and resources” for Healthcare Delivery Organizations (HDOs) to better prepare for and respond to medical device cybersecurity incidents. The Playbook provides detailed insight and guidance for HDOs on, among other topics, how to prepare for, detect, analyze, contain, eradicate, and recover from “threats or vulnerabilities that have the potential for large-scale, multi-patient impact and raise patient safety concerns.” It is not an official FDA rule, regulation, or guidance. However, HDOs and medical device companies would do well to familiarize themselves with the Playbook and, wherever feasible, incorporate its recommendations into existing cyber incident response plans, since the agency may in the future consider failure to abide by the Playbook an aggravating factor warranting a less favorable administrative outcome. [DBR on Data Blog (DrinkerBiddle)]

Horror Stories

US – ERS Online Coding Error Exposes 1.25M Users to Health Data Breach

Recently reported health data breaches include those from: The Employee Retirement System of Texas, Yale University, North Carolina’s Catawba Valley Medical Center, The Children’s Hospital of Philadelphia and Texas-based FirstCare Health Plans: 1) In a statement on its website, The Employee Retirement System (ERS) of Texas [here] explained that a coding error on its password-protected ERS Online portal allowed certain members who logged in with their username and password to view other members’ information. ERS said that members would have to use a specific function to input search criteria in order to view other members’ information. ERS reported to OCR [here] that information on potentially 1.25 million people may have been exposed. Information that might have been exposed included first and last names, Social Security numbers, and ERS member identification numbers; 2) Yale University reported to OCR [here] on Oct. 17 an unauthorized paper disclosure that exposed PHI on 1,102 individuals. No additional information was provided. This comes after Yale admitted in July that a data breach occurred between 2008 and 2009 affecting 119,000 faculty, staff, and alumni. In a release, Yale said that attackers gained access to a database stored on a Yale server; 3) North Carolina-based Catawba Valley Medical Center (CVMC) announced Oct. 12 it suffered a phishing attack. CVMC told OCR [here] that 20,000 individuals may have been impacted by the breach between July 4 and Aug. 17. Information that might have been compromised included patient names, dates of birth, health information about services, health insurance information, and, for some, Social Security numbers; 4) The Children’s Hospital of Philadelphia (CHOP) reported to OCR [here] on Oct. 23 an email hacking incident that put PHI on 5,368 individuals at risk. 
In a press release, the hospital said that it discovered two email breaches that exposed PHI, including patient name, date of birth, and clinic information related to neonatal and/or fetal care provided at CHOP or at the Hospital of the University of Pennsylvania. The first breach, discovered on Aug. 24, occurred when an unauthorized user gained access to a CHOP physician’s email account. A second breach, discovered on Sept. 6, identified unauthorized access to an additional email account on Aug. 29. CHOP sent letters to potential victims on Oct. 23; and 5) Texas-based FirstCare Health Plans reported to OCR on Oct. 12 an email error exposing e-PHI on 8,056 individuals. In a press release, FirstCare said the breach may have compromised member name, identification number, treatment description, procedure costs, authorization number, and treating provider name. [Health IT Security]

US – Yahoo Agrees to $50M Settlement Package for Users Hit by Massive Security Breach

One of the largest consumer internet hacks has bred one of the largest class action settlements after Yahoo agreed to pay $50 million to victims of a security breach that’s said to have affected up to 200 million U.S. consumers and some three billion email accounts worldwide. [see wiki here] In what appears to be the closing move to the two-year-old lawsuit, Yahoo — which is now part of Verizon’s Oath business — has proposed to pay $50 million in compensation to an estimated 200 million users in the U.S. and Israel, according to a court filing [35 pg PDF here]. In addition, the company will cover up to $35 million in lawyer fees related to the case and provide affected users in the U.S. with credit monitoring services for two years via AllClear, a package that would retail for around $350. There are also compensation options for small businesses and individuals to claim costs for losses associated with the hacks. That could include identity theft, delayed tax refunds and any other issues related to data lost at the hands of the breaches. Finally, those who paid for premium Yahoo email services are eligible for a 25 percent refund. The deal is subject to final approval from U.S. District Judge Lucy Koh [wiki here] of the Northern District of California at a hearing slated for November 29. [TechCrunch | Yahoo Agrees to Pay $85M to Settle Consumer Data Breach Class Actions | Yahoo to pay $50M, other costs for massive security breach | Yahoo agrees to pay $50M in damages over biggest security breach in history | Yahoo must pay $50 million in damages for data breaches]

Internet / WWW

EU – Commission Provides Guidance on New Geo-Blocking Regulation

As part of the European Commission’s plan to create a unified “Digital Single Market” [see here & HL overview here & wiki here], in late September the Commission updated its detailed guidance [download 45 pg PDF Q&A here] on the Geo-blocking Regulation 2018/302 [goes into effect December 3, 2018 – see here, 15 pg PDF here & overview here]. The underlying aim of the regulation is to ban unjustified geo-blocking [wiki here] and other forms of discrimination based on customers’ nationality, place of residence or place of establishment within the digital internal market. Geo-blocking refers to the practice of redirecting Internet users to another website, or offering them different terms and conditions, based on their IP address or other identifiers. Accordingly, the Geo-Blocking Regulation includes provisions addressing online discrimination occurring when one accesses a website (Article 3), buys goods or services online (Article 4), or shops abroad using means of payment denominated in euros (Article 5). The Q&A list provided by the Commission covers a comprehensive range of aspects. It is intended for traders seeking to comply with the new rules, for consumers seeking more information about their new rights, and for Member State authorities who will have to apply and enforce its provisions. The questions cover both the substantive provisions and the enforcement tools put in place to achieve the goal of the Regulation. The guidance also features a section putting the regulation in context by exploring how it fits into the broader e-commerce framework, including its interaction with Regulation (EU) 2018/644 on cross-border parcel delivery services, the Consumer Rights Directive 2011/83/EU and specific VAT rules. [Global Media and Communications Watch (Hogan Lovells) | The Geo-blocking Regulation]

Online Privacy

WW – Google Will Now Take You Through Your Privacy Settings Step-by-Step

On October 31, Google introduced a handful of new security measures, starting with a risk assessment feature that requires JavaScript [wiki here] to be enabled. It has also leveled up its Security Checkup feature: once you’ve signed in, it will ask you to delete any apps it thinks are harmful, cut off any devices you no longer use, and let you know whenever you share any of your Google data with third-party apps. Finally, if the tech giant believes that your account has been compromised, it will automatically trigger a process that prompts you to perform a series of verifications. You’ll need to verify your settings and make sure nobody can access your account via a recovery phone number or email address, which means you have to secure your other accounts as well. Google will then ask you to check your financial activities to make sure nobody made unauthorized charges to your credit card or Google Pay account. Finally, the company will ask you to review your Gmail and Drive data to check whether anybody accessed or misused it. The process could save even those who aren’t that tech-savvy from getting their identities stolen. And by putting together a step-by-step process, Google is making it easier for everyone to ensure that all aspects of their accounts are secure. [engadget]

WW – Privacy by Proxy? VPN Extensions Aren’t As Secure As Users Think

New research published this week by an ethical hacker named “File Descriptor” [Twitter] examined some of the most popular VPN extensions [difference between VPN & VPN extensions here] available to download, including ZenMate [here], uVPN [here], and DotVPN [here]. File Descriptor claims that “After several pentests and personal researches on VPN extensions almost all VPN extensions are vulnerable to different levels of IP leaks and DNS leaks. Ironically, although most of them are results of extensions’ misconfigurations, browsers are also responsible as there are a lot of pitfalls and misleading documentations on proxy configurations.” The researcher also noted that a number of VPN extensions were vulnerable to IP and DNS leaks through issues with misusing helper functions, whitelisting hostnames, unencrypted proxy protocols, and Chrome’s DNS prefetching. Ariel Hochstadt of VPNMentor echoed File Descriptor’s findings, telling The Daily Swig that extensions are “not safe as standalone software. Many times what VPN companies call ‘VPN extension’ is merely a limited proxy, and users should be concerned with that … I would say that if you are looking for a quick, one-click solution to change your IP to watch blocked content, for example, you can use an extension. But if it is privacy that you are worried about, it is not suffice.” [The Daily Swig (PortSwigger)]

WW – Study of Google Data Collection Comes amid Increased Scrutiny Over Digital Privacy

According to research by Douglas C. Schmidt, Cornelius Vanderbilt Professor of Engineering at Vanderbilt University, if you use an Android device with the Chrome browser running, Google knows whether you are traveling by foot or car, where you shop, how often you use your Starbucks app and when you’ve made a doctor’s appointment [and a whole lot more]. His study commissioned by Digital Content Next looked at Google’s data collection practices under a “day in the life” scenario of an Android phone user [see overview here & 55 pg PDF here]. It detailed data mining over a 24-hour period from an idle Android phone with Chrome running in the background. In the study’s scenario, a researcher created a new Google account as “Jane” and carried a factory-reset Android mobile phone with a new SIM card throughout a normal day. The smartphone running Google’s Android operating system and Chrome sent data to the company’s servers an average of 14 times an hour, 24 hours a day. “These products are able to collect user data through a variety of techniques that may not be easily graspable by a general user,” Schmidt concluded in the paper. “A major part of Google’s data collection occurs while a user is not directly engaged with any of its products,” Schmidt wrote. “Google collected or inferred over two-thirds of the information through passive means. At the end of the day, Google identified user interests with remarkable accuracy.” What qualifies as passive data? With Chrome running and location enabled, an Android phone is “pinged” throughout the day by other wireless networks, hot spots, cell towers and Bluetooth beacons. During a short 15-minute walk around a residential neighborhood, for example, Jane’s phone sent nine location requests to Google. The requests collected 100 unique identifiers from public and private Wi-Fi access points. 
Schmidt also studied data gathering from all Google platforms and products, such as Android mobile devices, the Chrome browser, YouTube and Google Photos, plus the company’s publishing and advertising services, such as DoubleClick and AdWords. The study also compared data collection from an idle Android phone running Chrome with an idle iPhone running Apple’s operating system and the Safari browser. “I found that an idle Android phone running the Chrome browser sends back to Google nearly 50 times as many data requests per hour as an idle iOS phone running Safari,” Schmidt said. “I also found that idle Android devices communicate with Google nearly 10 times more frequently as Apple devices communicate with Apple servers. These results highlight the fact that Android and Chrome platforms are critical vehicles for Google’s passive data collection.” After the study’s release, Google questioned its credibility. “This report is commissioned by a professional lobbyist group, and written by a witness for Oracle in their ongoing copyright litigation with Google. So, it’s no surprise that it contains wildly misleading information,” the company said in a statement. Schmidt replied: “Google has not been able to identify any specific aspects of my report’s methods or conclusions as erroneous.” [Vanderbilt News (Vanderbilt University)]

Other Jurisdictions

US – The Rise of Digital Authoritarianism: Fake news, data collection and the challenge to democracy

Freedom House today released Freedom on the Net 2018: The Rise of Digital Authoritarianism [see overview & interactive map], the latest edition of its annual country-by-country assessment of online freedom. The report assesses internet freedom in 65 countries that account for 87% of internet users worldwide. The report focuses on developments that occurred between June 2017 and May 2018, though some more recent events are included. Online propaganda and disinformation have increasingly poisoned the digital sphere, while the unbridled collection of personal data is breaking down traditional notions of privacy. At the same time, China has become more brazen and adept at controlling the internet at home and exporting its techniques to other countries. These trends led global internet freedom to decline for the eighth consecutive year in 2018. Beijing took steps during the year to remake the world in its techno-dystopian image. Chinese officials have held trainings and seminars on new media or information management with representatives from 36 out of the 65 countries assessed by Freedom on the Net. China also provided telecommunications and surveillance equipment to foreign governments and demanded that international companies abide by its content regulations even when operating abroad. A proliferation of data leaks has underscored a pressing need to improve protections for users’ information and privacy. Both democracies and authoritarian regimes are instituting changes in the name of data security, but some initiatives actually undermine internet freedom and user privacy by mandating data localization and weakening encryption. In India, a massive data breach affecting 1.1 billion citizens [see coverage at ZDNet] reiterated the need for reforms to the country’s data protection framework, beyond an ineffective government proposal to require that data be stored locally. 
Over the past 12 months, false claims and hateful propaganda helped to incite jarring outbreaks of violence against ethnic and religious minorities in Myanmar, Sri Lanka, India, and Bangladesh. One of the steepest declines in internet freedom occurred in Sri Lanka [see coverage at engadget], where authorities shut down social media platforms after rumors and disinformation sparked vigilante violence that predominantly targeted the Muslim minority. In India, internet users experienced an unprecedented number of shutdowns due in part to the spread of rumors on WhatsApp. In Egypt, a Lebanese tourist was sentenced to eight years in prison for “deliberately broadcasting false rumors” after she posted a Facebook video describing the sexual harassment she experienced while visiting Cairo [see coverage at Reuters]. In Rwanda, blogger Joseph Nkusi was sentenced to 10 years in prison for inciting civil disobedience and spreading rumors, having questioned the state’s narrative of the 1994 genocide and criticized the lack of political freedom in the country [see Human Rights Watch coverage here]. Key findings include:

  1. Declines outnumber gains for the eighth consecutive year;
  2. Internet freedom declines in the United States;
  3. Citing fake news, governments curb online dissent;
  4. Authorities demand control over personal data;
  5. More governments manipulate social media content;
  6. Internet freedom declines coincided with elections;
  7. Governments disrupted internet services for political and security reasons; and
  8. Digital activism fuels political, economic, and social change.

[Press Releases (Freedom House) | The global threat of China’s digital authoritarianism | China’s Web Surveillance Model Expands Abroad | Chinese-style ‘digital authoritarianism’ grows globally: study | China’s Internet Censorship Is Influencing Digital Repression Around the World, Report Warns]

Privacy (US)

US – FTC Announces Hearings on Consumer Privacy and Data Security

As part of its Hearings Initiative, the Federal Trade Commission will hold four days of hearings in December and February to examine the FTC’s authority to deter unfair and deceptive conduct in data security and privacy matters. The December hearings will focus on data security and will take place December 11-12, 2018 [Notice]. They will include five panel discussions and additional discussion of research related to data breaches and data security threats. The first day’s panel discussions will examine incentives to invest in data security and consumer demand for data security. Discussions on the second day will focus on data security assessments, the U.S. framework related to consumer data security, and the FTC’s data security enforcement program. Staff has already begun developing the agenda for the December data security hearing. The hearings on consumer privacy will take place in the same venue on February 12-13, 2019 [Notice]. They will provide the first comprehensive re-examination of the FTC’s approach to consumer privacy since 2012. The FTC is seeking comments from the public on what the agenda should include. To be included for consideration, the FTC is seeking comment by December 21 on specific questions to be discussed at the February event. In addition, FTC staff welcomes comments on both the data security and privacy hearings until March 13, 2019. [FTC Press Release | FTC Announces PrivacyCon 2019 and Calls for Presentations | Advocates push to beef up privacy regulator | FTC Commissioner Chopra Calls for Greater (and More Expensive) Enforcement]

Privacy Enhancing Technologies (PETs)

WW – New Signal Privacy Feature Removes Sender ID from Metadata

Signal app [here] is testing a new technique called “sealed sender” that’s designed to minimize the metadata that’s accessible to its servers. A beta release announced Monday will send messages that remove most of the plain-text sender information from message headers. It’s as if the Signal app was sending a traditional letter through the postal service that still included the “to” address but has left almost all of the “from” address blank. Like most messaging services, Signal has relied on the “from” address in message headers to prevent the spoofing of user identities and to limit spam and other types of abuse on the platform. Sealed sender, which puts most user information inside the encrypted message, uses two new mechanisms to get around this potential privacy risk: 1) Senders periodically retrieve short-lived sender certificates that store the sender’s phone number, public key, and expiration timestamp; and 2) Delivery tokens derived from the sender’s profile key are used to prevent abuse. Users who want to receive sealed-sender messages from non-contacts can choose an optional setting that doesn’t require the sender to present a delivery token. This setting opens a user up to the possibility of increased abuse, but for journalists or others who rely on Signal to communicate with strangers, the risk may be acceptable. Even with sealed sender, observers said, Signal will continue to map senders’ IP addresses. That information, combined with recipient IDs and message times, means that Signal continues to leave a wake of potentially sensitive metadata. Still, by removing the “from” information from the outside of Signal messages, the service is incrementally raising the bar. 
[Ars Technica | Signal rolls out a new privacy feature making it tougher to know a sender’s identity | Apple’s 2018 MacBooks come with a chip that protects against eavesdropping | Apple’s new T2 security chip will prevent hackers from eavesdropping on your microphone | Apple’s T2 security chip disconnects a MacBook’s microphone when users close the lid | Apple says its T2 chip can prevent hackers from eavesdropping through your MacBook mic]
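The two mechanisms described above can be modeled in a few lines of Python. This is a simplified conceptual sketch only: real Signal clients encrypt the payload with the Signal protocol and use a different wire format, and the field names here are illustrative:

```python
import hashlib
import hmac
import json
import time

def make_sender_certificate(phone: str, public_key: str, ttl: int = 86400) -> dict:
    # Short-lived certificate binding the sender's phone number and
    # public key to an expiration timestamp.
    return {"phone": phone, "public_key": public_key,
            "expires": int(time.time()) + ttl}

def derive_delivery_token(profile_key: bytes) -> str:
    # Token derived from the sender's profile key; the server can use it
    # to rate-limit abuse without learning who the sender is.
    return hmac.new(profile_key, b"delivery-token", hashlib.sha256).hexdigest()

def seal(message: str, cert: dict, token: str, to: str) -> dict:
    # The sender certificate travels *inside* the payload (which a real
    # client would encrypt); only "to" and the token remain visible.
    sealed_payload = json.dumps({"cert": cert, "body": message})
    return {"to": to, "delivery_token": token, "sealed_payload": sealed_payload}
```

The envelope carries no plain-text “from” field at all: the sender’s identity is only recoverable by the recipient after opening the sealed payload.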

WW – Accelerating the Future of Privacy through SmartData Agents

Imagine a future where you can communicate with your smartphone – or whatever digital extension of you exists at that time – through an evolved smart digital agent that readily understands you, your needs, and exists on your behalf to procure the things and experiences you want. What if it could do all this while protecting and securing your personal information, putting you firmly in control of your data? George Tomko, Ph.D., Expert-in-Residence at the Privacy, Security and Identity Institute at the University of Toronto [here], Adjunct Professor in Computer Science at Ryerson University, and neuroscientist, believes the time is ripe to address the privacy and ethical challenges we face today, and to put into place a system that will work for individuals, while delivering effective business performance and minimizing harms to society at large. I had the privilege of meeting George to discuss his brainchild, SmartData: the development of intelligent agents as a solution to data protection. SmartData is the evolution of Privacy by Design, shifting control from the organization and placing it directly in the hands of the individual (the data subject) [see here]. These ideas were published in SmartData: Privacy Meets Evolutionary Robotics, co-authored with Ann Cavoukian, former three-term Privacy Commissioner of Ontario and inventor of Privacy by Design [wiki here]. This led to his current work, SmartData Intelligent Agents, the subject of this article. [Forbes]


CA – Gov’t Failing to Protect Canadians from Cyber Threats: Senate Report

The federal government is failing to protect Canadians from increasingly sophisticated cyber attacks that have already victimized millions, according to a scathing Senate Committee on Banking, Trade and Commerce report released October 29 that calls for the creation of a minister of cyber security [see here: Exec Summary & 34 pg report]. In 2017 alone, over 10 million Canadians had their personal information compromised through targeted attacks and — more often — through cyber operations directed against businesses that hold Canadians’ private information, says the executive summary. Among the committee’s recommendations: 1) all levels of government must prioritize cyber security education as part of the national cyber security strategy; there should be a national cyber literacy program, led by the newly-created Canadian Centre for Cyber Security, to educate consumers and businesses about how to protect themselves; 2) Ottawa should create a new national centre of excellence in cyber security and expand two existing centres to promote university-level research and encourage Canadians to pursue careers in cyber security-related fields. The centres of excellence should be the Canadian Institute for Cybersecurity at the University of New Brunswick, the Cybersecurity and Privacy Institute at the University of Waterloo, and a third yet to be chosen in Western Canada. 
They would join the Montreal-based Smart Cybersecurity Network (SERENE-RISC), which already receives funding as a centre of excellence; 3) the federal government should modernize PIPEDA, including empowering the Office of the Privacy Commissioner to make orders and impose fines against companies that fail to protect their customers’ information, and to allow information sharing about cyber threats within the private sector and between the private sector, government and relevant international organizations; 4) businesses should be given incentives to invest in cyber security improvements, for example, by making these investments tax deductible; 5) a new federal minister of cyber security should be created to co-ordinate cyber security efforts across all levels of government. The minister would have responsibility for the new Canadian Centre for Cyber Security — now overseen by the Defence department — and the RCMP’s National Cybercrime Co-ordination Unit; 6) Ottawa should create a federal expert task force on cyber security to provide recommendations regarding the national cyber security strategy that would establish Canada as a global leader in cyber security. The government released an update to its national security strategy in June; 7) the federal government should develop standards to protect consumers, businesses and governments from threats related to Internet of Things devices; and 8) it should also develop a consistent set of leading cyber security standards that are harmonized with the highest international standards and would apply to all entities participating in critical infrastructure sectors. “Governments, businesses and individual Canadians each have a role to play in protecting the country from this cyber scourge,” says the report. “It should keep you up at night.” [IT World Canada]

US – Best Practices for Preventing and Responding to Incidents

The US Department of Justice, Cybersecurity Unit updates existing best practices on how to prevent falling victim to a cyber incident. The US DoJ Cybersecurity Unit recommends prior to an incident, organizations educate senior management on risks, identify company assets, and have appropriate authorizations in place; following an incident, organizations should enact their incident response plan, notify stakeholders (internal personnel, regulators and potential victims), and avoid using a compromised system for further communication. [Best Practices for Victim Response and Reporting of Cyber Incidents – Department of Justice, Cybersecurity Unit]

US Legislation

US – Sen. Ron Wyden Proposes Bill That Could Jail Executives Who Mishandle Consumer Data

Sen. Ron Wyden (D-OR) released an early draft of legislation today that would create substantially stiffer guidelines for the misuse of consumers’ data. Among other provisions, the bill suggests creating a penalty of 10 to 20 years imprisonment for senior executives who fail to follow new rules around data use. Called the “Consumer Data Protection Act” [see PR, 1 pg PDF summary, 2 pg PDF section overview & full 38 pg PDF text], it would give the FTC more authority and resources to police the use of data by adding a total of 175 new staff. Under the proposal, the FTC would also be allowed to fine companies up to 4% of revenue for a first offense. It would create a centralized Do Not Track list meant to let consumers stop companies from sharing their data with third parties, or from using it for targeted advertising. It would allow companies to block users who opt out and offer a paid version of the service in place of the tracking. Consumers could also ask to review and challenge the information collected on them. Companies that make more than $1 billion in revenue and that handle information on more than 1 million people, or smaller companies that handle information on more than 50 million people, would also be required to submit regular reports to the FTC that describe any privacy lapses. Failure to comply with the measure could lead to jail time. Wyden is accepting feedback on the bill at PrivacyBillComments@wyden.senate.gov. [The Verge | Senator Wyden wants to jail execs who don’t protect consumer data | Senator’s data privacy law draft could put CEOs in jail for lying | Sen. Ron Wyden Introduces Bill That Would Send CEOs to Jail for Violating Consumer Privacy | Oregon senator proposes privacy regulations with prison and revenue penalties for data misuse | California Consumer Privacy Act of 2018 – Full Text]
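The reporting thresholds and the first-offense fine cap described above can be expressed as two small helper functions; this is an illustrative reading of the reported figures, not legal analysis:

```python
def covered_by_draft(revenue_usd: float, people_covered: int) -> bool:
    """Companies with more than $1B in revenue handling data on more than
    1 million people, or any company handling data on more than
    50 million people, would owe the FTC regular privacy-lapse reports."""
    big_company = revenue_usd > 1_000_000_000 and people_covered > 1_000_000
    mass_data_handler = people_covered > 50_000_000
    return big_company or mass_data_handler

def max_first_offense_fine(revenue_usd: float) -> float:
    # The draft would let the FTC fine up to 4% of revenue for a first offense.
    return 0.04 * revenue_usd
```

For example, a $500M-revenue company holding data on 60 million people would be covered under the second prong even though it misses the revenue threshold.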

Workplace Privacy

UK – Court Confirms Vicarious Liability for Rogue Employee

The UK Court of Appeal considers the liability of Morrisons Supermarket PLC (“Morrisons”) for the actions of an employee. The employee’s actions (stealing employee data and disclosing it to third parties) were within the field of activities assigned by the company (accessing payroll information, copying data to a USB stick), and the company had an obligation to reasonably ensure the reliability of any employee with access to personal data. [WM Morrison Supermarkets PLC and Various Claimants – 2018 EWCA Civ 2339 – England and Wales Court of Appeal]




16-23 October 2018


WW – Israel’s Fingerprint Identification System Failing Border and Police Checks

A committee reviewing Israel’s Biometric Database Law says that the fingerprint identification system is suffering high rates of failure when used both at the country’s borders and by police. The committee produced a report which includes findings from a review of data provided by the Interior Ministry’s Population Registry, the National Biometric Database Authority, and police from mid-2017 to mid-2018. The Population and Immigration Authority responded that fingerprint identification failures are not causing problems, as face matching is working at the predicted rates, and passport holders are identified face to face by a border control employee in the event that biometric identification fails. It also noted that some of the failures are caused by incorrect finger placement. The report also found that the Biometric Database Authority has been deleting data when required to do so by law. The management of the database has been contested in court after it was allegedly performed by a private contractor for two years, in violation of the laws establishing it. Meanwhile, police were blocked from direct access to the database by a court ruling earlier this year. [Biometric Update]

US – TSA Proposes Increased Use of Biometrics for Security

A 23-page report released by the U.S. Transportation Security Administration outlines proposals to change how passengers are screened before boarding and states that, among the changes, biometric technology will eventually replace passports and other forms of identification. In the report, TSA Administrator David Pekoske said, “In addition to addressing key operational needs, implementing the Biometrics Roadmap will secure TSA’s position as a global leader in aviation security and advance global transportation security standards.” The New York Times

CN – Shanghai Airport Introduces Fully Automated Facial-Recognition Kiosks

Shanghai Hongqiao International Airport unveiled self-service kiosks fueled by facial-recognition technology for flight and baggage check-in, security clearance and boarding. The system is being touted as the first fully automated operation in China but has raised privacy concerns for some. Maya Wang, senior China researcher for Human Rights Watch, said, “Authorities are using biometric and artificial intelligence to record and track people for social control purposes,” adding, “We are concerned about the increasing integration and use of facial recognition technologies throughout the country because it provides more and more data points for the authorities to track people.” The system is currently available to those with a Chinese identity card, and it is expected that the system will be introduced to Beijing and Nanyang city. The Province

Big Data / Data Analytics

CA – StatsCan Promises More Detailed Portrait of Canadians with Fewer Surveys

Canadians are increasingly shunning phone surveys, but they could still be providing Statistics Canada with valuable data each time they flush the toilet or flash their debit card. The national statistics agency laid out an ambitious plan to overhaul the way it collects and reports on issues ranging from cannabis and opioid use to market-moving information on unemployment and economic growth. Statscan is reaching agreements with other government departments and private companies in order to gain access to their raw data, such as point-of-sale information. According to agency officials, such arrangements reduce the reporting paperwork faced by businesses while creating the potential for Statscan to produce faster and more reliable information. Other examples of how Statscan is focusing on the databases of other organizations include a partnership with the Canada Border Services Agency, where border-crossing photos of vehicle licence plates and traveller declarations of items that have been purchased now inform Statscan’s tourism statistics. The officials said Statscan works closely with Canada’s Privacy Commissioner as it seeks new sources of data, and they said the agency has always gone to great lengths to ensure that no information is released that could identify individual Canadians. However, some companies have expressed concern about Statscan’s request for customer data such as phone records, credit bureau reports and electricity bills, according to Tobi Cohen, a spokesperson for the Privacy Commissioner. Ms. Cohen said the office is in ongoing discussions with Statscan about this direction. [The Globe and Mail]

WW – ICDPPC Establishes Working Group on Ethics and Data Protection in AI

At the 40th International Conference of Data Protection and Privacy Commissioners in Brussels this week, the French data protection authority, the CNIL, the European Data Protection Supervisor and Italian DPA, the Garante, co-authored a new declaration on ethics and data protection in artificial intelligence. Along with the declaration’s six principles, the ICDPPC, “in order to further elaborate guidance to accompany the principles,” will establish “a permanent working group addressing the challenges of artificial intelligence development,” an ICDPPC release states. The working group “will be in charge of promoting understanding of and respect for the principles of the present resolution, by all relevant parties involved in the development of [AI] systems, including governments and public authorities, standardization bodies, [AI] systems designers, providers and researchers, companies, citizens and end users” of AI systems. [ICDPPC Resolution]

WW – IAF, Hong Kong DPA Release Ethical Accountability Report, Framework

The Information Accountability Foundation, together with Hong Kong Privacy Commissioner for Personal Data Stephen Kai-yi Wong, has released an “Ethical Accountability Framework for Hong Kong China,” as well as “a model assessment and oversight process framework for the cascading of ethics from shared values to workable business process,” according to a blog post from the IAF’s Martin Abrams. Wong commissioned the IAF to work with nearly two dozen Hong Kong–based businesses to develop the report and framework. “The challenge,” Abrams writes, “was to create a compelling and implementable framework for doing the right thing, for all stakeholders, in a legal system with an ombudsman structure for data protection.” Though the project was conducted in Hong Kong, Abrams says the “framework has practical use and implications for all privacy regimes.” The IAF is also looking for feedback on the documents. Full Story

WW – IAPP, UN Release Joint Report on Building Ethics into Privacy Frameworks

This week, privacy and data protection commissioners from more than 100 countries congregated in Brussels for the 40th annual International Conference of Data Protection and Privacy Commissioners. The debate focused on digital ethics, including topics that exceed the traditional remit of privacy professionals. Indeed, privacy officers and data protection regulators are increasingly called upon to serve as a moral compass for their organizations and societies. Today, the IAPP releases a joint report with the United Nations Global Pulse titled “Building Ethics Into Privacy Frameworks For Big Data & AI.” IAPP.org

WW – Intel Releases Paper on Privacy and AI

Intel released a paper titled “Protecting Individuals’ Privacy and Data in the Artificial Intelligence World“ during the 40th International Conference of Data Protection and Privacy Commissioners. As tech companies use automation to make decisions in real time, Intel released its observations about what that could mean for privacy. The company states increased automation should not result in fewer privacy protections, and companies should place a focus on transparency with their algorithms. Intel offered six policy recommendations for privacy and artificial intelligence, such as new comprehensive legislative and regulatory initiatives that are tech neutral and support the free flow of data, and an emphasis on risk-based accountability approaches. Intel Blog

WW – Google Releases Training Module on Machine-Learning Fairness

Google has released a 60-minute training module designed to help machine-learning practitioners consider fairness as they develop machine-learning models. The module was created by Google’s engineering-education and machine-learning fairness teams and is part of the tech company’s Machine Learning Crash Course. The course teaches users about the different types of human biases that can appear in machine-learning models through data, the best ways to spot human biases in data before a model is trained, and methods to evaluate a model’s predictions for both overall performance and bias. Google Blog
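The kind of evaluation such a module teaches can be sketched in a few lines: compute a metric separately per subgroup and compare, since an aggregate number can hide a large gap. The data and group labels below are invented for illustration:

```python
# Hedged sketch of per-group evaluation for bias: aggregate accuracy can
# look fine while one subgroup fares much worse. Data here is invented.

from collections import defaultdict

def per_group_accuracy(y_true, y_pred, groups):
    """Accuracy computed separately for each subgroup of examples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        hits[group] += int(truth == pred)
    return {g: hits[g] / totals[g] for g in totals}

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 1, 1, 0, 0]
groups = ["group_a", "group_a", "group_a", "group_b", "group_b", "group_b"]
print(per_group_accuracy(y_true, y_pred, groups))
# A large gap between groups is a signal to investigate bias in the
# training data before deploying the model.
```

The same slicing applies to any metric (false-positive rate, recall per group), which is the core of most fairness audits.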

WW – Analyst Names Digital Ethics and Privacy Among Top Trends for 2019

A Gartner analyst has named digital ethics and privacy as one of the top 10 strategic technology trends for 2019. While familiar technology trends also topped the list, including artificial intelligence–driven development, blockchain and autonomous things, digital ethics and privacy cut across trends. Gartner noted digital ethics and privacy have become a “growing concern for individuals, organisations and governments,” and wrote, “People are increasingly concerned about how their personal information is being used by organisations in both the public and private sector, and the backlash will only increase for organisations that are not proactively addressing these concerns.” TechCrunch

WW – 2018 IAPP-EY Privacy Governance Report

The IAPP and EY released the fourth annual IAPP-EY Privacy Governance Report, the authoritative look at how the job of privacy is done, with documentation of average budgets, staff sizes, program priorities, and much more. This year, the responses include a much greater proportion from the EU, and the report focuses on the response to the EU General Data Protection Regulation, which has had every bit the impact many predicted. The data shows organizations expect to spend an average of $3 million in building compliance programming and adapting products and services. Further, the average privacy team has grown to 10 full-time staffers. In total, the data and analysis stretch to 132 pages, presented in an easy-to-digest format and brimming with benchmarking data you can use to guide your own privacy program. IAPP.org

US – ITI Releases New Privacy Framework

The Information Technology Industry Council has released its “Framework to Advance Interoperable Rules (FAIR) on Privacy.” The framework offers recommendations to give data subjects more control and a stronger understanding of the ways their information is used. Measures are also included to ensure companies focus on responsible data use and transparency. “This framework moves us toward that goal by enhancing transparency, increasing individual control, establishing company accountability, promoting security, and fostering innovation. We expect this framework will continue to take shape as we work alongside lawmakers and consumers to develop meaningful privacy legislation in the United States and across the world,” ITI President and CEO Dean Garfield said. ITIC.org


CA – Supreme Court Asked to Tell Toronto Police to Respect Rights of Minorities

The Supreme Court of Canada is being asked to tell police to respect the privacy rights of minorities in poor neighbourhoods in a case in which three Toronto officers entered a public-housing backyard without permission and found a man, Tom Le, who is Asian-Canadian, with a gun and cocaine. Mr. Le was convicted of gun and drug offences at trial, and the Ontario Court of Appeal upheld his conviction in a 2-1 ruling, saying that even if the police had no legal right to enter the backyard, Mr. Le had no “reasonable expectation of privacy” as a guest, and his rights therefore were not violated. He appealed to the Supreme Court, which heard the case on Friday morning. To Mr. Le’s lawyers, and several intervenors, the case raises questions about police conduct toward minorities, while to the Ontario government’s prosecutors, it is about protecting communities. [The Globe and Mail]

CA – OPC Goes to Court to Determine if Canada Can Force Google to Delete History

Following public consultations, the OPC has taken the view that PIPEDA provides for a right to de-indexing – which removes links from search results without deleting the content itself – on request in certain cases. This would generally refer to web pages that contain inaccurate, incomplete or outdated information. However, there is some uncertainty in the interpretation of the law. In the circumstances, the most prudent approach is to ask the Federal Court to clarify the law before the OPC investigates other complaints into issues over which the office may not have jurisdiction if the court were to disagree with the OPC’s interpretation of the legislation. A Notice of Application, filed in Federal Court, seeks a determination on the preliminary issue of whether PIPEDA applies to the operation of Google’s search engine. In particular, the reference asks whether Google’s search engine service collects, uses or discloses personal information in the course of commercial activities and is therefore subject to PIPEDA. It also asks whether Google is exempt from PIPEDA because its purposes are exclusively journalistic or literary. [Free Speech/Techdirt]

CA – OPC Shares Views on Bill C-58, An Act to amend the Access to Information Act and the Privacy Act

The Privacy Commissioner of Canada, Daniel Therrien, appeared before the Standing Senate Committee on Legal and Constitutional Affairs [notice here, watch here] to discuss Bill C-58 [here], An Act to amend the Access to Information Act and the Privacy Act and to make consequential amendments to other Acts. In his remarks, he shares his concerns about the bill in its current form and how it disrupts the current balance between access and privacy. He claims “by granting order-making powers to the Information Commissioner [here], including in respect of personal information, Bill C-58 risks giving access pre-eminence over privacy.” He concludes his remarks by saying: “In my view, the best way to ensure a balance between access to information and privacy rights would be to grant me order-making powers, as my colleague will have. However, in the absence of equal powers, the solutions we have jointly proposed represent a step towards maintaining this balance.” [Office of the Privacy Commissioner of Canada]

CA – OPC Wants “Under the Hood” of the Communications Industry

The Standing Senate Committee on Transport and Communications [here] met [October 16] for the fifth time [here & watch here] to continue its examination of how the three federal communications statutes (the Telecommunications Act [here], the Broadcasting Act [here], and the Radiocommunication Act [here]) can be modernized to account for the evolution of the broadcasting and telecommunications sectors in the last decades. With the testimony of the Privacy Commissioner, Daniel Therrien [remarks here], it became obvious, to us anyway, that the many consultations undertaken by government lately, including the Broadcasting and Telecommunications Legislative Review (BTLR) [here], will necessitate significant co-ordination when they write the recommendations leading to the various pieces of legislation involved. With the end of the National Digital and Data Consultations on October 12, 2018, it is likely that one outcome will be changes to the Personal Information Protection and Electronic Documents Act (PIPEDA) [here & wiki here]. These changes could impact the Telecommunications Act. Of course, the Telecommunications Act predates PIPEDA, but as the Privacy Commissioner mentioned, PIPEDA is a general application law that applies to all sectors and allows the CRTC to go further in its application, for example with express consent as opposed to implied consent. Therrien also said: “Under the current laws, all regulatory agencies are prohibited from sharing information with others, including our sister regulatory agencies, which somewhat impedes the completeness of the studies that we make.
We can have discussions at the broad policy level with the CRTC and the Competition Bureau, but when we investigate specific complaints we cannot share with them—although it would be very productive—the product of our investigations because we are legally prohibited … So, the information for which I would like more flexibility—and I think the sister agencies are in agreement with that—would be information that we collect in the course of our work.” Finally, the Commissioner recommended that he be given more powers that would apply to the communications industry. [CARTT]

CA – Democratic Institutions Minister Rejects Call to Subject Political Parties to Privacy Laws

The federal government is rejecting opposition calls urging it to accept amendments to the electoral reform bill to subject political parties to federal privacy laws. Democratic Institutions Minister Karina Gould [here] appeared Monday as the last scheduled witness [here, watch here] before the Procedure and House Affairs Committee [here] dives into a marathon effort this week to review and vote on more than 300 amendments to Bill C-76, which makes changes to the Canada Elections Act. While Ms. Gould did not state categorically that she opposes such a change to the bill, it was strongly implied in her comments. “I would like to see a broader study of privacy and political parties. I think that it’s something that is really important,” she said. “I think it does require a deeper dive.” Both the federal Privacy Commissioner [here, PR here] and the head of Elections Canada [here] have called for privacy laws to apply to political parties. The Commons access to information, privacy and ethics committee [here] made an all-party recommendation [here & 56 pg PDF here] in June that extending privacy laws to political parties was “urgently” required. [The Globe and Mail | MPs begin deliberating hundreds of amendments to key elections bill ] See also: Ireland’s Data Protection Commission has published guidance on elections and canvassing activities.

CA – Nova Scotia Accepts Some of Ombudsman’s Recommendations Over Minister’s Use of Private Email

Nova Scotia’s Health Department has accepted five of the six recommendations made by the province’s information and privacy commissioner [here] in a report [Review Report 10-05] that blasted the government for violating its own Freedom of Information and Protection of Privacy (FOIPOP) Act after it failed to assist then-Global News reporter Marieke Walsh in her multiple FOIPOP requests for emails sent by Leo Glavine during his time as Minister of Health and Wellness. It also criticized the office of Premier Stephen McNeil for appearing to interfere in Commissioner Tully’s attempts to interview Glavine’s former executive assistant. A letter filed with the Office of the Information and Privacy Commissioner by the health department and the Department of Internal Services reads in part: “The Departments appreciate the opportunity to clarify the efforts taken in this case and, through the response to the recommendations, demonstrate its continued commitment to transparency.” The province says it agrees with the recommendation that emails sent to or from personal email accounts that reside on government servers are within the custody of the province, but disputes that this means staff should be able to request a minister search his personal email for records relating to government business. [Global News]


US – U.S. to Help Define New Int’l Standard for Consumer Privacy by Design

A coalition of U.S. tech companies and government agencies is joining forces with 11 other countries to develop consumer privacy-by-design international standards as part of ISO Project Committee 317. The U.S. will work with the U.K., China, Canada and other countries to create the global standard and will be represented by its Technical Advisory Group. OASIS See also: the National Telecommunications and Information Administration has extended its comment period for feedback on an approach to consumer privacy. The deadline for comment is now extended to Nov. 9. More | The European Data Protection Supervisor published an opinion on a recent legislative package, “A New Deal for Consumers,” Giovanni Buttarelli said the EU needs to adopt a big-picture approach to addressing harms and called for cooperation between consumer law and data protection rules. Europa.eu

WW – Apple Lets You Download All Your Data

Apple fulfilled its promise to offer a data download service for its users in the U.S., and now you can find out what Apple’s got on you with a few simple clicks [Apple privacy portal here – Apple ID required]. Here’s how you take a peek at what Apple retains about you: 1) Go to the Apple privacy portal and sign in. You’ll have to enter an authentication code if you’ve enabled 2FA. If you don’t have two-factor authentication, you should; 2) Once you’re in, you’ll see a few options and you want to click “Obtain a copy of your data.” You can choose which services you want to request the data from, but you might as well just grab it all; and 3) It’s possible that you might have to do some extra verification and answer some questions, but mostly you’ll just have to wait. It can take up to seven days for the information to be compiled and sent in a zip file to your email address. Along with the EU and the U.S., the tool should now be available in Australia, Canada, Iceland, Liechtenstein, New Zealand, Norway, and Switzerland. Otherwise, you can submit a form request for your data here. Apple uses encryption to anonymize your data for its own product analysis. What you’re requesting is the data that can specifically be tied to your account and device. The new portal accompanies some routine changes to Apple’s Privacy Policy that get everything in line with the improvements to iOS 12 and macOS Mojave. It’s a good policy and worth perusing if you want to see how it should be done. Just always remember Apple is not your friend. [GIZMODO | Apple Launches Portal for U.S. Users to Download Their Data | How to download your data from Apple | Apple enables data downloads for US customers]

WW – Apple Launches Privacy Website that Lets You Find All the Data the Company Has On You

Apple is moving forward several privacy upgrades, including launching a portal that allows customers to search and see what kind of data the company has kept on them. The privacy portal was already tested in the European Union in May, coinciding with the EU’s launch of restrictive privacy legislation called the General Data Protection Regulation (GDPR). The information collected may include data such as calendar entries, photos, reminders, documents, website bookmarks, App Store purchases or support history of repairs to your devices, among other items. The search function, which provides customers a report on their tracked data, fits into a broader narrative as Apple seeks to differentiate itself as a company that makes its money from selling hardware, rather than targeted ads based on the data of its customers. In addition to the search portal, Apple has launched several enhanced privacy initiatives with its new website and new iOS 12 operating system for iPhones and iPads. The company is touting its “Intelligent Tracking Prevention” technology, essentially a way to stop the kind of data collection that causes consumers to see ads for products related to their recent purchases or web searches. Apple has also made changes standardizing certain settings to prevent so-called “machine fingerprinting” or “browser fingerprinting,” a way that a person’s individual device can be identified using its unique settings and preferences, like special fonts, even if the customer has blocked other forms of data tracking. There are future plans for privacy as well, according to the company, including end-to-end encryption for its Group FaceTime video chat product, which will launch soon and will allow up to 32 people to join a group conversation. Encryption will also protect the new “Screentime” feature, so users will be able to keep information about how often they use their devices private.
[CNBC] See also: Apple says ‘dangerous’ Australian encryption laws put ‘everyone at risk’
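Browser fingerprinting works because the combination of many individually innocuous settings is nearly unique per device. A hedged sketch of the idea (the attributes below are invented; real trackers collect far more signals):

```python
# Minimal illustration of browser/device fingerprinting: hash the
# combination of observable settings. Attribute values are invented.

import hashlib

def fingerprint(attributes: dict) -> str:
    """Stable hash of a device's observable settings."""
    canonical = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

device = {
    "user_agent": "Mozilla/5.0 (Macintosh; ...)",
    "screen": "2560x1440",
    "timezone": "America/Toronto",
    "installed_fonts": "Helvetica Neue;Avenir;Menlo",
}

# The same combination yields the same ID on every visit -- no cookie
# required -- which is why Apple's countermeasure is to standardize
# settings so that many devices look alike.
print(fingerprint(device))
assert fingerprint(device) == fingerprint(dict(device))
assert fingerprint({**device, "screen": "1920x1080"}) != fingerprint(device)
```

Blocking cookies alone does not defeat this, which is why Apple’s approach targets the distinguishing attributes themselves.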

WW – Apple Introduces Privacy Portal to Give Users Access to Their Data

Apple rolled out several privacy upgrades, including a privacy portal that will allow customers to understand what personal data the company has stored. Apple also introduced enhanced privacy initiatives, including the new iOS 12 operating system for iPhones and iPads and Intelligent Tracking Prevention technology. The company also announced plans to add encryption to its group FaceTime video chat and the new Screentime feature. Apple CEO Tim Cook is scheduled to keynote this year’s International Conference of Data Protection and Privacy Commissioners, which takes place next week in Brussels. CNBC

WW – Google, Mozilla Each Roll Out New Privacy Features

Google announced it has been working on a new tool that will allow users to understand what data the company collects and the options available to control it. The new feature, called Your Data, will allow users to better understand why and what Google collects. To start, Google will introduce privacy controls within its search, allowing users to directly review and delete the activity log. Meanwhile, Mozilla is experimenting with a new privacy offering. While the company generates money through search-ad deals, Mozilla is offering a virtual private network service to users who want more privacy, offsetting lost revenue with a $10 monthly fee. (Registration may be required to access this story.) WIRED

US – Cook Endorses US Federal Privacy Law at ICDPPC

While the theme of this year’s 40th annual International Conference of Data Protection and Privacy Commissioners may be “Debating Ethics,” Apple CEO Tim Cook centered his keynote address firmly around privacy law: “We at Apple are in full support of a comprehensive federal privacy law in the United States.” It was part of a pointed and definitive endorsement of privacy by the world’s largest company, which drew a round of applause from the collected data protection authorities and observers. Apple CEO and long-time data privacy advocate Tim Cook has made an impassioned speech calling for new digital privacy laws in the US. At a privacy conference in Brussels, Cook said that modern technology has resulted in a “data-industrial complex” where personal information is “weaponized against us with military efficiency,” and in a way that doesn’t just affect individuals but whole sections of society. “Platforms and algorithms that promised to improve our lives can actually magnify our worst human tendencies,” said Cook. “Rogue actors and even governments have taken advantage of user trust to deepen divisions, incite violence, and even undermine our shared sense of what is true and what is false. This crisis is real. It is not imagined, or exaggerated, or crazy.” Cook praised Europe’s “successful implementation” of privacy law GDPR, and said that “It is time for the rest of the world … to follow your lead. We at Apple are in full support of a comprehensive federal privacy law in the United States.” He outlined four key areas that he believes should be turned into legislation: the right to have personal data minimized; the right for users to know what data is collected on them; the right to access that data; and the right for that data to be kept securely. 
“Technology’s potential is and always must be rooted in the faith people have in it.” He then followed up his speech with a tweet that asked, “It all boils down to a fundamental question: What kind of world do we want to live in?” [Source]


CA – Cannabis IQ: Everything You Need To Know About Pot and the Border

Recreational marijuana is now legal in Canada, but what does that mean for our relationship with our closest neighbour and biggest trading partner? Issues linked to the Canada-United States border were top of mind for many officials as Canada prepared for legalization. How would it affect screening at the crossings? What about pot tourists coming here from the states? And how would American officials react to Canadians who admitted they’d consumed the drug in the past? In most of the U.S., the recreational use and possession of marijuana remains illegal. Here’s a rundown of everything Canadians need to know about marijuana and the border, drawn from nearly two years of in-depth reporting by Global News. Border Security Minister Bill Blair, who until recently served as the government’s point-person on pot, advised Canadians to always be truthful with border guards, even if it means being turned away and banned for life. But that’s “dangerous” advice, according to U.S. lawyer Len Saunders. He recommended that Canadians refuse to answer questions about past marijuana use. That will probably lead to them being turned back to Canada on that one occasion, but won’t lead to more serious consequences. There are also concerns surrounding what could happen if consumer data (credit card purchase records, online ordering to a home address, etc.) ends up on servers in the United States. There’s little to stop that data from making its way to U.S. border officials, privacy experts say, which could lead to some uncomfortable questions at the border crossing. In fact, how a credit card marijuana purchase will appear on your bank statement could put you in an impossible position — admit to marijuana use and be banned for that, or deny it and be banned for lying. And, as always, trying to bring the drug over either side of the border is a big no-no. 
[Global News] See also: Think about your privacy before you purchase pot: federal watchdog | Can we Implement Random Cannabis Drug Testing? – (5 pg PDF here) | Cannabis Is Legal: Top Tips for Employers | Marijuana in the workplace: What your boss can and can’t do | Understanding Cannabis Rules for Employees who Travel to Canada or the United States for Business – (5 pg PDF here)


US – BTA & PCSP Introduce the Educator Toolkit for Teacher and Student Privacy

Schools provide a rich opportunity to trade educational programs and assistance for troves of data on students (and teachers) who may not even be aware that Big Data is gathering data points by the bushel from their simplest activities. Industry-leading Summit Learning, a provider of computer-run education programs, has admitted that it shares the data it gathers with 18 “partners.” The practice is not new; generations of test takers filled out the personal information pages with the PSAT [wiki]; in 2013, the College Board and ACT were sued over the practice of selling student information [see 12 pg PDF]. One of the items that sent West Virginia teachers out on strike last year was a rule that all teachers would carry a device [Humana Go365 App here] that monitored their movement and activity [the requirement was abolished in April 2018 – here]. Last year saw incidents of a hacker group holding school district data hostage, and backing up their demands by sending threatening emails to parents. In response to all of this, the Badass Teachers Association [here – really, that’s what they’re called] and the Parent Coalition for Student Privacy [here] have issued the Educator Toolkit for Teacher and Student Privacy [see PR here & 55 pg PDF here]. There is a full chapter laying out the pertinent privacy laws as they currently stand (if you think you know FERPA [here & wiki here], you may be unaware of the loopholes that have been added over the past decade). Is that data wall in your child’s classroom legal? Probably not. If the teacher wants to use a free app to monitor student behavior and communicate with families, is that okay? The answer turns out to be complicated. And what are the rules for that survey that the school just handed out to all students? The laws have become complex, and most parents and teachers did not go to law school. The toolkit provides some simple guidance.
The toolkit offers ten teacher rules for using social media (or not) [and] also provides practical tips for protecting privacy, and for advocating for better protections for all. An appendix shows the results of a survey given to teachers about technology in their schools. Almost half of those responding said their school uses an online app or program to track student behavior. And well over half reported that their school requires them to use certain computer-based programs and materials. The toolkit certainly doesn’t have all the answers, but if you are a teacher or a parent, particularly one who’s just starting to realize there’s something to worry about in our new data era, this is a good place to start. The toolkit was supported by grants from the Rose Foundation for Communities and the Environment, the American Federation of Teachers, and the NEA Foundation. [Forbes | Stronger Data-Privacy Protections for Students and Teachers Needed, Report Argues | Teachers And Privacy And Telling Tales Out Of School | Stephen Miller’s Old Teacher Suspended for Calling Him a ‘Loner’ Who Ate Glue]

EU – Irish DPC Launches Data Privacy Education Pilot Program for Children

The Irish Data Protection Commission has launched pilot data privacy education modules in three schools within the country. The modules are targeted toward three different age groups: 9 to 10, 14 to 15, and 16 and older. Data Protection Commissioner Helen Dixon said she plans to host a public consultation later this year to discuss the benefits of data privacy education in schools and that initial feedback to the pilot program could inform a national lesson plan. “We want to see if children understand the concept of personal data; how they engage with online services, the notices they are given and what they understand the risks to be; whether they understand the rights they have and can they exercise them by themselves,” Dixon said. Siliconrepublic.com


WW – The Threat of Quantum Computers for the Internet

An article for The Economist examines how quantum computers will impact the internet and when they will become available. While some venture to guess such a computer will be available sometime between 2030 and 2040, the National Institute of Standards and Technology has already begun a competition to devise quantum-resistant proposals, with conclusions expected in 2024. The article states, “All this means that quantum-proofing the internet is shaping up to be an expensive, protracted and probably incomplete job.” (Registration may be required to access this story.) The Economist

EU Developments

EU – Jourová, Ross Release Joint Statement on Privacy Shield Review

EU Justice Commissioner Věra Jourová and U.S. Secretary of Commerce Wilbur Ross released a joint statement on the second annual review of the EU-U.S. Privacy Shield agreement. The EU and U.S. noted three new members have been confirmed to the Privacy and Civil Liberties Oversight Board and Manisha Singh’s designation as Privacy Shield ombudsperson. “In the wake of recent privacy incidents involving the personal data of Europeans and Americans, the U.S. and EU reaffirm the need for strong privacy enforcement to protect our citizens and ensure trust in the digital economy,” the statement reads. “The Commerce Department will revoke the certification of companies that do not comply with Privacy Shield’s vigorous data protection requirements.” The European Commission is expected to publish a report on Privacy Shield by the end of the year. Europa.eu Coverage: US taking privacy shield deal seriously, EU officials say | EU-U.S Privacy Shield: U.S making serious efforts to comply with data pact | Commerce chief Ross, after review with EU, says U.S. to focus on appointment of privacy ‘ombudsperson’ (subscribers only) | FTC’s Chopra Seeks Privacy Shield Probes in Data Enforcement | Privacy Shield review: Prepare for the worst | U.S. making serious efforts to comply with EU data rules: EU officials | Ding ding! Round Two: Second annual review for transatlantic data flow deal Privacy Shield | Outcome of Privacy Shield Review Uncertain, Despite U.S. Steps Toward Compliance | Europe and US lock horns on transatlantic privacy

EU CNIL Publishes Initial Analysis on Blockchain and GDPR

Many questions surround blockchain’s [wiki & beginners guide] compatibility with the EU General Data Protection Regulation (GDPR). The French Data Protection Supervisory Authority (the CNIL) has recently published its initial thoughts on this topic [PDF in French], providing some responses and practical recommendations on how the usage of a blockchain may be compatible with the GDPR and, more generally, data protection law, taking into account the “constraints” imposed by such technology. The guidance covers the following four topics: 1) What solutions for a responsible use of blockchain involving personal data?; 2) How to minimize risks for data subjects when the processing of their personal data relies on a blockchain?; 3) How to ensure the effective exercise of the data subjects’ rights?; and 4) What are the security requirements? Although this is only a preliminary analysis, it is certainly interesting to know the CNIL’s position on this topic, and to see that its approach is rather pragmatic and takes into account the constraints imposed by blockchain technology. The CNIL will continue its reflection on blockchain and is likely to publish additional guidelines in the future. The CNIL has already announced that it will work on this topic with the other authorities in order to adopt a solid and common approach. It will also liaise with other national regulators such as the AMF in order to lay the foundation of an inter-regulation, which will allow the different stakeholders to have a better understanding of the various requirements applicable to blockchain. [Privacy Matters Blog (DLA Piper) | CNIL Publishes Initial Assessment on Blockchain and GDPR – Privacy & Information Security Law Blog (Hunton Andrews Kurth)]

WW – Denham Named Chair of ICDPPC

U.K. Information Commissioner Elizabeth Denham has been named the new chair of the International Conference of Data Protection and Privacy Commissioners. Denham said the current age of borderless data flows has made this a critical time for global unification on data protection and privacy. “My vision for the ICDPPC is to lead a decade of global data protection,” Denham said. “A decade when data protection and privacy by design become mainstream aspects of the digital economy, safeguarding democratic governance and ensuring protection for society’s vulnerable groups, including young people.” ICO.uk


CA – Nova Scotia Set to Replace Compromised FOIPOP Website

Nova Scotia is set to replace the program behind a data breach that exposed personal information to the general public. The Freedom of Information and Protection of Privacy Portal (FOIPOP) website, which was originally breached between March 3 and March 5, was taken down on April 5 when officials with the Department of Internal Services — which is responsible for the FOIPOP website — were first informed by a provincial employee that it was possible to inadvertently access documents through the portal. More than 7,000 documents were inappropriately downloaded as a result of the breach, and 369 of the documents contained “highly sensitive” personal information such as social insurance numbers, birth dates and personal addresses. While a portion of Nova Scotia’s FOIPOP system was brought back online on Sept. 5 — 152 days after being taken offline — it only included the ability to download previously completed FOI requests [see PR here & coverage here]. The provincial government says it expects to issue a Request for Procurement (RFP) in the first quarter of 2019 for an AMANDA 7 [see here & here] replacement. Internal emails between the government and Unisys, the company tasked with maintaining the government’s online services, indicated that extensive changes were needed to fix the core code of AMANDA 7 and remove the possibility of another security breach. [Global News]

Health / Medical

US – OCR to Propose Rulemaking on ‘Good Faith’ Disclosure for Patient Care

The U.S. Department of Health & Human Services’ Office for Civil Rights is drafting a notice of proposed rulemaking on “good faith” disclosures of patient data by health care providers in patient emergencies. Delivering the keynote at the Safeguarding Health Information: Building Assurance Through HIPAA Security conference, OCR Director Roger Severino explained such disclosures could be made without patient consent. Meanwhile, the Centers for Medicare & Medicaid Services announced that Healthcare.gov suffered a data breach that may have impacted as many as 75,000 people who were receiving help with getting health insurance coverage. HealthITSecurity | Aetna has reached settlements with several state attorneys general for disclosing the HIV statuses of 12,000 patients in violation of HIPAA. HealthITSecurity

US – Health Care CISO: Start Protecting Patient Privacy at Home

Health care professionals should think beyond merely protecting the organization and start protecting patients’ privacy at home. “At some point, I’m going to have [to] start thinking about how to protect patients in their home,” Christiana Care Health System Chief Information Security Officer Anahi Santiago said. “My information security program is not going to just be about the data center or the cloud but an extension into the patients’ homes. So, we can be responsible for protecting them wherever they use technology.” She added, “The patients are going to be driving the decision when it comes to their care, how they communicate, and the technology they want to use.” [HealthITSecurity]

CA – Former Employee Snooped on Health Records of More than 1,400 People

Alberta may need new ways of preventing information in electronic health records from falling into the wrong hands, the province’s privacy commissioner says in a new report written by Chris Stinner, a manager of special projects and investigations from the Office of the Information and Privacy Commissioner [PR & 26 pg report]. The report concluded that Alberta Health Services (AHS) failed to ensure privacy training and proper oversight of a former typist and medical secretary at a psychiatric hospital who improperly looked at the medical records of 1,418 patients over 12 years. “The findings from this investigation suggest it is well past time to consider whether the current approach to safeguarding health information made available through Netcare, as implemented by AHS in co-operation with Alberta Health, is adequate,” information and privacy commissioner Jill Clayton wrote in a preamble to the report. Clayton is now considering whether she should initiate a wider review of Alberta Netcare, an electronic medical record system that gives 48,946 health-care workers access to diagnoses, treatment, and medical images for patients’ physical and mental health. In August 2015, AHS terminated the Alberta Hospital employee who broke the privacy rules. However, Stinner’s report said her coworkers reported her suspected misuse of the Netcare system to AHS managers four times in the 17 months before she lost her job. The first three times, managers neglected to check Netcare data logs to see how the worker was using the system, Stinner said. Stinner recommended AHS review privacy training for all staff, review how it investigates privacy breaches, and revisit how it audits employees’ use of Netcare. The law requires AHS to monitor how employees use the electronic records. 
[Edmonton Journal | AHS blamed for breach that saw thousands of patient files improperly accessed | AHS failed to protect health information, privacy commissioner finds | Alberta Health Services rebuked for failing to protect health records]

US – FDA Issues Draft Guidance for Cybersecurity Management in Medical Devices

A draft of updated premarket guidance from the U.S. Food and Drug Administration shows that manufacturers should prepare a “cybersecurity bill of materials” before marketing medical devices. The guidance would require manufacturers to produce a list of the components that could be susceptible to vulnerabilities. FDA Commissioner Scott Gottlieb said, “Because of the rapidly evolving nature of cyber threats, we’re updating our premarket guidance to make sure it reflects the current threat landscape so that manufacturers can be in the best position to proactively address cybersecurity concerns when they are designing and developing their devices.” GovInfoSecurity

AU – My Health Record Privacy Amendments ‘Woefully Inadequate’: Labor

After carefully reading and considering 31 public submissions on the My Health Record privacy amendments, as well as three further documents from community health organisations, the Senate Community Affairs Legislation Committee’s report on its inquiry has made just one solitary recommendation. “The committee recommends the Bill be passed,” it reads. The committee didn’t come up with a single suggested improvement to the hastily written My Health Records Amendment (Strengthening Privacy) Bill 2018, which is intended to allay the privacy concerns which have led 900,000 Australians to opt out of the centralised digital health records system. The Bill only addresses the two most prominent concerns, however. If passed, it would grant individuals the ability to delete their records completely rather than merely making them inactive, and tighten the restrictions on law enforcement agencies being able to access an individual’s health records. While the Labor senators on the committee supported the recommendation to pass the Bill, they’ve also called it “woefully inadequate”, writing that “the inquiry has revealed a range of serious flaws in the current legislation that are not addressed by the government’s Bill.” The Labor senators said that they would move further amendments, including to assure that My Health Record can never be privatised or commercialised, or used by private health insurers; that employees’ right to privacy is protected in the context of employer-directed healthcare; and that vulnerable children and parents such as those fleeing domestic violence are protected. Originally scheduled to have ended in mid-October, the opt-out period for My Health Record has been extended to November 15. Labor has called for it to be extended indefinitely. The comprehensive Senate inquiry into the My Health Record system is now due to report this Wednesday, October 17. ZDNet | With one month left to opt out of My Health Record privacy concerns remain

Horror Stories

WW – Facebook Says Hackers Accessed Sensitive PII on 29 Million Users

Last month, Facebook disclosed a massive security vulnerability that it claimed affected some 50 million login tokens. Facebook now believes the number of accounts impacted to be closer to 30 million [blog notice]. For 400,000 of the accounts, which the attackers used to seed the process of gathering login tokens, personal information, such as “posts on their timelines, their lists of friends, Groups they are members of, and the names of recent Messenger conversations” and, in one instance, actual message content, was compromised. Of the 30 million ensnared in the attack, Facebook believes that for around half, names and contact information—meaning phone numbers, email addresses, or both—were visible to the attackers; 14 million of that pool had that same information scraped as well as myriad other personal details, which Facebook believes could contain any of the following: “[U]sername, gender, locale/language, relationship status, religion, hometown, self-reported current city, birthdate, device types used to access Facebook, education, work, the last 10 places they checked into or were tagged in, website, people or Pages they follow, and the 15 most recent searches” Facebook believes only 1 million of the total compromised accounts had no personal information accessed whatsoever. [Gizmodo | Facebook hackers accessed more private information than previously revealed | Chilling new details reveal intimate personal data stolen by Facebook hackers | How Facebook Hackers Compromised 30 Million Accounts | How to find out if yours was one of the unlucky hacked Facebook accounts]

US – Yahoo Agrees to Pay $85M to Settle Data Breach Lawsuits

Yahoo has agreed to pay $85 million to settle class-action lawsuits related to its 2013 and 2014 data breaches. Yahoo users will receive $50 million, and $35 million will go toward legal fees. U.S. and Israeli Yahoo users who had an account between Jan. 1, 2012, and Dec. 31, 2016, are eligible for the settlement. Small-business owners with Yahoo accounts can also claim money from proceedings. “We are pleased that we were able to reach a settlement with Yahoo, which would provide relief to impacted users and ensure that Yahoo improves its security practices going forward,” Morgan & Morgan Lead Counsel John Yanchunis said in a statement. U.S. District Court Judge Lucy Koh is expected to rule on the settlement Nov. 29. The Mercury News

US – Millions of Phone Numbers, Strategy Documents Exposed by Data Leak Affecting Tea Party Super PAC

Internal documents belonging to the Tea Party Patriots Citizens Fund, a Republican super PAC, were publicly exposed online as a result of a misconfigured database, including material involving the 2016 U.S. presidential race and call lists containing the names and phone numbers of more than a half-million people, a cybersecurity firm said. Upguard, a Silicon Valley-based cybersecurity firm that made the discovery, said its researchers came across a publicly available database in August containing over 2 gigabytes of files belonging to the Tea Party Patriots Citizens Fund, or TPPCF, a federal super PAC that has raised and spent millions for conservative causes since its founding in 2013. Among the trove of exposed files were call lists containing the names and phone numbers of more than 527,000 individuals, as well as “strategy documents, call scripts, marketing assets and other files revealing a focused effort to politically mobilize U.S. voters,” according to Upguard. The files were found on a misconfigured Amazon Web Services S3 “bucket,” or cloud server, and were eventually made private after being brought to the super PAC’s attention. [The Washington Times]
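The “misconfigured bucket” in stories like this one is typically a bucket whose policy grants access to the wildcard principal, i.e. to everyone on the internet. As a rough illustration only (this is not Upguard’s actual tooling, and the function and policy names below are hypothetical), a short sketch that flags world-readable statements in an S3-style bucket policy document:

```python
import json

def public_statements(policy_json: str):
    """Return the Allow statements in a bucket policy that apply to the
    wildcard principal "*" -- the misconfiguration that makes a bucket
    readable by anyone."""
    policy = json.loads(policy_json)
    flagged = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        # A wildcard principal may appear as "*" or as {"AWS": "*"}.
        is_public = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        if is_public:
            flagged.append(stmt)
    return flagged

# A hypothetical policy resembling the kind of misconfiguration described above:
example = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
})

print(len(public_statements(example)))  # one world-readable grant found
```

Once a statement like this is attached, no credentials are needed to list or download the bucket’s contents, which is why researchers can stumble across such data with ordinary scanning.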

WW – Dating App Leaks Users’ Data

The entire user database of Donald Daters, a new online dating app for supporters of U.S. President Donald Trump, has leaked online. Touting it wants to help “make America date again,” the app’s database, which included usernames, profile pictures, device type, and private messages, as well as access tokens, was accessible from a public data repository. After being alerted to the issue, Emily Moreno, founder of the app and former aide to Sen. Marco Rubio, R-Fla., said, “We have taken swift and decisive action to remedy the mistake and make all possible efforts to prevent this from happening again.” TechCrunch

Identity Issues

CA – No Privacy Concerns with ID Scanners at P.E.I. Cannabis Stores, Says Government

The P.E.I. government is defending its use of ID scanners at its new cannabis retail stores, insisting they’re not being used to collect private information from customers and are an important tool to flag fake IDs. P.E.I. Information and Privacy Commissioner Karen Rose said the commission had received information from a member of the public “concerned about the collection of personal information by the recently opened cannabis outlets.” Rose said she will ask the liquor commission what personal information is being collected by these outlets, and their authority for collecting such information. She will also inquire “about their compliance with the FOIPP Act, including the security measures which are in place to protect personal information.” The P.E.I. Cannabis Management Corporation said that it does not retain any data and that the ID scanners are not connected to the internet and are essentially standalone devices. The scanner is an industry standard used in other jurisdictions to validate a wide variety of national and international identification cards. The practice of confirming valid ID cards for everyone, even people who appear to be much older than the legal age for purchasing cannabis, will continue as well. [CBC News] See also: Privacy commissioner investigating personal data collection at cannabis stores]

CA – Update: PEI Pulls Electronic ID Scanners from Cannabis Stores

Cannabis stores across the province of Prince Edward Island will no longer use electronic ID scanners. The decision comes after some customers questioned what information was being collected, and how it was being used. The concerns prompted an ongoing investigation from P.E.I.’s information and privacy commissioner. Cannabis Management Corporation said the scanners were meant to safeguard against underage purchases and fake IDs. “They were not meant to retain or track any data, but an IT specialist examined the scanners today and found some data was being kept for 24 hours inside the device,” it said. “This data was immediately wiped and settings were changed so as not to keep data in the future.” Finance Minister Heath MacDonald said the scanners will be gone for good, unless there’s some other need or reason to bring them back. [CBC News]

Internet / WWW

WW – Gartner Picks Digital Ethics and Privacy as a Strategic Trend for 2019

Gartner has put businesses on watch that, as well as dabbling in the usual crop of nascent technologies, organizations need to be thinking about wider impacts next year — on both individuals and society. Digital ethics and privacy has been named as one of Gartner’s top ten strategic technology trends for 2019 [PR here, blog post here & infographic here]. It writes: “Any discussion on privacy must be grounded in the broader topic of digital ethics and the trust of your customers, constituents and employees. While privacy and security are foundational components in building trust, trust is actually about more than just these components. Trust is the acceptance of the truth of a statement without evidence or investigation. Ultimately an organisation’s position on privacy must be driven by its broader position on ethics and trust. Shifting from privacy to ethics moves the conversation beyond ‘are we compliant’ toward ‘are we doing the right thing’.” [TechCrunch]

Law Enforcement

US – Police Officers Rely on Social Media to Conduct Covert Investigations

Kashmir Hill reports on the growing use of undercover police officers to comb through the social media profiles of unsuspecting users to discover illegal activity. Hill writes that little has been done to curtail law enforcement’s use of undercover online surveillance. A Freedom of Information Act request to more than 50 police departments across the U.S. showed that while many have social media policies, most fail to address covert investigations. Hill writes, “the unregulated nature of undercover social media police work leads to disturbing outcomes,” adding that while it may be a useful tool, officers “shouldn’t be secretly friending people for indefinite amounts of time, keeping them under permanent suspicion.”
The Root

US – Law Enforcement Robots Begin Patrolling New York Neighborhoods

Law enforcement robots have started patrolling several neighborhoods in New York, as well as at LaGuardia Airport. Each robot contains five cameras, one of which uses thermal-imaging technology. The data gathered by the patrol is communicated to an internet-based portal accessed by local security and law enforcement. The robots can also observe pedestrians on sidewalks, record license-plate numbers, and detect cellphone serial numbers. Knightscope CEO William Santana Li said, “This is a crazy combination of artificial intelligence, self-driving autonomous technology, robotics, and analytics in something that’s actually useful for society.” CBS Local

CA – K9 Drug Units Must Adapt to New Pot Laws

As legalization looms, K9 units across the country are facing a problem: their dogs are outdated. Drug-sniffing dogs undergo training from a very young age to be able to detect a wide variety of drugs, including cannabis, which will be legal in Canada on Oct. 17. And while some have been forced into early retirement, many will remain in their jobs, raising questions for legal experts concerned that law-abiding citizens might be stopped and searched by police based on an alert for a perfectly legal substance. Some organizations said they’ll be totally unaffected by legalization. Since crossing the border with cannabis will remain illegal without a permit, the Canadian Border Services Agency said all their drug-sniffing dogs will remain in the same role. When cannabis was illegal, police had reasonable grounds to search a person if a dog smelled cannabis on them. Now, Lewin said, though cannabis-related offences will still exist, the waters are muddied. Since dogs don’t distinguish their alerts based on specific drugs, police won’t know whether a dog is alerting them to the presence of fentanyl or a joint. Toronto cannabis lawyer Harrison Jordan said he expects to see court challenges where dogs alert their handler for the presence of a drug that turns out to be legal cannabis, and the cop finds a different illegal item, like a handgun — will that charge hold up in court, since the initial search was for a legal substance? “It really depends on the reasonable grounds that they have,” Jordan said. Toronto Star


US – Facebook, Google Hit With Lawsuits for ‘Secret’ Location Tracking

Facebook and Google have both been hit with lawsuits claiming that the Silicon Valley giants secretly track their users’ locations against their will and use the information to pad their advertising businesses. The class action complaint against Facebook, which was filed by Brett Heeger in San Francisco federal court, said the social network tracks its users even after they’ve opted out of its “Location History” feature. “Facebook secretly tracks, logs, and stores location data for all of its users–including those who have sought to limit the information about their locations that Facebook may store in its servers by choosing to turn Location History off,” the suit said. “Because Facebook misleads users and engages in this deceptive practice, collecting and storing private location data against users’ expressed choice, Plaintiff brings this class action on behalf of himself and similarly situated Facebook users.” Facebook pushed back against the lawsuit, saying its location tracking policy has always been transparent. The lawsuit follows a similar complaint against Google, which was filed on Oct. 12 in San Francisco federal court. The suit claims that Google “intentionally provided inaccurate instructions” for its users to turn off its own “Location History” feature. “Google explicitly represented that its users could prevent Google from tracking their location data by disabling a feature called ‘Location History’ on their devices. Google stated: ‘With Location History off, the places you go are no longer stored.’ This statement is false,” the lawsuit claimed. “Turning off the ‘Location History’ setting merely stops Google from adding new locations to the ‘timeline’ accessible by users. 
In secret, Google was still tracking, storing, and monetizing all the same information.” Instead, users have to navigate a labyrinth to reach the correct “Web & Activity” page to turn off location tracking — a page “Google’s instructions intentionally omit all references to,” according to the class action complaint. The suit points to an Aug. 13 report from the Associated Press that brought Google’s tracking policies into question. Following the AP’s report, Google updated its location tracking policy to “make it more consistent and clear,” the company told TheWrap in August. [The Wrap]

Online Privacy

EU – Twitter Faces Investigation by Privacy Watchdog Over User Tracking

Ireland’s Data Protection Commission has launched an investigation into Twitter after the company refused a request from privacy researcher Michael Veale inquiring into its collection of location data from users. The social media company is alleged to have turned down the request for information about the data Twitter collects from shortened Twitter links. When users tweet a link to a web address on the platform, Twitter applies its own technology to shorten the URL into a t.co format. Mr Veale believes the company could be collecting information such as timestamps and the devices being used. It could be possible to figure out a person’s location with this data. Ireland’s privacy watchdog received the complaint in August. Twitter’s circumvention of Mr Veale’s request to access information could count as a violation of the European Union’s General Data Protection Regulation – a new privacy framework that came into effect in May. Twitter says it uses the t.co domain to allow the sharing of long URLs without exceeding the 280 character limit of a tweet, to measure information such as how many times a link has been clicked and “as a quality signal for surfacing relevant, interesting tweets.” It is also part of the company’s service to protect users from malicious sites, with the URL-shortening application used to assess the link against a list of potentially dangerous web pages. The Telegraph
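The privacy question turns on what any URL shortener necessarily observes: every click passes through the shortener’s server before the reader reaches the destination, so the operator is positioned to record per-click metadata. A minimal sketch of that mechanism (all names here are hypothetical illustrations, not Twitter’s implementation):

```python
import time
from dataclasses import dataclass

# Hypothetical short-code table; a real shortener stores this server-side.
SHORT_LINKS = {"abc123": "https://example.com/article"}

@dataclass
class ClickRecord:
    """Metadata a shortener can capture on each click."""
    short_code: str
    destination: str
    timestamp: float   # when the click happened
    user_agent: str    # reveals the device/browser in use

CLICK_LOG: list = []

def resolve(short_code: str, user_agent: str) -> str:
    """Look up the destination for a short link, logging the click
    before the redirect is issued."""
    destination = SHORT_LINKS[short_code]
    CLICK_LOG.append(ClickRecord(short_code, destination, time.time(), user_agent))
    return destination

print(resolve("abc123", "Mozilla/5.0"))  # prints the real destination URL
```

In a real deployment the server would also see the client’s IP address, which is what makes coarse location inference of the kind Veale describes plausible.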

Other Jurisdictions

WW – ICDPPC Releases Road Map on Future of Conference

The International Conference of Data Protection and Privacy Commissioners has released the “Resolution on a Roadmap on the Future of the International Conference.” The ICDPPC launched a survey in 2017 on the future size and membership of the conference. After roundtables and public consultations were held, the ICDPPC released the road map to highlight the key trends and demands in order to transform the conference from “an annual meeting to an effective network of privacy and data protection authorities.” The road map is co-authored by the Office of the Privacy Commissioner of Canada and France’s data protection authority, the CNIL. Co-sponsors of the road map include DPAs from Italy, Albania, Argentina, Germany and Poland. PrivacyConference2018

Privacy (US)

US – FTC Looks Back on 20 Years of COPPA

In a blog post marking the occasion, the U.S. Federal Trade Commission looks back at 20 years of the Children’s Online Privacy Protection Act, noting, “Many of the kids the law was originally designed to protect are now parents themselves.” The FTC’s Peder Magee notes the agency “continues to be committed to rigorous COPPA enforcement” and that the law has “responded to developments in technology.” Magee says the FTC also “seeks new ways to ensure verifiable parental consent” and that self-regulation plays an important role in the ecosystem. “So far,” he points out, “the FTC has approved seven Safe Harbor Programs that companies can work with to ensure their practices are up to scratch.” FTC.gov

Privacy Enhancing Technologies (PETs)

WW – Privacy by Design Guidance Book Published

This year’s IAPP Privacy Engineering Section Forum was sold out. As part of a dedicated response to covering the topic, the IAPP released “Strategic Privacy by Design,” a new book by R. Jason Cronk. Written from a practitioner’s perspective, this is the first IAPP book to get into the details of how privacy by design works, with dozens of sample scenarios, workflows, charts, and tables. While this book is not written specifically for the GDPR, it can be used as a process for data protection by design and default, and it is invaluable for building better processes, products, and services that consider privacy as a design requirement. IAPP Store See the Privacy Engineering Program website and Kicking off the NIST Privacy Framework: Workshop #1 – video | commentary: A Framework for Online Privacy and

WW – Tim Berners-Lee Launches New Project on Data Privacy

Tim Berners-Lee, the inventor of the World Wide Web, is trying to figure out how to keep your private information from advertisers’ prying eyes. He teamed up with a group of experts, including folks from MIT, and started Inrupt, a start-up whose open-source project, Solid [here], should achieve that lofty goal. Solid accumulates all your data into what its creators call a “Solid POD,” a repository of all the personal information you want to share with advertisers or apps, with a clear and understandable permission system. You can decide which app gets your data and which do not. Furthermore, when using apps that support Solid (say, your fitness app), you won’t need to enter any data — just allow or disallow access to the Solid POD, and the app will do the rest on its own. While this is helpful, and it’s really cool that it simplifies personal-data management, the truth is that another, much more potent solution already exists. Every day this solution gains traction in the developer community, and many of its features are already being embedded in financial and other institutions worldwide. It’s called distributed ledger technology (DLT). [wiki here] DLT-based apps (also called “dApps”) are superior to Solid for three reasons: 1) dApp data is spread across hundreds, if not thousands, of different nodes (users, servers, etc.); 2) dApps enable users not only to decide with whom to share data, but also to earn value by doing that, thanks to a special utility token system that rewards users with tokens for specific actions; and 3) Since your data on DLT is encrypted, you can share segments of it with advertisers and service providers, while still remaining anonymous if you so choose. My biggest complaint about Solid remains its centralization. DLT circumvents that by providing a robust, nearly unhackable system, where personal data can truly remain hidden and untouched by those without necessary permissions — be they hackers or greedy advertisers. 
I may sound like I’m bashing Berners-Lee’s brainchild, but it’s quite the opposite: I want both to succeed. In fact, I propose a Solid POD based on DLT. That way it would be truly decentralized, would have all the capabilities distributed ledger technology has to offer, and would have the potential to upgrade the internet to 2.0. [MarketWatch]
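The POD-plus-permissions idea described above is easy to picture in code. Below is a minimal, purely illustrative sketch in Python; the `Pod` class and every method on it are invented for this example and are not Solid’s actual API:

```python
class Pod:
    """A toy personal data store with per-app, per-category permissions.

    Hypothetical sketch of the "Solid POD" idea described above;
    this is not the real Solid API.
    """

    def __init__(self):
        self._data = {}    # category -> stored value, e.g. "fitness" -> {...}
        self._grants = {}  # app name -> set of categories it may read

    def store(self, category, value):
        self._data[category] = value

    def grant(self, app, category):
        # The user explicitly shares one category with one app.
        self._grants.setdefault(app, set()).add(category)

    def revoke(self, app, category):
        self._grants.get(app, set()).discard(category)

    def read(self, app, category):
        # An app only ever sees data the user has granted to it.
        if category in self._grants.get(app, set()):
            return self._data.get(category)
        raise PermissionError(f"{app} has no access to {category!r}")


pod = Pod()
pod.store("fitness", {"steps": 9500})
pod.grant("fitness-app", "fitness")
print(pod.read("fitness-app", "fitness"))  # → {'steps': 9500}
```

The key property is in `read`: an app sees only the categories the user has granted, and a `revoke` immediately cuts off access without deleting the data itself.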


CA – StatsCan Survey: One in Five Businesses Hit by Cyberattacks Last Year

More than one in five Canadian companies say they were impacted by a cyberattack last year, with businesses spending $14 billion on cybersecurity — $8 billion on cybersecurity staff and contractors, $4 billion on related software and hardware and $2 billion on other prevention and recovery measures — as they confront greater risks in the digital world, according to a new Statistics Canada survey [see blog post here, report here & Infographic here]. The most common suspected motive was an attempt to steal money or demand a ransom payment, according to the survey. Theft of personal or financial information was less typical — less than one-quarter of the cyberattacks — though it was the most cited reason for investing in cybersecurity, StatCan said. Only 10% of businesses affected by a cyberattack reported it to law enforcement agencies last year, StatCan said. Large businesses — those with 250 or more employees — were more than twice as likely as small ones — between 10 and 49 employees — to be apparent targets, according to the report. It said the attacks resulted in an average of 23 hours of “downtime” per company in 2017. Data for the survey — the first of its kind in Canada — were collected between January and April 2018, with a sample size of 12,597 businesses and a response rate of 86%. [The Toronto Star]

US – Security Leaders Will Need to Protect Patient Privacy at Home

Healthcare security leaders need to think beyond protecting the organization to protecting patient privacy and data security at home in the coming years, observed Christiana Care Health System [here] CISO Anahi Santiago [here]. “At some point, I’m going to have to start thinking about how to protect patients in their home. My information security program is not going to just be about the data center or the cloud but an extension into the patients’ homes, so we can be responsible for protecting them wherever they use technology,” Santiago told a panel at the HIMSS Healthcare Security Forum. “The patients are going to be driving the decision when it comes to their care, how they communicate, and the technology they want to use,” she said. Healthcare information security is a patient safety issue, she stressed. “As we think about the next generation of security, we need to bake security into the fabric of the organization, as opposed to putting it in after the fact,” she added. Santiago said that providers need to automate more of their security tasks to keep up with threats. Organizations should automate menial tasks that take up a lot of time, such as researching phishing attacks. [HealthIT Security]

CA – Three Quarters of Canadian SMBs Don’t Have Patching Policy: Survey

Small and mid-sized Canadian companies still have a long way to go to beef up their defences against cyber attacks, if a newly released Canadian Internet Registration Authority (CIRA) survey of people with responsibility over IT security decisions is representative [see PR here & survey report here]. Of the 500 business owners and employees who manage information technology questioned, 71% said their firm did not have a formal patching policy. In addition, only 54% of small businesses said their firm provides cybersecurity training for employees. CIRA staff noted that 82% of respondents from mid-sized firms (over 250 employees) said their company has a training program. Meanwhile, 78% of respondents were confident in their level of cyber threat preparedness, yet 40% said their firm experienced a cyber attack that staff had to respond to in the last 12 months, and 10% experienced 20 or more attacks. The survey also showed 67% of respondents outsource at least part of their cybersecurity footprint to external vendors. Almost 90% (88%) of respondents were concerned about the prospect of future cyber attacks, and 28% suggested they will add cybersecurity staff in the next year. Among respondents, 24% said no one in their firm has primary responsibility over cyber security. Another 18% said their firm has one person responsible for those functions. [ITWorld Canada]

CA – 38% of Canadian Businesses Unaware of PIPEDA: CIRA

Within its Fall 2018 Cybersecurity Survey, the Canadian Internet Registration Authority revealed 38 percent of Canadian businesses said they are unfamiliar with the Personal Information Protection and Electronic Documents Act. The lack of PIPEDA awareness comes as 59 percent of businesses state they store customers’ personal information, and 40 percent claim they have suffered a cyberattack within the past 12 months. The CIRA survey also found 54% of small businesses provide cybersecurity training to their employees, and 88% of respondents were concerned about future cyberattacks. “Training and awareness are critical to ensuring your business is cyber-secure,” CIRA Chief Security Officer Jacques Latour said in a statement. “No matter how great your IT team is, anyone with a network-connected device can be the weak point that brings your business down.” [MobileSyrup]

WW – Study Expects 247% Increase in Third-Party Attacks Over Next Two Years

Opus, a provider of global compliance and risk management solutions, partnered with research firm ESI ThoughtLab, WSJ Pro Cybersecurity, and other cybersecurity organizations to launch The Cybersecurity Imperative to benchmark the cybersecurity practices and performance of more than 1,300 organizations globally. The study found attacks on and through third-party partners, customers and vendors to be the fastest-growing threat, and predicts such attacks on partners and vendors will grow 284% over the next two years, while attacks through vendors will increase 247%. Opus Vice President of Innovation and Alliances Dov Goldman said that as companies increasingly rely on vendors, they expose themselves to increased cybersecurity risk, adding, “Companies must support digital innovation with the tools and business practices to manage rising information security and privacy risks, especially those from third parties.” [The Associated Press]

US – Convinced a Cyber Attack is Looming, Many Firms Still Don’t Prepare

Many organizations think that experiencing a cyber attack is inevitable, but a majority are not taking adequate steps to protect themselves, according to a report from insurance firm The Travelers Companies Inc. [here] The firm commissioned Hart Research [here] to conduct a national online survey of 1,201 business decision makers in June 2018, and found more than half (52 percent) of respondents think suffering an attack is inevitable [see PR here & 1 pg PDF infographic here]. Despite this, 55 percent have not completed a cyber risk assessment for their businesses; 62 percent have not developed a business continuity plan; 63 percent haven’t completed a cyber risk assessment on vendors who have access to their data; and 50 percent do not purchase cyber insurance. The percentage of respondents who think the current business environment is riskier continues to decrease, according to the report. In 2018 it was 36%, compared with 41% in 2016 and 48% in 2014. Overall, the concern about cyber security risk is second only to medical cost inflation. But it’s the top concern of large businesses and in sectors such as technology, banking and professional services. [Information Management]

US – FTC Publishes New Materials on Cybersecurity for Small Business

The FTC launched new cybersecurity resources for small businesses – you’ll find them at FTC.gov/SmallBusiness. This new national cybersecurity education campaign grew out of discussions we had last year with small business owners across the country about cybersecurity challenges. The campaign is co-branded with the National Institute of Standards and Technology (NIST), the Department of Homeland Security (DHS), and the Small Business Administration (SBA). The new materials include fact sheets, videos and quizzes on these topics:

  • Cybersecurity Basics – [here];
  • Understanding the NIST Cybersecurity Framework – [here];
  • Physical Security – [here];
  • Ransomware – [here];
  • Phishing – [here];
  • Business Email Imposters – [here];
  • Tech Support Scams – [here];
  • Vendor Security – [here];
  • Cyber Insurance – [here];
  • Email Authentication – [here];
  • Hiring a Web Host – [here]; and
  • Secure Remote Access – [here].

The simple format delivers information in a way that will make it easy for you to talk about cybersecurity with your employees, vendors, and others involved in your business. [FTC Blog]

CA – Canadian Businesses Spent $14B on Cybersecurity in 2017: Survey

A survey from Statistics Canada finds Canadian businesses spent $14 billion on cybersecurity in 2017. The survey also reveals more than one in five businesses suffered a cyberattack last year, but only 10% of those businesses reported the incident to law enforcement. Of the $14 billion Canadian businesses devoted to cybersecurity, $8 billion went to the addition of staff and contractors, $4 billion went to software and hardware, and $2 billion was spent on recovery and prevention measures. Statistics Canada polled 12,597 businesses for the report and received responses from 86 percent of their targets. [Financial Post]

Smart Devices / IoT

US – IoT Security Bill Requires Devices to Ship with Unique Passwords

On September 28, 2018, California’s Information Privacy: Connected Devices bill (SB-327) was signed into law. Effective January 1, 2020, the law requires that Internet of Things (IoT) devices ship with unique preprogrammed passwords, or require users to set their own credentials on first use, rather than sharing a common default password.

  • nextgov: In California, It’s Going to Be Illegal to Make Routers With Weak Passwords
  • engadget: California bans default passwords on any internet-connected device
  • scmagazine: Weak passwords outlawed out West, California law aims to secure IoT devices
  • legislature.ca.gov: SB-327 Information privacy: connected devices.
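The core technical requirement is one credential per unit rather than one per product line. One way a manufacturer might meet it is to derive each device’s default password from its serial number at the factory. The sketch below is a hypothetical scheme for illustration, not anything SB-327 mandates:

```python
import hashlib
import hmac
import secrets


def unique_default_password(serial: str, factory_secret: bytes,
                            length: int = 12) -> str:
    """Derive a distinct default password for each device from its serial.

    Illustrative only: SB-327 requires unique preprogrammed passwords (or a
    forced credential change on first use); it does not prescribe any scheme.
    """
    digest = hmac.new(factory_secret, serial.encode(), hashlib.sha256).hexdigest()
    return digest[:length]


# The per-batch secret stays inside the factory, so the password cannot be
# recomputed from the serial number printed on the box.
factory_secret = secrets.token_bytes(32)
p1 = unique_default_password("SN-0001", factory_secret)
p2 = unique_default_password("SN-0002", factory_secret)
assert p1 != p2  # every unit leaves the factory with a different credential
```

The design point is that a leaked password compromises one device, not every device of that model, which is exactly the weakness the law targets.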

CA – Privacy Expert Steps Down From Advisory Role With Sidewalk Labs

Ann Cavoukian, a leading privacy expert and former Information and Privacy Commissioner of Ontario, has quit her advisory role with Sidewalk Labs, Google’s sister company, which is preparing to build a data-driven neighbourhood at Quayside on Toronto’s waterfront [here], saying in her resignation letter that the proposed protection of personal data [see blog post update here & 41 pg PDF here] “is not acceptable.” Cavoukian believes the plan for the Quayside smart-city development does not adequately protect individual privacy, and that data collected from sensors, surveillance cameras and smartphones must be de-identified at source. She wrote: “Just think of the consequences: If personally identifiable data are not de-identified at source, we will be creating another central database of personal information (controlled by whom?), that may be used without data subjects’ consent, that will be exposed to the risks of hacking and unauthorized access. As we all know, existing methods of encryption are not infallible and may be broken, potentially exposing the personal data of Waterfront Toronto residents! Why take such risks?” Cavoukian’s resignation (she is the fourth adviser to resign from the project citing privacy concerns – here) came less than a week after Sidewalk Labs published its digital governance proposals, a 41-page document that sought to put people’s privacy fears to rest by detailing how data collected in Quayside would be managed by an independent civic data trust, and not owned or controlled by Google. While Sidewalk Labs said it would de-identify data, it couldn’t guarantee what third parties would do. When the plan was recently introduced to Quayside’s digital advisory panel, Cavoukian realized “de-identification at source” was not a guarantee.
“When Sidewalk Labs was making their presentation, they said they were creating this new civic data trust which will consist of a number of players — Sidewalk, Quayside, Waterfront Toronto and others — and that Sidewalk Labs would encourage them to de-identify the data that was collected, but it would be up to the group to decide.” “That’s where I just said no.” David Fraser, a privacy lawyer advising Sidewalk Labs, was surprised Cavoukian’s resignation came when it did. “Her resignation seems to me a little premature because she would be very influential with (the civic data trust) [see here starting at pg 12] once it’s established,” he said. Fraser said the proposal to establish a civic data trust is “revolutionary.” Chantal Bernier [here & wiki here], legal adviser to Waterfront Toronto (and former interim Privacy Commissioner of Canada), said the project is sparing no effort to identify and address privacy issues. “We are still identifying every privacy risk to which we will apply every privacy protection available to us,” Bernier said in an email. On the other hand, Fenwick McKelvey, an associate professor in Communication Studies at Concordia University, said: “Sidewalk Labs is at the centre of a debate about data and data protection.
The resignation of Cavoukian is clear evidence that we don’t have proper regulatory infrastructure to deal with these new smart city initiatives. Her resignation, especially because she was participating in good faith, is a major blow to the legitimacy of the project.” [The Toronto Star | Ontario’s former privacy commissioner resigns from Sidewalk Labs | ‘Not good enough’: Toronto privacy expert resigns from Sidewalk Labs over data concerns | Privacy expert Ann Cavoukian resigns from Sidewalk Toronto smart-city project: ‘I had no other choice’ – (subscribers only) | Privacy expert Ann Cavoukian resigns as adviser to Sidewalk Labs | Waterfront Toronto, advisory panel want Quayside master plan delayed and see also: Facing privacy backlash, Sidewalk Labs proposes giving data to a public trust | An Update on Data Governance for Sidewalk Toronto | Sidewalk Labs unveils draft data and privacy plans for high-tech Toronto project | Sidewalk Labs promises not to control data collected in Quayside’s public spaces | Waterfront Toronto ‘not shying away’ from Sidewalk Toronto data privacy questions, senior official says | Sidewalk Labs use of cellphone data in proposed U.S. deal raises concern in Toronto | Sidewalk Labs requires detailed safeguards for its own employees’ data | ‘Public good’ not ‘properly represented’ by Sidewalk Labs: former RIM CEO]
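For readers unfamiliar with the term, “de-identification at source” means identifiers are removed or irreversibly transformed before a record ever leaves the sensor. One common technique is a salted one-way hash, sketched below. This is purely illustrative, not Sidewalk Labs’ proposal, and as Cavoukian notes, no such method is infallible:

```python
import hashlib
import secrets

# The salt is held at the sensor and never sent to central storage, so the
# central database cannot recompute device_id -> pseudonym on its own.
SALT = secrets.token_bytes(16)


def deidentify_at_source(record: dict) -> dict:
    """Replace a record's direct identifier with a salted one-way hash
    before the record leaves the sensor. Hypothetical illustration only."""
    out = dict(record)
    device_id = out.pop("device_id")
    out["pseudonym"] = hashlib.sha256(SALT + device_id.encode()).hexdigest()
    return out


reading = {"device_id": "phone-1234", "location": "Quayside", "ts": 1540000000}
clean = deidentify_at_source(reading)
assert "device_id" not in clean  # only the pseudonymized record is transmitted
```

The same device still maps to the same pseudonym (so aggregate statistics survive), but the raw identifier never reaches the central database, which is the property Cavoukian argued must be guaranteed rather than merely encouraged.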


US – Secret Government Report Shows Gaping Holes in Privacy Protections from U.S. Surveillance

In response to Freedom of Information Act requests, a federal privacy watchdog [Privacy and Civil Liberties Oversight Board] released an important report [finalized in December 2016 – here & 28 pg PDF here] about [the 2014 Obama Presidential Policy Directive “PPD-28” – here] on how the U.S. government handles people’s personal information that it sweeps up in its surveillance. The report addresses government agencies’ implementation of the policy directive on government spying and the treatment of “personal information,” which includes communications like emails, chats, and text messages. The report makes clear that PPD-28’s protections are weak in practice and rife with exceptions. And it will likely only add to concerns European regulators already have about the ways in which U.S. surveillance harms the privacy rights of Europeans — jeopardizing an important transatlantic data-sharing agreement. Here are three key takeaways: 1) The report confirms just how modest the directive’s privacy protections are; 2) There has been significant uncertainty — and inconsistency — among agencies about what spying activities the directive covers; and 3) There are reasons to be concerned about the NSA’s information-sharing practices and other agencies’ exploitation of intercepted communications. … In short, the U.S. government is exploiting the personal information it gathers using these spying activities more broadly than ever, but the report reveals just how anemic PPD-28’s protections are in practice. It also raises serious questions about whether the directive has been implemented fully and consistently across the intelligence community. [Speak Freely Blog (ACLU) | US Intelligence Privacy Policies Inadequate – Federal Report]

WW – Study Finds 88% of Android Apps Share Info with Alphabet

A study conducted by University of Oxford researchers analyzed nearly 1 million Android apps to determine how smartphone data is collected and shared. The researchers found the average app transferred data to 10 third parties; for one in five apps, the number exceeded 20. Of all the apps examined, 88% were designed to send information back to Google’s parent company, Alphabet, with 43% able to send information to businesses owned by Facebook. University of Oxford computer scientist and project lead Reuben Binns said the surge in data sharing is a result of app developers’ reliance on advertisements rather than sales. [FT.com]
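The headline numbers boil down to simple tallying: for each app, count the distinct third-party hosts it contacts, then average across apps and measure how many apps reach an Alphabet-owned host. A toy version of that tally, with an invented two-app dataset and an abbreviated host list:

```python
from statistics import mean

# Hypothetical sample data: app -> set of third-party domains it contacts.
apps = {
    "app_a": {"doubleclick.net", "graph.facebook.com", "crashlytics.com"},
    "app_b": {"doubleclick.net", "analytics.example.com"},
}

# Average number of distinct third parties per app.
avg_third_parties = mean(len(hosts) for hosts in apps.values())  # → 2.5

# Share of apps contacting at least one Alphabet-owned host
# (an illustrative, incomplete host list).
ALPHABET_HOSTS = {"doubleclick.net", "google-analytics.com", "admob.com"}
alphabet_share = sum(
    1 for hosts in apps.values() if hosts & ALPHABET_HOSTS
) / len(apps)  # → 1.0, since both sample apps contact doubleclick.net
```

The real study did this over nearly a million apps and a curated tracker-to-owner mapping; the arithmetic itself is no more complicated than the above.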

CA – Digital Displays in Condos Used to Target Ads

Residents at a Liberty Village condo have learned that video screens placed in their condo elevators are equipped with cameras that are collecting data for advertisers. The screens, installed by Visio Media, scan the faces of residents as they watch ads, and share the information with advertisers, providing statistics on gender and age. The cameras can detect the presence of people in the elevator and display ads catered to their demographic. According to Visio Media’s website, that even includes ads that appeal to children, such as one for a root beer flavoured tooth polish. [680 News]

Telecom / TV

CA – U.S. Lawmakers Warn Canada to Keep Huawei Out of Its 5G Plans

In a letter dated October 11 addressed to Canadian Prime Minister Justin Trudeau, U.S. Senators Mark Warner and Marco Rubio make a very public case that Canada should leave Chinese tech and telecom giant Huawei [here & wiki here] out of its plans to build a next-generation mobile network [5G networks]. The outcry comes after the head [Scott Jones] of the Canadian Centre for Cyber Security dismissed security concerns regarding Huawei in comments last month [here & text here – in his September 20 testimony to the Standing Committee on Public Safety and National Security – here]. As part of the Defense Authorization Act, passed in August, the U.S. government signed off on a law that forbids domestic agencies from using services or hardware made by Huawei and ZTE. A week later, Australia moved to block Huawei and ZTE from its own 5G buildout. Next generation 5G networks already pose a number of unique security challenges. Lawmakers caution that by allowing companies linked to the Chinese government to build 5G infrastructure, the U.S. and its close allies (Canada, Australia, New Zealand and the U.K.) [the Five Eyes alliance – wiki here] would be inviting the fox to guard the henhouse. [TechCrunch | Additional coverage at: engadget and see also: Ottawa probes Huawei equipment for security threats | Ottawa launches probe of cyber security | U.S. intelligence officials question Canada’s ability to test China’s Huawei for security breaches | New cybersecurity chief defends Canadian approach to Huawei security rumours]

Workplace Privacy

WW – 3 Out of 4 Employees Pose a Security Risk

According to MediaPRO’s [here] third annual State of Privacy and Security Awareness Report [see PR here, blog posts here & here, 1 pg PDF infographic here and flip book here] some 75% of employees today pose a moderate or severe risk to their company’s data and 85% of finance workers show some lack of data security and privacy knowledge. MediaPRO surveyed more than 1,000 employees across the United States to quantify the state of privacy and security awareness in 2018. More people fell into the risk category this year than in 2017, and that number has nearly doubled since the inaugural survey, according to MediaPRO. MediaPRO based its study on a variety of questions that focus on real-world scenarios, such as correctly identifying personal information, logging on to public Wi-Fi networks, and spotting phishing emails. Based on the percentage of privacy- and security-aware behaviors, respondents were assigned to one of three risk profiles: risk, novice, or hero. Here’s a thumbnail of some other notable findings: 1) Employee performance was worse this year across all eight industry verticals measured. Respondents did much worse at identifying malware warning signs, spotting phishing emails, and practising social media safety; 2) Managers showed riskier behaviors than lower-level employees. Management performed worse than their entry- and mid-level counterparts when asked how to respond to a suspected phishing email. Only 69% of managers chose the correct answer vs. 86% of lower-level employees. And nearly one in six management-level respondents – 17% – chose to open an unexpected attachment connected to a suspected phishing email; 3) Finance sector employees performed the worst. Of the industry sectors examined, financial employees got the lowest scores: 85% showed some lack of cybersecurity and data privacy knowledge.
And, 19% of finance workers thought opening an attachment was an appropriate response to a suspected phishing email; and 4) Too many employees could not identify phishing emails. 14% of employees could not identify a phish, a notable increase from 8% in 2017. And, 58% could not define business email compromise. Only 81% say they would report a suspected phishing email to their IT department. [DARKReading | Why 75% of your employees could end up costing you millions | It’s Everyone’s Job to Ensure Online Safety at Work]
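The report’s method, assigning each respondent a profile from the share of privacy-aware answers, amounts to a simple bucketing function. The cut-off values below are invented for illustration; MediaPRO does not publish its exact thresholds here:

```python
def risk_profile(correct: int, total: int,
                 hero_cutoff: float = 0.9, novice_cutoff: float = 0.75) -> str:
    """Map the share of security-aware answers to one of the report's three
    profiles: hero, novice, or risk. Cut-offs are hypothetical, not MediaPRO's.
    """
    share = correct / total
    if share >= hero_cutoff:
        return "hero"
    if share >= novice_cutoff:
        return "novice"
    return "risk"


# A respondent answering 16 of 20 scenario questions correctly (80%) would
# land in the middle bucket under these assumed thresholds.
print(risk_profile(16, 20))  # → novice
```

Shifting the cut-offs is what moves the headline percentage (“75% pose a moderate or severe risk”), which is worth keeping in mind when comparing year-over-year figures.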

US – Survey: 75% of Employees Display Insufficient Cyber Knowledge

A survey performed by MediaPRO found 75% of U.S. employees put their companies at risk of a cyberattack. For the survey, more than 1,000 staff members were asked about their cybersecurity knowledge and awareness. Financial organizations have had the most difficulty with their staff members in this area, as 85% of employees in the industry displayed an insufficient understanding of data privacy and security. Management positions displayed riskier behavior compared to their employees. Of the respondents polled, 77% of executives showed a lack of privacy and security recognition compared to 74% of other employees. [TechRepublic]




1-15 October 2018


US – Feds Force Suspect to Unlock an Apple iPhone X With Their Face

A child abuse investigation unearthed by Forbes [PDF] includes the first known case in which law enforcement used Apple Face ID facial recognition technology to open a suspect’s iPhone. That’s by any police agency anywhere in the world, not just in America. It happened on August 10, when the FBI searched the house of 28-year-old Grant Michalski, a Columbus, Ohio, resident who would later that month be charged with receiving and possessing child pornography [see August 24 DoJ PR]. With a search warrant in hand, a federal investigator told Michalski to put his face in front of the phone, which he duly did. That allowed the agent to pick through the suspect’s online chats, photos and whatever else he deemed worthy of investigation. Whilst the feds obtained a warrant, and appeared to have done everything within the bounds of the law, concerns remain about the use of such tactics. “Traditionally, using a person’s face as evidence or to obtain evidence would be considered lawful,” said Jerome Greco, staff attorney at the Legal Aid Society. “But never before have we had so many people’s own faces be the key to unlock so much of their private information.” Thus far, there’s been no challenge to the use of Face ID in this case or others. But Fred Jennings, a senior associate at Tor Ekeland Law, said they could come thanks to the Fifth Amendment, which promises to protect individuals from incriminating themselves in criminal cases. [Forbes | Additional coverage at: Naked Security (Sophos), The Verge and Ars Technica]


CA – Draft Guidance Released Regarding Mandatory Breach Reporting Under PIPEDA

On September 17, 2018, the Office of the Privacy Commissioner of Canada (OPC) released draft guidance regarding PIPEDA’s new mandatory security and privacy breach notification requirements, which come into force on November 1, 2018. This guidance contains helpful information regarding how and when to report breaches of security safeguards to the OPC, the corresponding notice that must be provided to individuals, and record-keeping obligations associated with such breaches. Of particular note, this guidance provides the following key pieces of information and clarification:

  • Not all breaches must be reported to the OPC. Only those breaches that create a “real risk of significant harm” to an individual are the subject of mandatory reporting obligations;
  • Reporting should commence as soon as possible once the organization determines that a breach creates a real risk of significant harm;
  • The obligation to report resides with the organization in control of the personal information that is the subject of the breach;
  • A report made to the OPC must contain information regarding the date of the breach, the circumstances of the breach, the personal information involved, and the number of individuals affected;
  • When a breach creates a real risk of significant harm, the individuals whose personal information was the subject of the breach must also be notified of the breach;
  • If a breach may also be mitigated or the risk of harm reduced via notification of other government institutions or organizations, then notification of these bodies must also occur; and
  • The obligation to maintain records regarding breaches is not limited to only those breaches that are reportable to the OPC.

The draft guidance includes a PIPEDA breach report form, which can be used by organizations to report security and privacy breaches to the OPC following the effective date of the breach notification requirements. The draft guidance and breach report form are consultation documents, and as such, the OPC invited stakeholders to provide feedback on both documents by October 2, 2018. The final versions of both documents will be published in time for November 1, 2018. [Mondaq]
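The guidance’s decision logic hinges on a single trigger, the “real risk of significant harm” test, which then fans out into reporting, notification and record-keeping duties. A simplified sketch of that flow; the two boolean inputs below stand in for the OPC’s fuller weighing of data sensitivity and probability of misuse:

```python
from dataclasses import dataclass


@dataclass
class Breach:
    sensitive_data: bool       # e.g. health or financial information exposed
    likely_misuse: bool        # a realistic chance the data will be misused
    affected_individuals: int


@dataclass
class Obligations:
    report_to_opc: bool
    notify_individuals: bool
    keep_records: bool = True  # records must be kept for EVERY breach


def pipeda_obligations(b: Breach) -> Obligations:
    # Simplified stand-in for the "real risk of significant harm" test;
    # the actual OPC assessment is more nuanced than this disjunction.
    real_risk = b.sensitive_data or b.likely_misuse
    return Obligations(report_to_opc=real_risk, notify_individuals=real_risk)
```

Note how the sketch mirrors the guidance: reporting and individual notification share the same trigger, while record-keeping applies unconditionally, even for breaches that never reach the OPC.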

CA – OPC Seeks Federal Court Determination on Key Issue for Canadians’ Online Reputation

The Office of the Privacy Commissioner of Canada (OPC) is turning to the Federal Court to seek clarity on whether Google’s search engine is subject to federal privacy law when it indexes web pages and presents search results in response to queries of a person’s name. The OPC has asked the court to consider the issue in the context of a complaint involving an individual who alleges Google is contravening PIPEDA [OPC guidance] by prominently displaying links to online news articles about him when his name is searched. The complainant alleges the articles are outdated, inaccurate and disclose sensitive information about his sexual orientation and a serious medical condition. By prominently linking the articles to his name, he argues Google has caused him direct harm. Google asserts that PIPEDA does not apply in this context and that, if it does apply and requires the articles to be de-indexed, it would be unconstitutional. Following public consultations, the OPC took the view [see position paper] that PIPEDA provides for a right to de-indexing – which removes links from search results without deleting the content itself – on request in certain cases. This would generally refer to web pages that contain inaccurate, incomplete or outdated information. However, there is some uncertainty in the interpretation of the law. In the circumstances, the most prudent approach is to ask the Federal Court to clarify the law before the OPC investigates other complaints into issues over which the office may not have jurisdiction if the court were to disagree with the OPC’s interpretation of the legislation. A Notice of Application [see here], filed today in Federal Court, seeks a determination on the preliminary issue of whether PIPEDA applies to the operation of Google’s search engine. In particular, the reference asks whether Google’s search engine service collects, uses or discloses personal information in the course of commercial activities and is therefore subject to PIPEDA. 
It also asks whether Google is exempt from PIPEDA because its purposes are exclusively journalistic or literary. While Google has also raised the issue of whether a requirement to de-index under PIPEDA would be compliant with s. 2(b) of the Charter, the OPC has decided not to refer this issue to the Court at this stage. The Charter issue may not need to be addressed depending on how the reference questions are answered. The Charter issue is also highly fact-based and would require an assessment of the facts of the complaint, making it inappropriate for a reference. Investigations into complaints related to de-indexing requests will be stayed pending the results of the reference. The Privacy Commissioner’s office will also wait until this process is complete before finalizing its position on online reputation. [Office of the Privacy Commissioner of Canada | Coverage at: Will Canadians soon have the ‘right to be forgotten’ online? Here’s what you need to know | Privacy czar asks Federal Court to settle ‘right to be forgotten’ issue | Privacy watchdog asks Federal Court to rule on Google de-indexing question]
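De-indexing, as discussed above, removes links from search results for a person’s name without deleting the underlying pages. A toy illustration of that distinction, with all names and URLs invented:

```python
# Hypothetical search index: query -> list of result URLs.
index = {
    "john doe": [
        "https://news.example/old-article",
        "https://blog.example/profile",
    ],
}

# De-index requests are (query, url) pairs: the pairing matters, because the
# page is suppressed only for name searches, not removed from the web.
deindexed = {("john doe", "https://news.example/old-article")}


def search(query):
    """Return results minus de-indexed links; the pages themselves stay
    online, which is the whole point of de-indexing."""
    return [u for u in index.get(query, []) if (query, u) not in deindexed]


print(search("john doe"))  # → ['https://blog.example/profile']
```

This is why de-indexing is framed as less drastic than deletion: the article remains reachable by other queries or direct links, but no longer surfaces prominently under the complainant’s name.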

CA – B.C. Political Parties Face Personal Data Collection Investigation

How British Columbia’s political parties harvest and use personal information from social media will be subject to an Office of the Information and Privacy Commissioner investigation within the next month, Commissioner Michael McEvoy said Sept. 28 in his comments in Vancouver to B.C. Information Summit 2018 delegates. McEvoy said reviews of how parties use information has already led to auditing in the United Kingdom, where he has assisted the work of that country’s information commissioner, his B.C. predecessor. “That is something we are going to be doing in British Columbia,” he said. “Politicians realize that uses, misuses and abuses of data in a personal context can change elections,” University of Victoria political science professor Colin Bennett [here] said. “Political affiliation is something that should only be captured with individual consent.” He said political parties “are the major organizations that fall between the cracks of a privacy regime that is either federal or provincial or is corporate or government.” Political parties identifying their voter bases can vacuum up personal information shared on social media. And that can start with something as simple as an election voters’ list readily available to political parties. Bennett said use of the list is excluded from no-phone-call regulations of the Canadian Radio-television and Telecommunications Commission designed to prevent nuisance calls. As well, Bennett explained, parties are not covered by federal anti-spam legislation. He said the proposed federal Election Modernization Act [Bill C-76 here] sections supposed to deal with privacy are “basic and incomplete.” Further, Bennett said, parties do have privacy policies but those are vague and don’t necessarily mesh with each other. Other speakers said greater oversight is needed over how Canadian political parties collect and use voters’ personal information. [Kamloops Matters]

US – U.S.-Mexico-Canada Pact Covers Data Privacy, Local Storage Rules

The U.S., Canada, and Mexico would have to adopt data protection measures under a deal aimed at replacing the North American Free Trade Agreement. Those measures should include provisions on data quality, collection restrictions, and transparency, according to the text of the U.S.-Mexico-Canada Agreement released by the U.S. Trade Representative’s Office. Under the deal, governments would have to publish information on how businesses can comply with the rules and the remedies that individuals can pursue. The agreement reflects an increased awareness of data protection issues following the EU’s adoption of new privacy rules and the Cambridge Analytica scandal involving Facebook Inc. data. It would direct the three countries’ governments to exchange information on data protection policies and work together to promote digital trade. The agreement also would ban rules requiring data to be stored locally and prohibit restrictions on data flows for business purposes. Lawmakers in all three countries must approve the deal for it to take effect. Tech industry groups supported the pact’s digital trade and data privacy provisions. [Bloomberg BNA | See also: Key takeaways from the new U.S.-Mexico-Canada Agreement]

CA – USMCA Falls Short on Digital Trade, Data Protection and Privacy: Geist

The United States-Mexico-Canada Agreement (USMCA) is more than just an updated version of the North American Free Trade Agreement. With the inclusion of a digital trade chapter, the deal sets a new standard for e-commerce that seems likely to proliferate in similar agreements around the world. The chapter raises many concerns, locking in rules that will hamstring online policies for decades by restricting privacy safeguards and hampering efforts to establish new regulation in the digital environment. For example, the USMCA includes rules that restrict data localization policies, which can be used to require companies to store personal information within the local jurisdiction. Jurisdictions concerned about lost privacy in the online environment have increasingly turned to data localization to ensure their local laws apply. These include the Canadian provinces of British Columbia and Nova Scotia, which have data localization requirements to keep sensitive health information at home that may be jeopardized by the agreement. The chapter also bans restrictions on data transfers across borders. That means countries cannot follow the European model of data protection, which uses data transfer restrictions as a way to ensure that the information enjoys adequate legal protections. In fact, countries could find themselves caught in a global privacy battle in which Europe demands limits on data transfers while the USMCA prohibits them. The chapter fails to reflect many global e-commerce norms, and by the time its effects are felt, rules that restrict policy flexibility on key privacy issues will have been quietly established as the go-to international approach. [The Washington Post | Experts say USMCA frees Canadian data — but with unknown risks]


WW – Privacy Advocates Face Negative Stereotyping Online

New research from HideMyAss! has revealed that people around the world perceive privacy advocates as untrustworthy, paranoid, male loners with something to hide, despite holding privacy-protective views themselves. [PR, blog post & report] The security software firm partnered with Censuswide to survey 8,102 people from the UK, US, France and Germany to compile its new report. Even though two fifths of those surveyed (41%) agreed that privacy is an indispensable human right, 80% believed their online history could be accessed without their knowledge by governments, hackers, police and partners. The research also highlighted a general apathy towards protecting privacy, as more than one in five admitted they take no action to protect it. Of those who do take action, 78% rely on some form of password protection as their main privacy measure. More than half (56%) of respondents claim to never share their password with anyone, and 22% do not save passwords on their browsers or devices. HideMyAss! also found that while there is overwhelming support for people using the Internet privately for legal activities (74%), 26% of respondents believe that people who aren’t willing to divulge what they do online have something to hide, 24% expect them to be untrustworthy, and more than a fifth (22%) are of the opinion they are more likely to have a criminal record. When it comes to the particular traits of privacy advocates, respondents said they could be paranoid (52%), loners (37%) or people partial to spying on their neighbours (36%). [TechRadar]


US – DOJ Releases “Best Practices for Victim Response and Reporting of Cyber Incidents,” Version 2.0

On September 27, 2018, the U.S. Department of Justice Computer Crime and Intellectual Property Section (CCIPS) Cybersecurity Unit released Version 2.0 of its “Best Practices for Victim Response and Reporting of Cyber Incidents” [PDF]. Originally issued in 2015, the updated guidance seeks to help organizations better equip themselves to respond effectively and lawfully to cyber incidents. The updated version distills insights from private and public sector experts, incorporating new incident response considerations in light of technical and legal developments over the past three years. While the guidance is aimed mostly at small- and medium-sized businesses, it may be useful to larger organizations as well. Similar to Version 1.0 [PDF] (see previous analysis here), the updated guidance is divided into several parts, advising companies on steps to take before, during, and after a cybersecurity incident. While the document is not intended to have any regulatory effect, the guidance is a useful tool for organizations seeking to make sure their data security policies align with today’s best practices. [Privacy & Data Security Blog (Alston & Bird)]

Electronic Records

CA – Clinical Trial Data Not Quite Confidential: Federal Court

On July 9, 2018, the Federal Court released its decision ordering Health Canada to provide the results of certain clinical trials, including participant-level datasets, to an American researcher: Doshi v Canada (Attorney General), 2018 FC 710 [PDF]. Health Canada requires researchers to sign a standard confidentiality agreement in order to release clinical trial data for the purpose of research. On the basis of the researcher’s refusal to sign the standard confidentiality agreement, Health Canada unsuccessfully attempted to keep the requested reams of clinical trial data confidential. At issue was the interpretation of subsection 21.1(3) of the Protecting Canadians from Unsafe Drugs Act (“Vanessa’s Law”) [Overview & FAQ]. The case is interesting not only because it was the first time the court was called upon to apply Vanessa’s Law, but also because the court was required to decide other important ancillary issues, such as the confidential nature of clinical trial data and the bearing such nature may have on freedom of expression under section 2(b) of the Canadian Charter of Rights and Freedoms. In light of administrative law principles concerning the exercise of discretionary powers, Justice Grammond held that it was unreasonable for Health Canada to impose a confidentiality requirement as a condition for the disclosure of the data requested by Dr. Doshi (para 87). Following the Federal Court decision, Health Canada indicated that it is working on regulations to publicly release a large amount of information in clinical trial reports for a wide range of medications. Stakeholders should watch for new developments on this front. [CyberLex Blog (McCarthy Tetrault)]

EU Developments

EU – CNIL Publishes Initial Assessment on Blockchain and GDPR

Recently, the French Data Protection Authority (“CNIL”) published its initial assessment of the compatibility of blockchain technology with the EU General Data Protection Regulation (GDPR) and proposed concrete solutions for organizations wishing to use blockchain technology when implementing data processing activities [see 11 pg PDF in French]. The CNIL made it clear that its assessment does not apply to (1) distributed ledger technology (DLT) solutions and (2) private blockchains. In its assessment, the CNIL first examined the role of the actors in a blockchain network as a data controller or data processor. The CNIL then issued recommendations to minimize privacy risks to individuals (data subjects) when their personal data is processed using blockchain technology. In addition, the CNIL examined solutions to enable data subjects to exercise their data protection rights. Lastly, the CNIL discussed the security requirements that apply to blockchain. The CNIL made a distinction between the participants who have permission to write on the chain (called “participants”) and those who validate a transaction and create blocks by applying the blockchain’s rules so that the blocks are “accepted” by the community (called “miners”). According to the CNIL, participants who decide to submit data for validation by miners act as data controllers when (1) the participant is an individual and the data processing is not purely personal but is linked to a professional or commercial activity; and (2) the participant is a legal person and enters data into the blockchain. According to the CNIL, the exercise of the right to information, the right of access and the right to data portability does not raise any particular difficulties in the context of blockchain technology (i.e., data controllers may provide notice of the data processing and may respond to data subjects’ requests for access to their personal data or for data portability).
However, the CNIL recognized that it is technically impossible for data controllers to meet data subjects’ requests for erasure of their personal data once the data has been entered into the blockchain: once in the blockchain system, the data can no longer be rectified or erased. The CNIL considered that the security requirements under the GDPR remain fully applicable in the blockchain context. In the CNIL’s view, the challenges posed by blockchain technology call for a response at the European level. The CNIL announced that it will cooperate with other EU supervisory authorities to propose a robust and harmonized approach to blockchain technology. [Privacy & Information Security Law Blog (Hunton Andrews Kurth) with coverage at: JDSUPRA and PaymentsCompliance]
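One mitigation along the lines the CNIL discusses is to keep the personal data itself off-chain and record only a cryptographic commitment (such as a salted hash) on the chain; deleting the off-chain data and salt then approximates erasure, since the immutable on-chain value can no longer be linked back to the individual. The sketch below is purely illustrative (the function names and in-memory store are invented, and this is not code from the CNIL's assessment):

```python
import hashlib
import secrets

# Off-chain store kept by the data controller: personal data plus a random salt.
off_chain = {}

def commit(record_id: str, personal_data: str) -> str:
    """Keep the data off-chain; return a salted SHA-256 digest to write on-chain."""
    salt = secrets.token_hex(16)
    off_chain[record_id] = (personal_data, salt)
    return hashlib.sha256((salt + personal_data).encode()).hexdigest()

def erase(record_id: str) -> None:
    """Approximate erasure: delete the off-chain data and salt. The on-chain
    digest is immutable but can no longer be tied to the individual."""
    off_chain.pop(record_id, None)

on_chain_value = commit("user-42", "alice@example.com")
erase("user-42")
assert "user-42" not in off_chain  # personal data and salt are gone
assert len(on_chain_value) == 64   # the immutable digest remains on-chain
```

Once the salt is destroyed, the remaining digest cannot even be checked against a guessed value by brute force, which is why salted or keyed commitments are preferred over plain hashes of personal data.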

Facts & Stats

WW – Data Breaches Compromised 4.5 Billion Records in the First Half of 2018

According to the latest figures from the Gemalto Breach Level Index, 4.5 billion records were compromised in just the first six months of this year [PR, infographic & download report]. The US comes out the worst, with 3.25 billion records affected and 540 breaches — an increase of 356% in the last month and 98% over the same period in 2017. A total of six social media breaches accounted for over 56% of total records compromised. Of the 945 data breaches, 189 (20% of all breaches) had an unknown or unaccounted number of compromised data records. Europe was well behind America, seeing 36% fewer incidents, but there was a 28% rise in the number of records breached, indicating the growing severity of attacks. The United Kingdom was the worst hit in its region, suffering 22 data incidents. [Information Age | Disclosure laws lead to spike in reported data breaches: Gemalto | A Massive Bump In Data Breaches Is Stoking Bot-Driven Attacks On Retailers | What Drives Tech Internet Giants To Hide Data Breaches Like The Google+ Breach]


CA – More Than a Dozen Federal Departments Flunked Credit Card Security Test

The Canada Revenue Agency, the RCMP, Statistics Canada and more than a dozen other federal departments and agencies have failed an international test of the security of their credit card payment systems. Altogether, half of the 34 federal institutions authorized by the banking system to accept credit-card payments from citizens and others have flunked the test — risking fines and even the revocation of their ability to accept credit and debit payments. Those 17 departments and agencies continue to process payments on Visa, MasterCard, Amex, the Tokyo-based JCB and China UnionPay cards, and federal officials say there have been no known breaches to date. These institutions all fell short of a global data-security standard, PCI DSS (“Payment Card Industry Data Security Standard”), established by five of the big credit-card firms. The standard is meant to foil fraud artists and criminal hackers bent on stealing names, numbers and codes for credit and debit cards. Federal departments must self-assess against the standard annually. CBC News obtained the briefing note, to the deputy minister of Public Services and Procurement Canada (PSPC), under the Access to Information Act. The document suggests the main culprit is Shared Services Canada (SSC), the federal IT agency created in 2011 that operates and maintains data systems for 13 of the 17 non-compliant institutions. Eleven of the 13 SSC clients who fell short of the credit card security standard say the agency itself has not fixed the security problems. The institutions that failed the credit card security checks are: Health Canada, RCMP, Industry Canada, Transport Canada, National Research Council, Canada Border Services Agency, Natural Resources Canada, Immigration Refugees and Citizenship, Statistics Canada, Fisheries and Oceans, Canada Revenue Agency, Canadian Food Inspection Agency and Library and Archives Canada, all of which depend on SSC for their IT.
The Library of Parliament, National Defence, the National Film Board of Canada and the Canadian Centre for Occupational Health and Safety are also non-compliant, but are responsible for the security of their own IT systems. [CBC News]


CA – Bowing to Pressure, Feds Urge Senate to Change Access to Information Bill

After pushback from Indigenous groups and the information commissioner, the federal government is backing down on a number of proposed changes to the Access to Information Act that critics have called “regressive,” including the part of Bill C-58 that required access-to-information requesters to describe a document’s time period, subject, and type. Witnesses had warned that that level of detail, particularly with First Nations attempts to get land-claim records, would limit access to records where such detail is not known and almost certainly lead to departments denying requests. Information commissioner Caroline Maynard also successfully convinced the government to give her order-making power when the bill reaches royal assent and is formally approved, rather than a year after the bill becomes law, as it’s currently written. Critics have also raised alarms about adding the ability for government departments and agencies to decline “vexatious,” or overly broad, requests. At a Senate committee Oct. 3, Treasury Board President Scott Brison closed the door on removing that power from the bill, noting the government had already accepted changes from the House Ethics Committee to address fears it would limit access and “address any concerns” of “inappropriate” use. The House passed the changed bill in December 2017. Now, agencies won’t be able to give a request that label unless they have approval from the information commissioner at the beginning of the process. The Access to Information Act lets Canadians pay $5 to request government documents, but critics for years have said it’s dysfunctional, too slow, and allows for big loopholes that limit the information released. [The Hill Times]

CA – Privileged Records and Access to Information Reviews: When to Produce?

Solicitor-client privilege is intended to foster candid conversation between a client and legal counsel in order to ensure that the client receives appropriate legal advice and can make informed decisions. It protects the solicitor-client relationship. By comparison, litigation privilege attaches to records that are created for the dominant purpose of preparing for litigation. It offers protection for clients to investigate and prepare their case. Both privileges are vital to an effective legal system. Enter access to information legislation. Legislation in each Atlantic province provides some form of exception to disclosure for privileged records. In New Brunswick, see The Right to Information and Protection of Privacy Act, SNB 2009, c R-10.6 at s 27 [here]; in Newfoundland and Labrador, see Access to Information and Protection of Privacy Act, 2015, SNL 2015 c A-1.2 at s 30 [here]; in Nova Scotia, see Freedom of Information and Protection of Privacy Act, SNS 1993, c 5 at s 16 [here]; and in Prince Edward Island, see Freedom of Information and Protection of Privacy Act, RSPEI 1988, c 15.01 at s 25 [here]. But a public body’s application of access to information legislation is overseen by a statutory office in every jurisdiction. What happens when the public body’s application of the exception for privileged records is challenged? That question gave rise to the Supreme Court of Canada’s well-known decision in Alberta (Information and Privacy Commissioner) v University of Calgary [here]. In that case, a delegate of the Alberta Information and Privacy Commissioner issued a notice to the University to produce records over which the University had claimed solicitor-client privilege. The majority of the Court agreed with the University and determined that the University was not obligated to produce solicitor-client privileged records to the delegate for review. The University of Calgary decision received a great deal of attention when it was released.
But little attention has been paid to the majority’s closing comments regarding the appropriateness of the Alberta OIPC’s decision to seek production of records over which solicitor-client privilege was claimed. The Supreme Court emphasized that “even courts will decline to review solicitor-client documents to ensure that privilege is properly asserted unless there is evidence or argument establishing the necessity of doing so to fairly decide the issue” [see note 2 at para 68 here]. The Court was mindful of the fact that the University had identified the records in accordance with the practice in civil litigation in the province, and found that in the absence of evidence to suggest that the University had improperly claimed privilege, the delegate erred in determining that the documents had to be reviewed. While civil litigation practice can – and does – vary from province to province, should you find yourself in a position where the Commissioner is seeking review of records over which you have claimed solicitor-client or litigation privilege, the Supreme Court’s commentary and the Alberta approach may provide a means to have the Commissioner resolve the claim without risking privilege or requiring production of the records in issue. [Mondaq]


WW – How Researchers Are Using DNA to Create Images of People’s Faces

Advancements in facial recognition and DNA sequencing technology have allowed scientists to create a portrait of a person based on their genetic information [a process called DNA phenotyping – wiki]. A study published last year and co-authored by biologist Craig Venter [wiki], CEO of San Diego-based company Human Longevity, showed how the technology works. The research team took an ethnically diverse sample of more than 1,000 people of different ages and sequenced their genomes. They also took high-resolution, 3D images of their faces and measured their eye and skin color, age, height and weight. This information was used to develop an algorithm capable of working out what people would look like on the basis of their genes. Applying this algorithm to unknown genomes, the team was able to generate images that could be matched to real photos for eight out of ten people. The success rate fell to five out of ten when the test was restricted to those of a single race, which narrows facial differences. The authors of the paper said the research has ‘significant ethical and legal implications on personal privacy, the adequacy of informed consent, the potential for police profiling and more’. Researchers have already produced images of faces based on genetic material alone. For example, earlier this year, investigators in Washington State unveiled an image of a suspect created from DNA in the 30-year-old murder case of young Victoria (BC)-area couple Tanya Van Cuylenborg, 18, and Jay Cook, 20 [coverage here]. And in Calgary in February, police released a high-tech image they said was a likeness of the mother of a baby girl found dead in a dumpster on Christmas Eve. [CTV News]
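The matching test described above (generate a face from a genome, then check whether it sits closest to the right person's real photo) can be sketched as a nearest-neighbour evaluation over face feature vectors. This is a purely illustrative sketch with invented function names and toy 2-D vectors; the actual study predicted detailed 3D face morphology and other traits, not simple embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top1_match_rate(predicted, actual):
    """Fraction of predicted face vectors whose most similar real face
    vector (by cosine similarity) belongs to the same person."""
    hits = 0
    for i, p in enumerate(predicted):
        best = max(range(len(actual)), key=lambda j: cosine(p, actual[j]))
        if best == i:
            hits += 1
    return hits / len(predicted)

# Toy 2-D "face vectors" for three people: predictions 0 and 1 land on the
# right person, prediction 2 is closest to person 0's real face.
actual = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
predicted = [[0.9, 0.1], [0.1, 0.9], [0.9, 0.2]]
print(top1_match_rate(predicted, actual))  # 2 of 3 matched
```

This also illustrates why the reported success rate dropped within a single ethnic group: when the real face vectors cluster closely together, more predictions land nearest to the wrong person.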

Health / Medical

US – Fitbit Data Leads to Arrest of 90-Year-Old in Stepdaughter’s Murder

On Saturday, 8 September, at 3:20 pm, Karen Navarra’s Fitbit recorded her heart rate spiking. Within 8 minutes, the 67-year-old California woman’s heart beat rapidly slowed. At 3:28 pm, her heart rate ceased to register at all. She was, in fact, dead. Two pieces of technology have led San Jose police to charge Ms. Navarra’s stepfather, Anthony Aiello, with allegedly having butchered her. Besides the Fitbit records, there are also surveillance videos that undercut Aiello’s version of events. When police compared the dead woman’s Fitbit data with video surveillance from her home, they discovered that Aiello’s car was still there at the point when her Fitbit lost any traces of her heartbeat. Later, police found bloodstained clothing in Aiello’s home. If Aiello turns out to be guilty, he certainly won’t be the first to learn a harsh lesson in how much of the quotidian technology that surrounds us these days can be used to contradict our version of events. One example was in April 2017, when a murder victim’s Fitbit contradicted her husband’s version of events. In another case, we’ve seen pacemaker data used in court against a suspect accused of burning down his house. The title of a paper by Nicole Chauriye says it all: Wearable devices as admissible evidence: Technology is killing our opportunity to lie. [Naked Security (Sophos) coverage at: The Mercury News, The New York Times, The Independent and Los Angeles Times]

US – Despite Patient Privacy Risks, More People Use Wearables for Health

Despite the patient privacy risks that collecting health data on insecure wearable devices could pose, the number of US consumers tracking their health data with wearables has more than doubled since 2013, according to the Deloitte 2018 Survey of US Health Care Consumers [PR – also see blog post]. The use of wearables and other tools for measuring fitness and health improvement goals jumped from 17% in 2013 to 42% in 2018. Of those who used wearables in the past year, 73% said they used them consistently. Sixty percent of the 4,530 respondents said they are willing to share PHI generated from wearable devices with their doctor to improve their health, and 51% are comfortable using an at-home test to diagnose infections before seeing a doctor. More than one-third (35%) of respondents said they are interested in using a virtual assistant to identify symptoms and direct them to a caregiver. Close to one-third (31%) are interested in connecting with a live health coach that offers text messaging for nutrition, exercise, sleep, and stress management. “For health systems that are collecting this information, it is important that they safeguard the privacy of that information,” Sarah Thomas, managing director of Deloitte’s Center for Health Solutions, told HealthITSecurity.com. “If it is about their personal health, then it is clear that the information needs to be safeguarded and subject to HIPAA” [wiki here], she added. [HealthIT Security | Additional coverage at: Health Populi, For The Record and Patient Engagement HIT]

WW – Study Finds Medical Records Are Breached Worryingly Often

A new study by two physicians from Massachusetts General Hospital has concluded that breaches of people’s health data are alarmingly frequent and large in scale. Writing in the Journal of the American Medical Association [Temporal Trends and Characteristics of Reportable Health Data Breaches, 2010-2017], Dr Thomas McCoy Jr and Dr Roy Perlis state that 2,149 breaches comprising a total of 176.4 million records occurred between 2010 and 2017. Their data was drawn from the US Health and Human Services Office for Civil Rights breach database [last 24 months here & archive of earlier breaches], where all breaches of American patient records must be reported under US law. With the exception of 2015, the number of breach events has increased every year during that period. Paper and film-based information was the most commonly compromised type of medical record, with 510 breaches involving 3.4 million records, but the frequency of this type of breach went down across the study period, and the largest share of breached records – 139.9 million – came from infiltration into network servers storing electronic health records (EHRs). The frequency of hacking-based breaches went up during the study period. The majority of breaches occurred due to the actions of health care providers, though compromised systems in health plan companies accounted for more total records infiltrated. The authors write that “Although networked digital health records have the potential to improve clinical care and facilitate learning [in] health systems, they also have the potential for harm to vast numbers of patients at once if data security is not improved”. [IFLScience! | Additional coverage at: Reuters and Healthcare Informatics]

US – Eight Healthcare Privacy Incidents in September

Eight privacy incidents at healthcare organizations captured public attention last month. While media outlets reported on the following breaches in September, healthcare organizations experienced breaches as early as 2014. Here are the eight incidents, presented in order of the number of patients affected:

1) The Fetal Diagnostic Institute of the Pacific in Honolulu notified 40,800 patients about a potential data breach after it fell victim to a ransomware attack in June.
2) Blue Cross Blue Shield of Rhode Island notified 1,567 members that an unnamed vendor responsible for sending members’ benefits explanations breached their personal health information.
3) An employee at Kings County Hospital’s emergency room stole nearly 100 patients’ private information and sold it through an encrypted app on his phone.
4) Claxton-Hepburn Medical Center in Ogdensburg, N.Y., terminated an undisclosed number of employees after hospital officials identified breaches of patient health information during a recent internal investigation.
5) Reliable Respiratory in Norwood, Mass., discovered unusual activity on an employee’s email account in July, which may have allowed hackers to access an undisclosed number of patients’ protected health information.
6) Independence Blue Cross in Pennsylvania notified an undisclosed number of plan members about a potential compromise of their protected health information after an employee uploaded a file containing personal data to a website that was publicly accessible for three months.
7) Nashville, Tenn.-based Aspire Health lost some patient information to an unknown cyberattacker who gained access to its internal email system in September, federal court records filed Sept. 25 show.
8) Lutheran Hospital in Fort Wayne, Ind., canceled all remaining elective surgeries Sept. 18 after its IT team discovered a computer virus on its systems.

[Becker’s Hospital Review]

Horror Stories

WW – Google Exposed User Data, Feared Repercussions of Disclosing to Public

Google exposed the private data of hundreds of thousands of users of the Google+ social network and then opted not to disclose the issue this past spring, in part because of fears that doing so would draw regulatory scrutiny and cause reputational damage, according to people briefed on the incident and documents reviewed by The Wall Street Journal. As part of its response to the incident, the Alphabet Inc. unit on Monday announced [see blog post] a sweeping set of data privacy measures that include permanently shutting down all consumer functionality of Google+. A software glitch in the social site gave outside developers potential access to private Google+ profile data, including full names, email addresses, birth dates, gender, profile photos, places lived, occupation and relationship status, between 2015 and March 2018, when internal investigators discovered and fixed the issue. A memo prepared by Google’s legal and policy staff and shared with senior executives warned that disclosing the incident would likely trigger “immediate regulatory interest” and invite comparisons to Facebook’s leak of user information to data firm Cambridge Analytica. Chief Executive Sundar Pichai was briefed on the plan not to notify users after an internal committee had reached that decision. The question of whether to notify users went before Google’s Privacy and Data Protection Office, a council of top product executives who oversee key decisions relating to privacy. In weighing whether to disclose the incident, the company considered “whether we could accurately identify the users to inform, whether there was any evidence of misuse, and whether there were any actions a developer or user could take in response. None of these thresholds were met here,” a Google spokesman said in a statement. During a two-week period in late March, Google ran tests to determine the impact of the bug, one of the people said.
It found 496,951 users who had shared private profile data with a friend could have had that data accessed by an outside developer. Some of the individuals whose data was exposed to potential misuse included paying users of G Suite, a set of productivity tools including Google Docs and Drive. G Suite customers include businesses, schools and governments. In its contracts with paid users of G Suite apps, Google tells customers it will notify them about any incidents involving their data “promptly and without undue delay” and will “promptly take reasonable steps to minimize harm.” That requirement may not apply to Google+ profile data, however, even if it belonged to a G Suite customer. [The Wall Street Journal | Google exposed data for hundreds of thousands of users | Google+ shutting down after data leak affecting 500,000 users | Google+ Is Shutting Down After a Security Bug Exposed User Info | Google did not disclose security bug because it feared regulation, says report | Laughing at the Google+ bug? You’re making a big mistake | Here’s how to quickly check if you have a Google+ account — and delete it]

Online Privacy

WW – Instagram Prototypes Handing Your Location History to Facebook

Instagram has been spotted prototyping a new privacy setting that would allow it to share your location history with Facebook. That means your exact GPS coordinates collected by Instagram, even when you’re not using the app, would help Facebook to target you with ads and recommend you relevant content. The geo-tagged data would appear to users in their Facebook Profile’s Activity Log, which includes creepy daily maps of the places you’ve been. This commingling of data could upset users who want to limit Facebook’s surveillance of their lives. A Facebook spokesperson tells TechCrunch: “To confirm, we haven’t introduced updates to our location settings. As you know, we often work on ideas that may evolve over time or ultimately not be tested or released. Instagram does not currently store Location History; we’ll keep people updated with any changes to our location settings in the future.” That effectively confirms Location History sharing is something Instagram has prototyped, and that it’s considering launching but hasn’t yet. Delivering the exact history of where Instagram users went could assist Facebook with targeting them with local ads across its family of apps. If users are found to visit certain businesses, countries, neighborhoods, or schools, Facebook could use that data to infer which products they might want to buy and promote them. It could even show ads for restaurants or shops close to where users spend their days. Just yesterday, we reported that Facebook was testing a redesign of its Nearby Friends feature that replaces the list view of friends’ locations with a map. Pulling in Location History from Instagram could help keep that map up to date. [TechCrunch | Facebook tests Snapchat-like map for Nearby Friends]

WW – Google’s New Chrome Extension Rules Improve Privacy and Security

Google has announced several rules aimed at making Chrome extensions safer and more trustworthy. Many extensions request blanket access to your browsing data, but you’ll soon have the option to whitelist the sites they can view and manipulate, or opt to grant an extension access to your current page with a click. That feature is included in Chrome 70, which is scheduled to arrive later this month and includes other privacy-focused updates.  Developers can no longer submit extensions that include obfuscated code. Google says 70% of malicious and policy-violating extensions use such code. More easily accessible code should speed up the review process too. Developers have until January 1st to strip obfuscated code from their extensions and make them compliant with the updated rules. Additionally, there will be a more in-depth review process for extensions that ask you for “powerful permissions”, Google says. The company is also more closely monitoring those with remotely hosted code. Next year, developers will need to enable two-step verification on their Chrome Web Store accounts. Google also plans to introduce an updated version of the extensions platform manifest, with the aim of enabling “stronger security, privacy and performance guarantees.” Google says half of Chrome users actively employ extensions, so the changes could make browsing the web more secure for millions of people. [engadget – additional coverage at: TechCrunch, CNET News and VentureBeat]

US – Tim Cook Chides Tech Companies for Collecting Personal Data – But Apple Does It Too (Opinion)

Apple CEO Tim Cook took aim at the tech industry’s privacy practices. In an interview with Vice News, he said, “The narrative that some companies will try to get you to believe is, ‘I’ve got to take all your data to make my service better.’ Well, don’t believe that. Whoever’s telling you that, it’s a bunch of bunk.” Is this a case of the pot calling the kettle black? Apple has cultivated and established a reputation for concern over privacy. There’s a privacy webpage that lists the steps the company takes to safeguard user information and what it refrains from doing. And then there’s the legal privacy policy page that lists the things Apple can and does do with your information. Reading it is enlightening. The page, updated May 22, 2018, “covers how we collect, use, disclose, transfer, and store your personal information.” The details are important. The main one is the first definition: “Personal information is data that can be used to identify or contact a single person.” Is information about a person, such as activities on a website, personal in the sense of being able to identify an individual? No, but it can be combined with personal information to become useful. According to Goldman Sachs analyst Rod Hall, Google pays Apple $9 billion a year to remain Safari’s default search engine [coverage]. At the very least, there is a financial incentive for Apple to allow Google access to all the search information. 
Here is a partial list of “non-personal information” that Apple collects, according to its posted terms: a) Occupation, language, ZIP code, area code, unique device identifier, the URL where your browser was previously, your location and time zone when you used the Apple product; b) product name and device ID; c) details of how you use Apple services, including search queries; d) data stored in Apple log files includes “Internet protocol (IP) addresses, browser type and language, internet service provider (ISP), referring and exit websites and applications, operating system, date/time stamp, and clickstream data”; and e) Apple and its partners “may collect, use, and share precise location data, including the real-time geographic location of your Apple computer or device.”  Perhaps Apple is more concerned with privacy than other companies. Certainly, there’s been no news of a Facebook-style fiasco. Don’t necessarily assume that means you get real privacy. [Inc.com] Coverage at: Apple’s Tim Cook: ‘Don’t believe’ tech companies that say they need your data  | ‘It’s a Bunch of Bunk.’ Apple CEO Tim Cook on Why Tech Firms Don’t Need All Your Data—and Why Apple Expelled Alex Jones | Apple’s Tim Cook is sending a privacy bat-signal to US lawmakers | Apple chief says firm guards data privacy in China | Tim Cook: Don’t Get Hung Up on Where Apple Stores iCloud Data | Tim Cook to talk consumer privacy and data ethics at European data protection conference later this month

WW – Privacy Search Engine DuckDuckGo Searches Up 50% in a Year

Privacy-focused search engine DuckDuckGo [wiki] has just announced it’s hit 30 million daily searches, a year after reaching 20M — a year-on-year increase of 50% [see traffic stats]. Hitting the first 10M daily searches took the search engine a full seven years, and then it was another two years to get to 20M. DDG’s search engine offers a pro-privacy alternative to Google search that does not track and profile users in order to target them with ads. Instead it displays ads based on the keyword being searched for at the point of each search — dispensing with the need to follow people around the web, harvesting data on everything they do to feed a sophisticated adtech business, as Google does. Google, by comparison, handles at least 3 billion searches daily. This year DuckDuckGo expanded from its core search product to launch a tracker blocker that addresses wider consumer privacy concerns by helping web users keep more of their online activity away from companies trying to spy on them for profit. [TechCrunch | Privacy: A Business Imperative and Pillar of Corporate Responsibility | DuckDuckGo, the privacy-focused search engine, grows daily searches by 50% to 30 million]

Other Jurisdictions

CA – APEC Cross-Border Privacy Rules Enshrined in U.S.-Mexico-Canada Trade Agreement

On September 30, 2018, the U.S., Mexico and Canada announced a new trade agreement (the “USMCA”) aimed at replacing the North American Free Trade Agreement. Notably, the USMCA’s chapter on digital trade will require the U.S., Canada and Mexico to each “adopt or maintain a legal framework that provides for the protection of the personal information of the users,” including key principles such as: limitations on collection, choice, data quality, purpose specification, use limitation, security safeguards, transparency, individual participation and accountability. Article 19.8(2) directs the Parties to consider the principles and guidelines of relevant international bodies, such as the APEC Privacy Framework [overview here] and the OECD Recommendation of the Council concerning Guidelines Governing the Protection of Privacy and Transborder Flows of Personal Data, and Article 19.8(6) formally recognizes the APEC Cross-Border Privacy Rules (the “APEC CBPRs”) [here] within the Parties’ respective legal systems. In addition, Article 19.14(1)(b) provides that “the Parties shall endeavor to cooperate and maintain a dialogue on the promotion and development of mechanisms, including the APEC Cross-Border Privacy Rules, that further global interoperability of privacy regimes.” The USMCA must still pass the U.S. Congress, the Canadian Parliament, and the Mexican Senate. [Privacy & Information Security Law Blog (Hunton Andrews Kurth) | coverage at: Womble Bond Dickinson via National Law Review, The Washington Post, Michael Geist Blog, Private Internet Access blog | Data localization concerns in USMCA may be overblown]

Privacy (US)

US – FTC Continues to Enforce EU-U.S. Privacy Shield

The U.S. Federal Trade Commission (FTC) recently settled enforcement actions [PR] against four companies accused of misleading consumers about their participation in the European Union-United States Privacy Shield framework [see here, here & wiki here], which allows companies to transfer consumer data from EU member states to the United States in compliance with EU law. These collective actions demonstrate the FTC’s ongoing commitment under new Chairman Joseph Simons to enforce U.S. companies’ filing obligations with the U.S. Department of Commerce as part of their efforts to comply with the Privacy Shield. These actions are also consistent with a recent statement [coverage here] by Gordon Sondland, U.S. Ambassador to the European Union, that the U.S. is complying with EU data protection rules. Key Takeaways:

  • The FTC will continue to hold companies accountable for the promises they make to consumers regarding their privacy policies, including participation in the Privacy Shield;
  • Companies participating in the Privacy Shield should re-evaluate their privacy procedures and policies regularly to ensure compliance with the various requirements of the Privacy Shield framework;
  • Once a company initiates the Privacy Shield certification process, it must complete that process to claim participation in the Privacy Shield framework; and
  • Companies looking to participate in the Privacy Shield or a similar privacy program should consult counsel to ensure the program is the best option for their particular business needs.

[Dechert LLP Blog | FTC continues aggressive enforcement of Privacy Shield | Additional coverage at: Privacy & Information Security Law Blog (Hunton Andrews Kurth), Privacy and Cybersecurity Perspectives (Murtha Cullina), Legal News Line]

US – Google Faces Mounting Pressure from Congress Over Google+ Privacy Flaw

In March, Google discovered a flaw in its Google+ API that had the potential to expose the private information of hundreds of thousands of users. Officials at Google opted not to disclose the vulnerability to its users or the public for fear of bad press and potential regulatory action [in an internal memo first reported here]. Now, lawmakers are asking to see those communications firsthand. Republican leaders from the Senate Commerce Committee are demanding answers from Google CEO Sundar Pichai about the recently unveiled Google+ vulnerability, requesting the company’s internal communications regarding the issue in a letter [PR & PDF]. Some of the senators’ Democratic counterparts on the committee reached out to the Federal Trade Commission to demand that the agency investigate the Google+ security flaw, saying in a letter [3 pg PDF here] that if agency officials discover “problematic conduct, we encourage you to act decisively to end this pattern of behavior through substantial financial penalties and strong legal remedies.” Google has until October 30th to respond to the senators’ inquiries, just weeks before Pichai is scheduled to testify in front of the House Judiciary Committee following the November midterm elections. An exact date for that hearing has yet to be announced. [The Verge | Senators demand Google hand over internal memo urging Google+ cover-up | Senators Demand Memo Behind Google+ Privacy Debacle Cover-Up | Google Draws Bipartisan Criticism Over Data Leak Coverup | Senator Blumenthal Wants FTC To Investigate Google Over Data Leak | Google+ vulnerability comes under fire in Senate hearing | Google facing scrutiny from Australian regulator over Google+ data breach | Google+ Glitch Revelation Sparks German Probe | U.S., European regulators investigating Google glitch]

US – Privacy Advocates Tell Senators What They Want in a Data Protection Law

Privacy advocates and tech giants like Google, Amazon and Apple all want a federal privacy law. But while tech companies essentially want a federal privacy bill to be a ceiling that would limit how far states could go with their own privacy rules, privacy advocates want it to be more of a floor that states can build on. During the Oct. 10 hearing before the Senate Committee on Commerce, Science and Transportation, privacy advocates stressed the need for a federal privacy law that could work in tandem with state laws instead of overwriting them. Representatives included Andrea Jelinek, the chair of the European Data Protection Board [statement]; Alastair Mactaggart, the advocate behind California’s Consumer Privacy Act [statement]; Laura Moy, executive director of the Georgetown Law Center on Privacy and Technology [statement]; and Nuala O’Connor, president of the Center for Democracy and Technology [statement]. [CNET News | Privacy Groups Urge Congress To Create New National Privacy Law | CDD to Senate: Privacy Legislation Should Be Tough, Comprehensive, Enforceable | Lawmakers Push to Rein In Tech Firms After Google+ Disclosure | Senator calls for FTC investigation into Google+ data exposure]

US – Facebook Accused of Violating Children’s Privacy Law

Several US groups advocating public and children’s health have urged the FTC to take action against social media giant Facebook for allegedly violating children’s privacy law. The 18-member group led by the Campaign for a Commercial-Free Childhood (CCFC) has filed a complaint asserting that Facebook’s Messenger Kids, a controversial messaging application for children as young as five, collects kids’ personal information without obtaining verifiable parental consent [PR & Complaint]. Messenger Kids is the first major social platform designed specifically for young children, but the complaint argues that Facebook’s parental consent mechanism does not meet the requirements of the Children’s Online Privacy Protection Act (COPPA) because any adult user can approve any account created in the app and “even a fictional ‘parent’ holding a brand-new Facebook account could immediately approve a child’s account without proof of identity.” The complaint further accused Facebook of disclosing data to unnamed third parties for “broad, undefined business purposes.” In January the CCFC, on behalf of the advocacy groups, sent Facebook CEO Mark Zuckerberg a letter signed by over 100 experts and advocates asking him to remove Messenger Kids from its platform. Critics have been skeptical of Facebook’s Messenger Kids security measures in protecting children’s privacy, and have been pushing for its closure since its debut last year [see CCFC petition]. [Financial Express]

Privacy Enhancing Technologies (PETs)

WW – Blockchain’s Role as a Privacy Enhancing Technology

Many of us hear the word “blockchain” [wiki & beginner’s guide], mentally file it under “something to do with Bitcoin,” and then swiftly move on. But there is more to this new technology than cryptocurrencies. Top of mind is blockchain’s potential to enable greater data privacy and data security, says Florian Martin-Bariteau, who runs the University of Ottawa’s Blockchain Legal Lab [here], a research team investigating the practical uses of the technology — and the legal issues those uses raise. He’s also on a panel at the forthcoming CBA Access to Information and Privacy Law Symposium in Ottawa (Oct. 19 and 20) that will compare uses of blockchain across industries. “The blockchain technology is actually a protocol for information or asset exchange, and an infrastructure for data storage and management,” he says. “It is literally a chain of blocks of information which are interlinked in a secure way.” It was conceived as a kind of secure spreadsheet — a way to timestamp documents in a ledger that could not be edited or tampered with. Martin-Bariteau describes it as a digital notary system. The technology has since developed to become “a secure, immutable database shared by all parties in a distributed network.” Its utility where privacy is an issue is plain to see. But part of the attraction of blockchain — the notion that data can’t be edited, altered or erased — is also part of the challenge it creates. For example, in the European Union and elsewhere, GDPR compliance includes the right to erasure. This has enormous implications for any system that requires registered users as part of its design. Martin-Bariteau is clear about the risks involved. “You need to be very careful about the information you register on an immutable ledger,” he notes. 
“You want to avoid including any personal information, so you need to design your implementation, or advise your clients to design it, in a way that it can use personal information without storing it.” [CBA National and see also: CNIL Publishes Initial Assessment on Blockchain and GDPR]
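The design Martin-Bariteau describes can be sketched in a few lines: the ledger stores only a salted hash (a commitment) of each record, so the personal data itself lives off-chain and can still be erased. This is an illustrative toy, not any real blockchain implementation; all names and structure here are invented for the example.

```python
import hashlib
import os

def commit(personal_data: str, salt: bytes) -> str:
    """Salted hash of the data: the ledger stores this, never the data itself."""
    return hashlib.sha256(salt + personal_data.encode()).hexdigest()

class Block:
    def __init__(self, commitment: str, prev_hash: str):
        self.commitment = commitment
        self.prev_hash = prev_hash
        # Each block's hash covers the previous block's hash,
        # so editing any block breaks every later link.
        self.hash = hashlib.sha256((commitment + prev_hash).encode()).hexdigest()

# Genesis block anchors the chain.
chain = [Block(commit("genesis", b"\x00"), "0" * 64)]

# Register a document: the record and its salt stay in off-chain storage.
salt = os.urandom(16)
record = "Jane Doe, 123 Main St"
chain.append(Block(commit(record, salt), chain[-1].hash))

# Anyone holding the data and salt can prove it was registered,
# yet deleting the off-chain record honours an erasure request.
assert chain[-1].commitment == commit(record, salt)

def verify(chain):
    """Check that every block links to the hash of the one before it."""
    return all(b.prev_hash == p.hash for p, b in zip(chain, chain[1:]))

assert verify(chain)
```

The key privacy property is that the chain records proof a document existed at a point in time without the document ever touching the immutable ledger.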

RFID / Internet of Things

US – NIST Seeks Public Comment on Managing Internet of Things Cybersecurity and Privacy Risks

The U.S. Department of Commerce’s National Institute of Standards and Technology recently announced that it is seeking public comment on Draft NISTIR 8228, Considerations for Managing Internet of Things (“IoT”) Cybersecurity and Privacy Risks (the “Draft Report”). The document is to be the first in a planned series of publications that will examine specific aspects of the IoT topic. The Draft Report identifies three high-level considerations with respect to the management of cybersecurity and privacy risks for IoT devices as compared to conventional IT devices: (1) many IoT devices interact with the physical world in ways conventional IT devices usually do not; (2) many IoT devices cannot be accessed, managed or monitored in the same ways conventional IT devices can; and (3) the availability, efficiency and effectiveness of cybersecurity and privacy capabilities are often different for IoT devices than conventional IT devices. The Draft Report also identifies three high-level risk mitigation goals: (1) protect device security; (2) protect data security; and (3) protect individuals’ privacy. Comments are due by October 24, 2018 [download the NIST Comment Template for submitting your comments]. [Privacy & Information Security Law Blog (Hunton Andrews Kurth)]


WW – Two-Thirds of Data Security Pros Looking to Change Jobs

Nearly two-thirds of security pros are looking to leave their current jobs. That is one of the findings of a new study on IT security trends by staffing firm Mondo [PR & report], which says that 60% of these workers can be easily hired away. Lack of growth opportunities and job satisfaction are tied as the top reasons to leave a job, according to the survey. The study found several other top reasons why IT security experts leave a job. They include: 1) Unhealthy work environment (cited by 53%); 2) Lack of IT security prioritization from C-level or upper management (cited by 46%); 3) Unclear job expectations (cited by 37%); and 4) Lack of mentorship (cited by 30%). To help retain IT security experts, the study recommends that organizations offer the following benefits, based on responses from security pros: 1) Promoting work-life balance; 2) Taking worker security concerns seriously; 3) Sponsorship of certifications or courses; 4) Increased investment in emerging tech; and 5) CISO leadership/defined ownership of security needs. Mondo gathered this data by surveying more than 9,000 IT security professionals and decision-makers. [Information Management]

Smart Cars / Cities

WW – Google’s Plans for First Wired Urban Community Raise Data-Privacy Concerns

A unit of Google’s parent company Alphabet is proposing to turn a rundown part of Toronto’s waterfront into what may be the most wired community in history — to “fundamentally refine what urban life can be.” [see overview here] Sidewalk Labs [here] has partnered with a government agency known as Waterfront Toronto [here] with plans to erect mid-rise apartments, offices, shops and a school on a 12-acre site — a first step toward what it hopes will eventually be an 800-acre development. But some Canadians are rethinking the privacy implications of giving one of the most data-hungry companies on the planet the means to wire up everything from streetlights to pavement. And some want the public to get a cut of the revenue from products developed using Canada’s largest city as an urban laboratory. “The Waterfront Toronto executives and board are too dumb to realize they are getting played,” said former BlackBerry Chief Executive Jim Balsillie who also said the federal government is pushing the board to approve it. “Google knew what they wanted. And the politicians wanted a PR splash and the Waterfront board didn’t know what they are doing. And the citizens of Toronto and Canada are going to pay the price,” Balsillie said. Julie Di Lorenzo, a prominent Toronto developer who resigned from the Waterfront Toronto board over the project [see coverage], said data and what Google wants to do with it should be front and center in the discussions. She also believes the government agency has given the Google affiliate too much power over how the project develops. “How can (Waterfront Toronto), a corporation established by three levels of democratically elected government, have shared values with a limited, for-profit company whose premise is embedded data collection?” Di Lorenzo asked.  Bianca Wylie, an advocate of open government, said it remains deeply troubling that Sidewalk Labs still hasn’t said who will own data produced by the project or how it will be monetized. 
Google is here to make money, she said, and Canadians should benefit from any data or products developed from it. “We are not here to be someone’s research and development lab,” she said, “to be a loss leader for products they want to sell globally.” Ottawa patent lawyer Natalie Raffoul said the fact that the current agreement leaves ownership of data issues for later shows that it wasn’t properly drafted and means patents derived from the data will default to Google. [The Seattle Times]


US – That Sign Telling You How Fast You’re Driving May Be Spying on You

According to recently released US federal contracting data, the Drug Enforcement Administration will be expanding the footprint of its nationwide surveillance network with the purchase of “multiple” trailer-mounted speed displays “to be retrofitted as mobile License Plate Reader (LPR) platforms.” The DEA is buying them from RU2 Systems Inc., a private Mesa, Arizona company. [For overviews of LPRs see EFF] Two other, apparently related contracts show that the DEA has hired a small machine shop in California, and another in Virginia, to conceal the readers within the signs. An RU2 representative said the company providing the LPR devices themselves is a Canadian firm called Genetec. The DEA expects to take delivery of its new license plate-reading speed signs by October 15. The DEA launched its National License Plate Reader Program in 2008; it was publicly revealed for the first time during a congressional hearing four years after that. The DEA’s most recent budget describes the program as “a federation of independent federal, state, local, and tribal law enforcement license plate readers linked into a cooperative system, designed to enhance the ability of law enforcement agencies to interdict drug traffickers, money launderers or other criminal activities on high drug and money trafficking corridors and other public roadways throughout the U.S.” What is a game-changing crime-fighting tool to some is a privacy overreach of near-existential proportion to others. License plate readers, which can capture somewhere in the neighborhood of 2,000 plates a minute, cast an astonishingly wide net that has made it far easier for cops to catch serious criminals. On the other hand, the indiscriminate nature of the real-time collection, along with the fact that it is then stored by authorities for later data mining, is highly alarming to privacy advocates. [QUARTZ | How roadside speed signs in the U.S. could be tracking you using Canadian-made tech]





16–30 September 2018


US – Use of Facial-Recognition Technology Fuels Debate at Seattle School

RealNetworks is offering schools a new, free security tool: Secure, Accurate Facial Recognition — or SAFR, pronounced “safer” — a technology that the company began offering free to K-12 schools this summer. It took three years, 8 million faces and more than 8 billion data points to develop the technology, which can identify a face with near perfect accuracy. The software is already in use at one Seattle school, and RealNetworks is in talks to expand it to several others across the country. But as the technology moves further into public spaces, it’s raising privacy concerns and calls for regulation — even from the technology companies that are inventing the biometric software. Privacy advocates wonder if people fully realize how often their faces are being scanned, and advocates and the industry alike question where the line is between the benefits to the public and the cost to privacy. “There’s a general habituation of people to be tolerant of this kind of tracking of their face,” said Adam Schwartz, a lawyer with digital privacy group Electronic Frontier Foundation. “This is especially troubling when it comes to schoolchildren. It’s getting them used to it.” School security is a serious issue, he agreed, but he said the benefits of facial recognition in this case are largely unknown, and the damage to privacy could be “exceedingly high.” Clare Garvie, an associate at the Center on Privacy and Technology at Georgetown Law Center, finds the lack of transparency into how the technology is being used and the lack of federal laws troubling. Garvie was on a team that conducted a widespread study that found 54% of U.S. residents are in a facial-recognition database accessible by law enforcement [see PR here & study report here] — usually in the form of a driver’s license photo. “It is unprecedented to have a biometric database that is composed primarily of law-abiding citizens,” Garvie said. 
“The current trajectory might fundamentally change the relationship between police and the public,” she said. “It could change the degree to which we feel comfortable going about our daily lives in public spaces.” Alessandro Acquisti [here & here], a professor of information technology and public policy at Carnegie Mellon University, pointed out that facial recognition can be used for good — to combat child trafficking — and for bad — to track law-abiding citizens anywhere they go. That doesn’t mean it’s neutral, he said. Anonymity is becoming more scarce with the proliferation of photos on social media and the technology that can recognize faces. [Seattle Times | See also: Are You on Board with Using Facial Recognition in Schools? | Is Facial Recognition in Schools Worth the High Price?]

Big Data / Analytics

WW – ‘Predictive Policing’: Law Enforcement Revolution or Spin on Old Biases?

Los Angeles has been put on edge by the LAPD’s use of an elaborate data collection centre, a shadowy data analysis firm called Palantir, and predictive algorithms to try to get a jump on crime. Los Angeles isn’t the only place where concerns are flaring over how citizens’ data is collected and used by law-enforcement authorities. Police forces across the U.S. are increasingly adopting the same approach as the LAPD: employing sophisticated algorithms to predict crime in the hope they can prevent it. Chicago, New York City and Philadelphia use similar predictive programs and face similar questions from the communities they are policing, and even legal challenges over where the information is coming from and how police are using it. A sophisticated program called PredPol, short for predictive policing, is used to varying degrees by 50 police forces across the United States. The genesis of the program came from a collaboration between LAPD deputy chief Sean Malinowski and Canadian Jeff Brantingham, an anthropology professor at UCLA. Canadian police forces are very aware of what their U.S. counterparts are doing, but they are wary of jumping in with both feet due to concerns over civil liberties issues. Sarah Brayne, a Canadian sociologist, spent two years inside the LAPD studying its use of predictive policing. She says the LAPD has been using predictive policing since 2012, and crunching data on a wide range of activities — from “where to allocate your resources, where to put your cars, where to put your personnel, to helping investigators solve a crime. And even for some risk management, like tracking police themselves, for performance reviews and different accountability reasons.” But PredPol is just one of the police systems that community watchdogs are concerned about. The Rampart division of the LAPD uses another program to pinpoint individuals who are at risk of committing crimes in the future. This is known as person-based predictive policing. 
The program is called Los Angeles Strategic Extraction and Restoration (LASER). At the moment it generates a list of approximately 20 “chronic offenders” that is updated monthly. LAPD documents show how LASER gives people specific scores, which increase with each police encounter. You get five points if you are a gang member. Five points if you are on parole or probation. Five points for arrests with a handgun. And one point for every “quality” police contact in the past two years, which includes what the LAPD calls “Field Interviews.” In Canada, field interviews are called “carding,” referring to the cards police use to record information about the people they have stopped — even when there are no grounds to think they’ve committed an offence. On the chronic offender bulletin there are names, addresses, scores ranging from six to 28, dates of birth and gang affiliations (Crazy Riders, Wanderers, 18th Street, and so on). The police try to track down the people on the bulletin and hand-deliver an “At Risk Behaviour” letter to each one — if they can find them. Officers are given instructions to contact the offenders on the list every month “to check their status” and to remind them to use the community services. They are also encouraged to door-knock on adjacent residences to “spark interest and gather info.” [CBC News]
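The point scheme reported in the LAPD documents can be restated as a short function, which also makes the critics' feedback-loop concern concrete: every "quality" police contact raises the score, and a higher score invites more contact. This is an illustrative reconstruction from the point values reported above, not the actual LASER software; the function and parameter names are hypothetical.

```python
# Illustrative-only sketch of the point scheme described in LAPD documents.
# Not the real LASER system; names and signature are invented for the example.
def laser_score(gang_member: bool, on_parole_or_probation: bool,
                handgun_arrests: int, quality_contacts_2yr: int) -> int:
    score = 0
    if gang_member:
        score += 5                      # 5 points for gang membership
    if on_parole_or_probation:
        score += 5                      # 5 points for parole/probation status
    score += 5 * handgun_arrests        # 5 points per arrest with a handgun
    score += 1 * quality_contacts_2yr   # 1 point per "quality" contact (2 yrs)
    return score

# A gang member on parole with one handgun arrest and six field interviews:
print(laser_score(gang_member=True, on_parole_or_probation=True,
                  handgun_arrests=1, quality_contacts_2yr=6))  # prints 21
```

Note that under this scheme the score can rise through police-initiated stops alone, which is the self-reinforcing dynamic community watchdogs object to.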

CA – Q&A: Data Ownership Conundrum in the Data Driven World

Modern society is increasingly reliant upon data and driven by data gathering and data analytics. This leads to many questions that need to be unraveled relating to privacy, data rights and smart cities. One person well-placed to tackle these issues is Teresa Scassa [University of Ottawa law professor & fellow at the Waterloo-based Centre for International Governance Innovation]. In her latest research paper, Data Ownership, Scassa describes how in most jurisdictions the ownership of data is often based in copyright law or protected as confidential information. In Europe, database protection laws also play a role. However, there are limitations and major areas where laws fall short. For example, “Copyright protection requires a human author. Works that are created by automated processes in which human authorship is lacking cannot, therefore, be copyright protected. This has raised concerns that the output of artificial intelligence processes will not be capable of copyright protection,” warns Scassa. To discuss these important issues further, Digital Journal recently asked Teresa Scassa the following questions: 1) How important has data become for businesses?; 2) Are consumers too willing to provide personal data?; 3) How concerned should people be about what is done with personal data?; 4) How about data security issues. How secure is most personal data that is held by companies?; and 5) How are new technologies, like artificial intelligence, affecting data privacy? [Digital Journal] In a follow up interview, Teresa Scassa discusses data privacy laws, considering the recent changes affecting Europe and the possible implications for the U.S. [here]


CA – OPC Publishes Draft Guidelines for Mandatory Breach Reporting

On September 17, 2018, the Office of the Privacy Commissioner of Canada (OPC) published draft guidelines on mandatory breach reporting under the “Personal Information Protection and Electronic Documents Act” (PIPEDA). The guidelines are intended to assist organizations in meeting their breach reporting and record-keeping obligations under PIPEDA’s mandatory breach reporting regime, which comes into force on November 1, 2018. Organizations have until October 2, 2018 to provide feedback on these draft guidelines. In April 2018, the federal government published the Breach of Security Safeguards Regulations setting out the requirements of the new regime, and announced that the Regulations would come into force on November 1, 2018. Under the new regime, organizations will be required to notify the OPC and affected individuals of “a breach of security safeguards” involving personal information under the organization’s control where it is reasonable in the circumstances to believe that the breach creates a “real risk of significant harm” to affected individuals. Other organizations and government institutions must also be notified where such organization or institution may be able to mitigate or reduce the risk of harm to affected individuals. Organizations must also keep and maintain records of all breaches of security safeguards regardless of whether they meet the harm threshold for reporting. Failure to report a breach or to maintain records as required is an offence under PIPEDA, punishable by a fine of up to C$100,000. Unfortunately for stakeholders, much of the information in the draft guidelines is simply a reiteration of the legal requirements as set out in PIPEDA and the Regulations. 
However, the draft guidelines provide additional guidance in certain areas, including: 1) Who Is Responsible for Reporting a Breach?; 2) When Does a Breach Create a Real Risk of Significant Harm?; 3) Form of Report; and 4) What Information Must Be Included in a Breach Record? [Business Class (Blakes); additional coverage at: BankInfo Security]
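The structure of the obligations described above can be summarized as a toy decision helper. In the sketch below all names are hypothetical, and the statutory “real risk of significant harm” test, which weighs factors such as the sensitivity of the information and the probability of misuse, is reduced to two boolean flags; this is an illustration of the reporting logic, not legal advice.

```python
from dataclasses import dataclass

@dataclass
class Breach:
    """Toy model of a 'breach of security safeguards' (illustrative only)."""
    description: str
    sensitive_info: bool   # stands in for the sensitivity factor
    likely_misuse: bool    # stands in for the probability-of-misuse factor

def real_risk_of_significant_harm(b: Breach) -> bool:
    # Crude stand-in for the statutory test: both factors present.
    return b.sensitive_info and b.likely_misuse

def handle_breach(b: Breach, breach_records: list) -> list:
    # A record must be kept for EVERY breach, reportable or not.
    breach_records.append(b)
    actions = ["keep record"]
    if real_risk_of_significant_harm(b):
        # Reportable breaches also trigger notification obligations.
        actions += ["report to OPC", "notify affected individuals"]
    return actions
```

The key asymmetry the guidelines stress is captured in `handle_breach`: record-keeping applies to every breach, while reporting and notification apply only above the harm threshold.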

CA – Upcoming Canadian Breach Notification Requirements Still in Flux

Canada’s national breach notification requirements come online November 1st, meaning companies experiencing a data breach will soon have new reporting obligations. These requirements were created in 2015 by the Digital Privacy Act, which amended the Personal Information Protection and Electronic Documents Act (PIPEDA), Canada’s main privacy statute. In April 2018, in preparation for the national implementation of the new law, the federal government issued Regulations that establish detailed requirements regarding the content and methodology of breach notifications to the Office of the Privacy Commissioner of Canada (OPC) and affected individuals. After those Regulations were issued, the OPC continued to receive requests for further clarity and guidance regarding the breach notification requirements under PIPEDA and the Breach Regulations. In response, the OPC announced that it would issue further guidance (“What You Need To Know About Mandatory Reporting Of Breaches Of Security Safeguards”) on breach notification and reporting. On September 17th, the OPC invited public feedback on the draft guidance and will accept feedback until October 2, 2018. Comments can be sent to OPC-CPVPconsult2@priv.gc.ca and must be either in the body of the email or attached as a Word or PDF document. The OPC will publish the final guidance soon after the October 2nd deadline to ensure guidance is in place when the amendment becomes effective in November. Still, the OPC’s September 17th announcement indicates there is uncertainty around what exactly will be required of companies that experience a breach. Companies that hold or control information on Canadian residents have one more opportunity to shape the final requirements or pose questions for clarity in the OPC’s guidance, and should submit their views before the October 2nd deadline.
[Eye on Privacy (SheppardMullin); additional coverage at: BankInfo Security]

CA – OPC Denounces Slow Progress on Fixing Outdated Privacy Laws

Federal Privacy Commissioner Daniel Therrien’s annual report to Parliament has been tabled [see here, Commissioner’s Message here & 103 pg PDF here]. It outlines the work of the Office of the Privacy Commissioner of Canada (OPC) as it relates to both the Personal Information Protection and Electronic Documents Act (PIPEDA), Canada’s federal private sector privacy law, and the Privacy Act, which applies to the federal public sector. It covers important initiatives over the last year, including key investigations, work on reputation and privacy, new consent guidance, as well as work on national security and Bill C-59 [here]. In his report, Therrien also reiterated calls for the government to increase his office’s resources. “My office needs a substantial budget increase to keep up our knowledge of the technological environment and improve our capacity to inform Canadians of their rights and guide organizations on how to comply with their obligations,” he says. “Additional resources are also needed to meet our obligations under the new breach reporting regulations that come into force in November.” [see here] Under the regulations, companies will be required to report all privacy breaches presenting a real risk of significant harm. While imperfect, Therrien calls the regulations “a step in the right direction.” As breach notification regulations come into force on the private sector side, serious concerns have also emerged about the federal government’s ability to prevent, detect and manage privacy breaches within its own institutions. An OPC review of privacy breach reporting by federal government institutions found thousands of breaches occur annually; some go unreported, and others likely go entirely unnoticed at many institutions. Therrien also warns privacy concerns are reaching crisis levels and is calling on the federal government to take immediate action by giving his office new powers to more effectively hold organizations to account.
“Unfortunately, progress from government has been slow to non-existent … There’s no need to further debate whether to give my office new powers to make orders, issue fines and conduct inspections to ensure businesses respect the law. It’s not enough for the government to ask companies to do more to live up to their responsibilities. To increase trust in the digital economy, we must ensure Canadians can count on an independent regulator with the necessary tools to verify compliance with privacy law. If my Office had order-making powers, our guidelines would be more than advice that companies can choose to ignore. They would become real standards that ensure real protection for Canadians,” Therrien says. [Office of the Privacy Commissioner of Canada; also see the OPC’s “Alert” Key lessons for public servants from the 2017-18 Annual Report; coverage: Canada’s privacy laws ‘sadly falling behind’ other countries: Privacy commissioner | Privacy commissioner slams ‘slow to non-existent’ federal action in light of major data breaches | Watchdog says Ottawa moving too slowly on privacy threats | Watchdog slams government’s ‘slow to non-existent’ action to protect Canadians’ privacy | Time of ‘self-regulation’ is over, privacy czar says in push for stronger laws]

CA – ‘Right to Be Forgotten’ Could Trigger Battle Over Free Speech in Canada

A push by some for a “right to be forgotten” for Canadians is setting up what could be a landmark battle over the conflict between privacy and freedom of expression on the internet. In his annual report issued September 27 [PR, Report, Commissioner’s Message & 103 pg PDF], Privacy Commissioner Daniel Therrien served notice he intends to seek clarity from the Federal Court on whether existing laws already give Canadians the right to demand that search engines remove links to material that is outdated, incomplete or incorrect, a process called “de-indexing.” Following a round of consultations he launched in 2016, Therrien concluded in a draft report earlier this year that Canadians do have that right under PIPEDA. Google disagrees — and warns that a fundamental Charter right is being threatened [Section 2(b) — expression & press freedom; wiki here, Charter here, guidance here]. “The right to be forgotten impinges on our ability to deliver on our mission, which is to provide relevant search results to our users,” said Peter Fleischer [here], Google’s global privacy counsel. “What’s more, it limits our users’ ability to discover lawful and legitimate information.” University of Ottawa law professor Michael Geist [blog posts here & here], who specializes in internet and e-commerce law, said: “Given the complexity, given the freedom of expression issues that arise out of this, I think the appropriate place is within Parliament to explicitly go through the policy process and decide what’s right for Canada on this.” Internet lawyer Allen Mendelsohn [blog posts here & here] worries about the “slippery slope” implied in a right to be forgotten. With no easy answers on how to move forward, he said it’s Parliament’s duty to debate the concept and decide on appropriate standards. “Parliament represents the people, and if the will of the people think this is a good thing to do, then there’s no good reason why they shouldn’t go ahead and do it,” he said.
Google argues that freedom of expression is a fundamental human right. While the European court upheld the right to be forgotten, Chile, Colombia and the U.S. have all rejected it. According to Peter Fleischer, “As the privacy commissioner considers translating the European model to Canada, it will also have to confront the challenges of how to balance one person’s right to privacy with another’s right to know, and whether the European right to be forgotten would be consistent with the rights outlined in Canada’s Charter of Rights and Freedoms, which assures Canadians ‘freedom of thought, belief, opinion and expression, including freedom of the press and other media of communication.’” [CBC News | Privacy watchdog to seek ruling on ‘right to be forgotten’]

CA – Liberals Won’t Put Political Parties Under Privacy Laws

The Liberal government will not accept a recommendation — endorsed by MPs from the three major parties on the Access to Information, Privacy and Ethics Committee [see here & report here; also 56 pg PDF] — to develop a set of privacy rules for political parties or bring them under existing laws. Instead, under the Liberals’ electoral rule changes, parties will simply have to post a privacy policy online. Bill C-76 [here] does not allow for any independent oversight, however, to ensure parties are actually following their policies. Because they’re specifically exempted from federal privacy laws, parties are also not required to report if they’ve been hacked or suffered a data breach involving sensitive information about Canadians. The decision means federal political parties can continue to collect, store and use the personal information of Canadian citizens without limitations, laws or independent oversight. Federal Privacy Commissioner Daniel Therrien — along with his counterparts at the provincial and territorial levels — issued a joint statement calling on all levels of government to put some form of restrictions on parties’ data operations, an increasingly crucial aspect of electioneering in Canadian politics [see PR here & Joint Resolution here]. In exempting political parties from privacy laws, Canada is largely an outlier: the United Kingdom, New Zealand, and much of the European Union subject parties to privacy rules. [Toronto Star; additional coverage at: Toronto Star Editorial | Political parties excused from privacy laws: Why Albertans’ personal information is at risk]

CA – Buyers’ Privacy Top Priority, Says Ontario’s Online Pot Retailer

Ontario’s government-run cannabis retailer is assuring its future customers that their privacy is the top priority, an issue a recent report ranked among the top demands of Canadian marijuana consumers, noting one in five listed privacy and data security as the most important feature [see Deloitte’s 2018 cannabis report, PR]. Critics have raised concerns about how Ontario Cannabis Store (OCS) [here] customers’ data will be used and stored after the online delivery service launches on Oct. 17. There are worries the data may be stored in the United States, where American border agents could access it and ban travellers from entering the U.S. for using a drug that remains illegal there under federal law. The OCS this week announced it’s taking steps to safeguard customers’ privacy and keep their buying history confidential. Ensuring data is stored within Canada and other privacy considerations were key factors in deciding to partner with Shopify, the Ottawa-based e-commerce platform. All information collected will be deleted after it’s held for a minimum time, and no information will be sold to third parties, the company says. While dispensaries across the country are getting ready to open their doors on Oct. 17 — when Canada becomes the second country in the world to legalize recreational marijuana — Ontario residents will be able to legally buy pot only through a government-run delivery service. However, new Ontario Premier Doug Ford has rejected the government monopoly on cannabis sales — a model set up under the previous Liberal government — and storefront pot sales are to begin on April 1. [The London Free Press]

CA – TREB CEO Concerned About Homeowner Privacy, Security

The Toronto Real Estate Board is “pressing ahead” with the Competition Bureau’s demand to make home sales data available on realtors’ password-protected websites, but that doesn’t mean the board’s concerns around privacy are gone. In his first interview since the Supreme Court of Canada refused in August to hear TREB’s seven-year fight [read Competition Bureau PR here & TREB PR here] to keep the numbers under wraps – effectively forcing them to be made public – the board’s chief executive officer John DiMichele told The Canadian Press, “the element of privacy in our opinion hasn’t been settled completely yet.” DiMichele is particularly concerned because he claims to have seen evidence of brokers’ remarks about homeowners being posted online, information that is not included in the home sales data feed TREB had to make available to realtors. DiMichele wouldn’t reveal how he discovered such violations, nor would he discuss in detail what kind of action will be taken against anyone who is caught posting unauthorized information or home sales data without password protections – conditions mandated in a Competition Tribunal ruling [5 pg PDF here] that came into effect recently, after the Competition Bureau argued that TREB’s refusal to release the data was anti-competitive and stifled innovation. In early September, the board sent cease-and-desist letters to real estate companies warning it will revoke data access and TREB memberships or bring legal action against members it believes are violating its user agreement by posting sales numbers online “in an open and unrestricted fashion.” [The Globe & Mail; additional coverage at: The Toronto Star]


WW – Yes Facebook is Using Your 2FA Phone Number to Target You With Ads

Facebook has confirmed it does in fact use phone numbers that users provided for security purposes to also target them with ads. Specifically, the numbers at issue were handed over for two-factor authentication (2FA), a security technique that adds a second layer of authentication to help keep accounts secure. Facebook’s confession follows a story Gizmodo ran on research carried out by academics at two U.S. universities [Northeastern University and Princeton University], who ran a study [see Investigating sources of PII used in Facebook’s targeted advertising – 18 pg PDF here] in which they say they were able to demonstrate that the company uses pieces of personal information that individuals did not explicitly provide to it in order to target them with ads. Some months ago Facebook did say that the spamming of users with Facebook notifications to the number they provided for 2FA was a bug. “The last thing we want is for people to avoid helpful security features because they fear they will receive unrelated notifications,” Facebook then-CSO Alex Stamos wrote in a blog post at the time. Apparently no one thought to mention the rather pertinent side-detail that the company is nonetheless happy to repurpose the same security feature for ad targeting. [TechCrunch; coverage at: DeepLinks Blog (EFF), The Mercury News and Tom’s Hardware]
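For context, 2FA codes do not inherently require a phone number at all: app-based 2FA typically uses TOTP (RFC 6238), which derives a short-lived code from a shared secret and the current time, with no number ever handed to the service. A minimal standard-library sketch of the scheme (illustrative, not a hardened implementation):

```python
import hmac
import hashlib
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password from a shared secret and counter."""
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, for_time: float = None, step: int = 30) -> str:
    """RFC 6238 time-based one-time password: HOTP over a 30-second time counter."""
    t = time.time() if for_time is None else for_time
    return hotp(secret, int(t // step))
```

Because both sides compute the code locally from the shared secret, nothing beyond that secret needs to be exchanged after enrollment, which is precisely why SMS-based 2FA (and the phone number it requires) is the weaker variant.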

Facts & Stats

CA – Federal Workers Cited 3,075 Times for Lapses in Document Security

Office workers at Public Services and Procurement Canada were cited 3,075 times last year for failing to lock up documents, USB keys and other storage devices containing sensitive information, according to a new security report. Six of those employees were found to be chronic offenders during a “security sweep” at the department in 2017-2018, with each of them leaving confidential material unsecured at least six times over the 12-month period. The figures come from a June 2018 briefing note obtained by CBC News under the Access to Information Act. [CBC News]

WW – Cyber Crime’s Toll: $1.1 Million in Losses and 1,861 Victims per Minute

Every minute more than $1.1 million is lost to cyber crime and 1,861 people fall victim to such attacks, according to a new report [Evil Internet Minute 2018] from threat management company RiskIQ [see PR, Blog Post & Infographic]. Despite the best efforts of organizations to guard against external cyber threats, spending up to $171,000 every 60 seconds, attackers continue to proliferate and launch successful campaigns online, the study said. Attacker methods range from malware to phishing to supply chain attacks aimed at third parties. Their motives include monetary gain, large-scale reputational damage, politics and espionage. One of the biggest security threats is ransomware. The report said 1.5 organizations fall victim to ransomware attacks every minute, with an average cost to businesses of $15,221. [Information Management]


CA – N.S. Premier Calls Election Promise to Increase OIPC Powers ‘a Mistake’

In 2013, Stephen McNeil said that if he became premier, he would “expand the powers and mandate of the Office of the Information and Privacy Commissioner, particularly through granting her order-making power.” At the time he was responding to a report by the Centre for Law and Democracy [12 pg PDF] that recommended a complete overhaul of the province’s freedom-of-information regime, writing: “If elected Premier, I will expand the powers and mandate of the Review Officer, particularly through granting her order-making power.” Nearly five years later, and with no follow-through on that commitment, he says the pledge was a “mistake.” He said that he thinks the office is functioning “properly” the way it is and that it has all the power it needs. But experts say that McNeil’s failure to institute meaningful reforms five years after taking office indicates a larger failure to take government transparency seriously. Catherine Tully, the province’s current information and privacy commissioner, has issued her own calls to update the legislation, including giving her order-making power. She has said that legislation written in 1993 is outdated for the current digital world. [Global News]

US – Privacy Group Sues Archives for Kavanaugh Surveillance Records

The Electronic Privacy Information Center [EPIC] has filed a federal Freedom of Information Act lawsuit seeking records related to U.S. Supreme Court nominee Brett Kavanaugh’s involvement in the George W. Bush administration’s government surveillance programs between 2001 and 2006, during enactment of the Patriot Act and while the administration was conducting warrantless surveillance for counter-terrorism purposes [see announcement here & 21 pg PDF claim here]. The group alleged that Kavanaugh said in 2006 Senate testimony on his nomination to the U.S. Court of Appeals for the District of Columbia Circuit that he didn’t know anything about the warrantless wiretapping program, which was carried out in secret until 2005. His White House email communications and records related to the program have not been made available to the public, the group alleged. [Bloomberg BNA]


WW – Please Don’t Give Your Genetic Data to AncestryDNA as Part of Their Spotify Playlist Partnership

Ancestry, the world’s largest for-profit genealogy company, has announced a new partnership with Spotify to create playlists based on your DNA. The partnership combines Spotify’s personalized recommendations with Ancestry’s patented DNA home kit data to give users recommendations based on both their Spotify habits and their ancestral place of origin. A ThinkProgress investigation last year found that buried in their terms of service, Ancestry claims ownership of a “perpetual, royalty-free, worldwide license” that may be used against “you or a genetic relative” as the company and its researchers see fit. Upon agreeing to the company’s terms of service, you and any genetic relatives appearing in the data surrender partial legal rights to the DNA, including any damages that Ancestry may cause unintentionally or purposefully. At the same time, maybe their mission isn’t all that different from Spotify’s, who’ve spent the last few years preaching the Big Data gospel in their aim to deliver the most highly-personalized experience to users through data collection. However you feel about data privacy, the Ancestry partnership feels like another big move for Spotify, who have continued to partner with auto manufacturers, telecom behemoths, video providers and more in recent months. [SPIN coverage at: Jezebel, Quartzy, Complex and Campaign]

Health / Medical

US – Congress Urged To Align 42 CFR Part 2 with HIPAA Privacy Rule

The Partnership to Amend 42 CFR Part 2 is urging Congress to include the Overdose Prevention and Patient Safety Act (HR 6082), which would align 42 CFR Part 2 with the HIPAA Privacy Rule, in the compromise opioid legislation that the House and Senate are considering. HR 6082 would allow the sharing of information about a substance abuse patient without the patient’s consent. The House passed its comprehensive opioid crisis legislation (HR 6) [here & 9 pg PDF overview here] in June, while the Senate just passed its legislation (S 2680). The two chambers are working on compromise legislation that they hope to pass before the mid-term elections. Currently, 42 CFR Part 2 prevents providers from sharing any information on a patient’s substance abuse history unless the patient gives explicit consent. The Partnership to Amend 42 CFR Part 2 wants current law to be amended because, it argues, the stricter confidentiality requirements have a negative effect on medical treatment of individuals undergoing treatment for addiction. The Partnership pressed its case in a Sept. 18 letter to the Senate and House majority and minority leaders. Not everyone in healthcare favors changing 42 CFR Part 2. The American Medical Association (AMA) has come out against the effort, arguing in a letter sent to Congress [coverage here] that amending 42 CFR Part 2 would discourage addicted individuals from seeking treatment out of concern that their addiction treatment information will be shared without their permission. [HealthIT Security]

Horror Stories

CA – Proposed Class Action Lawsuit Launched After Alleged NCIX Data Breach

Kipling Warner, a Vancouver software engineer, has launched a proposed class action lawsuit in the wake of an alleged data breach involving personal information belonging to former customers of bankrupt computer retailer NCIX. [The issue is being investigated by the RCMP and the BC OIPC; see here] The notice of civil claim filed in B.C. Supreme Court [here] says he gave the company his name and address along with his debit and credit card details in the course of purchasing computer products. He’s seeking to certify a lawsuit against NCIX and the company tasked with auctioning off the computer firm’s old equipment. Warner claims NCIX failed to properly encrypt the information of at least 258,000 people, and that the auctioneer failed to take “appropriate steps to protect the private information on its premises.” Warner is suing for losses including damage to credit reputation, mental distress, “wasted time, frustration and anxiety” and time lost “engaging in precautionary communication” with banks, credit agencies and credit card companies. His lawyer, David Klein [here], told CBC that customers dealing with a technology company would expect anyone who comes into contact with their information to take steps to ensure confidentiality. The provincial privacy act says organizations doing business in British Columbia have a duty to protect the personal information entrusted to them. The federal regulation says personal information that is “no longer required to fulfil the identified purposes should be destroyed, erased or made anonymous.” The proposed class action lawsuit says millions of customers could be affected. [CBC News]

US – Uber Agrees to $148M Settlement With States Over Data Breach

Uber will pay $148 million and tighten data security after the ride-hailing company failed for a year to notify drivers that hackers had stolen their personal information, under a settlement announced Wednesday with all 50 states and the District of Columbia over the massive 2016 data breach [here]. [see California AG PR here, Illinois AG PR here, Alaska AG PR here, New York AG PR here & New Mexico AG PR here] Instead of reporting the breach, Uber hid evidence of the theft and paid ransom to ensure the data wouldn’t be misused. “This is one of the most egregious cases we’ve ever seen in terms of notification; a yearlong delay is just inexcusable,” Illinois Attorney General Lisa Madigan [wiki here] told The Associated Press. “And we’re not going to put up with companies, Uber or any other company, completely ignoring our laws that require notification of data breaches.” Uber, whose GPS-tracked drivers pick up riders who summon them from cellphone apps, learned in November 2016 that hackers had accessed personal data, including driver’s license information, for roughly 600,000 Uber drivers in the U.S. The company acknowledged the breach in November 2017, saying it paid $100,000 in ransom for the stolen information to be destroyed. The hack also took the names, email addresses and cellphone numbers of 57 million riders around the world. The settlement requires Uber to comply with state consumer protection laws safeguarding personal information; to immediately notify authorities in case of a breach; to establish methods to protect user data stored on third-party platforms; and to create strong password-protection policies. The company also will hire an outside firm to conduct an assessment of Uber’s data security and implement its recommendations. The settlement payout will be divided among the states based on the number of drivers each has. [The Washington Post; coverage at: TechCrunch, PYMNTS, The Wall Street Journal and engadget]

US – Wendy’s Faces Lawsuit for Unlawfully Collecting Employee Fingerprints

A class action lawsuit has been filed in Illinois against fast food restaurant chain Wendy’s, accusing the company of breaking state law in the way it stores and handles employee fingerprints. The lawsuit was filed on September 11 in a Cook County court [here], according to a copy of the complaint obtained by ZDNet. [The case is: Martinique Owens and Amelia Garcia v. Wendy’s International LLC, et al., Case No. 2018­-ch-­11423, in the Circuit Court of Cook County — complaint here.] The complaint centres on Wendy’s practice of using biometric clocks that scan employees’ fingerprints when they arrive at work, when they leave, and when they use the point-of-sale and cash register systems. The lead plaintiffs, former Wendy’s employees Martinique Owens and Amelia Garcia, claim that Wendy’s breaks state law — the Illinois Biometric Information Privacy Act (BIPA) [here] — because the company does not make employees aware of how it handles their data. Wendy’s does not inform employees in writing of the specific purpose and length of time for which their fingerprints are collected, stored, and used, as required by BIPA, nor does it obtain a written release from employees with explicit consent to obtain and handle the fingerprints in the first place. Nor does it provide a publicly available retention schedule and guidelines for permanently destroying employees’ fingerprints after they leave the company, the plaintiffs said. The class action also names Discovery NCR Corporation [here], the software provider that supplies Wendy’s with the biometric clocks and POS and cash register access systems used in its restaurants. The plaintiffs said they believe NCR may hold fingerprint information on other Wendy’s employees. [ZDNet; coverage at: Top Class Actions, The Daily Dot, Human Capital (HRD), Gizmodo and Biometric Update]

WW – Facebook Forces Mass Logout After Breach

Facebook logged 90 million users out of their accounts after the company discovered that hackers had been exploiting a flaw in Facebook code that allowed them to steal Facebook access tokens and take over other people’s accounts. The stolen tokens could also be used to access apps and websites linked to the Facebook accounts. The hackers exploited a trio of flaws affecting the “View As” feature, which lets users see how their profiles appear to other people. Facebook has fixed the security issue; it has also reset the access tokens for 90 million accounts. Facebook became aware of the issue on September 16, when it noticed an unusual spike in people accessing Facebook. [newsroom.fb.com: Security Update | Wired: The Facebook Security Meltdown Exposes Way More Sites Than Facebook | Wired: Everything We Know About Facebook’s Massive Security Breach | eWeek: Facebook Data Breach Extended to Third-Party Applications | ZDNet: Facebook discloses network breach affecting 50 million user accounts | KrebsOnSecurity: Facebook Security Bug Affects 90M Users | The Register: Facebook: Up to 90 million addicts’ accounts slurped by hackers, no thanks to crappy code]
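The remediation step here (resetting access tokens) works because token-based sessions are validated server-side on every request, so deleting the stored tokens forces a fresh login everywhere they were used. A toy sketch of the mechanism (hypothetical class and method names; real systems add expiry, scoping, and signed or database-backed tokens):

```python
import secrets

class TokenStore:
    """Toy server-side store mapping access tokens to user IDs (illustration only)."""

    def __init__(self):
        self._tokens = {}  # token -> user_id

    def issue(self, user_id):
        # Like logging in: mint an unguessable token the client presents later.
        token = secrets.token_urlsafe(32)
        self._tokens[token] = user_id
        return token

    def authenticate(self, token):
        # Every request is checked against the store; None means not logged in.
        return self._tokens.get(token)

    def revoke_all(self, user_id):
        """The 'reset': drop every token for a user, logging them out everywhere."""
        stale = [t for t, u in self._tokens.items() if u == user_id]
        for t in stale:
            del self._tokens[t]
        return len(stale)
```

The same property explains the breach's reach: whoever holds a valid token in `_tokens` is indistinguishable from the account owner until the token is revoked.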

WW – Facebook Says Big Breach Exposed 50 Million Accounts to Full Takeover

Facebook Inc said [notice & details here] that hackers stole digital login codes allowing them to take over nearly 50 million user accounts in its worst security breach ever given the unprecedented level of potential access, adding to what has been a difficult year for the company’s reputation. It has yet to determine whether the attacker misused any accounts or stole private information. It also has not identified the attacker’s location or whether specific victims were targeted. Its initial review suggests the attack was broad in nature. Chief Executive Mark Zuckerberg described the incident as “really serious” in a conference call with reporters [see transcript]. His account was affected along with that of Chief Operating Officer Sheryl Sandberg, a spokeswoman said. The vulnerability had existed since July 2017, but the company first identified it on Tuesday after spotting a “fairly large” increase in use of its “view as” [here] privacy feature on Sept. 16, executives said. “View as” allows users to verify their privacy settings by seeing what their own profile looks like to someone else. The flaw inadvertently gave the devices of “view as” users the wrong digital code, which, like a browser cookie, keeps users signed in to a service across multiple visits. That code could allow the person using “view as” to post and browse from someone else’s Facebook account, potentially exposing private messages, photos and posts. The attacker also could have gained full access to victims’ accounts on any third-party app or website where they had logged in with Facebook credentials. Facebook fixed the issue. It also notified the U.S. Federal Bureau of Investigation, Department of Homeland Security, Congressional aides and the Data Protection Commission in Ireland, where the company has European headquarters. 
Facebook reset the digital keys of the 50 million affected accounts, and as a precaution temporarily disabled “view as” and reset those keys for another 40 million that have been looked up through “view as” over the last year. About 90 million people will have to log back into Facebook or any of their apps that use a Facebook login, the company said. [Reuters See also: Facebook Security Bug Affects 90M Users | Facebook’s spam filter blocked the most popular articles about its 50m user breach | Here’s what to do if you were affected by the Facebook hack | Facebook Says Three Different Bugs Are Responsible For The Massive Account Hacks | Facebook warns that recent hack could have exposed other apps, including Instagram, Tinder, and Spotify | Facebook Faces Class Action Over Security Breach That Affected 50 Million Users | Facebook Could Face Up to $1.63 Billion Fine for Latest Hack Under the GDPR | Facebook could be fined up to $1.63 billion for a massive breach which may have violated EU privacy laws | Until data is misused, Facebook’s breach will be forgotten]

Internet / WWW

EU – Report Warns of Smart Home Tech Impact on Children’s Privacy

Dr. Veronica Barassi of Goldsmiths, University of London, leads the Child Data Citizen research project, and submitted a report on “Home Life Data and Children’s Privacy” to the Information Commissioner’s Office (ICO), arguing that data collected from children by home automation devices is both personal data and “home life data,” which is made up of family, household, biometric and highly contextual data. She calls for the ICO to launch a review of the impact of home life data on children’s privacy, and to include the concept in future considerations. [Biometric Update coverage at: TechCrunch]

Law Enforcement

CA – RCMP’s Ability to Police Digital Realm ‘Rapidly Declining’

Privacy watchdogs have warned against any new encryption legislation. A note tucked into the briefing binder prepared for RCMP Commissioner Brenda Lucki when she took over the top job earlier this year, obtained by CBC News, may launch a renewed battle between the national police service and privacy advocates. “Increasingly, criminality is conducted on the internet and investigations are international in nature, yet investigative tools and RCMP capacity have not kept pace. Growing expectations of policing responsibilities and accountability, as well as complexities of the criminal justice system, continue to overwhelm the administrative demands within policing” [says the memo]. Encryption of online data has been a persistent thorn in the RCMP’s side. “Approximately 70% of all communications intercepted by CSIS and the RCMP are now encrypted. 80 organized crime groups were identified as using encryption in 2016 alone,” according to the 274-page [briefing binder]. Lucki’s predecessor lobbied the government for new powers to bypass digital roadblocks, including tools to get around encryption and warrantless access to internet subscriber information. Some critics have noted that non-criminals — journalists, protesters and academics, among others — also use encryption tools online and have warned any new encryption legislation could undermine the security of financial transactions and daily online communication. Ann Cavoukian …called the RCMP’s push for more online policing power “appalling.” … “I guess we should remind them that we still live in a free and democratic society where people have privacy rights, which means that they should be in control of their personal information … If you’re a law-abiding citizen, you get to decide how your information is used and to whom it’s disclosed. The police have no right to access your personal information online, unless of course they have a warrant” she said. [CBC News]

Online Privacy

US – Facebook Scolds Police for Using Fake Accounts to Snoop on Citizens

In a September 19 letter, addressed to Memphis Police Department Director Michael Rallings, Facebook’s Andrea Kirkpatrick, director and associate general counsel for security, scolded the police for creating multiple fake Facebook accounts and impersonating legitimate Facebook users as part of its investigations into “alleged criminal conduct unrelated to Facebook.” Facebook’s letter was sent following a civil rights lawsuit filed by the American Civil Liberties Union (ACLU) of Tennessee that accused the MPD of illegally monitoring activists to stifle their free speech and protests. The lawsuit claimed that Memphis police violated a 1978 consent decree that prohibits infiltration of citizen groups to gather intelligence about their activities. After two years of litigation, the city of Memphis had entered into a consent decree prohibiting the government from “gathering, indexing, filing, maintenance, storage or dissemination of information, or any other investigative activity, relating to any person’s beliefs, opinions, associations or other exercise of First Amendment rights.” Before the trial even began over the ACLU’s lawsuit last month, US District Judge Jon McCalla issued a 35-page order agreeing with the plaintiffs, but he also ruled that police can use social media to look for specific threats: a ruling that, one imagines, would condone the use of fake profiles during undercover police work… but not the illegal surveillance of legal, Constitutionally protected activism. The ACLU lawsuit uncovered evidence that Memphis police used a fake “Bob Smith” account to befriend and gather intelligence on Black Lives Matter activists. According to the Electronic Frontier Foundation (EFF), Facebook deactivated “Bob Smith” after the organization gave it a heads-up. Then, Facebook went on to identify and deactivate six other fake accounts managed by Memphis police. [Naked Security (Sophos)]

WW – Google Promises Chrome Changes After Privacy Complaints

Google, on the defensive from concerns raised about how Chrome tracks its users, has promised changes to its web browser. Complaints in recent days involve how Google stores data about browsing activity in files called cookies and how it syncs personal data across different devices. Google representatives said there’s nothing to be worried about but that they’ll be changing Chrome nevertheless. Google said in a recent blog post [by Zach Koch, Chrome Product Manager – here] that it will add new options and explanations for its interface and reverse one Chrome cookie-hoarding policy that undermined people’s attempts to clear those cookies. [CNET News Coverage of complaints at: Bloomberg (video), CNBC, WIRED, TechCrunch, Forbes and Popular Mechanics]

WW – Privacy and Anonymity in the Modern World — CyberSpeak Podcast

On this episode of the CyberSpeak with InfoSec Institute podcast [YouTube here], Lance Cottrell, chief scientist at Ntrepid, talks about the evolution of privacy and anonymity on the Internet, the impact of new regulations and laws, and a variety of other privacy-related topics. In the podcast, Cottrell and host Chris Sienko discuss:

  • What about the early Internet drove you to focus on online anonymity and security? (1:45)
  • Do the early privacy tools and concepts hold up in today’s environment? (3:50)
  • When did it become apparent that fraudsters and phishers were taking over the Internet? (5:00)
  • What are some of the most effective social engineering attacks being used? (8:10)
  • Have you ever been scammed or phished? (11:35)
  • Why is online anonymity important? (14:50)
  • What are some examples of privacy and security issues while traveling? (20:50)
  • How will GDPR and California’s new privacy law affect anonymity and privacy? (23:25)
  • What would be your dream privacy regulation or law? (24:55)
  • What are your thoughts on privacy certifications? (28:50)
  • What’s the future of online privacy and anonymity? (29:40)

[Security Boulevard]

Privacy (US)

US – In Senate Hearing, Tech Giants Push Lawmakers for Federal Privacy Rules

A recent hearing at the Senate Commerce Committee [here] with Apple, Amazon, Google and Twitter, alongside AT&T and Charter, marked the latest in a string of hearings in the past few months. This time, privacy was at the top of the agenda. The problem, lawmakers say, is that consumers have little of it. Lawmakers at the hearing said that the U.S. was lagging behind Europe’s new GDPR privacy rules and California’s recently passed privacy law, which goes into effect in 2020, and they were edging toward introducing their own federal privacy law. Here are the key takeaways: 1) Tech giants want new federal legislation, if only to upend California’s privacy law; 2) Google made “mistakes” on privacy, but evades China search questioning; and 3) Startups might struggle under GDPR-ported rules, companies claim …Committee chairman, Sen. John Thune (R-SD) said [here] that the committee won’t “rush through” legislation, and will ask privacy advocates for their input in a coming hearing. [Watch the full hearing here and read witness statements: Len Cali of ATT – 6 pg PDF here; Andrew DeVore of Amazon – 5 pg PDF here; Keith Enright of Google – 6 pg PDF here & 3pg PDF here; Damien Kieran of Twitter 5 pg PDF here; Guy (Bud) Tribble of Apple 2 pg PDF here; and Rachel Welch of Charter Communications 5 pg PDF here TechCrunch Coverage: During Senate Hearing, Tech Companies Push for Lax Federal Privacy Rules | Tech Execs Offer Senate Help Writing a Toothless National Privacy Law | US privacy law is on the horizon. Here’s how tech companies want to shape it | Here’s why tech companies are in favor of *federal* regulation | Google confirms Dragonfly project in Senate hearing, dodges questions on China plans | Google confirms secret Dragonfly project, but won’t say what it is]

US – EFF Opposes Industry Efforts to Have Congress Roll Back State Privacy Protections

The Senate Commerce Committee is holding a hearing on consumer privacy [here & PR here], but consumer privacy groups like EFF were not invited. Instead, only voices from big tech and Internet access corporations will have a seat at the table. In the lead-up to this hearing, two industry groups (the Chamber of Commerce and the Internet Association) have suggested that Congress wipe the slate clean of state privacy laws in exchange for weaker federal protections. EFF opposes such preemption, and has submitted a letter to the Senate Commerce Committee to detail the dangers it poses to user privacy. Current state laws across the country have already created strong protections for user privacy. Our letter identifies three particularly strong examples from California’s Consumer Privacy Act, Illinois’ Biometric Information Privacy Act, and Vermont’s Data Broker Act. If Congress enacts weaker federal data privacy legislation that preempts such stronger state laws, the result will be a massive step backward for user privacy. … The companies represented at Wednesday’s hearing rely on the ability to monetize information about everything we do, online and elsewhere. They are not likely to ask for laws that restrain their business plans. [DeepLinks Blog (Electronic Frontier Foundation)]

US – NTIA Seeks Comment on New Approach to Consumer Data Privacy

The U.S. Department of Commerce’s National Telecommunications and Information Administration (NTIA) issued a Request for Comments on a proposed approach to consumer data privacy designed to provide high levels of protection for individuals, while giving organizations legal clarity and the flexibility to innovate [see PDF]. The Request for Comments is part of a transparent process to modernize U.S. data privacy policy for the 21st century. In parallel efforts, the Commerce Department’s National Institute of Standards and Technology is developing a voluntary privacy framework [here & here] to help organizations manage risk; and the International Trade Administration is working to increase global regulatory harmony. The proposed approach focuses on the desired outcomes of organizational practices, rather than dictating what those practices should be. With the goal of building better privacy protections, NTIA is seeking comment on the following outcomes: 1) Organizations should be transparent about how they collect, use, share, and store users’ personal information; 2) Users should be able to exercise control over the personal information they provide to organizations; 3) The collection, use, storage and sharing of personal data should be reasonably minimized in a manner proportional to the scope of privacy risks; 4) Organizations should employ security safeguards to protect the data that they collect, store, use, or share; 5) Users should be able to reasonably access and correct personal data they have provided; 6) Organizations should take steps to manage the risk of disclosure or harmful uses of personal data; and 7) Organizations should be accountable for the use of personal data that has been collected, maintained or used by their systems. Comments are due by October 26, 2018. [Newsroom (National Telecommunications and Information Administration) coverage at: Multichannel News, Reuters, CBS News and engadget]

US – NTIA Seeks Comment on New, Outcome-Based Privacy Approach

The U.S. Department of Commerce’s National Telecommunications and Information Administration (NTIA) [here] issued a Request for Comments [4 pg PDF Federal Register post — also here & PR here] on a new consumer privacy approach that is designed to focus on outcomes instead of prescriptive mandates. The RFC presents an important opportunity for organizations to provide legal and policy input to the administration, and comments are due October 26. The RFC proposes seven desired outcomes that should underpin privacy protections: 1) Transparency, 2) control, 3) reasonable minimization (of data collection, storage length, use, and sharing), 4) security, 5) access and correction, 6) risk management, and 7) accountability. According to the RFC, the outcome-based approach will provide greater flexibility, consumer protection, and legal clarity. Additionally, the RFC describes eight overarching goals for federal action on privacy: 1) Regulatory harmonization; 2) Legal clarity while maintaining the flexibility to innovate; 3) Comprehensive application; 4) Risk and outcome-based approach; 5) Interoperability; 6) Incentivize privacy research; 7) FTC enforcement; and 8) Scalability. The NTIA is seeking comments on the listed outcomes and goals, as well as other issues such as whether the FTC needs additional resources to achieve the goals. [Chronicle of Data Protection (Hogan Lovells) coverage at: Multichannel News, Reuters, CBS News and engadget]

US – SEC Brings First Enforcement Action for Violation of ID Theft Rule

On September 26, 2018, the SEC brought its first ever enforcement action [PR] for violations of Regulation S-ID (the “Identity Theft Red Flags Rule”), 17 C.F.R. § 248.201 [here & here also guidance here], in addition to violations of Regulation S-P, 17 C.F.R. § 248.30(a) (the “Safeguards Rule”) [see here & here]. Regulation S-ID and Regulation S-P apply to SEC-registered broker-dealers, investment companies, and investment advisers, and require those entities to maintain written policies and procedures to detect, prevent and mitigate identity theft, and to safeguard customer records and information, respectively. The SEC’s action against Voya Financial Advisors (“Voya”) cements the SEC’s focus on investment adviser and broker-dealer cybersecurity compliance, both in terms of its examination program—which referred the matter to Enforcement—as well as the Division of Enforcement’s Cyber Unit, which investigated and resolved the matter with Voya. The SEC’s enforcement action against Voya arose out of an April 2016 “vishing” intrusion (voice phishing) that allowed one or more persons impersonating Voya representatives to gain access to personal identifying information of approximately 5,600 of Voya’s customers. The SEC’s action against Voya was resolved through a settled administrative order, in which Voya neither admitted nor denied the SEC’s findings, but agreed to engage and follow the recommendations of an independent compliance consultant for two years, certify its compliance with the consultant’s recommendations, and pay a $1 million fine. Voya was also enjoined from future violations of Regulation S-P or Regulation S-ID and was censured by the SEC. The SEC noted that, in reaching the settlement, it considered the remedial actions that Voya promptly undertook following the attack. [Privacy & Data Security (Alston & Bird) and at: Reuters, Infosecurity Magazine, Business Record, InvestmentNews and Law 360]

US – Google Releases Framework to Guide Data Privacy Legislation

Google released a set of privacy principles [3 pg PDF & blog post here] to guide Congress as it prepares to write legislation aimed at governing how websites collect and monetize user data. The framework largely consists of privacy principles that Google already abides by or could easily bring itself into compliance with. It calls for allowing users to easily access and control the data that’s collected about them and requiring companies to be transparent about their data practices. The set of proposals is designed to be a baseline for federal rules regarding data collection. Google appears to be the first internet giant to release such a framework, but numerous trade associations have published their own in recent weeks. The industry has gotten on board with the idea of a national privacy law in the weeks since California passed its own strict regulations aimed at cracking down on data collection and increasing user control. Internet companies have universally opposed the measure and have begun pushing Congress to establish a national law that would block states from implementing their own. [The Hill Coverage at: AdWeek | Charter: Parity Is Key to Online Privacy Protection | In Reversal, IAB Says Congress Should Consider Privacy Legislation]


US – Revealed: DoJ Secret Rules for Targeting Journalists With FISA Court Orders

Revealed for the first time are the Justice Department’s rules for targeting journalists with secret FISA court orders. The documents [PDF] were obtained as part of a Freedom of Information Act lawsuit brought by Freedom of the Press Foundation and Knight First Amendment Institute at Columbia University. While civil liberties advocates have long suspected secret FISA court orders may be used (and abused) to conduct surveillance on journalists, the government—to our knowledge—has never acknowledged they have ever even contemplated doing so before the release of these documents today. [These DOJ] FISA court rules are entirely separate from—and much less stringent than—the rules for obtaining subpoenas, court orders, and warrants against journalists as laid out in the Justice Department’s “media guidelines,” which were strengthened in 2015 after scandals involving surveillance of journalists during the Obama era. The DOJ need only follow its regular FISA court procedures (which can be less strict than getting a warrant in a criminal case) and get additional approval from the Attorney General or Assistant Attorney General. FISA court orders are also inherently secret, and targets are almost never informed that they exist. The documents raise several concerning questions: 1) How many times have FISA court orders been used to target journalists?; 2) Why did the Justice Department keep these rules secret — even their very existence — when the Justice Department updated its “media guidelines” in 2015 with great fanfare? and 3) If these rules can now be released to the public, why are the FBI’s very similar rules for targeting journalists with due process-free National Security Letters still considered classified? And is the Justice Department targeting journalists with NSLs and FISA court orders to get around the stricter “media guidelines”? [Freedom of the Press Foundation coverage at: The Intercept]

CA – Cameras on School Buses Are an Option, Says N.L. Privacy Commissioner

The privacy commissioner of Newfoundland and Labrador says the English School District has the right to put cameras on school buses. The issue came up last week when CBC News reported on allegations of sexual assault on a school bus in Western Newfoundland … [where] a teenaged boy has been charged and faces three counts in relation to incidents involving two alleged victims. The family of one of the alleged victims — an eight-year-old girl — is calling on the school district to install cameras on school buses. … “The school district has the ability to put cameras on school buses. They have lots of cameras in many schools across the province,” information and privacy commissioner Donovan Molloy told CBC’s Corner Brook Morning Show [listen here]. School board CEO Tony Stack has said cameras would only be considered as a “last resort” due to privacy reasons. But Privacy Commissioner Molloy says there’s nothing in the law that says cameras are not allowed. He did say, however, that other measures should be attempted first, such as assigned seating to separate younger and older students, and the use of student monitors, which is permitted under the law. Molloy emphasized that the Office of the Information and Privacy Commissioner has not forbidden the use of cameras on school buses. At the same time he cautioned that he is not advocating for such a change, because constant surveillance may do more harm than good, taking away children’s sense of independence. [CBC News see also: Teenage boy charged with sexual assaults after incidents on school bus | Renewed Calls for Cameras After Alleged School Bus Sexual Assault | North Shore parent starts petition over safety concerns for children riding school buses | School Bus Cameras Not a Cure-All, says Privacy Commissioner]

CA – Maps Show All Secret Surveillance Cameras Spying on Canadians

Canadian police agencies have taken part in the increasingly intense law enforcement protocols that have become common across North America and Europe. The most controversial of these efforts, of course, is public surveillance. While Canada’s public surveillance system is less famous than those in the United States and United Kingdom, it does exist. Road cameras are the most well-known and there are potentially thousands of them across the country, all of which are regularly if not constantly monitored. The cameras are designed to catch traffic violations, but they can also be used as a method of public surveillance more broadly, according to Wired. The cameras, of course, also capture activity on sidewalks and public open spaces. According to the Office of the Privacy Commissioner of Canada, Canadian law enforcement agencies “increasingly view it as a legitimate tool to combat crime and ward off criminal activity—including terrorism … however, they present a challenge to privacy, to freedom of movement and freedom of association.” [see here] While the locations of the cameras are (now) public information, most Canadians are unaware that authorities have placed them so extensively in every Canadian city. To give you a sense of the scope of road surveillance in Canada, we’ve compiled these maps, which depict the exact locations of road cameras in every major Canadian city, including Vancouver, Calgary, Edmonton, Winnipeg, Toronto, Ottawa and Montreal. [MTL Blog coverage at: CBC News]

US Legislation

US – California Approves Bills Tightening Security, Privacy of IoT Devices

Gov. Jerry Brown’s office announced on September 28 that Brown had signed two bills, Assembly Bill 1906 and Senate Bill 327, that could make manufacturers of Internet-connected devices more responsible for ensuring the privacy and security of California residents. Both pieces of legislation specified they must be signed by the governor and can only become law if the other bill is also signed. Both bills will become law in about 15 months, on Jan. 1, 2020. Senate Bill 327 is the older of the two and was introduced in Feb. 2017 by state Sen. Hannah-Beth Jackson [wiki here], but as currently amended, the senator told Government Technology, is “pretty much a mirror” of AB 1906, introduced in January by Assemblywoman Jacqui Irwin [wiki here] … Both require manufacturers of connected devices to equip them with a “reasonable security feature or features” that are appropriate to their nature and function, and the information they may collect, contain or transmit — and are designed to protect the device and its information from “unauthorized access, destruction, use, modification or disclosure.” The bills also specify that if such a device has a “means for authentication outside a local area network,” that will be considered a reasonable security feature if either the preprogrammed password is unique to each device made; or the device requires a user to create a new “means of authentication” before initial access is granted. The question of what defines a “reasonable security feature or features” is one of several that industry groups cited in their opposition to AB 1906. 
In a statement provided to GT, the CMTA [California Manufacturers and Technology Association] said the bills are an attempt to “create a cybersecurity framework by imposing undefined rules on California manufacturers,” but instead create a loophole allowing imported devices to “avoid implementing any security features.” This, it said, makes the state less attractive to manufacturers, less competitive and increases the risk of cyberattacks. The Entertainment Software Association, in opposition to SB 327, said existing law already requires manufacturers to set up “reasonable privacy protections appropriate to the nature of the information they collect.” [Government Technology See also: California governor signs country’s first IoT security law | Hey, Alexa, California’s New IoT Law Requires Data Protections]
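The bills' password safe harbor for devices reachable from outside a local network can be sketched as a simple two-prong test. This is a hypothetical illustration of the statutory logic, not legal advice or statutory text; the names and data structure are invented for clarity.

```python
# Hypothetical sketch of the two-prong "reasonable security feature"
# safe harbor in AB 1906 / SB 327 for externally authenticable devices.
from dataclasses import dataclass

@dataclass
class Device:
    preprogrammed_password: str        # e.g. "admin" vs. a per-unit string
    password_unique_per_unit: bool     # prong 1: unique to each device made
    forces_new_credential_on_setup: bool  # prong 2: user must set new creds

def satisfies_safe_harbor(d: Device) -> bool:
    # Either prong alone is sufficient under the bills' language.
    return d.password_unique_per_unit or d.forces_new_credential_on_setup

shared_default = Device("admin", False, False)      # classic IoT default
per_unit_pw = Device("X7f9-22ab", True, False)      # unique printed password
forced_setup = Device("admin", False, True)         # must change at first use

assert not satisfies_safe_harbor(shared_default)
assert satisfies_safe_harbor(per_unit_pw)
assert satisfies_safe_harbor(forced_setup)
```

The shared-default case is precisely what the legislation targets: a fleet of devices shipping with one well-known password satisfies neither prong.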

US – Amendments to the California Consumer Privacy Act of 2018

Amendments to California’s expansive Consumer Privacy Act of 2018 [AB – 375 here] include new provisions that may significantly impact the timing of enforcement and provide exemptions for large amounts of personal data regulated by other laws. Because the Act was hastily passed [in June, 2018] … it was expected that the Act would undergo significant amendments before it enters into effect on January 1, 2020. The first amendments were passed by the California State Legislature on August 31, 2018, in the form of SB-1121, and Governor Brown [signed it into law September 23, 2018 – see here]. While SB-1121 is labeled as a “technical corrections” bill designed to address drafting errors, ambiguities, and inconsistencies in the Act, in fact, it creates new provisions in addition to those already contained within the Act. One notable provision of the Bill is that it grants a six-month grace period from the date the California AG issues regulations or July 1, 2020, whichever is earlier, before enforcement actions can be brought. Another key effect of the Bill is that it fully exempts data that is regulated by the Gramm-Leach-Bliley Act, the California Financial Information Privacy Act, HIPAA, the California Confidentiality of Medical Information Act, the clinical trials Common Rule, and the Driver’s Privacy Protection Act from the privacy requirements of the Act. However, these industries are still subject to the privacy provisions of the Act if they engage in activities falling outside of their applicable privacy regulations (except for the health care industry, if it treats all data as PHI, then it remains exempt as to all data). As we previously predicted, the Act will continue to evolve prior to its January 1, 2020 effective date. While the current Bill attempts to clarify the Act, it does not address all of the ambiguities and uncertainties. We anticipate further changes and guidance regarding the Act and will continue to monitor the latest developments. 
[Security & Privacy Bytes (Squire Patton Boggs) Additional coverage at: Privacy and Cybersecurity Perspectives (Murtha Cullina), Workplace Privacy Report (Jackson Lewis), Privacy & Data Security (Alston & Bird) and Data Privacy Monitor (BakerHostetler)]

US – California Consumer Privacy Act: What to Expect

This is the fourth installment in Hogan Lovells’ series [here] on the California Consumer Privacy Act [see installment 1 here, installment 2 here and installment 3 here]. It discusses litigation exposure that businesses collecting personal information about California consumers should consider in the wake of the California Legislature’s passage of the California Consumer Privacy Act of 2018 (CCPA). [AB – 375 here] For several years, the plaintiffs’ bar increasingly has relied on statutes like the Confidentiality of Medical Information Act, Cal. Civ. Code § 56 et seq. [here], and the Customer Records Act, Cal. Civ. Code § 1798.81, et seq. [here], to support individual and classwide actions for purported data security and privacy violations. The CCPA creates a limited private right of action for suits arising out of data breaches. At the same time, it also precludes individuals from using it as a basis for a private right of action under any other statute. Both features of the law have potentially far-reaching implications and will garner the attention of an already relentless plaintiffs’ bar when it goes into effect January 1, 2020. [This post covers] what you need to know [under two headings]: 1) The CCPA Provides a Limited Private Right of Action for Data Breach Suits; and 2) Plaintiffs Likely Will Argue the CCPA Provides a Basis for Unfair Competition Law Claims. [Chronicle of Data Protection (Hogan Lovells)]

Workplace Privacy

WW – Many Employee Work Habits Seem Innocent but Invite Security Threats

While most employees are generally risk averse, many engage in behaviors that could lead to security incidents, according to a new report from Spanning Cloud Apps LLC [here], a provider of cloud-based data protection. [see Trends in U.S. Worker Cyber Risk-Aversion and Threat Preparedness here] The company surveyed more than 400 full-time U.S. employees, and found that more than half (55%) admitted to clicking links they didn’t recognize, while 45% said they would allow a colleague to use their work computer and 34% were unable to identify an unsecure ecommerce site. The results paint a picture of a workforce that has a general understanding of security risks, but is underprepared for the increasing sophistication and incidence of ransomware and phishing attacks, the report said. Employees would rather be “nice” than safe, the study said. Of workers with administrative access, only 35% responded that they would refuse to allow a colleague to access their device. And they like to shop from work, with more than 52% saying they shop online from their work computer. Workers are underprepared for sophisticated phishing emails. When presented with a visual example, only 36% correctly identified a suspicious link as being the key indicator of a phishing email, the study said. [Information Management coverage at: BetaNews]




01-15 September 2018


CA – Case on Privacy Rights Headed To SCC

An Ontario case — Tom Le v. Her Majesty The Queen — headed to the Supreme Court of Canada [see docket] will focus on whether guests in a backyard have a reasonable expectation of privacy in police searches. The Supreme Court decision could have wide-ranging effects for people who currently do not have standing to challenge a search or detention when they are an invited guest on a property. According to Emily Lam, a partner at Kastner Law and one of the lawyers representing Le: “Richer people can essentially purchase their privacy, in terms of building taller fences, walls, gates. People in poverty don’t have that same ability. If privacy rights are linked to ownership and control, that means that people living communally or in social housing might not get the same privacy protection as more affluent people.” The case, which will be heard by the SCC on Oct. 12, revolves around Le, who in 2012 was visiting friends in a fenced backyard at the Atkinson Housing Co-operative, a subsidized housing complex in Toronto. The factum said that police did a “walk-through” of the common area around the edge of the backyard, looking for two people who were neither Le nor his friends. The police “started questioning the young men in the backyard, asking who they were, if they lived there, and what was going on.” When questioned, Le ran and two officers tackled him to the ground nearby. The police found a gun, cash and 13 grams of crack cocaine on Le’s person and in his bag, the factum said (Le was subsequently convicted and the conviction was upheld on appeal). At the Court of Appeal, Justice Peter Lauwers [here] dissented, writing: “The police entry was an unlawful trespass and this tainted everything that followed. I doubt that the police would have brazenly entered a private backyard and demanded to know what its occupants were up to in a more affluent and less racialized community” [also see earlier news coverage here].  
Samara Secter [here], an associate at Addario Law Group and one of the lawyers representing Le, says the case “is really about the competing interests of community policing versus privacy rights.” [Law Times News, CBC News]

CA – Ontario Sued for $32 Million-Plus for Alleged Wrongful Retention of Exonerated Suspects’ DNA Test Results

On Sept. 14, Kirk Baert and Jody Brown, of Koskie Minsky LLP in Toronto, moved in Ontario Superior Court to certify a class proceeding for $32 million-plus, alleging that the government is wrongfully retaining — rather than destroying — DNA test results at the Ontario Centre of Forensic Sciences that were obtained from potential suspects who were exonerated by the DNA samples they volunteered to give police during criminal investigations over the past 18 years. They allege the government committed torts and statutory and Charter privacy violations, and seek to appoint Micky Granger as the representative plaintiff. Granger’s statement of claim describes him as a “migrant worker” who voluntarily provided a bodily sample to the Ontario Provincial Police (OPP) during their investigation into an unspecified violent crime that occurred in Bayham, Ont., in 2013. He seeks a declaration that the Ontario Ministry of Community Safety and Correctional Services, which operates and oversees the Ontario Centre of Forensic Sciences (OCFS), has unlawfully stored and retained the class members’ DNA results, including DNA profiles, contrary to s. 487.09(3) [see here] of the Criminal Code [here], which requires that “bodily substances that are provided voluntarily by a person and the results of forensic DNA analysis shall be destroyed or, in the case of results in electronic form, access to those results shall be permanently removed, without delay after the results of that analysis establish that the bodily substance referred to in paragraph 487.05(1)(b) was not from that person.” The statement of claim contends that certain OCFS officials still retain access to such results stored in a database. The claim for damages includes $2 million in punitive damages. [The Lawyer’s Daily]

CA – Understanding Ontario’s Civil Privacy Rights: Reasonable Doubt

On August 23, 2018, the Supreme Court of Canada announced that it would not hear an appeal from the Toronto Real Estate Board (TREB) [PR here] in TREB’s long-running legal battle against the federal Commissioner of Competition [PR here]. As a result of this decision, TREB can no longer prevent the dissemination of listing and sold price data for properties in Toronto. One of TREB’s primary (and ultimately unsuccessful) arguments against the release of this data related to what TREB characterized as the privacy interests of the individual purchasers and sellers of property. Specifically, TREB argued that the Personal Information Protection and Electronic Documents Act (PIPEDA) [here], which prohibits companies from distributing the personal information of their customers without their customers’ consent, applied to prevent the distribution of this data. In dismissing this argument, the Federal Court of Appeal, which made the decision that TREB was seeking to appeal further to the Supreme Court, noted that purchasers and sellers had consented to the distribution of this information when they signed their respective agreements with TREB agents and brokerages. The court also noted that the way in which TREB had raised this issue made it appear to be an after-the-fact justification for anticompetitive behaviour, rather than a legitimate concern on the part of TREB. [Now Toronto, The Globe and Mail, CBC News, Global News and Toronto Star]

CA – OMA Turns to Supreme Court to Stop Release of Names of Highest Paid MDs

The Ontario Medical Association [here] has announced plans to ask the Supreme Court of Canada to hear an appeal of a lower court decision [here] to make the names of top-billing doctors public. The association represents the province’s 28,100 practising doctors …in a written statement [here] OMA president Dr. Nadia Alam said: “Physician billings constitute private, personal information. Privacy is an important and fundamental right in Canada that is protected by legislation and the Charter of Rights and Freedoms.” If such information is to be made public, it should be up to the provincial Legislature to do so, she said. Reporting billings without context would provide an incomplete and sometimes misleading picture of physician pay structure. In 2016, the Information and Privacy Commissioner of Ontario ruled [see blog post here & order PO-3617 here] in the Toronto Star’s favour, ordering the release of the names of doctors paid the most from the publicly funded Ontario Health Insurance Plan. The OMA twice appealed — first to the Ontario Divisional Court, then to the province’s Court of Appeal [read decision here] — losing both times. …The case originated in 2014 when the Star submitted a freedom-of-information request to Ontario’s health ministry for information on top billers. [The Toronto Star]

CA – Preparing for Compliance with New Privacy Consent Guidelines

Commencing January 1, 2019, the Privacy Commissioner of Canada will begin enforcing [the May 2018] “Guidelines for obtaining meaningful consent” [see here], which impose new requirements for private sector organizations to obtain legally valid privacy consents. The Guidelines criticize “the use of lengthy, legalistic privacy policies” that too often make individual control enabled by consent “nothing more than illusory”, and explain that the requirements and best practices summarized in the Guidelines are intended to “breathe life” into the ways that consent is obtained… Compliance with the Guidelines will likely require many organizations to revise their privacy policies/notices and adjust some of their personal information practices and procedures. The Guidelines identify seven principles for private sector organizations to follow to obtain meaningful consent including: 1) Emphasize key elements;  2) Allow individuals to control the level and timing of detail;  3) Provide individuals with clear options to say “yes” or “no”;  4) Be innovative and creative;  5) Consider the consumer’s perspective;  6) Make consent a dynamic and ongoing process; and  7) Be accountable.  
The Guidelines also provide guidance regarding issues related to consent, including: 1) Form of Consent; 2) Consent and Children; 3) Appropriate Purposes; 4) Withdrawal of Consent; and 5) Other Obligations. The Guidelines are generally consistent with previously issued guidance — for example: “Ten Tips for a Better Online Privacy Policy and Improved Privacy Practice Transparency” (October 2013); “Interpretation Bulletin: Form of Consent” (March 2014); Guidelines for Online Consent and Frequently Asked Questions for Online Consent (May 2014); Ten Tips for Communicating Privacy Practices to Your App’s Users (September 2014); and Interpretation Bulletin: Openness (August 2015) — but impose new requirements for the form and content of privacy policies/notices, and for providing individuals with clear and easily accessible choices for the collection, use or disclosure of their personal information beyond what is necessary for requested products and services. The Guidelines will likely be a key enforcement tool for the PIPEDA Compliance Directorate, which was established in 2018 to investigate PIPEDA complaints by individuals and complaints initiated by the Privacy Commissioner of Canada. [Privacy Bulletin (BLG)]

CA – Commissioner Seeks Feedback on Breach Reporting Guidance

The Office of the Privacy Commissioner of Canada (OPC) is inviting public feedback on draft guidance to help businesses comply with new mandatory breach reporting requirements under the federal private sector privacy law.  Amendments to the Personal Information Protection and Electronic Documents Act (PIPEDA) to create new provisions requiring organizations to report breaches of security safeguards will come into force November 1, 2018. Prior to coming into force, Innovation, Science, and Economic Development undertook two public consultations and the final regulations were published in the Canada Gazette in April 2018. The OPC has developed guidance and a breach reporting form to help organizations meet their new obligations under the law. [News and announcements]

CA – Political Parties Need Privacy Rules, Watchdogs Say

A joint statement [see PR here & Joint Resolution here] from federal, provincial and territorial watchdogs released Monday morning [says] political parties should not be allowed to collect and use Canadians’ private information without rules or oversight …There are currently no laws restricting how federal political parties collect, store and use Canadians’ private information, even as those parties are increasingly reliant on sophisticated data operations to win elections. British Columbia is the only jurisdiction at the provincial level that has concrete rules around how parties can amass private information on voters. In June, MPs on the House of Commons’ privacy and access to information committee unanimously endorsed a recommendation to subject federal parties to privacy laws, and called for more transparency on how parties use big data and analytics. The recommendation was endorsed by MPs from all three major parties on the committee. [The Toronto Star, The Canadian Press (via G&M)]

CA – Federal Court Refuses to Authorize Abusive “Fishing Expedition” by Canada Revenue Agency

The recent Federal Court decision in Canada (National Revenue) v. Hydro-Québec, 2018 FC 622 [summary] made a strong statement against an interpretation of the CRA’s powers that would allow virtually unlimited invasions of taxpayer privacy. The decision considered the scope of the CRA’s power to compel information about unnamed taxpayers from third parties under section 231.2 of the “Income Tax Act” (and the analogous section 289 of the “Excise Tax Act”). In this context, the decision held that the Court will both strictly interpret the CRA’s powers, and exercise its discretion in appropriate cases, to protect taxpayers from unjustified intrusions by the government and to prevent abusive “fishing expeditions”. This case highlights the CRA’s attempt to construe its powers in the broadest possible terms. The Court found the CRA’s request was “a full-fledged fishing expedition”, of “unprecedented magnitude”, of “practically unlimited scope” and showed “a complete lack of consideration for the invasion of privacy and the consequences for all taxpayers involved in the request.” Not only did the CRA interpret its own powers in s. 231.2(3) as practically limitless, but its interpretation of the Court’s discretion was so narrow as to render judicial involvement useless, making the protection of taxpayers “deceptive in practice.” The force with which the Court rejected the self-serving interpretation advanced by the CRA should be encouraging for taxpayers. The case serves as an important reminder that despite its considerable powers, the CRA is not entitled to act outside the bounds of law, and it is the courts, not the CRA, that interpret the law. [Thorsteinssons Blog at: Financial Post]


WW – Controversy Erupts Over Five Eyes Countries’ Statement on Encryption

Are Canada, the U.S. and other members of the Five Eyes intelligence alliance preparing to sacrifice online privacy to increase security? Are the five countries about to increase pressure on telecom and software companies to install ways of defeating encryption? Yes, if you believe privacy advocates after seeing a communique issued last week by security and public safety ministers following their annual meeting in Australia [who] …”agreed to the urgent need for law enforcement to gain targeted access to data, subject to strict safeguards, legal limitations, and respective domestic consultations” [communique]. No, if you believe a spokesperson for Public Safety Minister Ralph Goodale. In an email Scott Bardsley, the minister’s senior communications advisor, noted the statement also says the Five “have no interest or intention to weaken encryption mechanisms,” and that any action on the ministers’ statement “will adhere to requirements for proper authorization and oversight, and to the traditional requirements that access to information is underpinned by warrant or other legal process.” The communique …does include a separate Statement of Principles on Access to Evidence and Encryption, [part of which reads] ”The increasing use and sophistication of certain encryption designs present challenges for nations in combatting serious crimes and threats to national and global security,” the ministers “encourage information and communications technology service providers to voluntarily establish lawful access solutions to their products and services that they create or operate in our countries. Should governments continue to encounter impediments to lawful access to information necessary to aid the protection of the citizens of our countries, we may pursue technological, enforcement, legislative or other measures to achieve lawful access solutions.” Some Five Eyes countries are hotter about the issue than others.
In June, Australia introduced legislation that would force tech companies to give its security agencies access to customers’ encrypted data. In 2016 the U.K. government of the day talked about legislation giving the Home Secretary the power to force telcos to remove or disable end-to-end encryption. Bardsley pointed to a 2017 committee report [76 pg PDF] that called for “no changes to the lawful access regime for subscriber information and encrypted information be made.” In response to that report Public Safety Minister Goodale said that while encryption poses challenges to law enforcement and intelligence agencies, the government doesn’t believe in a legislative solution. It is looking at other solutions. [IT World Canada]

EU Developments

UK – Top European Court Says British Spies Broke Human Rights Rules With Their Mass Surveillance Tactics

British spy agencies broke human rights by conducting mass surveillance without proper oversight or safeguards, the European Court of Human Rights has ruled. According to the court, the spies were able to find out far too much about people’s habits and contacts by examining their online activities. It also said the surveillance had an unlawful chilling effect on the free press, by monitoring journalists’ communications. The case was brought to the court a couple of years ago by more than a dozen human rights groups, including Amnesty International and the American Civil Liberties Union, who were frustrated that the revelations of NSA whistleblower Edward Snowden had not sufficiently reined in the U.K.’s GCHQ intelligence agency. The human rights groups did not get what they wanted from the U.K.’s intelligence services watchdog, the Investigatory Powers Tribunal, which said GCHQ’s use of NSA-intercepted data had been illegal, but became legal when people found out about it, thanks to Snowden. So the groups turned to the European Court of Human Rights. The court said Thursday that GCHQ’s mass surveillance scheme was not intrinsically illegal, but its design broke two crucial elements of the European Convention on Human Rights: Article 8, the part that guarantees privacy; and Article 10, which guarantees freedom of expression. The spies infringed on people’s privacy rights because there wasn’t enough oversight or safeguards regarding how data was selected for surveillance. They infringed on free-expression rights because the system did not include proper safeguards for protecting confidential journalistic material—effectively limiting what the press can do without the authorities finding out. However, the court ruled that GCHQ had not broken European human rights law by using data gathered by U.S. spies, as the safeguards around those procedures were sufficient. It also threw out a set of complaints about the U.K.
Investigatory Powers Tribunal being insufficiently independent and impartial.  This is the first time the European Court of Human Rights has dealt with a case involving intelligence sharing. It has however examined cases involving mass surveillance before, and this ruling is in line with those earlier rulings—the court seems to take a somewhat more permissive stance on the issue than the Court of Justice of the European Union, which has repeatedly stamped down on the practice due to the fact that it is indiscriminate. [Fortune]

EU – Google Blasts French Bid to Globalize Right to Be Forgotten

Google shot down efforts by France’s privacy watchdog [CNIL] to globalize the so-called right to be forgotten, telling European Union judges that the regulator “is out on a limb.”  In a hearing at the EU Court of Justice, Google said extending the scope of the right all over the world was “completely unenvisagable.” Such a step would “unreasonably interfere” with people’s freedom of expression and information and lead to “endless conflicts” with countries that don’t recognize the right to be forgotten. “The French CNIL’s global delisting approach seems to be very much out on a limb,” Patrice Spinosi, a French lawyer who represents Google, told a 15-judge panel at the court in Luxembourg. It is in “utter variance” with recent judgments. The hearing will help judges to clarify the terms of the EU tribunal’s landmark 2014 ruling that forced the search engine to remove links to information about a person on request if it’s outdated or irrelevant. Google’s arguments received some support from newspapers, who have often battled the search engine in Europe on other issues. Removing links globally gives too much power to private companies, such as Google, to decide “what pieces of news the public should find or not,” the World Association of Newspapers and News Publishers told the French court in a 2016 letter. Newspapers get dozens of requests every day to remove information from online archives by claiming a “right to be forgotten,” it said. Microsoft Corp. and groups like the Internet Freedom Foundation [here] and the Wikimedia Foundation [here] intervened in Tuesday’s hearing, as well as legal representatives for France, Ireland, Greece, Austria and Poland. Lawyers for the U.K. didn’t show up. While the right to be forgotten concerns all search engines, Google’s dominance in Europe means the company has taken center stage in the wake of the 2014 EU ruling [see here & wiki here]. An advocate general at the EU court is scheduled to deliver an advisory opinion on Dec. 11. 
[Bloomberg and at: Politico, Business Insider, TechCrunch, and GIZMODO]

EU – When Do Organisations Need to Carry Out a Data Protection Impact Assessment? German Authorities Provide Guidance

The German data protection authorities (German DPAs) have jointly released a list of processing activities [4 pg PDF] that are subject to a data protection impact assessment (DPIA) [see Article 35 GDPR here]. A DPIA helps identify, assess and minimise the data protection risks of a project in which personal data are processed. In particular, broader risks to the rights and freedoms of individuals resulting from the processing must be assessed and mitigated by appropriate countermeasures. The List provides 16 examples of the kinds of processing activities that the German DPAs consider “high risk”, and gives organisations a first overview of the use cases that call for a DPIA. However, the List is not exhaustive and is subject to future revisions: the fact that a process is not mentioned in the List does not necessarily mean that a DPIA need not be carried out. Other member states have also released their lists; for example, the list of the ICO can be accessed here. [Technology Law Dispatch (ReedSmith)]

EU – German Court Issues GDPR Ruling on Data Subject’s Consent for Persons Under Custodianship

On 16 July 2018, the District Court of Gießen, Germany, ruled that a custodian’s representation rights also cover consent to data processing activities related to the person under custodianship. Under the EU General Data Protection Regulation (GDPR) [here], the processing of personal data is, in principle, prohibited unless there is a legal basis for such processing. Pursuant to Art. 6 para. 1 lit. a) GDPR, one possible legal basis is the data subject’s consent [see here]. However, the legitimacy of a declaration of consent may be in doubt if the data subject lacks the capabilities to declare consent. [All About IP Blog (Mayer/Brown)]

UK – Sir Cliff Richard v the BBC: a Landmark Case on Privacy Rights

In “Sir Cliff Richard OBE v the BBC and SYP” [see 122 pg PDF here & overview here], Mann J addresses the question of whether a suspect in a criminal investigation has a right to privacy, either under Article 8 of the European Convention on Human Rights (ECHR) [see wiki here] as against a public body, in this case South Yorkshire Police (SYP) or under the tort which is developing out of Article 8 jurisprudence as against non‑public authorities, in this case the BBC.  The question as to whether there is a right to privacy, more formally, ‘a reasonable expectation of privacy’, of a suspect in a criminal investigation (Murray v Express Newspapers plc) [see here] has not previously been judicially determined. Hence, the court’s ruling on this important subject represents, at least until any appeal is decided, a landmark decision in human rights jurisprudence. [This essay is a thorough discussion of how Mann J came to his decision] Lexology [8 pg PDF version here]


US – Credit Freezes Will Be Fee-Free Starting September 21

After Sept. 21, all three of the major consumer credit bureaus [Equifax, Experian and TransUnion] will be required to offer free credit freezes to all Americans and their dependents. A credit freeze – also known as a “security freeze” – restricts access to your credit file, making it far more difficult for identity thieves to open new accounts in your name. Maybe you’ve been holding off freezing your credit file because your home state currently charges a fee for placing or thawing a credit freeze, or because you believe it’s just not worth the hassle. If that accurately describes your views on the matter, this post may well change your mind. Currently, many states allow the big three bureaus to charge a fee for placing or lifting a security freeze. But thanks to a federal law enacted earlier this year [see S.2155 here & coverage here & here], after Sept. 21, 2018 it will be free to freeze and unfreeze your credit file and those of your children or dependents throughout the United States. If you’d like to go ahead with freezing your credit files now, this Q&A post from the Equifax breach explains the basics, and includes some other useful tips for staying ahead of identity thieves. Otherwise, check back here later this month for more details on the new free freeze sites. [Krebs on Security and at: KOMO News]


US – EFF Urges Gov. Brown to Sign Sensible California Bill Imposing Stricter Requirements for DNA Collection from Minors

When the San Diego police targeted black children for DNA collection without their parents’ knowledge in 2016, it highlighted a critical loophole in California law. The California State Legislature recently passed a new bill, A.B. 1584, to ensure that law enforcement cannot stop-and-swab youth without either judicial approval or the consent of a parent or attorney. The bill, introduced by Assemblymember Lorena Gonzalez Fletcher, is now on Gov. Jerry Brown’s desk. EFF has strongly supported this bill from the start and now urges the governor to sign the bill into law.  California’s existing DNA collection law, Proposition 69, attempts to place limitations on when law enforcement can collect DNA from kids, but SDPD found a gaping loophole in the law and crafted a policy to take advantage of that loophole. Under Proposition 69, law enforcement can collect DNA from minors only in extremely limited circumstances. That includes after a youth is convicted or pleads guilty to a felony, or if they are required to register as a sex offender. But here’s the loophole: this only applies to DNA that law enforcement seizes for inclusion in statewide or federal databases. That means local police departments have been able to maintain local databases not subject to these strict limitations.  A.B. 1584 will fix this loophole by requiring law enforcement to obtain a court order, a search warrant, or the written consent of both the minor and their parent, legal guardian, or attorney before collecting DNA directly from the minor. In cases where law enforcement collects a minor’s DNA with proper written consent, A.B. 1584 also requires law enforcement to provide kids with a form for requesting expungement of their DNA sample. Police must make reasonable efforts to promptly comply with such a request. Police must also automatically expunge after two years any voluntary sample collected from a minor if the sample doesn’t implicate the minor as a suspect in a criminal offense. 
[DeepLinks Blog (EFF), ACLU San Diego and Courthouse News Service]

Health / Medical

CA – Yukon IPC Advises on Security Audits

The Yukon Information and Privacy Commissioner is reminding healthcare custodians of their obligation to conduct security audits, pursuant to the Health Information Privacy and Management Act. Healthcare custodians must identify information management practices that must be audited every 2 years, determine how current policies and procedures measure up against minimum standards, and address gaps within a specified period; audit documentation should be maintained, and can be voluntarily submitted to the IPC. [Yukon IPC – News Release – Reminder to Custodians About Duty to Audit]

CA – Health Care Worker Wins Lawsuit for Being Wrongfully Accused of Accessing Patient Records

An east-central Alberta woman feels vindicated after winning a wrongful termination case against a medical centre society where she worked as a receptionist. The woman claimed she was terminated without just cause and publicly humiliated. Red Deer Judge Andreassen agreed and awarded her $25,600 in compensation [$15,000 in punitive damages and $10,600 in compensatory damages].  The Consort and District Medical Centre Society, months after terminating Sherri Galloway, claimed she violated privacy laws by viewing confidential patient medical records. Judge Andreassen, however, not only ruled there was no evidence to back up the board’s claims but also slammed their actions. In his ruling on this civil matter in Red Deer provincial court he wrote: “In effect the (board) publicly accused (Galloway) of distributing the confidential records of people from this tight-knit community, thus accusing (Galloway) of a whole new level of reprehensible behaviour … a marked departure from ordinary standards of decent behaviour. The conduct of the Defendant (the board) was deliberate, motivated by the search for evidence to criticize (Galloway) and vindicate (the board), took place over a lengthy period, and was known by (the board) to be deeply personal to (Galloway).” Galloway fought for two years to prove she did nothing wrong. The board suspended Galloway on Feb. 2, 2016. On Feb. 11 the board terminated her without cause.  Judge Andreassen said the board accused Galloway of accessing patient records but didn’t have evidence of her doing so. He added the board started the privacy investigation without any allegations of a privacy breach being made against Galloway or even any reason to suspect there were privacy breaches. The board had 30 days after the June 5 ruling to file an appeal but did not. [Red Deer Advocate]

Horror Stories

WW – Spy App for Parents Leaks Millions of Sensitive Records of Customers and Targets Online

Mobile spyware maker mSpy accidentally leaked millions of personal and sensitive records of users and targets online. The software-as-a-service bills itself as the “ultimate monitoring software for parental control”, used to spy on the mobile devices of children or partners. According to a report by cybersecurity expert Brian Krebs, security researcher Nitish Shah alerted him to an open online database without password protection that allowed anyone to look up up-to-the-minute mSpy records for both customers and targeted mobile devices. The exposed database contained millions of records including passwords, text messages, call logs, contacts, notes and even location data covertly collected from phones running mSpy. It also included the username, password and private encryption key of every mSpy customer who logged into the site or purchased an mSpy license over the past six months. The private encryption key allows anyone to view and track details of the mobile device running the software. Apple iCloud usernames, authentication tokens, references to iCloud backup files as well as WhatsApp and Facebook messages uploaded from mobile devices running mSpy could be viewed as well. Other exposed records included transaction details of mSpy licenses purchased over the past six months, such as customer name, email address, mailing address and amount paid. mSpy user logs, including browser and Internet address information of people visiting the mSpy website, were also listed in the database. Shah said he attempted to alert mSpy to his findings, but was reportedly ignored and blocked by the firm’s support team. [Cyware]

Law Enforcement

WW – Apple Launches Global Law Enforcement Web Portal for Data Access

Apple [in a letter, dated Sept. 4, from Apple General Counsel Kate Adams to U.S. Sen. Sheldon Whitehouse (D. RI), according to a report by Reuters here] is launching a portal that law enforcement agents can use to file requests for data, track their historical requests and be granted access to information if Apple thinks it’s a worthy cause. However, the company has stressed that this wouldn’t interfere with its commitment to protect its customers, vowing it will ensure law enforcement organisations will only be able to access data that Apple sees fit to provide them with. The new portal will allow police forces to submit requests, which will be assessed by Apple’s legal teams. Apple will also create a team to train law enforcement officers around the world, including online training on how to submit data requests and how Apple handles such demands. Apple’s policies state that anyone requesting information must properly request access only to the data they specifically need or feel may help their investigation, rather than asking for everything. [IT Pro and at: Naked Security (Sophos) and CNET News]

WW – Ungagged Google Warns Users About FBI Accessing Their Accounts

Dozens of people say they’ve received an email from Google informing them that the FBI has been sniffing around for information on their accounts. Now that a gag order has been lifted, the company is able to “disclose the receipt of the legal process” to any affected users, Google said. The gag orders that often accompany FBI information requests keep organizations such as Google, Microsoft, Facebook and Apple from disclosing the order for a given period of time. Any email provider worth its salt nowadays issues transparency reports, and the biggest companies have called for increased transparency in government surveillance requests.  The emails lack specific details about whatever the FBI was investigating, though they did contain a case number that corresponded to a sealed case on PACER [here]. Some of the recipients have a hunch regarding what it’s all about. In threads on Reddit, Twitter, and Hack Forums, conjecture is that the FBI was looking for information on people associated with LuminosityLink: an easy to use, remote access Trojan (RAT) that Europol snuffed out in February, following a UK-led dragnet in September 2017 that involved over a dozen law enforcement agencies in Europe, Australia and North America that went after hackers linked to the tool [details here & here]. Buying LuminosityLink doesn’t necessarily brand somebody a cybercrook. It had a split personality when it came to its marketing: it was sold as a legitimate tool for Windows admins and also a cheap, easy-to-use, multi-purpose pocket knife with a slew of malware tools you could flip out. While it’s not unusual for a gag order to be subsequently lifted, it is perhaps unusual for the FBI to try to track down every person who purchased software that may not be considered illegal. [Naked Security (Sophos), Motherboard and Daily Mail]

Online Privacy

CA – Researchers Scrutinize Apps for Undisclosed Ties to Advertisers, Analytics Firms

If you want to better understand how an app or a service plans to use your personal information, its privacy policy is often a good place to start. But a recent study [funded by the Office of the Privacy Commissioner of Canada] found there can be a gap between what's described in that privacy policy and what the app actually collects and shares. An analysis by University of Toronto researchers found hundreds of Android apps that disclosed the collection of personal information for the app developer's own purposes — but, at the same time, didn't disclose the presence of third-party advertising or analytics services that were collecting the personal information, too. To generate revenue, app developers often embed software code, known as ad libraries [see this FTC discussion], allowing them to display ads within their app. Because they want to make the ads relevant to individual users, ad libraries often want specific information about those users. For those more familiar with the cookies that track your online browsing habits, on mobile devices "you're being tracked through these ad libraries and these analytics libraries in a very similar way," says Lisa Austin [here], a U of T law professor and one of the study's co-authors. The researchers have been working on a software project called AppTrans, with the goal of making undisclosed data collection practices more transparent. The software looks for evidence of data collection that isn't spelled out in a privacy policy by comparing the policy's language with an analysis of the app's code, in part using machine learning to automatically scour privacy policies. Of the 757 apps analyzed, the researchers found nearly 60 per cent collected more information than stated in their privacy policies. 
For Austin, it’s also an example of how artificial intelligence can be used to society’s benefit, in spite of very legitimate concerns about algorithmic bias and automated decision-making run amok. “This is technology that we can use to to make the digital world more transparent,” she said. “And that’s a real win.” [CBC News]
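The core comparison described above (policy text on one side, evidence from the app's code on the other) can be illustrated with a toy sketch. Everything here is invented for illustration: the package names, the SDK-to-practice mapping, and the simple substring matching. The real AppTrans project uses machine learning over policy language and static analysis of app code, not a lookup table.

```python
# Toy sketch: flag data practices implied by embedded third-party SDKs
# that an app's privacy policy never mentions.

# Hypothetical mapping from ad/analytics SDK package names to the data
# practices their presence implies (invented for this example).
KNOWN_LIBRARIES = {
    "com.example.adsdk": {"advertising identifier", "location"},
    "com.example.analytics": {"usage statistics", "device identifier"},
}

def undisclosed_practices(detected_packages, policy_text):
    """Return (package, practice) pairs implied by embedded SDKs that the
    privacy policy text does not mention."""
    policy = policy_text.lower()
    gaps = set()
    for pkg in detected_packages:
        for practice in KNOWN_LIBRARIES.get(pkg, set()):
            if practice not in policy:  # naive keyword match, for illustration
                gaps.add((pkg, practice))
    return gaps

# A policy that discloses location collection but not the ad SDK's use of
# the advertising identifier produces one gap.
gaps = undisclosed_practices(
    ["com.example.adsdk"],
    "We collect your location to provide the service.",
)
print(gaps)
```

In practice the hard parts are exactly the ones this sketch waves away: detecting which libraries an app actually embeds, and interpreting vague policy language, which is where the study's machine-learning component comes in.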

WW – Business Travelers Highlight Public WiFi Security Risk

The latest research by global travel management company Carlson Wagonlit Travel indicates that the majority of business travelers have grave reservations about the safety of their data when using public WiFi networks. Public WiFi security has long concerned security professionals, and those concerns are now shared by travelers who access public WiFi hotspots in airports and other stopover points (such as coffee shops) during their trips. According to the research, 72% of travelers in the Asia Pacific region were not confident about the safety of employer data while traveling. U.S. travelers were the most sanguine, with 46% confident that public WiFi security was adequate, while European travelers were least confident, with only 27% unworried about their data security on public WiFi networks. Of the 2,000 global business travelers surveyed, 65% were less than confident about public WiFi security. WiFi was not the only worry: the top three concerns were physical theft (or simple loss) of devices, exposing company data to prying eyes while working on their devices, and, by far the greatest concern, being hacked while using public WiFi. Many were also worried about cyber security while using email or even opening company documents. [Anti-Corruption Digest]

Privacy (US)

US – Tech Industry Group Calls for a National Privacy Framework

The Internet Association (IA), a trade group of 40 major internet and technology firms, called for the establishment of a national privacy framework [see PR here] anchored by six privacy principles: 1) Transparency; 2) Controls; 3) Access; 4) Correction; 5) Deletion; and 6) Portability. In describing the context for the principles, the IA noted that its members already comply with a range of strong federal privacy, data security, consumer protection, and anti-discrimination laws. Coupled with state laws and self-regulatory principles that govern how they do business, this "patchwork" leads to inconsistent experiences for individuals. Accordingly, a new, comprehensive national framework would create more "consistent privacy protections that bolster consumers' privacy and ease compliance for companies." Further, the IA identified key components of a national privacy framework: a) Fostering privacy and security innovation; b) A national data breach notification law; c) Technology and sector neutrality; d) A performance standard-based approach; e) A risk-based framework; and f) A modern and consistent national framework for individuals and companies. The U.S. Senate Committee on Commerce, Science, & Transportation will hold a hearing examining consumer privacy protection on September 26, 2018. [DBR on Data]

US – Chamber Proposes Curbs on Consumer Lawsuits Over Data Privacy

The U.S. Chamber of Commerce released a set of policy positions on internet privacy [Press Release, Issue Brief], including a proposal that companies be shielded from lawsuits if they violate laws governing how they collect and use data on their customers. The group's proposal, which would also override stricter state regulations such as those put in place by California, is all but certain to anger consumer advocates who have argued that people should be able to control data about themselves. In addition to calling for the preemption of state law "to provide certainty and consistency to consumers and businesses alike," the 10-point framework says consumers should not be given a right to sue "for privacy enforcement, which would divert company resources to litigation that does not protect consumers." In lieu of a consumer right to sue, the Chamber suggested a "reasonable opportunity for businesses to cure deficiencies in their privacy compliance practices before government takes punitive action." The plan also urges companies to be transparent about their data "collection, use and sharing" while calling for "privacy innovation" and saying that protections "should be considered in light of the benefits provided and the risks presented by data." [Bloomberg News at: Axios and The Hill]

US – Big Tech Calls On Congress for Privacy Regulation, Pushing Back On State Mandates

Google, Microsoft, Facebook and IBM are lobbying the Trump administration for a federal privacy law in a bid to overrule California's newly minted state privacy bill [California Consumer Privacy Act of 2018 – AB-375 here]. While big tech views most regulation as a threat, privacy is a simpler issue: complying with the California Consumer Privacy Act of 2018 boils down to having a granular understanding of where sensitive data lives and how efficiently it can be accessed. Big tech wants to change a lot of the language in California's bill, if not remove it completely. Among the key targets are where information is stored, how quickly businesses need to respond to a consumer's data request, and the steep fines, according to Callum Corr, data analytics specialist at ZL Technologies [here], in an interview with CIO Dive. No text has been revealed yet, making the tech industry's specific desires unknown. Organizations are also trying to find a compromise of sorts: the U.S. Chamber of Commerce, Internet Association and Information Technology Industry Council are making efforts to craft voluntary standards in place of legal mandates. The National Institute of Standards and Technology (NIST) announced a collaborative project modeled after its Cybersecurity Framework; the evolving framework is to "provide an enterprise-level approach" for aiding organizations in developing privacy strategies, with the first public workshop taking place in October. The framework is a voluntary tool for organizations to model, and big tech is encouraged to participate, said Naomi Lefkovitz, senior privacy policy advisor and lead for the project at NIST, in an interview with CIO Dive. The program allows organizations to "pick your outcomes," which helps optimize privacy procedures in a collaborative manner. [CIO Dive]

CA – Stolen Federal Device Wasn’t Encrypted, Violating Government Rules

The federal government now says the device with personal information on 227 employees of Infrastructure Canada that was reported stolen last month was an unencrypted USB key. News of the theft was first revealed Sept. 13. A Public Services and Procurement Canada (PSPC) [here] employee notified Ottawa police of the theft August 20, and then told their government supervisor the next day, Rania Haddad, a PSPC spokesperson, said in an email. The statement didn't detail what was on the device, but on Sept. 17 PSPC said it was a USB key, which, contrary to government rules, wasn't encrypted. "An internal investigation is underway to examine why and how this happened and identify measures to ensure this does not happen again." Deputy Minister Marie Lemay said in a Sept. 7 email to affected staff that "no banking or social insurance information was affected. However, your name, personal record identifier (PRI), date of birth, home address and salary range may have been on the stolen device." So far no incident has been reported involving malicious use of the stolen information, and the federal privacy commissioner has been notified. PSPC hasn't explained why it took 17 days for employees to be notified. [IT World Canada, Global News, Ottawa Citizen and CBC News]


US – Consumers Have Most Confidence in Physician’s Health Data Security

A full 87% of consumers surveyed by Rock Health [see results here] said they had confidence in the health data security of their physician, but that number dropped to 68% for pharmacies and 60% for health insurance companies. Confidence in data security correlated closely with consumers' willingness to share their health data: 86% of consumers said they were willing to share their health data with their physician, but only 58% were willing to share with health insurance companies and 52% with pharmacies. Those numbers dropped significantly for other organizations, according to the survey of close to 4,000 consumers: 47% of respondents had confidence in the health data security of research institutions, 35% in pharmaceutical companies, 26% in government organizations, and 24% in tech companies. These results closely correlate with willingness to share health data. [HealthIT Security]

WW – Vendors Could Be the Weakest Link in Many Cyber Defense Strategies

The growing frequency and intensity of cyberattacks, combined with new data privacy and security regulations, has made cybersecurity a top priority for most organizations. But while there is plenty of attention paid to a firm’s own data, many forget that their partners and suppliers hold a wealth of information on them.  Information Management recently spoke with Jessica Ortega, a web security research analyst at SiteLock, about the risks that may be posed by a firm’s vendor partners. These are the questions she answered:

1) What level of responsibility do organizations have to ensure the cyber hygiene of their supply chain partners?;

2) In what ways could vendors potentially put the data of a client organization at risk, or endanger the client’s compliance status?;

3) What questions should an organization ask a potential vendor to evaluate that company’s cybersecurity practices and cyber hygiene?;

4) Once an organization has started to work with a vendor, what can be done to ensure the security of that vendor’s data?; and

5) Have any of your clients learned about vendor cybersecurity the hard way? Can you provide some lessons that our readers can take away from those mistakes? [Information Management]

WW – Targeted Ransomware on the Rise

Targeted ransomware attacks have been gathering pace and size, part of a trend toward stealthier and more sophisticated ransomware attacks – attacks that are individually more lucrative, harder to stop and more devastating for their victims than attacks that rely on email or exploits to spread. And the criminals do it in a way that's hard to stop and easy to reproduce: they rely on tactics that can be repeated successfully, commodity tools that are easily replaced, and ransomware that makes itself hard to analyse by staying in its lane and cleaning up after itself. Targeted attacks can lock small businesses out of critical systems or bring entire organisations to a grinding halt, as a recent SamSam attack against the city of Atlanta showed. For every Atlanta-style attack that hits the headlines, many more go unreported. Attackers don't care whether the victims are big organisations or small ones; all that matters is how vulnerable they are. All businesses are targets, not just the ones that make headlines. This post considers the anatomy of a targeted attack, looks at some real ransomware – including Dharma, SamSam and BitPaymer – and offers recommendations on responding. [Naked Security]


US – Privacy Concerns Cancel Planned Wi-Fi Kiosks in Seattle

The plan to install kiosks with free Wi-Fi and create bus stops with Wi-Fi in Seattle was stopped by Seattle Mayor Jenny Durkan. The mayor's spokesperson told KIRO 7 she has concerns about privacy. The transit advertising company Intersection proposed a plan to the city in 2016 that would pay the City of Seattle $97–$167 million for exclusive access for Link Wi-Fi kiosks in Seattle for the next 20 years, according to the company. Intersection's proposal said the city could make another $100 million in ad revenue during that same period. The kiosks are already installed in New York City and London. Intersection also says the kiosks only ask for an email address to log on, not your name or other identifying information, and, according to Jen Hensley, president of Link at Intersection, they do not collect any phone or wireless data. The kiosks do have video cameras to help maintain the devices and protect against vandalism; Link says the cameras are similar to those used in ATMs. Intersection owns the footage, not the city, and for law enforcement to obtain it would require a subpoena or court order. The video is destroyed after seven days, according to the company. The ACLU applauded Mayor Durkan's decision to scrap the plan. [KIRO 7 Seattle, The Urbanist and GeekWire]

US Government Programs

US – NSA Metadata Program “Consistent” With 4th Amendment, Kavanaugh Argued

During the second-to-last day of hearings before the Senate Judiciary Committee, Sen. Patrick Leahy (D-Vt.) had an interesting exchange over recent privacy cases with the Supreme Court judicial nominee, Judge Brett Kavanaugh. Opening their six-minute tête-à-tête, Leahy began by asking Kavanaugh about what he wrote in November 2015 in a case known as "Klayman v. Obama" [805 F.3d 1148 (2015) here]. The complaint argued that the National Security Agency's telephone metadata program ("Section 215") [see overviews here & here & wiki here], which gathered records of all incoming and outgoing calls for years on end, was unconstitutional. In his concurring opinion, Kavanaugh said that even if the Section 215 metadata program was a search, it should be considered "reasonable" in the name of national security. "The Fourth Amendment allows governmental searches and seizures without individualized suspicion when the Government demonstrates a sufficient 'special need'—that is, a need beyond the normal need for law enforcement—that outweighs the intrusion on individual liberty," he wrote. "Examples include drug testing of students, roadblocks to detect drunk drivers, border checkpoints, and security screening at airports." Responding to Leahy, Kavanaugh said: "I was trying to articulate what I thought based on precedent at the time. When your information went to a third party and when the government went to a third party, the existing privacy Supreme Court precedent was that your privacy interest was essentially zero. The opinion by Chief Justice Roberts this past spring in the 'Carpenter' case [119 pg PDF here & legal coverage here] is a game changer." "Do you think if Carpenter had been decided you would have written the concurrence you did in Klayman?" Leahy asked. "I don't see how I could have," Kavanaugh said.
While he didn't come right out and say it, Leahy seemed to be probing whether Kavanaugh subscribes to what many legal scholars call the "mosaic theory." This is the notion that, while a series of discrete surveillance or near-surveillance actions may individually be legal, there comes a point when they are aggregated over a long enough period of time that they become an unreasonable search in violation of the Fourth Amendment. But when Kavanaugh addressed whether or not he agreed with the mosaic theory, he was measured in his answer, seeming to suggest that he disagreed with his DC appeals court colleagues on this point. The Senate Judiciary Committee is expected to vote on his nomination on September 17. [Ars Technica and at: The Washington Times]

US – Federal Court Says NSA PRISM Surveillance Good and Legal Because the Gov’t Said It Was Good and Legal

Three years after its inception, a prosecution [United States of America v. Yahya Farooq Mohammad, case # 3:15-cr-385 – N.D. Ohio Sep. 12, 2018] involving possibly unlawful FISA-authorized surveillance, hints of parallel construction, and a very rare DOJ notification of Section 702 evidence has reached a (temporary) dead end. The defendants challenged the evidence on multiple grounds — many of which weren't possible before the Snowden leaks [wiki here] exposed the breadth and depth of the NSA's domestic surveillance. The federal judge presiding over the case — which involved material support for terrorism charges — has declared there's nothing wrong with anything the NSA or FISA Court did, so long as the surveillance was authorized and possibly had something to do with national security. The court says these more-recent exposures are no reason to upset the precedential apple cart. So, to add this all up: leaked documents from 2013 onward, exposing routinely-abused programs that massively expanded following the 2008 FISA Amendments Act, mean nothing when stacked up against a 2005 case predating the NSA's admissions of surveillance abuse and the exposure of the FBI's backdoor searches of domestic communications. Furthermore, the court declares — based on documents provided by the government directly to the court, but not to the defendants (in ex parte hearings) — the FISA-authorized surveillance was on the up-and-up because the government provided documents declaring the FISA-authorized surveillance was on the up-and-up. As for the Fourth Amendment challenge to Section 702 surveillance generally, the court says there are really no Fourth Amendment issues, as the amendment does not apply to "aliens in foreign territory." The court goes even further, suggesting the collection of communications outside of the country does not even require a warrant, even if it "inadvertently" sweeps up Americans' communications in the process. [TechDirt]

US Legislation

US – House Bill Would Create Financial Data Breach Notification Standard

A bill introduced Sept. 7 by Rep. Blaine Luetkemeyer, R-Mo., of the House Subcommittee on Financial Institutions and Consumer Credit [here & wiki here], aims to create a national standard for financial institutions to notify consumers of data security breaches [“Consumer Information Notification Requirement Act” – see notice here & 5 pg PDF text here]. The legislation would amend the Gramm-Leach-Bliley Act [145 pg PDF here, FTC here & wiki here] to require financial institutions to issue breach notices “in the event of unauthorized access that is reasonably likely to result in identity theft, fraud, or economic loss.” Banks would be covered, along with non-banking financial institutions “to the extent appropriate and practicable,” according to the bill’s language. The bill does not appear to have companion legislation in the Senate, and it is unlikely to become law in the current session of Congress. [MeriTalk]

US – California Passes Bill That Regulates Security for Internet of Things Devices

The California State Legislature recently passed a first-of-its-kind bill on Internet of Things (IoT) security, titled SB-327 Information Privacy: Connected Devices, and sent it to the governor for his signature. The bill introduces regulations for connected devices sold in California, though its effect will be felt by manufacturers across the United States. A quick read-through shows the bill leaves a lot to be desired: specific guidelines are not established, and many features that should be included in a bill centered on security are absent. For example, manufacturers are not required to perform a security audit on components purchased from overseas. Despite being incomplete, the legislation is a step toward much-needed oversight of security measures. Manufacturers like Google and Amazon build strong security protocols into their products, but even these can be broken by a determined hacker or via a weak link in a connected system. A bill like this will place pressure on American manufacturers to ensure all connected devices provide device-level protection against attacks. The bill defines a connected device as any device that connects to the Internet and has an IP or Bluetooth address. [Digital Trends, The Washington Post]




16-31 August 2018


CA – Mandatory Data Breach Response Obligations Effective November 1, 2018

Every organization subject to Canada's Personal Information Protection and Electronic Documents Act (PIPEDA) must act now to ensure it is ready to comply with the Digital Privacy Act's [here] new mandatory data breach response requirements as of November 1, 2018 – or face significant non-compliance consequences [also see coverage]. Basic data breach risk management planning, including steps to reduce the risk of breaches in the first place and action plans to ensure readiness for when breaches do occur, is key to ensuring compliance in this evolving legal landscape. But complying with these new obligations won't happen overnight: the new record-keeping, reporting and notification rules are strict and onerous, and the advance preparation necessary to reduce the associated liability and reputational risks when a breach does occur requires time and coordination of external expertise and internal stakeholders. This post explores five key areas to focus on when preparing to comply with the Digital Privacy Act's new mandatory data breach response requirements: 1) Understand the New Obligations – Well; 2) Deal With Third Party Contractor Risks; 3) Deal With Employee Risks; 4) Record-Keeping and Reporting Requirements; and 5) Protect Your Legal Privilege. [Insights (McInnes Cooper)]

CA – Privacy Laws Prevent Disclosure of Sexual Misconduct by University Staff

Administrators at the University of Manitoba want to re-examine the provincial privacy laws that they say prevent them from sharing details about the sexual misconduct of past employees with potential new employers. Last year, jazz professor Steve Kirby retired from the university after an internal investigation report said he repeatedly made inappropriate sexual comments and unwanted sexual contact with a female student. Kirby was subsequently hired by the Berklee College of Music in Boston, but was fired after the U of M complainants told Berklee administration about the sexual harassment allegations [CBC Report]. The university says Manitoba's Freedom of Information and Protection of Privacy Act [FAQ] and labour laws prohibit sharing internal investigations with other potential employers. This interpretation of the law is sparking student concern that professors could continue the behaviour with other students. University of Manitoba law professor Karen Busby, who has researched sexual assault policies across Canada, says Manitoba institutions are bound by the same privacy laws that exist in every province and territory. "It's really hard for people to understand that, but privacy law is clear that a disciplinary matter is a private matter and the employer is not free to disclose disciplinary matters to the media, to future employers, to other employees, unless they need to know for compelling and safety reasons," she said. [CBC News and at: The Globe and Mail]

Electronic Records

US – CISOs Unite to Improve IT Security in Healthcare Supply Chain

Healthcare CISOs have set up a council to develop, recommend, and promote security best practices to bolster IT security throughout the healthcare supply chain [Press Release]. Founding members of the Provider Third Party Risk Management Council include CISOs from Allegheny Health Network, Cleveland Clinic, University of Rochester Medical Center, University of Pittsburgh Medical Center, Vanderbilt University Medical Center, and Wellforce/Tufts University. Healthcare organizations rely on a plethora of vendors of all sizes for support, including processing and maintaining data, providing analytics, and performing operational tasks. Vendor security is one of the biggest risks for healthcare organizations and one of the biggest sources of frustration for CISOs. To address this challenge, the council is working with the Health Information Trust Alliance (HITRUST) to improve third-party vendor security. The HITRUST Common Security Framework (CSF) will serve as the security standard for the council. [Health IT Security, HelpNetSecurity, Healthcare Informatics and Health Data Management]


US – Feds Want to Wiretap Facebook Messenger Voice Calls

The U.S. federal government is trying to force Facebook to break the encryption of its Messenger app in a lawsuit that’s under seal. The government wants to be able to intercept Messenger voice calls in its investigations, but Facebook is reportedly contesting the government’s request. On August 14, the judge in the Messenger case heard the government’s arguments to hold Facebook in contempt of court over the company’s refusal to break the Messenger app’s encryption. The government wishes to carry out a surveillance request in a case involving a criminal group of undocumented immigrants. Facebook claimed in court that the Messenger app uses end-to-end encryption for all voice calls, which means the company itself can’t intercept those calls and neither can the government. The only way for Facebook and the government to intercept those conversations would be for Facebook to cripple or remove the end-to-end encryption between all users. Alternatively, either Facebook or the government would need to hack the users’ devices in order to obtain the conversations that are automatically decrypted locally by the application. Both Facebook and the U.S. government have refused to comment on this case. [Tom’s Hardware, Reuters, The Verge and Fortune]
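The dispute turns on the property Facebook described in court: with end-to-end encryption, the relay in the middle holds no keys, so intercepting the traffic yields only ciphertext. A minimal stdlib-only sketch of that trust model follows; the XOR one-time pad here is purely illustrative and is not Messenger's actual protocol (which is Signal-based), but it shows why a relay without the key learns nothing.

```python
# Toy model of end-to-end encryption: the key exists only at the two
# endpoints, so the relay forwarding the ciphertext cannot read it.
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # One-time pad: XOR each plaintext byte with a key byte. The key must
    # be as long as the message and never reused -- fine for a demo.
    return bytes(k ^ p for k, p in zip(key, plaintext))

decrypt = encrypt  # XOR is its own inverse

message = b"voice frame bytes"
key = secrets.token_bytes(len(message))   # held only by the two endpoints

ciphertext = encrypt(key, message)        # all the relay ever sees
recovered = decrypt(key, ciphertext)      # only a key-holder can do this
assert recovered == message
```

Wiretapping such a call therefore requires either removing the end-to-end layer for everyone (so the relay gets the keys) or compromising an endpoint device, which is exactly the choice the court filing describes.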

US – Tech Industry Told ‘Privacy Is Not Absolute’ and End-to-End Encryption ‘Should Be Rare’

An international network of intelligence agencies has told the tech industry that 'privacy is not an absolute' and that the use of end-to-end encryption 'should be rare'. The statements were made in a joint communiqué and statement of principles following a meeting of the so-called Five Eyes nations – the US, UK, Canada, Australia and New Zealand. The statement on privacy contains a veiled threat to tech companies that they may face legislation if they don't take steps to ensure that they can allow access to 'appropriate government authorities.' The documents acknowledge the importance of encryption, but effectively argue that end-to-end encryption should not be made routinely available for messaging. [9 to 5 Mac and at: AppleInsider, CSO Online, SiliconRepublic, IT Pro and The Register]

EU Developments

EU – Top Human Rights Court Denies Right to Be Forgotten in Old Murder Case

On June 28, 2018, the European Court of Human Rights decided that Germany had correctly denied two individuals their "right to be forgotten" requests in connection with press archives relating to a 1991 murder [in "M.L. and W.W. v. Germany" – see Press Release]. The two individuals were convicted of the murder of a well-known German actor. They were released from prison in 2008 and brought actions against a German radio station and a weekly magazine asking that articles and radio interviews relating to the 1991 murder be removed from their website archives. The matter reached the German Supreme Court, which held that the interests of the public in having access to the information outweighed the interference with the plaintiffs' privacy rights. The two individuals then sued Germany before the European Court of Human Rights (ECtHR), arguing that Germany had violated their privacy rights under Article 8 of the European Convention on Human Rights. The ECtHR found that the German Supreme Court had correctly applied the balancing test relating to right to be forgotten claims. Although the Court analyzed the CJEU's Google Spain case law extensively, the ECtHR's finding is based solely on Article 8 of the European Convention on Human Rights, which provides for a broad right to privacy. The ECtHR said that the availability of the press articles on the 1991 murder created an interference with the plaintiffs' privacy rights under Article 8, and that consequently a right to be forgotten request of this type can potentially be made under the European Convention. However, the Court then pointed out that the privacy right under Article 8 had to be balanced against freedom of expression and freedom to access information under Article 10 of the European Convention. [Chronicle of Data Protection (Hogan Lovells) and at: Inforrm's Blog]

EU – Privacy Shield on Shaky Ground

One of the most pressing privacy and data protection issues is the uncertain fate of Privacy Shield, the framework governing the flow of data between the EU and the U.S. for commercial purposes. The Trump Administration has been given an ultimatum: comply with Privacy Shield, or risk a complete suspension of the EU-U.S. data sharing agreement. In a letter dated July 26, EU commissioner for justice Věra Jourová wagered to U.S. commerce secretary Wilbur Ross that suspension of the EU-U.S. Privacy Shield system would incentivize the U.S. to comply fully with the terms of the agreement. But Jourová's urging that Ross "be smart and act" in appointing senior personnel to oversee the data sharing deal is hardly new. The July letter closely echoes a European Parliament (EP) resolution passed just three weeks earlier, and the European Commission (EC) voiced similar sentiments in its review of the Privacy Shield Framework last September. Further adding to the chorus of voices raising concerns about Privacy Shield compliance are tech and business groups, which jointly called for the nomination of a Privacy Shield ombudsperson in an Aug. 20 letter. In addition to admonishing the EC's failure to hold the U.S. accountable thus far, the EP resolution calls for a suspension of Privacy Shield if the U.S. has not fully complied by Sept. 1 — though no such suspension has yet been announced. It also expresses serious concerns regarding the U.S.'s recent adoption of the Clarifying Lawful Overseas Use of Data (Cloud) Act and the legislation's potential conflict with EU data protection laws. With the General Data Protection Regulation (GDPR) having come into effect on May 25, 2018, the EP considers the EC in contravention of GDPR Article 45(5). This article requires the EC to repeal, amend, or suspend an adequacy decision, to the extent necessary, once a third country no longer ensures an adequate level of data protection, meaning the suspension would hold until U.S. authorities comply with the agreement's terms. 
The immediate tug-of-war between the U.S. and the EU on the validity of Privacy Shield will signal quite a bit about the strength of the EU’s convictions and the future of global privacy legislation. [Lawfare Blog and at: The Register, Cloud Tech, ComputerWeekly and Computer Business Review]


WW – Google Employees Push Back Over Plans to Build a Censored Search Engine for China

A war is raging inside Google over the company’s plans to launch a censored version of its search engine in China [see The Intercept]. Thousands of its employees reportedly oppose the move [see The Intercept here]. A letter condemning the plan has been circulated on Google’s internal communication systems, signed by more than 1,400 employees, according to the New York Times [and The Intercept here], which first reported the uproar. According to the letter [see GIZMODO], Project Dragonfly, as the secret operation is known, raised “urgent moral and ethical questions”, and the signatories asked Google’s leadership to be more transparent about the move. Earlier this week, Brandon Downey, a former Google engineer who says he worked on an earlier version of its censored Chinese search platform, published an essay criticizing the plans. The app would conform to the Chinese government’s strict censorship rules and remove content on sensitive topics such as political dissidents, free speech, democracy, human rights, and peaceful protest. After its existence was made public, a source at Google said it was unclear if the product would ever get the green light. Google CEO Sundar Pichai said the company is “not close” to launching a search product in China but added that it was very interested in the market and that the company is “exploring many options,” according to sources speaking to CNBC. [Vice News, Media Post, Vox, Naked Security and Financial Times; see also: A majority of Google employees are content with offering a censored search engine in China]


US – Fintech Apps: Consumer Privacy Concerns Remain High

Nearly one-third of U.S. banking consumers use online and mobile fintech apps to help manage their money, according to a new survey by The Clearing House [see PR, key findings here]. But those users are concerned about data privacy and want more control over the financial data their apps can access, says David Fortney, the organization’s executive vice president. The survey asked app users: “What’s your level of comfort sharing data with the fintech apps?” And virtually all “had some level of concern or discomfort,” Fortney says in an interview with Information Security Media Group [listen here]. So who would consumers trust as custodians of their data? The research shows they would trust financial institutions. Fortney also discusses: 1) The types of data that consumers are most and least comfortable providing to fintech apps; 2) Demographic trends in fintech app privacy and security; and 3) How to get consumers to feel safer using fintech apps. [GovInfo Security and at: Bankrate.com, FinExtra Blog and Financial Regulation News]


CA – Nova Scotia re-launches FOIPOP website after 152 days of being offline

A 152-day saga came to an end on September 5 as the Nova Scotia government brought its Freedom of Information and Protection of Privacy (FOIPOP) website back online after it was revealed in April that a data breach had exposed social insurance numbers, birth dates and personal addresses to the general public. However, the new website does not currently have the same features its predecessor did. “Only publicly released access to information requests are available on the site. The site does not host any personal information and is not connected to the case management system,” said a press release announcing the launch. Any releases made since April 1 will soon be available on the site. With the service at least partially restored, the remainder of this post includes everything we know about the breach, the website and what has happened behind the scenes, detailed through internal emails, briefing documents and reports obtained through FOIPOP requests. [Global News]

CA – OIPC AB Permits Utility Regulator to Disregard Request

The OIPC AB responded to a request by the Alberta Energy Regulator to disregard an access request pursuant to section 55(1) of the Freedom of Information and Protection of Privacy Act. An email request by landowners was vexatious because the purpose of their request was not to obtain access but to use their email request, copied to dozens of unrelated email addresses, as a public platform to insult and degrade the regulator and its staff; the landowners may, however, confirm in writing, using non-abusive language, that they want access to the information already specified by the regulator. [OIPC AB – Request for Authorization to Disregard an Access Request under section 55(1) of the Freedom of Information and Protection of Privacy Act – Alberta Energy Regulator]


US – FPF Best Practices for Consumer Genetic Testing Services

The Future of Privacy Forum has issued best practices for the use of genetic data generated by consumer genetic and personal genomic testing services. Consumer genetic testing services (e.g., Ancestry, MyHeritage) must obtain express consent for the collection, analysis and marketing of genetic data (parental consent is required for consumers under 18), and secure data through encryption, data user agreements and access controls; genetic data may be disclosed to law enforcement without consent only where required by valid legal process. [FPF – Privacy Best Practices for Consumer Genetic Testing]

US – 23andMe Says Privacy-Loving Customers Need to Opt Out of Its Data Deal With GlaxoSmithKline

Customers of 23andMe (the genetics testing company) need to be aware of how the company is using data that users may have earlier consented to give without anticipating its newer initiatives. One new tie-up was a particular point of interest at TechCrunch’s massive Disrupt show in San Francisco. 23andMe CEO and co-founder Anne Wojcicki was asked a series of questions about 23andMe’s pact with pharmaceutical giant GlaxoSmithKline (GSK), which announced in July that it had acquired a $300 million stake in 23andMe. As part of the four-year deal, GSK gains exclusive rights to mine 23andMe’s customer data to develop drug targets. 23andMe has for the last three-and-a-half years been sharing insights with GSK and six other pharmaceutical and biotechnology firms. Now, GSK alone will be able to access the aggregated and wholly anonymized customer information. 23andMe customers have expressed some chagrin about the deal, and Wojcicki’s appearance at Disrupt might not assuage them. The reason: she underscored that 23andMe customers aren’t being asked to opt in to this data-sharing agreement, but rather are being told they can opt out via email. To people who treasure their privacy, that’s not enough. [TechCrunch and at: Medium]

Health / Medical

US – Unsecured Medical Record Systems and Devices Put Patient Lives at Risk

A team of physicians and computer scientists at the University of California has shown that it is easy to modify medical test results remotely by attacking the connection between hospital laboratory devices and medical record systems. These types of attacks might be more likely used against high-profile targets, such as heads of state and celebrities, than against the general public. But they could also be used by a nation-state to cripple the United States’ medical infrastructure. Dubbed Pestilence, the attack is solely proof-of-concept and will not be released to the general public. While the vulnerabilities the researchers exploited are not new, this is the first time that a research team has shown how they could be exploited to compromise patient health. These vulnerabilities arise from the standards used to transfer patient data within hospital networks, known as the Health Level Seven standards, or HL7 [see wiki]. Essentially the language that allows all devices and systems in a medical facility to communicate, HL7 was developed in the 1970s and has remained untouched by many of the cybersecurity advances made in the last four decades. [Science Daily]
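The weakness the researchers exploited is visible in the wire format itself. The sketch below uses a made-up message and deliberately simplified parsing (real HL7 v2 has escaping rules and special MSH field numbering) to show the core point: an HL7 v2 lab result is plain pipe-delimited text with no signature, encryption, or integrity check, so anyone sitting on the connection between a lab device and the record system can rewrite a result field undetected.

```python
# Minimal sketch (hypothetical message, simplified parsing) of why HL7 v2
# traffic is easy to tamper with: segments are carriage-return-separated,
# fields are pipe-separated, and nothing is authenticated.

def parse_hl7(message: str) -> dict:
    """Parse an HL7 v2 message into {segment_id: [field_list, ...]}."""
    segments = {}
    for line in message.strip().split("\r"):
        fields = line.split("|")
        segments.setdefault(fields[0], []).append(fields)
    return segments

# A made-up ORU (lab result) message: the OBX segment carries the observed
# value in its fifth field.
msg = ("MSH|^~\\&|LAB|HOSP|EMR|HOSP|202301010000||ORU^R01|123|P|2.3\r"
       "OBX|1|NM|2345-7^GLUCOSE^LN||95|mg/dL|70-99|N")

parsed = parse_hl7(msg)
print(parsed["OBX"][0][5])  # the reported glucose value

# Because the message is unsigned, an on-path attacker can rewrite the value
# before it reaches the record system -- the receiver has no way to notice.
parsed["OBX"][0][5] = "950"  # tampered result
```

This is the class of manipulation the "Pestilence" proof-of-concept demonstrated against the lab-to-records connection.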

WW – Insider Threats Account for Almost 1/3 of Healthcare Breaches

A Protenus breach study, in collaboration with DataBreaches.net, examined breaches in the healthcare sector from April to June 2018. The majority were first-time offenders, who are more than 30% likely to commit a second offence within three months and a third offence within a year; almost 3/4 of offenders snoop into patient records belonging to a family member. [Q2 2018 Breach Barometer – Protenus]

CA – OIPC NS Finds Multiple Violations by Pharmacist

An OIPC NS report investigated breaches of personal health information in the provincial Drug Information System pursuant to the Personal Health Information Act. Organizations authorized to access the provincial drug information system were not sufficiently monitoring their staff’s access (lookups without user notes and not associated with dispensing activity were not audited), which allowed a pharmacist to snoop into the PHI of 46 individuals in an EHR database, including her doctor, co-workers, and her child’s teachers. [OIPC NS – Investigation Report IR18-01 – Drug Information System Privacy Breaches, Department of Health and Wellness]

Horror Stories

CA – Air Canada Resets 1.7 Million Accounts After App Breach

Air Canada has been forced to issue a password reset for all 1.7 million users of its Android, iOS and BlackBerry mobile app after up to 20,000 accounts were compromised by hackers last week. According to this alert, the company detected “unusual login behaviour” between August 22 and 24, after which it blocked further access. For the 20,000 people believed to be directly affected by the breach, two types of data were put at risk: a) Name, email address, telephone number, and Air Canada Aeroplan account number; and b) Potentially also passport number, NEXUS number (a system allowing rapid crossing of some borders), Known Traveler number, gender, birth date, nationality, passport expiration date, country of issuance, and country of residence. Credit card numbers were encrypted and were not compromised. Passwords associated with the company’s Aeroplan points program were also not at risk, but users should still monitor transactions, Air Canada said. Arguably, caution dictates that passengers should cancel their passports and buy new ones. If customers wish to go down this route, it’s hard to see how Air Canada won’t be expected to reimburse that cost. [Naked Security and at: CBC News, CTV News, The Canadian Press, The Register and BBC News]

Identity Issues

CA – OIPC SK Reluctantly Finds Driver’s License Data Not PI

This OIPC SK report reviewed Saskatchewan Government Insurance’s response to a request for records pursuant to the Freedom of Information and Protection of Privacy Act. The insurance body properly withheld some parts of an internal privacy breach report that might sway jury members in impending legal proceedings; however, it must disclose driver’s licence details, which the OIPC SK believes should, but do not, fall under the definition of PI in the FOI legislation (such data should be protected, as they can be used for fraud and identity theft). [OIPC SK – Review Report 146/2017 – Saskatchewan Government Insurance]

CA – OIPC SK Stands by Current Victim Identity Law

Saskatchewan’s privacy commissioner Ronald Kruzeniski does not believe the legislation behind the Regina police’s policy on naming murder victims needs to be changed. The option was not ruled out by Justice Minister Don Morgan to settle a difference in the way he and the Regina Police Service interpret the Local Authority Freedom of Information and Protection of Privacy Act (LAFOIP) [PDF], which came into effect in January. Regina Police Chief Evan Bray has decided to release the names of murder victims on a case-by-case basis [watch video]. Police will only release names in situations where it will help an investigation, to protect someone’s health or safety, after the first court appearance of someone charged in the crime, or if it’s in the public interest. Minister Morgan argued the chief is taking the wrong approach in his interpretation and explained the starting point should be that names are released except in rare cases, like pending next-of-kin notification or if it would compromise an ongoing investigation. Kruzeniski said regardless of which direction is taken, it still ends up being a case-by-case determination. He believes you could draft the policy either way and end up with the same result. “It’s hard to put a real good definition on public interest and I think you have to rely on the police chief at the time to say ‘When I hear the summary of the facts, I think it’s in the public interest to release it,’” said Kruzeniski. [News Talk 980 CJME, The Canadian Press and see also opinion: The killing of someone, who shall remain nameless]

IN – Edward Snowden on Aadhaar Privacy: “The system is already going bad”

“You’ll be tracked, you’ll be monitored, you’ll be recorded in a hundred different ways and not by UIDAI [Unique Identification Authority of India], but by the Aadhaar number they created that is being used by every other company and every other group in society,” Edward Snowden said during a recent ‘Talk Journalism’ event. A video of Snowden’s live stage interview via Google Hangouts was published on YouTube recently [29:13 min – at Talk Journalism event on August 11 at Hotel Fairmont, Jaipur]. “The biggest crime behind this system is that it’s being used for things that are unrelated to what the Government is paying for. If you want to open an account, buy a train ticket, more and more of these services are demanding an Aadhaar number. Not just the number, they are demanding that you show them the physical card. This is creating a systemisation of society, of the public and this was not the intention of the programme,” Snowden said. He called for criminal penalties on companies that ask for a person’s Aadhaar number for a service that the Govt is not paying for. Snowden concluded his comments on Aadhaar by saying that the system is already going bad and that the privacy of Indians (digital or otherwise) is not adequately protected. [Digit and at: International Business Times, All India Roundup and SocialPost]

Law Enforcement

CA – Police Chiefs Push for New Data-Sharing Treaty With U.S.

Canada’s police chiefs are pressing the Trudeau government to sign a new electronic data-sharing agreement with the United States to overcome hurdles in the fight against crimes ranging from fraud to cyberterrorism. But the government and the federal privacy commissioner say more consultation and study are needed to ensure appropriate protection of personal information before taking such a step. The Canadian Association of Chiefs of Police recently passed a resolution [see pg 7 of PDF] urging the federal government to negotiate an updated sharing agreement with the U.S. They say cross-border access to information is one of the most pressing issues for law enforcement agencies. The chiefs see an opportunity for a virtual leap forward following Washington’s passage of the Clarifying Lawful Overseas Use of Data (CLOUD) Act [H.R.4943 & wiki]. The new law allows the U.S. to sign bilateral agreements with other countries to simplify the sharing of information on criminal justice matters, as long as signatories have proper safeguards in place. The Liberal government has conducted consultations on cybersecurity, but it has yet to address some key questions about how to ensure police and spy agencies have access to information that will help them solve crimes in the digital realm without trampling on privacy or charter rights. A spokeswoman for privacy commissioner Daniel Therrien said that while the watchdog has not yet studied the police chiefs’ proposal, alternative arrangements to the current international legal assistance process should not undermine privacy protections in Canadian law. [CTV News]


US – Lawsuit Over Google’s Sneaky Location Tracking Could Be a Game-Changer

The Associated Press revealed that Google continues to collect location data from users even when “Location History” is disabled in its options. The company was unapologetic, but did change its location policy. Now, a Californian named Napoleon Patacsil has filed a lawsuit [5:18-cv-05062 – here & here – read complaint] against Google in federal court and requested a judge grant the case class-action status so that other Google users could join. If the suit is granted class-action status, practically every breathing American could potentially join in as a Plaintiff. The suit accuses Google of violating California’s privacy laws on three counts. It cites section 637.7 of the penal code that “prohibits the use of an electronic tracking device to determine the location or movement of a person.” The second count builds on the first, and claims Google violated the plaintiff’s reasonable expectation of privacy. This claim goes on to say that “Google engaged in true tracking of location history deceptively and in direct contradiction of the express instructions of Plaintiff and the members of the Class.” The third count goes further, saying that the Plaintiff’s “solitude, seclusion, right of privacy, or private affairs” were violated “by intentionally tracking their location.” The suit claims that Google has caused harm to its users “because they disclosed sensitive and confidential location information, constituting an egregious breach of social norms” and were the victims of an “intrusion into their private affairs.” On the same day that Patacsil filed his lawsuit, activists from the Electronic Privacy Information Center sent a letter [3 pg PDF here] to the Federal Trade Commission encouraging it to investigate Google for potentially violating a consent decree it signed with the agency in 2011. [GIZMODO and at: Reuters, Ars Technica, BGR, Tom’s Guide and Courthouse News Service | Napoleon Patacsil et al. v. Google, Inc. – Class Action Complaint – United States District Court, Northern District of California, San Francisco/Oakland Division]

Online Privacy

US – Facebook Users Are Changing Their Social Habits amid Privacy Concerns

The Pew Research Center has released the results [also see 3 pg PDF] of a survey that shows many Facebook users have changed how they interact with the site over the past year. The center found that 54% said they had adjusted their privacy settings, 42% had taken a break from the platform for at least several weeks and 26% said they deleted the Facebook app from their phone in the past year. In all, 74% of those surveyed had taken at least one of those actions over the past 12 months, though it’s unclear if that’s a typical rate or a response to recent privacy-related scandals. Pew also found a difference between older and younger users. While 44% of Facebook users between 18 and 29 years old said they deleted the app sometime in the last year, only 12% of users 65 years of age or older said they had done the same. Similarly, while around 64% of users aged 18 to 49 said they had changed their privacy settings, only 33% of users 65 years old or older said they’d done so. Pew didn’t find any major differences between Democrats and Republicans. [engadget and at: The Washington Post and Bloomberg]

WW – Apple App Store Data Privacy Policy Changes

Apple’s new privacy policy for its Apple App Store takes effect on October 3, 2018. After that date, developers will have to provide privacy policies for new apps and updates before they can be submitted for distribution. To prevent surreptitious policy changes, developers will be permitted to edit policies only when they submit a new version of the app. The privacy policies must include clear information about what data are collected; how the data are collected; how the data are stored; what is done with the data; and how users can revoke their consent and demand that their data be deleted. Apple also requires that the policy promise that any third-party entities with which the data are shared abide by the same rules.

  • ZDNet: Apple looks to plug App Store privacy hole with new personal data policy
  • Computerworld: Apple insists developers ramp up their privacy commitments
  • apple.com: 5.1.1 Data Collection and Storage

WW – Google Selling 2FA Security Keys

Google is now selling its USB and Bluetooth Titan FIDO-based security keys for two-factor authentication (2FA). Google has been using the keys internally; last month, the company said that since the keys were adopted more than eight months ago, none of its employees’ accounts has been phished.

  • The Verge: Google’s in-house security key is now available to anyone who wants one
  • CNet: You can buy Google’s $50 set of Titan security keys now
  • Bleeping Computer: Google’s FIDO Based Titan Security Key Now Available for $50 USD

Other Jurisdictions

AU – Australian Commissioner Provides Final Guidance on Access

The New South Wales Information Privacy Commissioner has issued final guidance on patient access requests, following a draft version issued in June 2018. Providers must respond to an access request within 45 days and generally provide access to an individual’s health information upon request, subject in some cases to identity verification and a reasonable fee; exceptions include where providing access would pose a serious threat to the health of the requester or others, would have an unreasonable impact on others’ privacy, or where there have been repeated, unreasonable requests. [IPC New South Wales – Access to Health Information: Fact Sheet for Health Care Providers | Checklist for Private Sector Staff]

Privacy (US)

US – More States Appoint ‘Chief Privacy Officers’ to Protect People’s Data

In this age of hackers and cybercriminals, every state has a top security official focused on preventing breaches and protecting the vast amounts of data it collects. Now, a growing number also are hiring a top official to make sure that the privacy of residents’ personal data is protected as well. Many large companies have employed ‘chief privacy officers’ for years, but they were rare in state government. A decade ago, there were only a few; today, at least eight states have them — Arkansas, Indiana, Kentucky, Ohio, South Carolina, Utah, Washington and West Virginia, according to the National Association of State Chief Information Officers [NASCIO]. Arkansas hired its first in June. States collect reams of confidential information from residents. Chief privacy officers are tasked with ensuring that state agencies safeguard that information and comply with privacy regulations. That means state employees who handle data must know how to protect sensitive information when they use or share it. Chief privacy officers typically create statewide privacy policies that apply to every agency and require that staffers be trained. They meet regularly with state agencies’ privacy teams and evaluate new technology to make sure it doesn’t conflict with privacy protections. Some also offer services to consumers to educate them about protecting their privacy. State chief privacy officers work closely with chief information security officers, who oversee cybersecurity. Chief privacy officers also must make sure their efforts don’t impede the public’s right to know. A lot of data collected by states isn’t private; it’s public information that should be accessible to anyone. [Stateline Blog (The Pew Charitable Trusts)]


AU – Want to Hack the WA Government? Try ‘Password123’

A staggering 60,000 out of 234,000 active accounts at a range of WA government agencies were potentially at risk of a dictionary attack due to their weak passwords, a review by the WA Auditor General, Caroline Spencer, has found. For the report [see notice, report summaries & video here & PDF report] the Auditor General’s office obtained encrypted password data from 23 Active Directory environments across 17 agencies. Using a selection of password dictionaries, it found that tens of thousands of users had chosen weak passwords, including “Password123” (1464 accounts), “password1” (813), “password” (184), “password2” (142) and “Password01” (118). The auditor also assessed the information security controls surrounding key business applications at five government agencies. All five “had control weaknesses with most related to poor information security and policies and procedures”. Earlier this year the WA government transformed the Office of the Government Chief Information Officer into the Office of Digital Government [here] and moved it to the Department of the Premier and Cabinet [here]. In its response to the audit report, DPC said that the move would help “ensure that ICT performance, data sharing and cyber security are strengthened”. [Computerworld]
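The auditor’s technique is a classic dictionary attack: hash each candidate from a list of common passwords and look for matches among the stored password hashes. A minimal illustration follows, with made-up accounts and SHA-256 standing in purely for illustration (Active Directory actually stores NTLM hashes, and a real audit tests millions of candidates):

```python
# Sketch of a dictionary attack: precompute hashes of common passwords,
# then match them against a dump of account password hashes.
import hashlib

def sha256_hex(password: str) -> str:
    return hashlib.sha256(password.encode()).hexdigest()

# Hypothetical account -> hash dump (hashes derived here for the demo).
stored = {
    "alice": sha256_hex("Password123"),
    "bob":   sha256_hex("correct horse battery staple"),
    "carol": sha256_hex("password1"),
}

# A tiny dictionary of common passwords, like those the WA audit tested.
dictionary = ["password", "password1", "password2", "Password01", "Password123"]

def dictionary_attack(stored: dict, dictionary: list) -> dict:
    """Return {account: guessed_password} for every hash that matches."""
    lookup = {sha256_hex(p): p for p in dictionary}
    return {acct: lookup[h] for acct, h in stored.items() if h in lookup}

cracked = dictionary_attack(stored, dictionary)
print(cracked)  # alice and carol fall immediately; bob's passphrase does not
```

The defence is equally simple in principle: ban dictionary words and known-breached passwords at the point where users choose them, so there is nothing for this lookup to match.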

US – Augusta University Health Exposed 417K Records Due to Phishing Attacks

Once again, a medical data breach has exposed thousands of patients. This time, the victims primarily include residents of the state of Georgia. Reportedly, Augusta University Health suffered a data breach due to multiple phishing attacks over the past year. Regrettably, the breach has exposed around 417,000 records [see AUH notice & FAQ]. Sophisticated phishing attacks targeted Augusta University in two different instances. The first incident took place on September 10-11, 2017. Initially, the university suspected that the breach exposed a “small number of internal email accounts”. However, this year, it realized that those accounts allegedly exposed 417,000 records. The second phishing attack happened on July 11, 2018, with a much smaller scope. The breached data includes explicit personal information about the patients, as well as their medical and health records. In some cases, breach of financial records and Social Security numbers is also suspected. [Latest Hacking News and at: DARK Reading, Atlanta Journal-Constitution, Healthcare IT News and SC Magazine]

US – 1000 GAO Cybersecurity Recommendations Remain Unaddressed

Since 2010, the Government Accountability Office (GAO) has made over 3,000 recommendations [see some here] to agencies aimed at addressing cybersecurity shortcomings in each of these action areas. However, as of this month, about 1,000 have not been implemented. Until these shortcomings are addressed, federal agencies’ information and systems will be increasingly susceptible to the multitude of cyber-related threats that exist, and there is much work for both government and the private sector to do to protect the public. The GAO has been examining federal efforts on several cybersecurity fronts, including protecting Americans’ privacy, protecting critical infrastructure such as telecommunications and financial markets, and protecting the federal government’s own operational IT systems, such as those that are essential to the day-to-day workings of government. Urgent actions are needed to address several cybersecurity challenges facing the nation. The risks to IT systems supporting the federal government and the nation’s critical infrastructure are increasing as security threats continue to evolve and become more sophisticated. These risks include escalating and emerging threats from around the globe, steady advances in the sophistication of attack technology, the emergence of new and more destructive attacks, and insider threats from disaffected or careless employees. The GAO has identified a range of critical cyber challenges [see report to Congress] facing the federal government today and critical actions needed now to address them. The GAO has had information security on its High Risk list since 1997 and will continue to track it as part of that list, which identifies programs that need concentrated attention from Congress and the Administration. [The Hill]


WW – Five Eyes Countries Want Tech Companies’ Help to Access Encrypted Communications

The countries known as the Five Eyes – the US, the UK, Canada, Australia, and New Zealand – have issued a joint statement suggesting that unless tech companies help law enforcement access communications protected by end-to-end encryption, they “may pursue technological, enforcement, legislative or other measures to achieve lawful access solutions.”

  • CNET: US and intelligence allies take aim at tech companies over encryption
  • NextGov: Five Eyes Intel Alliance Urges Big Tech to Help Break Encrypted Messages
  • Infosecurity-magazine: Five Eyes Talk Tough on Encryption Backdoors
  • Homeaffairs.gov.au: Statement of Principles on Access to Evidence and Encryption

Workplace Privacy

CA – Ontario Law Limits Employee Screening

McCarthy Tetrault examines Ontario’s Bill 113, the Police Record Checks Reform Act, 2015, effective November 1, 2018. The law firm says that employers cannot obtain non-conviction information unless for a vulnerable sector check, or use/disclose the results of a check other than for the original purpose or as authorized by law; an individual’s written consent must be obtained before the check is conducted and must specify the check being consented to. [New Rules Regarding Police Record Checks: Employers Take Note – Jessica Wuergler, Associate, McCarthy Tetrault]

CA – OIPC AB Finds Significant Harm from Disclosure of Employee PI

The Office of the Information and Privacy Commissioner of Alberta was notified by La Coop fédérée of unauthorised disclosure of personal information, pursuant to the Personal Information Protection Act. Three employees in an organisation clicked on a malicious link in phishing emails and provided their authentication information (allowing the hacker access to PI in their inbox); mitigation steps were taken (retaining an external consultant, extensive monitoring, refresher training), however, the breach was deliberate, and compromised information (salary, bonus considerations, benefit plan types) can be used for identity theft and fraud. [OIPC AB – Breach Notification Decision – P2018-ND-083 – La Coop fédérée]

US – California Employers Must Get Applicant OK for Background Check

California employers, lenders, and landlords must obey the tougher of two privacy laws and inform applicants before investigating their background, the state Supreme Court ruled Aug. 20 [see “Connor v. First Student Inc” & opinion summary]. The 7-0 decision affects the thousands of credit, employment, and housing decisions made daily in California under two laws. One of them requires prior notice and authorization before certain types of background investigative reports are ordered. The other covers more consumer-oriented information that doesn’t require advance disclosure or consent. The justices upheld a 2015 lower court ruling [see discussion] that school bus transportation company First Student Inc., part of FirstGroup plc, failed to adequately notify and obtain consent from former Laidlaw International Inc. bus drivers and aides before it conducted background checks on 54,000 workers. The reports were ordered after First Student bought Laidlaw in 2007. [Bloomberg BNA and at: The Record (Law.com) and At The Lectern (Horvitz & Levy)]

CA – Liberals Consider ‘Right to Disconnect’ Outside Work Hours: Report

The federal Liberals are considering whether a reshaping of federal labour standards should include giving workers the right to ignore their job-related emails at home. The idea of putting into law a “right to disconnect” [wiki] is one of several policy areas the Liberals identify as meriting further study in a new report [“What we heard: Modernizing federal labour standards” – see PDF]. The report which provides results of a year-long consultation on changes to the federal labour code showed a split between employer and labour groups over whether the Liberals should set rules for workers in federally-regulated industries. That includes employees in transport, banking and telecommunications, and could also influence provincial labour laws. Labour groups argued that a legal right to turn off work devices, or workplace policies to limit the use of work-related devices when not at the office, would improve rest and not bite into family time. Employers were more cautious, telling federal officials that some companies need employees available to be on call after hours. And some employees choose to stay connected because they don’t work a traditional “9-to-5” workday. At least one employer group called any government action a “legislative over-step,” the report said. [National Post and at: The Canadian Press]




01-15 August 2018


CA – OIPCs to Investigate Use of Facial Recognition at Calgary Malls

The privacy commissioners of Alberta and Canada are launching investigations into the use of facial recognition technology, without the public’s consent, in at least two malls in Calgary. A notice posted to the OIPC AB website says the investigation will look to determine, “what types of personal information are being collected, whether consent for collection or notice of collection is required or would be recommended, for what purposes personal information is collected, whether the data is being shared with other businesses, law enforcement or third parties, and what safeguards or security measures are in place to protect personal information.” Alberta’s privacy commissioner, Jill Clayton, opened the investigation based on the level of public interest. A similar notice was also posted to the Office of the Privacy Commissioner of Canada (OPC) website; the provincial and federal privacy offices will co-operate with each other. The owner of the malls, Cadillac Fairview, said the software was also running in other malls across Canada. The company said the cameras in the mall directories are used to better understand traffic flow and they “do not record or store any photo or video content.” [CBC News. Sources report Cadillac Fairview has confirmed it has suspended use of the system and has promised to “co-operate fully with the investigations” – see The Globe and Mail, CBC News]

AU – Australian Governments Continue Expanding Use of Facial Recognition

The Australian state of Western Australia is planning to trial facial recognition technology for enforcing bans on purchasing alcohol by certain individuals. A trial will be conducted in the Pilbara region, where high rates of violence have been blamed on alcohol consumption, with vendors informed by the system when a person has been banned from purchasing alcohol as a consequence of intoxicated driving or domestic abuse. Racing and Gaming Minister Paul Papalia said he would like the Scantek system, which is currently used to scan drivers’ licenses, to integrate facial recognition or something similar in the future. Australia’s Department of Home Affairs will begin loading driver’s license images into its new biometric database within months, and police are being trained on the new Driver License Facial Recognition Solution. Rights advocacy group Access Now has criticized the Australian Federal Government for its use of biometrics in public surveillance, while multiple state governments have challenged what they say is the expanding scope of the facial recognition systems set out in the Identity-matching Services Bill 2018 and related legislation. [The West Australian, The Courier-Mail, Biometric Update]

US – GAO to Examine Government and Commercial Use of Facial Recognition

Four US Senators and a House Judiciary Committee member have asked the GAO to investigate government use of commercial facial recognition tools. Facial recognition technologies raise serious concerns about individual privacy rights; a review should examine which law enforcement agencies are using such technologies and how (how the technology is audited for accuracy, whether there is transparency regarding its use, and whether redress procedures exist), what data commercial vendors use to “train” the algorithms used to match images, and whether data brokers buy from multiple vendors to create individualized profiles for marketing. [Letter to GAO Regarding Facial Recognition – Senator Ron Wyden et al. | Press release]

US – Facial Recognition Not Accurate Enough for Policing Decisions: Body Cam Company

Facial-recognition technology has been facing public scrutiny in recent weeks, especially since an American Civil Liberties Union experiment using Amazon’s facial recognition erroneously matched members of US Congress to a directory of mugshots of alleged criminals [ACLU blog post]. Axon, the country’s biggest supplier of body cameras, doesn’t want to face similar backlash. The company’s CEO Rick Smith told investors that Axon isn’t yet working on facial recognition to integrate into its products. The “accuracy thresholds,” said Smith, are not “where they need to be to be making operational decisions off the facial recognition.” Smith suggested that rolling out facial recognition at this point could scuttle the tech’s future in body cameras. “This is one where we think you don’t want to be premature and end up either where you have technical failures with disastrous outcomes or there’s some unintended use-case where it ends up being unacceptable publicly in terms of long-term use of the technology.” In addition, Smith noted there are accountability and privacy measures that haven’t yet been worked out with the tech’s application in body cameras. Commercialization would only come, Smith said, once all those issues had been resolved, and Axon had ensured “that it will be acceptable by the public in large.” [Quartz, Gizmodo. See also: Can US Law Enforcement Be Trusted With Facial Recognition Technology?]

Big Data / Analytics / Artificial Intelligence

EU – Starting Point for a Big Data Project: The Privacy Impact Assessment

The use of big data has brought much controversy, particularly when it involves sensitive information, concerns children, minorities or other vulnerable people, or where the decision-making has a significant impact on individuals. As public interest and regulatory scrutiny of artificial intelligence, machine learning and big data continue to build, it is increasingly important for businesses to be aware of individuals’ rights over their data and be prepared to demonstrate compliance with data protection laws. The data protection impact assessment (DPIA), also called a privacy impact assessment (PIA), is an important tool that organisations have at their disposal to ensure that their processing of personal data complies with data protection law and minimises the impact on privacy. The guide [“The Starting Point for a Big Data Project: The Privacy Impact Assessment”] explains why, when and how PIAs should be carried out in the context of a big data project. It also discusses some of the key issues that are likely to be identified in a PIA on a big data project and factors to consider when making risk-based decisions on the basis of a PIA. [Global Media and Communications Watch (Hogan/Lovells)]

WW – Weaponized AI and Facial Recognition Enter the Hacking World

The open-source intelligence-gathering tool Social Mapper — developed by Trustwave’s Jacob Wilkins — uses facial recognition to automatically search for targets across eight social media sites: Facebook, Twitter, LinkedIn, Instagram, Google+, the Russian social networking service VKontakte, and the Chinese social networking sites Weibo and Douban. Its purpose is to help pen testers and red teamers with social engineering attacks. Instead of manually searching social media sites for names and pictures, Social Mapper makes it possible to automate such scans “on a mass scale with hundreds or thousands of individuals.” After searching, it produces a report, such as a spreadsheet with links to targets’ profile pages or an HTML report that also includes photos. From there, your attacks are limited “only by your imagination.” If everyday malware is not considered evasive enough, consider weaponized artificial intelligence (AI) and meet the new attack tool DeepLocker [created by the IBM team], a “highly evasive new breed of malware, which conceals its malicious intent until it reaches a specific victim.” DeepLocker “unleashes its malicious action as soon as the AI model identifies the target through indicators like facial recognition, geolocation and voice recognition.” To show off DeepLocker’s capabilities, the researchers camouflaged WannaCry ransomware in a video-conferencing app. Going undetected by security tools, DeepLocker did not unlock and execute the ransomware until it recognized the face of the target. [CSO Online. For Social Mapper see The Verge & FindBiometrics. For DeepLocker see: ZDNet & eWeek — for both see Forbes]
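The matching step a tool like Social Mapper automates can be illustrated with a toy sketch: compare a target's face embedding against embeddings extracted from candidate profile photos, and keep the profiles within a distance threshold. The vectors and threshold below are invented for illustration; real face-recognition libraries typically produce 128-dimensional encodings.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_profiles(target_embedding, profiles, threshold=0.6):
    """Return the profile names whose photo embedding falls within
    `threshold` of the target's embedding: the core matching step
    a face-search tool automates across social networks."""
    return [name for name, emb in profiles.items()
            if euclidean(target_embedding, emb) <= threshold]

# Toy 3-dimensional embeddings (purely illustrative values).
target = [0.10, 0.80, 0.30]
profiles = {
    "facebook:jdoe": [0.12, 0.78, 0.31],  # near-identical -> match
    "twitter:jdoe":  [0.15, 0.75, 0.35],  # close -> match
    "weibo:random":  [0.90, 0.10, 0.50],  # far -> no match
}
print(match_profiles(target, profiles))  # ['facebook:jdoe', 'twitter:jdoe']
```

Running this across hundreds of scraped profile photos, rather than three hand-written vectors, is what turns a manual search into the “mass scale” scan the article describes.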


CA – Federal Bill Regulates Collection at Border

Bill C-21, an Act to Amend the Customs Act, passed the House of Commons and is being reviewed by the Canadian Senate. If passed, the Canada Border Services Agency may collect from any person leaving Canada personal information (including nationality, sex, and travel history), travel documents and itinerary information, which can be retained for 15 years beginning on the date on which the information is collected. [Bill C-21 – An Act to Amend the Customs Act – See also Blaney McMurtry: Does the CBSA Have Authority to Search Your Electronic Devices?]

CA – Canada Legislation Permits Drug and Alcohol Testing by Police

Bill C-46, an Act to Amend the Criminal Code Offences (Relating to Conveyances), received royal assent on June 21, 2018. Police officers may require a person to provide a blood, urine or breath sample for testing if there is reasonable belief that an individual has operated a car, aircraft or railway equipment after consuming drugs or alcohol, or has consumed drugs or alcohol within 3 hours of committing an offence; disclosure of test results to third parties is generally prohibited unless an exception applies. [Bill C-46 – An Act to Amend the Criminal Code Offences Relating to Conveyances – Parliament of Canada]

CA – OPC and OIPC Guide Companies on Consent

The Office of the Privacy Commissioner of Canada and the Offices of the Information and Privacy Commissioners of Alberta and British Columbia have jointly issued guidelines for obtaining meaningful consent. To obtain express and informed consent, companies must involve users when designing the consent process and conduct regular audits of privacy communications to ensure they reflect management policies; mobile apps guidance recommends limiting data collection to what the app needs to function and providing a dashboard for users to easily tighten privacy settings. [OIPC AB – Guidelines for Obtaining Meaningful Consent | Mobile Apps Guidance]

CA – Ontario Psychologist’s Telephone Recording Lawful: Review Board

The Ontario Health Professions Appeal and Review Board reviewed an Ontario psychologist’s recording of a contentious telephone conversation. Another party to a phone conversation complained to a regulator that she was not asked or told by the psychologist that the telephone call between them could be recorded; however, such recording is not contrary to professional standards and is permitted in Ontario pursuant to the Criminal Code (an individual can record a call in which he/she is participating). [H. S., Ph.D., C.Psych. v. the Catholic Diocese of London – 2018 CanLII 55890 HPARB – Health Professions Appeal and Review Board of Ontario]

CA – OIPC BC Requires Ministry to Sever Records

An OIPC BC order examined the Ministry of Attorney General’s decision to withhold records requested pursuant to BC’s Freedom of Information and Protection of Privacy Act. The OIPC rejected the Ministry’s argument that attachments to privileged briefing notes are therefore automatically privileged; the Ministry was ordered to sever information from third party correspondence that would not reveal the substance of legal advice. [OIPC BC – Order F18-18 – Ministry of Attorney General]

CA – BC Law Society Properly Withheld Billing Information

An OIPC BC order examined the Law Society of BC’s handling of a request for access to records pursuant to BC’s Freedom of Information and Protection of Privacy Act. The OIPC BC concluded that a statement of an account is billing information that is presumptively privileged; descriptions of professional services could reveal privileged communications, and disclosure of some information could allow inferences to be made about privileged communications (i.e. the hours spent providing services on each date and the total amount of fees, taxes and disbursements). [OIPC BC – Order F18-29 – Law Society of British Columbia]

CA – IPC ON Clarifies Applicability of GDPR in Ontario

The IPC Ontario published an overview of the GDPR as applicable to institutions and healthcare information custodians in Ontario. Whether Ontario public institutions and custodians must comply with the GDPR depends on their processing activities (offering goods or services to individuals in the EU, or monitoring the activities of individuals in the EU); compliance requires express consent (specific, unambiguous and freely given), ensuring data subject rights (the rights to object, restrict, access and delete personal data, and the right to be forgotten), and breach notification (to the DPA within 72 hours of becoming aware of a breach). [IPC ON – Privacy Fact Sheet – July 2018 – General Data Protection Regulation]

CA – Investigation Records Not Protected by Solicitor-Client Privilege

The OIPC NL reviewed the City of Corner Brook’s decision to withhold requested records, pursuant to the Access to Information and Protection of Privacy Act. The OIPC found that communications to and from the public body’s solicitor contained facts and did not entail the seeking or giving of legal advice; the records were created for the dominant purpose of a workplace investigation, and the fact that some of those records may have been held in a file in the solicitor’s office does not make them privileged. [OIPC NL – Report A-2018-017 – City of Corner Brook]

CA – OIPC SK Finds Unlawful Disclosure of PHI

An OIPC SK report investigated two alleged privacy breaches under the Health Information Protection Act. A medical lab sent two patient reports to the wrong doctor because the lab’s information system is designed to automatically highlight the first name appearing in a list of doctors with the same surname; the system should be reconfigured to search for a doctor by their unique ID number or to require both the first and last name of the doctor. [OIPC SK – Investigation Report 014-2018, 016-2018 – Saskatchewan Health Authority]
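The failure mode the report describes can be seen in a minimal sketch (the names and IDs below are invented): auto-selecting the first doctor with a matching surname silently misroutes the report, while looking up a unique ID cannot.

```python
doctors = [
    {"id": "D-1001", "first": "Anne",  "last": "Singh"},
    {"id": "D-1002", "first": "Brian", "last": "Singh"},
]

def find_by_surname_first_match(surname, registry):
    """Mimics the flawed design: picks the first doctor with a
    matching surname, regardless of first name."""
    for doc in registry:
        if doc["last"] == surname:
            return doc

def find_by_id(doctor_id, registry):
    """The recommended fix: route reports by unique ID only."""
    for doc in registry:
        if doc["id"] == doctor_id:
            return doc

# A report meant for Brian Singh goes to Anne Singh under surname matching:
print(find_by_surname_first_match("Singh", doctors)["first"])  # Anne
# Lookup by unique ID routes it correctly:
print(find_by_id("D-1002", doctors)["first"])  # Brian
```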

CA – Alberta Public Body Did Not Destroy Records

The OIPC AB investigated whether Balancing Pool had proper records management and retention procedures, pursuant to the Freedom of Information and Protection of Privacy Act. The public body did not follow instructions in email records received from another public body to delete records to prevent release; however, the public body must create a records management program, train employees on retention and destruction schedules and provide related documents alongside each other to help FOI applicants understand responsive records. [OIPC AB – Investigation Report F2018-IR-02 – Balancing Pool]

CA – Calgary Homeless Shelter Plans to ID Clients With Facial Recognition

Agencies have struggled with how to identify clients that don’t have official ID, and the Calgary Drop-In Centre shelter thinks it might have a high-tech solution — facial recognition — but it’s a fix that comes with serious privacy risks for an already marginalized population. Currently, person[s] who enter the building are fingerprinted. However, “Being fingerprinted is invasive and can cause stress for some clients” said Helen Wetherley Knight, director of IT at the Calgary Drop-In Centre. So, instead the Drop-In Centre is testing facial recognition technology for a non-invasive ID solution. Each client’s photos are captured with a secure webcam, encrypted, and then linked to a system where staff can access the client’s profile. Knight said she isn’t aware of other shelters in Canada using it. The program the Drop-In Centre is testing, which is being implemented by Vancouver-based IT company Sierra Systems, uses Microsoft’s Facial Recognition API. Client data would be stored securely in Microsoft’s cloud in data warehouses in Quebec. Eventually, they’d like to implement blockchain technology, to give clients control over which agencies access their personal information. The facial recognition technology has already gone through one round of testing — 41 clients, volunteers and staff “eagerly participated,” Knight said — but no further testing is planned as it is undergoing a feasibility study. [CBC News]

CA – OIPC SK to Doctor Who Altered Records of Dead Patient: Adopt Better Record Keeping Practices

The Saskatchewan Information and Privacy Commissioner, Ron Kruzeniski, is recommending that a doctor who altered an electronic record of a dead patient’s visit eight times after the patient’s death do a better job at keeping medical records. Dr. Svitlana Cheshenchuk altered a record of a visit from Sandra Hendricks, who died hours after leaving a check-up with the doctor back in 2014. The alterations took place between October 2014 and June 2015. The privacy commissioner found [Investigation Report 024-2018] that the doctor did not comply with multiple sections of the Health Information Protection Act (HIPA) relating to policies that should protect the integrity of information and ensure compliance with HIPA. “Integrity refers to the condition of information being whole or complete; not modified, deleted or corrupted,” the report reads. [CBC News, CTV News, Regina Leader-Post and CBC News]

CA – Ontario Company’s Zero Tolerance Approach Unreasonable

The United Steel Workers union grieved a workplace policy of Drivetest. The Ontario Labour Arbitration Board found the company’s termination of an employee for viewing her own information on a confidential system to be an unreasonable interpretation of its personal data handling policy; the employee did not view the data with any malicious intent or in furtherance of an illegal act, and the policy did not expressly stipulate a particular level of discipline for any specific offense. [SERCO DES (Drivetest) v United Steel Workers – 2018 CanLII 64969 – Ontario Labour Arbitration]

CA – Surrey Plans to Install Cameras to Catch Illegal Garbage Dumping

The City of Surrey wants to install 10 or more cameras to catch people illegally dumping garbage. Surrey says illegal dumping has been increasing at an ‘alarming rate’ in the municipality during the past decade. Ray Kerr, Surrey’s manager of engineering operations, said illegal dumping isn’t only a problem in Surrey, but throughout Metro Vancouver and across the country. He said Surrey spent $600,000 last year on removing illegally dumped garbage and estimated that the city was on track to spend about $550,000 by the end of 2018. Metro Vancouver estimates that it costs regional municipalities $5 million to clean up illegal dumping, which often includes items such as mattresses, sofas, carpeting, tires and appliances. Kerr said there was no specific reason why the total of 10 or more cameras was picked other than he believes it was a “good place to start.” [Vancouver Sun, Peace Arch News]

CA – Privacy of Online Pot Sales Needs Watching: Experts

Buyers who have to provide personal information to purchase recreational pot online after legalization this fall should be able to rely on existing laws to protect their privacy, but the issue needs to be watched closely to ensure regulations are obeyed and mistakes are avoided, experts say. Ontario’s government recently announced [see related rules here & here] that consumers 19 years or older will have to go online to buy weed after legalization federally on Oct. 17 because private retail stores won’t be up and running until April. A government agency called the Ontario Cannabis Store will run the online sales, although private e-commerce provider Shopify will be involved. The matter is important given the stigma many people still attach to marijuana use, and the potential for Canadians to be barred from the United States if their otherwise legal indulgence becomes known to American border agents [see earlier reporting here & here]. A spokeswoman for the OPCC said the office had not looked specifically at online marijuana sales. At the same time, the commission said it recognized privacy concerns around buying or using marijuana given its longtime status as a controlled substance. At minimum, buyers will have to provide a name along with email and delivery addresses, and payment information. However, a spokesman for the Ministry of Finance said buyers will have to provide proof of age via government-issued ID, which a delivery person will verify but not copy. The cannabis store website will have data security and privacy controls “aligned with global e-commerce best practice,” he said. Personal data will remain in Canada and not be shared with third parties. [National Post]

CA – Many Organizations Still Ignore Basic Security: Survey

Experts are going hoarse telling organizations they have to build their cyber security strategies around doing the basics. But if a survey/report called “The State of Cyber Hygiene” sponsored by Tripwire is accurate, not many are following even the top six of the 20 recommended Critical Security Controls [download] set by the Center for Internet Security (CIS). The top six CIS Security Controls are: inventory and control hardware assets; inventory and control software assets; perform vulnerability management; secure hardware and software configurations; control administrative privileges; and monitor and analyze logs. The survey was completed by 306 participants in Canada and the U.S. last month, all of whom are responsible for IT security at companies with more than 100 employees. [IT World Canada]
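As a toy, single-host illustration of the first two controls (hardware and software asset inventory), the sketch below records the machine's hostname and its installed Python distributions; real inventory tooling covers an entire estate and every class of hardware and software, so this is illustrative only.

```python
import importlib.metadata
import json
import socket

def software_inventory():
    """Minimal single-host sketch of CIS controls 1-2: record which
    machine this is and what software (here, only installed Python
    distributions) is present on it."""
    packages = sorted(
        {dist.metadata["Name"] for dist in importlib.metadata.distributions()
         if dist.metadata["Name"]}
    )
    return {"host": socket.gethostname(), "packages": packages}

# Emit the inventory as JSON, the kind of record a central asset
# database would collect from every host on a schedule.
print(json.dumps(software_inventory(), indent=2))
```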

CA – OIPC SK: Too Much Info Released to Parents About School Gun Threat

An employee of the Good Spirit School Division, which oversees 27 schools in southeast Saskatchewan, breached a student’s privacy earlier this year after an alleged threat involving guns was overheard by a substitute teacher and some students, the Saskatchewan privacy commissioner said in a report. A letter sent out to parents a few days after the incident provided too much information that shouldn’t have been disclosed, the commissioner found. The letter said that the student made a threat and included the wording of the threat, that the RCMP had been called and that the student had parents who were “very responsible gun owners and the subject individual could not access weapons.” The letters also said the student had been suspended and that there were concerns the student may have been bullied. Once the letters went out and the breach was noticed, the school division proactively reported itself to the office. The parents of the student also made a complaint to the privacy commissioner. [CBC News]


WW – Spam is Still an Effective Way to Infect Computers: Study

A study from F-Secure and MWR InfoSecurity says that spam is still the top choice of attackers for spreading malware. The study found that spam click rates have risen slightly from 13.4% last year to 14.2% this year. The report also says that spam remains an effective vector of infection because other infection vectors, such as Adobe Flash exploits, are diminishing. [Threatpost: ThreatList: Spam’s Revival is Tied to Adobe Flash’s Demise | information-age.com: Spam still the most common cyber crime technique, according to recent research]

EU Developments

EU – EU Publishes New ePrivacy Revisions

The EU released the latest revisions to the proposed ePrivacy Regulation for comment. Providers would be permitted to process metadata if necessary for network management/optimization (for a limited duration and if anonymised data cannot be used), for statistical counting/scientific research (pursuant to EU/Member State law and subject to encryption, pseudonymisation and compliance with related GDPR provisions), and for calculating/billing interconnection payments. [Council of the European Union – Presidency Delegations – Revisions to ePrivacy Regulation 10975/18 – July 2018]

EU – EPIC Reacts to EDPB Certification Guidance

The Electronic Privacy Information Center (EPIC) commented on EU Data Protection Board’s (EDPB) draft guidelines on certification criteria. EPIC recommends certification criteria include disclosure of algorithm logic, processing prohibitions when profiling risks are identified, and scrutiny of categories and amount of personal data collected. Immediate re-certification should be mandatory for new technologies that collect large quantities of granular data, and certification should not be granted for unspecified or excessive processing. [EPIC – Comments on EDPB Consultation on Guidelines 1-2018 on Certification Criteria under the GDPR]

EU – German State DPA Publishes List of Mandatory DPIAs

The Hessian Data Protection Authority has issued a list of processing activities, pursuant to article 35(4) of the GDPR, that require a data protection impact assessment. Such processing includes vehicle data using automatic readers, merging of data using non-transparent algorithms (e.g., fraud prevention), behavioural/performance evaluation assessments (e.g., ratings portals, collection services, geolocation of employees), online profiling (e.g., dating sites and social networks), Big Data, artificial intelligence, location tracking (e.g., in shopping malls), RFID (e.g., by apps/maps), and centralized storage of measurement data (e.g., fitness apps). [DPA Hesse – List of Processing Operations Pursuant to Article 35(4) of GDPR | General information]

EU – CIPL Maps Elements of GDPR Accountability

The Centre for Information Policy Leadership (CIPL) has mapped GDPR requirements to elements of accountability. CIPL outlines controls and measures to ensure GDPR accountability for leadership and oversight (privacy engineers, DPO oversight and reporting), risk assessment (DPIAs, for breach incidents, at program or service level), policies and procedures (crisis management, vendor management, legal basis and fair processing), transparency (breach notification, dashboards, information portals), and monitoring and verification (processing records, evidence of consent, notices). [The Case for Accountability – How it Enables Effective Data Protection and Trust in the Digital Society – CIPL]

EU – DPAs Should Incentivise Accountability: CIPL

The Centre for Information Policy Leadership called for incentivising organisational accountability in the EU. Accountability should be encouraged, incentivized and rewarded where it goes above minimal legal requirements, and should not be left solely to the threat of sanctions, or the self-interest of the organisation; impactful incentives include more flexibility in interpreting privacy principles and discretion when considering enforcement actions for organisations that demonstrate heightened accountability. [Incentivising Accountability – How Data Protection Authorities and Law Makers Can Encourage Accountability – CIPL]

EU – Nymity Issues Recommendations for Demonstrating GDPR Accountability

Nymity has issued recommendations for generating the reports needed to demonstrate GDPR compliance and accountability. A spreadsheet, document or scorecard can be used to tie relevant GDPR provisions to implemented measures and reference the owner of each processing activity; DPIA records should clearly show how risk was mitigated in the project (e.g. identifying privacy-by-design elements and how accountability for addressing risks was affirmed); and organizations should consider keeping records on legitimate-interests processing (individuals impacted, potential harms and risks mitigated). [Reporting on GDPR Compliance – An Accountability Approach to GDPR Regulator Ready Reporting – Nymity]

UK – ICO Appoints New Executive Director for Technology Policy and Innovation

The UK ICO has appointed Simon McDougall as executive director for Technology Policy and Innovation to lead new approaches to information rights practice and promote the legally compliant processing of personal data as a core element of new technologies and business systems [see ICO PR here]. McDougall is currently managing director of Promontory [see here & wiki] – a risk management and regulatory compliance consulting firm acquired in 2016 by IBM, where he founded and led a global privacy practice. Elizabeth Denham, Information Commissioner, said, “As a globally respected figure in the world of privacy and innovation, Simon is a great fit for this new role, which will strengthen our expertise and responsiveness to new challenges and opportunities.” The ICO is also planning for a regulatory ‘sandbox’ to enable organisations to develop innovative products and services while benefitting from advice and support from the ICO. [Government Computing Network at: ComputerWeekly and Information Age]

EU – Publishers Adopting Consent Management Platforms for GDPR Compliance

More publishers are feeling under pressure to adopt a consent-management platform [also see here] to be compliant with the GDPR. CMPs store consent information and pass it on to the publisher’s programmatic partners. In the U.K., 31% of publishers had a CMP, an increase of 12% from July to August, according to tech vendor Adzerk. Among U.S. publishers, 27% had a CMP in August, up 13% from the month before. (Adzerk defines “publisher” as a site that shows programmatic ads). Several vendors in France have reported the same findings. Smart, an SSP, said just over 40% of its ad calls now come through with consent strings — which can only be generated once a publisher adopts a CMP (it didn’t have a comparison figure). Getting consumers’ consent to have their data collected for ad-targeting purposes is one way marketers can comply with the GDPR. Until now, many publishers have chosen other routes such as legitimate interest [see here & here], a route that’s seen as more likely to safeguard ad revenue. While many big publishers have built their own CMP or used free versions from vendors [see here], others worry that implementing them could cause a drop in personalized advertising. [Digiday and at: MarTech]
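The gating a CMP performs can be sketched roughly as follows: before firing an ad request, the ad stack checks the consent record the CMP has stored and falls back to non-personalised ads when no targeting consent exists. The consent record's fields below are invented for illustration; real deployments exchange IAB-format consent strings rather than plain dictionaries.

```python
def choose_ad_request(consent):
    """Gate ad personalisation on recorded consent: request
    personalised ads only when the user has explicitly consented
    to ad targeting, otherwise fall back to non-personalised ads."""
    if consent.get("ad_targeting") is True:
        return {"personalised": True, "basis": "consent"}
    return {"personalised": False, "basis": "none"}

print(choose_ad_request({"ad_targeting": True}))   # personalised
print(choose_ad_request({"ad_targeting": False}))  # non-personalised
print(choose_ad_request({}))  # no CMP signal -> treat as no consent
```

This default-deny shape is the revenue worry the article describes: every user for whom no consent signal arrives drops out of personalised advertising.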

EU – Children’s Rights and the GDPR

The General Data Protection Regulation (GDPR) applies to both children and adults alike and includes certain child-specific clauses that aim to protect the data of children. Children merit additional protections because they are less likely to be familiar with the risks, consequences and safeguards regarding their personal and public data. The GDPR has a non-standardized definition of a child, with the default age set to those 16 years old and below [see GDPR Art.8 here]. Member States are permitted to lower the age cap to define children in the GDPR, but to no younger than 13 years old — an option nearly half the Member States have exercised. Recently the UK ICO began developing an “Age Appropriate Design Code” to inform organizations seeking consent to use children’s data. The Information Commissioner’s current call [see PR here – consultation closes September 19 here] seeks evidence from bodies representing the interests of children or parents, child development experts, and online service providers. This evidence will be taken into consideration while developing the Code in order to provide clear guidelines and expectations of age-appropriate design standards to providers of online information society services. [ICO’s earlier guidance on children & the GDPR] This post reviews the best way to comply with the GDPR’s regulation of the data of children. Understanding the relevant provisions of the legislation is key — in particular: 1) the GDPR’s provisions regarding informed consent; 2) the right to erasure; and 3) how automated decision making is relevant to organizations which may be processing a child’s data. [CyberLex Blog (McCarthy/Tetrault)]

EU – GDPR Could Hinder Blockchain Innovation, Warns EU Body

The EU Blockchain Observatory and Forum [see here & here] has warned that the GDPR law that went into effect a little over two months ago could hinder innovation in the blockchain space. According to the European blockchain body, this is because of the lack of legal clarity between blockchain technology and the GDPR law, whose aim is to protect individual data rights as well as facilitate the free movement of personal data in the single market. “As long as the legal framework around personal data and blockchain remains unclear, entrepreneurs and those designing and building blockchain-based platforms and applications in Europe face massive uncertainty. That can put a brake on innovation,” notes the report titled ‘Blockchain Innovation in Europe’. Under the GDPR the key to ensuring that individual data rights are protected is having a central body that can be held accountable when things go wrong. But in the case of blockchain, a centralized data controller does not exist. Additionally, the GDPR stipulates that data can only be transferred to third parties based outside the European Union on condition that the data will be held in a jurisdiction which offers data protection levels equivalent to those in the single market. With open permissionless blockchains, however, it is impossible to select where the data ends up, since a full copy of the database is replicated on all the full nodes regardless of their geographical location. [Blockchain News, ETH News and Loyens & Loeff News]

Facts & Stats

AU – 242 Breach Notifications to Australian DPA from April to June

The Australian Privacy Commissioner has released a report on breach notifications received from April 1 to June 30, 2018. Malicious or criminal attacks caused 59% of the reported breaches (phishing, compromised credentials, brute force, paperwork or device theft), 36% resulted from human error (PI sent to the wrong recipient, loss of PI, failure to redact or to use BCC), and system faults caused 5%. The types of PI compromised include contact information, financial details, identity information, health information, and tax file numbers. [OAIC – Notifiable Data Breaches Quarterly Statistics Report – 1 April – 30 June 2018]


WW – G-Suite Security and Privacy Settings Every Admin Should Review

G Suite administrators may select from a wide range of settings that control the privacy of new G Suite files, sharing settings on Team Drives, and security requirements for account sign-ins. Many organizations prefer the default options for these three sets of settings, which result in: 1) new G Suite files that are private, viewable only by the creator of the file; 2) Team Drives that allow files to be shared externally; and 3) 2-step verification that is optional. But different organizations may choose dramatically different defaults. As organizations transition to G Suite, they will want their G Suite settings to reflect their current security and privacy preferences. This article looks at how each of these three sets of G Suite settings affects security and privacy. [TechRepublic]


US – Your Banking Data Was Once Off-Limits to Tech Companies. Now They’re Racing to Get It.

Facebook has joined a growing race among big technology companies seeking private financial information once regarded as off-limits: users’ checking-account balances, recent credit card transactions and other facts of their personal finances and everyday lives. Facebook said [see Fortune & TechCrunch] that it had proposed data-sharing partnerships with banks and credit card companies that would allow users to access their personal account information from within the social network’s messaging service. Facebook said the banking information wouldn’t be included in the vast stores of information the site uses to build people’s personality profiles. Many of the tech world’s major players have shown similar ambitions in tapping users’ financial data. Apple and Google provide mobile-payment services that allow users to access financial information and pay for products with their phones. Amazon.com offers users a credit card issued by JPMorgan Chase. And Google last year announced a deal that would let it review and analyze roughly 70% of all credit and debit card transactions in the United States. Facebook already has smaller agreements with financial institutions, including PayPal and American Express, that allow users to do things such as review transaction receipts on Facebook Messenger. In March, Facebook launched a service that would allow Citibank customers in Singapore to ask a Messenger chatbot for their account balance, their recent transactions and credit card rewards. [The Washington Post]

US – Facebook’s Plan to Partner With Banks Raises Privacy Concerns

Facebook has asked big banks to share their customers’ detailed financial records with it in an effort to offer new financial and commerce services through Facebook Messenger, the Wall Street Journal reports. The social media network wants access to card transactions and checking account balances along with information about where its users shop, the report said, citing people familiar with the matter. Gennie Gebhart [here], a researcher at the Electronic Frontier Foundation, told Fortune that this push to change user habits and increase their interactions with businesses through the Messenger app is dangerous for user privacy. Facebook said it would not use any information provided by banks for targeted ads, and would not share it with third parties. In a statement reported by CNBC, a Facebook spokesperson clarified that the company is not “actively asking financial services companies for financial transaction data.” Rather, banks could offer real-time customer service to users through Facebook Messenger, according to the statement. [Fortune, PC Magazine, TechCrunch and Global News]


WW – Siri is Listening to You, But She’s NOT Spying, Says Apple

Are our iPhones eavesdropping on us? How else would Siri hear us say “Hey, Siri” other than if she were constantly listening? That’s what Congress wondered, so last month the US House of Representatives Energy and Commerce Committee [here] sent a letter to Apple CEO Tim Cook [PR here & 5 pg PDF letter here] on the matter of Apple having recently cracked down on developers whose apps share location data in violation of its policies. The letter posed a slew of questions about how Apple has represented all this third-party access to consumer data, about its collection and use of audio recording data, and about location data that comes from iPhones. This week, Apple responded with a letter that translates into “We Are Not Google! We Are Not Facebook!” As in, Apple’s business model is different from those of other data-hoovering Silicon Valley companies that rely on selling consumer information to advertisers. And no, Siri is not eavesdropping. The letter went into specifics about how iPhones can respond to voice commands without actually eavesdropping. It has to do with locally stored, short buffers that only wake up Siri if there’s a high probability that what it hears is the “Hey, Siri” cue. Once actual recording takes place after the “Hey, Siri” phrase is uttered, the recording that’s sent to Apple is attached to an anonymous identification number that isn’t tied to an individual’s Apple ID. Users can reset that identification number at any time. [Naked Security (Sophos) and at: Infosurhoy here & here, CNN Tech & Fortune]
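The locally stored short buffer Apple describes can be pictured as a small ring buffer that is released only when a detector score crosses a threshold. Here is a minimal sketch of that pattern; the class name, buffer length, and the 0.9 threshold are illustrative assumptions, not Apple's actual implementation:

```python
from collections import deque

class WakeWordGate:
    """Sketch of an on-device wake-word gate: audio frames stay in a short
    local ring buffer and are handed off for further processing only when
    a detector score crosses the trigger threshold."""

    def __init__(self, buffer_frames=50, threshold=0.9):
        # Old frames fall off the end automatically; nothing accumulates.
        self.buffer = deque(maxlen=buffer_frames)
        self.threshold = threshold

    def process_frame(self, frame, detector_score):
        """Append one frame; return buffered audio only on a likely trigger."""
        self.buffer.append(frame)
        if detector_score >= self.threshold:
            captured = list(self.buffer)
            self.buffer.clear()  # nothing is retained locally after hand-off
            return captured
        return None  # below threshold: the frame never leaves the device
```

Frames that never trigger the detector are simply overwritten, which is the property Apple's letter leans on when it says Siri is not "constantly listening" in any recorded sense.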

WW – Is Apple Really Your Privacy Hero?

Apple Inc. has positioned itself as the champion of privacy. Even as Facebook Inc. and Google track our moves around the internet for advertisers’ benefit, Apple has trumpeted its noble decision to avoid that business model. When Facebook became embroiled in a scandal over data leaked by an app developer, Apple CEO Tim Cook said he wouldn’t ever be in such a situation. He framed Apple’s stance as a moral one. Privacy is a human right, he said. “We never move off of our values,” he told NPR in June. The campaign is working, as evidenced by media reports depicting Apple as hero to Facebook’s villain. But that marketing coup masks an underlying problem: The world’s most valuable company—its market value crossed the $1 trillion mark on Aug. 2—has some of the same security problems as the other tech giants when it comes to apps. It has, in effect, abdicated responsibility for possible misuse of data, leaving it in the hands of the independent developers who create the products available in its App Store. Bloomberg News recently reported that for years iPhone app developers have been allowed to store and sell data from users who allow access to their contact lists, which, in addition to phone numbers, may include other people’s photos and home addresses. According to some security experts, the Notes section—where people sometimes list Social Security numbers for their spouses or children or the entry codes for their apartment buildings—is particularly sensitive. In July, Apple added a rule to its contract with app makers banning the storage and sale of such data. It was done with little fanfare, probably because it won’t make much of a difference. For all of Facebook’s privacy problems, it was at least able to alert people who were potentially affected by the Cambridge Analytica leak. Apple has no such mechanism. If the company insists on not knowing what happens to our data in the name of privacy, it can at least help us ensure we don’t share more of it than necessary. 
[Bloomberg LP and at: Bloomberg, Macworld, FastCompany & 9to5Mac – Related coverage at: Naked Security (Sophos)]


CA – Border Agents Using DNA Databases to ID Detainees, Track Relatives

According to immigration lawyers, border officials have collected DNA samples from at least three immigration detainees in the past year in their attempts to identify their ethnicity, track down relatives and establish nationality in order to remove these individuals from Canada. “CBSA uses DNA testing in order to determine identity of longer term detainees when other avenues of investigation have been exhausted,” said the agency’s spokesperson Jayden Robertson, who refused to disclose how many detainees have undergone the DNA searches to date. “DNA testing assists the CBSA in determining identity by providing indicators of nationality thereby enabling us to focus further lines of investigation on particular countries.” DNA is just one of the many tools that assist officials in their detective work, Robertson added. The border agency would not comment on whether it has any protocol or guideline for the use of DNA samples in investigations, but said it requires consent from clients before submitting their information to DNA websites such as Familytree.com and Ancestry.com. Lawyer Jared Will, who represents two of the detainees, says detainees often have no choice but to give consent. “The consent cannot be truly voluntary. These individuals are being detained and they risk prolonged detention because if they don’t give consent, they are alleged to be non-cooperating.” [The Toronto Star]

Health / Medical

US – HHS Issues Authorizations for Health Research

HHS has issued guidance on uses and disclosures of protected health information for research. HIPAA authorizations must contain specific information about research purposes of requested PHI use or disclosure, specific identification of authorized persons, and expiration dates or events; covered entities do not have to remind individuals of their right to revoke authorization (they may choose to remind minors when they reach the age of majority), and revocation exceptions include maintenance of research integrity, quality assessments, and reporting adverse events. [Data Privacy Monitor]

EU – CNIL Approves Simplified Measures for Health Research Approvals

The French Data Protection Authority (CNIL) has issued a series of documents related to health sector research without obtaining consent under the GDPR. Five new reference methodologies lessen the need for prior DPA authorization subject to compliance with prescribed conditions (such as, depending on the type of research undertaken, a public interest in the research and a prohibition on data matching); the CNIL also approved simplified access to extractions from a health insurance database which only requires prior approval from a research institute. [CNIL – Approval of 5 Reference Methodologies for Health Research and Simplified Access to PMSI Press Release | Guidance | Reference Methodology MR-001 | Reference Methodology MR-003 | Reference Methodology MR-004 | Reference Methodology MR-005 | Reference Methodology MR-006 | Deliberation No. 2018-256 | CNIL overview (French)]

US – HHS Weighs Changes to Health Data Privacy Regulations

The Department of Health and Human Services is considering making changes to federal privacy regulations governing health data – including the HIPAA Privacy Rule and the 42 CFR Part 2 law [here, here & here], which pertains to substance abuse and mental health information – according to a July 26 speech by HHS Secretary Alex Azar. In the coming months, HHS will be releasing requests for information, seeking comments regarding potential changes in HIPAA and also 42 CFR Part 2, a federal privacy law that governs confidentiality for individuals seeking treatment for substance use disorders from federally assisted programs. Congress is also awaiting word from HHS about its work to address “Compassionate Communications on HIPAA” provisions that are authorized under the 21st Century Cures Act [PDF here], which was signed into law in 2016. In a July 26 letter, six members of Congress asked HHS for an update regarding the status of the department implementing the 21st Century Cures provision that calls for HHS to develop “model programs and training” for healthcare providers to clarify when patient information can be shared. Some regulatory experts argue that no changes to the HIPAA Privacy Rule are needed, while others say that changes could prove helpful. [GovInfo Security, HealthIT Security and Becker’s Hospital Review]

US – OCR Issuing Fewer HIPAA Penalties in 2018, Report

The HHS Office for Civil Rights is on track to impose significantly fewer HIPAA settlement fines in 2018 than the agency has in previous years, according to a report from the law firm Gibson Dunn [here & see 32 pg PDF report here]. The July 26 report is a mid-year review of healthcare enforcement actions, including decisions by HHS, CMS, OCR and the Justice Department. Since HIPAA privacy rules went into effect in 2003, OCR has reviewed and resolved more than 180,000 complaints related to the legislation. In 2017, the civil rights office issued 10 penalties totaling $19.4 million, and in 2016, the office issued 13 penalties totaling $23.5 million. As of July, OCR has reported only two HIPAA penalties in 2018, along with one decision from an HHS administrative law judge. The three decisions amount to an estimated $7.9 million in fines. Gibson Dunn noted it’s unclear whether the downtick in HIPAA enforcement actions during the first half of 2018 signals a shift in priorities, or whether the civil rights office intends to pursue more settlements in the second half of the year. However, if OCR continues at this pace throughout the remainder of 2018, the year will mark a “dramatic decline in HIPAA enforcement actions.” [Becker’s Hospital Review, Health IT Security & Bank Info Security]

US – Healthcare IT Security Worst of Any Sector with External Threats

Healthcare IT security is the worst of any sector when it comes to external security posture, according to a recent report by security advisory firm Coalfire. The Coalfire Penetration Risk Report [PR here] used customer penetration test data to analyze the security challenges within enterprises of various sizes and in different industries, including retail, healthcare, financial services, and technology industries, and compared the security posture between small, mid-sized, and large organizations. In terms of external security posture, healthcare organizations had the highest level of severe issues in their external security posture, followed by tech, retail, and financial services. In terms of internal security posture, retail had the highest level of severe security issues, followed closely by healthcare, tech, and financial services. Coalfire found that healthcare organizations, especially hospitals, have hundreds and sometimes thousands of high-risk connected devices that are unsupported, unpatched, and without basic security systems in place. The report also found that large enterprises are not the best prepared to protect against cybercrime, despite having bigger budgets and more resources. Across all sizes and sectors, however, people remain the biggest security weakness, whether through human error or creating opportunities for social engineering hacks. Phishing was a highly successful “foot in the doorway” for attackers who use it as an entry point to infiltrate the organization, then pivot to navigate internally to escalate for greater control. [Health IT Security]

US – Amazon’s Healthcare Expansion: Privacy Concerns

Amazon is greatly expanding its healthcare activities. For example, it’s nearing completion of its purchase of online pharmacy PillPack, and it has entered an employee health partnership with JP Morgan and Berkshire Hathaway. As a result, the online retail giant now will face a wide variety of important new privacy issues, say attorneys Jeffrey Short and Todd Nova. As Amazon collects, analyzes and exchanges more healthcare data, it will need to navigate privacy and breach regulations at the state and federal levels, including HIPAA, the attorneys note in a joint interview with Information Security Media Group [listen here]. In the interview, Short and Nova also discuss: a) Other privacy and security issues tied to Amazon’s pending $1 billion purchase of PillPack, which is slated for completion by the end of this year; b) Potential privacy and security concerns for Amazon’s partnership announced earlier this year with JP Morgan and Berkshire Hathaway to create a not-for-profit company aimed at lowering the healthcare costs of their employees; and c) Issues raised by the healthcare sector’s ongoing consolidation. [BankInfo Security]

US – NIST Issues Best Practices for Healthcare Use of Mobile Devices

The National Institute of Standards and Technology issued guidance on the security of health records on mobile devices. Primary risks to patient information on mobile devices include loss, theft, deliberate misuse (use of unsecure networks, virus or malware downloads), and inadequate privilege management; address risks by implementing least privilege access controls, firewalls, vulnerability scanning tools, continuous monitoring of server baselines, end-to-end encryption for communications (between doctors, patients, IT administrators, EHRs), and using encryption for archived files. [NIST Special Publication 1800-1 – Securing Electronic Health Records on Mobile Devices]
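One of the controls the NIST guidance recommends, least-privilege access, amounts to a deny-by-default permission check: every role is granted only the actions it explicitly needs. A minimal sketch of that idea; the role names and permission strings below are illustrative assumptions, not taken from the NIST publication:

```python
# Hypothetical role-to-permission map for a clinical mobile app.
# Least privilege: each role gets only the actions it explicitly needs.
ROLE_PERMISSIONS = {
    "physician": {"ehr:read", "ehr:write"},
    "nurse":     {"ehr:read"},
    "it_admin":  {"device:configure"},  # device management, no clinical record access
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default; permit only actions explicitly granted to the role."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

The important property is the default: an unknown role or an ungranted action is refused without any special-case code, which is what limits the damage when a device is lost or a credential is misused.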

Horror Stories

CA – Nova Scotia Privacy Breach “An Epic Government Failure”

It is impossible to overstate the epic failure of Nova Scotia’s Health Department, uncovered and enumerated by Privacy Commissioner Catherine Tully last week in a pair of damning reports [see PR, IR 18-01 & IR 18-02]. The Health Department failed at virtually every turn to protect the privacy of 46 Nova Scotians whose medical records were pilfered for purely personal motives by a former Sobeys pharmacist. The department then compounded that failure with callous disregard for the victims’ rights to timely and complete notification on the extent of the intrusion into their personal medical histories. “It is virtually impossible to undo the harm and sense of violation individuals feel when the intimate details of their personal health information are breached. I find that the harm from these breaches is significant,” Tully concluded. The department will respond to the commissioner’s findings sometime this month. There was nothing remotely serious or even competent about the way the department handled this breach. It didn’t seriously pursue a tip that came in on its 1-800 Health Privacy tip line. It deferred to Sobeys in the initial investigation, and it even failed to identify all those whose records had been inappropriately accessed. The department identified 39 of the victims. The commissioner discovered seven more. [Cape Breton Post at: CBC News and City News Toronto]

Law Enforcement

UK – Privacy International Calls for Probe into Cops’ Use of Mobile Phone Extraction

Privacy International is calling for the UK’s Investigatory Powers Commissioner (IPCO) to probe whether cops have a legal right to extract data from mobile phones. In a letter [see PR here] sent to Lord Justice Sir Adrian Fulford, the Investigatory Powers Commissioner, the privacy advocacy group says it is concerned that the use of mobile phone extraction technology by coppers may in some – or all – circumstances constitute either an unlawful interception of communications or hacking. “If it does, then the conduct engaged in is subject to your oversight,” Privacy International says in its letter. The letter highlights findings from a recent report [see PR here & 41 pg PDF report here] out of Privacy International which revealed that the use of extraction kits – already used by more than half of UK police forces, and being trialled by a further 17% – allows cops to download the entire content of someone’s phone without their knowledge. [The Inquirer, The Register]


WW – Google Tracks Unsuspecting Users; Lawmakers Demanding Action

The Associated Press published a report about Google products that track users’ location even if they tried to stop it in their privacy settings. Google, according to the AP, claims that it is clear with its data tracking practices. Lawmakers are now saying they want to look into this privacy practice. Senator Mark Warner [here] of Virginia and Representative Frank Pallone [here] of New Jersey’s 6th district have both decried the practice, calling for tougher privacy legislation. The FTC is already investigating Facebook’s privacy, and this development could expand the scope of its inquiry. Politico reported that past FTC officials believe the company’s actions could warrant heightened scrutiny. [Fast Company, Bloomberg Law and DBR on DATA] See also: Google Tracks Your Movements, Like It or Not (AP) and also at: WIRED, The Verge, CNET and The Register] and also “The FBI Attempted Unprecedented Grab for Google Location Data” at: Forbes, Press Herald, AP via Washington Times and Bangor Daily News]

Privacy (US)

US – Treasury Report Urges National Breach Notification Standard

A U.S. Treasury report that focuses on nonbank financial institutions, financial technology, and innovation includes recommendations for improved fin-tech consumer protection, such as giving consumers greater control over their financial data, and establishing a national breach notification standard. [SC Magazine: U.S. Treasury calls for national data breach notification and increased data protections | Treasury.gov: Treasury Releases Report on Nonbank Financials, Fintech, and Innovation | Treasury.gov: A Financial System That Creates Economic Opportunities: Nonbank Financials, Fintech, and Innovation]

US – NIST Required to Develop Small Business Guidelines

S. 770, the National Institute of Standards and Technology (NIST) Small Business Cybersecurity Act, has been signed by the U.S. President. Resources must be publicly available on agency websites to help small businesses to reduce their cyber risks and promote awareness of basic controls, a cybersecurity culture and mitigation of common risks; resources must be technology-neutral, and vary with the nature and size of the business and sensitivity of data collected or stored. [S. 770 – NIST Small Business Cybersecurity Act – 115th Congress]

US – FTC Strengthens Safeguards for Kids’ Data in Gaming Industry

The FTC has unanimously voted to approve EPIC’s recommendations to strengthen safeguards for children’s data in the gaming industry. In a 5-0 vote, the FTC adopted EPIC’s proposals to revise the Entertainment Software Rating Board’s industry rules to (1) extend children’s privacy protections in COPPA to all users worldwide; and (2) to implement privacy safeguards for the collection of data “rendered anonymous.” The FTC wrote, “the Commission agrees with EPIC’s comment. As COPPA’s protections are not limited only to U.S. residents, the definition of ‘child’ in the ESRB program has been revised to remove the limitation.” The Commission also strengthened protections for de-identified children’s data: “companies must provide notice and obtain verifiable parental consent if personal information is collected, even if it is later anonymized.” EPIC has testified several times before Congress on protecting children’s data and supported the 2013 updates to COPPA. [Electronic Privacy Information Center and at: New York Magazine]

US – House Candidates Vulnerable to Hacks: Researchers

A team of four independent researchers led by former National Institute of Standards and Technology security expert Joshua Franklin concluded that the websites of nearly one-third of U.S. House candidates, Democrats and Republicans alike, are vulnerable to attacks. NIST is a U.S. Commerce Department laboratory that provides advice on technical issues, including cyber security. Using automated scans and test programs, the team identified multiple vulnerabilities, including problems with digital certificates used to verify secure connections with users, Franklin told Reuters ahead of the presentation. The report follows a string of warnings by Trump administration security officials that Russia is actively interfering in the November elections. FBI Director Christopher Wray recently warned that Russian government agents were working around the clock to sow discord ahead of the election. The researchers did not identify any cases where it appeared that politically motivated hackers had exploited those vulnerabilities. “We’re trying to figure out a way to contact all the candidates” so they can fix the problems, said Franklin, who joined the nonprofit Center for Internet Security [here] last month. [Reuters and at: Business Insider]

US – FTC Seeks Comments on Privacy Impacts and Enforcement

On August 6, 2018, the Federal Trade Commission published a notice seeking public comment on whether the FTC should expand its enforcement power over corporate privacy and data security practices. The notice, published in the Federal Register [see here], follows FTC Chairman Joseph Simons’ declaration [read prepared statement here] at a July 18 House [Subcommittee on Digital Commerce and Consumer Protection of the Committee on Energy and Commerce see here & wiki here] hearing [FTC PR here & watch here] that the FTC’s current authority to do so, under Section 5 of the FTC Act [15 USC §45 here, also see overview here], is inadequate to deal with the privacy and security issues in today’s market. The FTC asks for input by August 20, 2018. It also requests comment on growing or evolving its authority in several other areas, including the intersection between privacy, big data and competition. Beginning in September 2018, the FTC will conduct a series of public hearings to consider “whether broad-based changes in the economy, evolving business practices, new technologies, or international developments might require adjustments to competition and consumer protection law, enforcement priorities, and policy.” [Privacy & Information Security Law Blog (Hunton/Andrews/Kurth) and at: Blockchain Legal Resource (Hunton) – also Related coverage at: PYMNTS and The Wall Street Journal | FTC – Notice of Hearings and Request for Comments – Hearings on Competition and Consumer Protection in the 21st Century | see Press Release & here & 16 pg PDF Federal Register notice here | see also: DBR on Data (Drinker Biddle)]

US – BART Is Planning a System-Wide Surveillance Network Using ‘Video Analytics’ to Automatically Pinpoint Crime and Alert Cops

In response to several recent high-profile crimes, including the horrific killing of Nia Wilson, Bay Area Rapid Transit (BART) officials have revealed preexisting plans to build out a massive surveillance system that would closely monitor all of the district’s stations, trains, and other property [BART statement]. The district’s general manager and police want to upgrade BART’s 1,500 existing analog video cameras to a digital format, which would then be linked to computers that analyze video feeds in real time to detect possible criminal activity. The computers would then automatically notify officers to respond to the scenes of crimes and other disturbances. The proposal is mentioned in a report that will be heard at a meeting of the BART board of directors [see report at pp 39-46 of the agenda]. But the proposal isn’t really new. BART officials said they’ve been testing various powerful surveillance technologies since long before Wilson’s death and other recent violent incidents. BART has long sought to use technologies to secure its trains and stations, but this hasn’t necessarily made the system safer, and many worry about the loss of privacy and civil liberties, or fear surveillance tools could be used in harmful ways. Brian Hofer, the chair of the city of Oakland’s privacy commission and a member of the group Oakland Privacy said there are many ways to make BART safer that don’t necessarily involve mass surveillance. [East Bay Express]


WW – Study Assesses Impact of Cloud Migration Strategies on Security and Governance

Three major approaches to cloud migration have very different technical and governance implications: a) the ‘lift and shift’ approach, where applications are moved from existing servers to equivalent servers in the cloud. The cloud service model consumed here is mainly IaaS [Infrastructure as a Service – wiki here]; b) at the other side of the spectrum is adopting SaaS solutions [Software as a Service – wiki here]. More often than not, these trickle in from the business side, not from IT. These could range from small meeting planners to full-blown sales support systems; and c) more recently, developers have started to embrace cloud-native architectures. Ultimately, both the target environment and the development environment can be cloud based. The cloud service model consumed here is typically PaaS [Platform as a Service – wiki here]. There can be a business case for each of these, and the categories also have some overlap. The big point is that there are profound differences in the issues that each of these categories faces, and the hard decisions that have to be made. Most of these decisions are about governance and risk management. [Information Security]

US – Accidents Were Most Frequent Cause of Healthcare Data Breaches

In the second quarter of 2018, the most frequent cause of healthcare data breaches was accidental disclosures, according to incidents reported to the Beazley Breach Response Services team [report]. Accidental disclosures made up 38% of the data breaches in the healthcare sector, hacking/malware were 26% of breaches, followed by insiders at 14%, physical loss of a nonelectronic record at 7%, loss or theft of a portable device at 6%, social engineering at 4%, and unknown/other at 5%. The compromise of a single email account provides the hacker with a platform from which to spear phish within and outside the organization, the report noted. Hackers can also use compromised accounts to make fraudulent wire transfers, redirect an employee’s paycheck, and steal sensitive information from the inbox. The report cited a case study involving an undisclosed health system that was hit by a widespread phishing campaign. The phishing email had a link that took victims to a website that instructed them to enter their credentials. All told, the attack cost the health system $800,000 for legal fees, forensic costs, programmatic review, and manual review of documents and another $150,000 in notification, call center, and credit monitoring fees. Beazley said that phishing attacks can be prevented using two-factor authentication and employee training. [Health IT Security]

WW – Are IT Managers Keeping Up with Social-Engineering Attacks?

Using both high-tech tools and low-tech strategies, today’s social-engineering attacks are more convincing, more targeted, and more effective than before. They’re also highly prevalent. Almost seven in 10 companies say they’ve experienced phishing and social engineering. Today’s phishing emails often look like exact replicas of communications coming from the companies they’re imitating. They can even contain personal details of targeted victims, making them even more convincing. Given the prevalence and advanced nature of social-engineering threats, your privacy and security measures should cascade across three key areas: a) people, b) processes, and c) technology. You and your IT team must be vigilant about emerging threats so that as they evolve, your security and privacy measures evolve with them. [DARKReading: Security Boulevard here & here, and ITWorld]
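On the technology side, one cheap defense against near-replica phishing emails is flagging sender domains that closely resemble, but do not exactly match, the organization's trusted domains. A rough heuristic sketch using the standard library's `difflib`; the trusted-domain list and the 0.8 similarity threshold are illustrative assumptions that a real deployment would tune:

```python
from difflib import SequenceMatcher

# Hypothetical allow-list; a real filter would use the org's own domains.
TRUSTED = {"example.com", "examplebank.com"}

def looks_spoofed(domain: str, threshold: float = 0.8) -> bool:
    """Flag a sender domain suspiciously similar to, but not exactly,
    a trusted domain (e.g. 'examp1e.com' imitating 'example.com')."""
    domain = domain.lower()
    if domain in TRUSTED:
        return False  # exact match: legitimate sender
    return any(
        SequenceMatcher(None, domain, trusted).ratio() >= threshold
        for trusted in TRUSTED
    )
```

A check like this catches only one social-engineering trick, lookalike domains, which is why the article pairs technology with people and process measures rather than relying on any single filter.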

WW – Firms Must Spread Responsibility for Security throughout Enterprise: Accenture

Organizations aren’t doing enough to spread responsibility for cyber security throughout the enterprise, says consulting firm Accenture after looking at the results of a global study [overview]. While 73% of the C-level executives polled agreed that cyber security staff and activities need to be dispersed and executed throughout all parts of the organization, only 25% of non-CISO executives said business unit leaders are accountable for cyber security today. The survey questioned 1,460 executives in 16 countries – including 66 from Canada – on whether their security plans address future business needs. Half of the respondents were Chief Information Security Officer or equivalent roles, while the remaining half were CEOs or other C-suite executives. Among the results:

  1. Only half of the respondents said all employees receive cyber security training upon joining the organization and have regular awareness training throughout employment;
  2. Only 40% of CISOs said establishing or expanding an insider threat program is a high priority; and
  3. Just 40% of CISOs said they always confer with business-unit leaders to understand the business before proposing a security approach. [IT World Canada]

WW – Amnesty International Spearphished with Government Spyware

Amnesty International has been spearphished by a WhatsApp message bearing links to what the organization believes to be malicious, powerful spyware: specifically, Pegasus, which has been called History’s Most Sophisticated Tracker Program. The human rights-focused NGO said in a post that a staffer received the link to the malware in June. Pegasus is a tool sold by NSO Group, an Israeli company that sells off-the-shelf spyware. It enables governments to send a personalized text message with an infected link to a blank page. Click on it, whether it be on an iOS or Android phone, and the software gains full control over the targeted device, monitoring all messaging, contacts and calendars, and possibly even turning on microphones and cameras for surveillance purposes. NSO Group’s response to incidents like this has been consistent on each occasion: the company points to the fact that Pegasus is supposed to be used solely by governments, to enable them to invisibly track criminals and terrorists. “If an allegation arises concerning a violation of our contract or inappropriate use of our technology, as Amnesty has offered, we investigate the issue and take appropriate action based on those findings. We welcome any specific information that can assist us in further investigating of the matter.” Once software blinks into existence, keeping it out of the hands of the wrong people can be very difficult. Pegasus is a case in point: last month, one of NSO Group’s own employees allegedly stole the valuable software and hid it under his bed. Then, he allegedly tried to sell it for the bargain basement price of USD $50 million. (According to the indictment, the tool is estimated to be worth “hundreds of millions of dollars.”) [Naked Security, The INQUIRER, V3 News and The Citizen Lab]

WW – The Internet of Things: Baby Monitor Hacked

A Texas family heard noises coming from their toddler’s bedroom through their video baby monitor. A man was yelling obscenities at their child, and when the parents entered the room, he yelled obscenities at them as well. The family had taken security precautions, including enabling a firewall and establishing passwords for their router and the baby monitor camera, which connects to their Wi-Fi network. [BBC, CNET, NBC News]

WW – Android Malware Spreading Through Mobile Ad Networks

Malware targeting Android devices has been found to be spreading through mobile advertisement networks. Many developers include advertising frameworks in their apps to help boost profits. Advertisements in mobile apps are served by code that is part of the app itself. An attack scheme in Asia involved a rogue ad network pushing code onto devices. When users download and install legitimate apps, the malware prompts users to approve its installation, appearing to be part of the process for the app they have just downloaded. [ComputerWorld]

US – HHS Recommendations for Secure PHI Disposal

The HHS Office for Civil Rights issued guidance on the proper disposal of electronic devices and media for healthcare organizations. Disposal policies should consider where data is stored, whether the chosen method removes all asset tags or corporate identifiers, and the logistics and security controls needed to move equipment. The final decommissioning process should ensure total data destruction (or proper migration to another system) and inventories that accurately reflect the current status of devices. [HHS – Guidance on Disposing of Electronic Devices and Media | See also: Signal Magazine: NSA Influences Commercial End-of-Life Data Security]
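The guidance's "total data destruction" point can be illustrated with a deliberately simplified sketch: overwriting a file's contents before deleting it. The function name and pass count are illustrative, and as the comments note, on SSDs and copy-on-write filesystems overwriting in place does not guarantee destruction, which is why guidance such as NIST SP 800-88 prefers methods like cryptographic erase or physical destruction:

```python
import os
import secrets

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents in place, then unlink it.

    Caveat: on SSDs, flash media, and journaling/copy-on-write filesystems,
    wear levelling and snapshots mean old blocks may survive an in-place
    overwrite. NIST SP 800-88 methods (cryptographic erase, degaussing,
    physical destruction) are the appropriate tools for decommissioning.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))  # random fill each pass
            f.flush()
            os.fsync(f.fileno())  # push the overwrite to the device
    os.remove(path)
```

The sketch also shows why inventories matter: a wipe routine can only be applied to media the organization still knows it has.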

US – NIST Working on Final Public Draft of Risk Management Framework 2.0

The National Institute of Standards and Technology (NIST) is hard at work on version 2.0 of its Risk Management Framework (RMF 2.0). The final public draft of RMF 2.0 is expected to be available in September 2018, with final publication expected in November. RMF 2.0 will address supply chains, systems engineering, and privacy. [FCW.com: NIST pushes on next version of Risk Management Framework | csrc.nist.gov: Risk Management Framework for Information Systems and Organizations: A System Life Cycle Approach for Security and Privacy – Revision 2, May 2018]

WW – Smart City Sensor Vulnerabilities

IBM Security and Threatcare examined smart city sensor hubs made by three companies and found 17 unpatched vulnerabilities. The flaws could potentially be exploited to manipulate traffic signals and activate flood warnings. The researchers notified the companies of the problems, and all say they have made patches available. It is not known if cities that use the affected sensors have applied the patches. [Wired: The Sensors That Power Smart Cities are a Hacker’s Dream | ZDNet: Smart city systems are riddled with critical security vulnerabilities | CNET: Smart cities around the world were exposed to simple hacks]

WW – Cybersecurity: Average of 10 Cloud Security Incidents Annually

Kaspersky Lab surveyed 3,041 IT personnel from small and medium-sized businesses in 29 countries regarding their IT infrastructure and use of cloud tools. Most of these businesses have adopted some form of cloud platform, but they view data protection and business continuity as their top business challenges, and almost half say a primary IT challenge is the difficulty of securing a distributed IT security perimeter. [Growing Businesses Safely: Cloud Adoptions vs Security Concerns]

CA – Cybersecurity is Top IT Priority for Canadian Organizations: Survey

Cybersecurity is ranked as the top priority for Canadian organizations as more firms become interested in cloud storage and collaboration tools, according to a recent survey by CDW Canada [here & PR – also see related report]. Nearly six in 10 (59%) firms said email security is a main focus, followed by ransomware protection (52%) and intrusion prevention (48%). Daniel Reio, director of product and partner management and marketing, said this year also marks a continued focus on the cloud, with data at the core of many organizations’ IT plans. More than half of Canadian organizations (53%) say their cloud strategy for 2018 includes shifting workloads over time through hybrid solutions. 16% of organizations plan to adopt a “cloud-first” strategy moving forward, while 13% want to move all workloads to the cloud. [Insurance Business]

WW – Information Security Spending to Surge to Over $124bn by 2019

Gartner estimates that worldwide spending on IT security solutions will reach at least $124 billion next year, an increase of 8.7% from 2018’s estimate of $114 billion [see PR here]. A recent report conducted by the Ponemon Institute and sponsored by IBM estimates that the average cost of a data breach to an enterprise company is $3.86 million, and the average cost for each lost or stolen record containing sensitive and confidential information has increased by 4.8% year over year to $148. To make matters worse, the average time it takes to identify a data breach is 197 days. In the meantime, enterprise players must tackle the security issues prompted by ongoing cyberattacks and new regulations by investing in new and better security solutions. Robust security is not only required to protect corporate networks but has also become a competitive advantage. By 2019, at least 30% of organisations are expected to invest in GDPR-related consulting and implementation services in order to become compliant with the new regulations. In particular, enterprises are expected to turn to cloud-based systems, such as security information and event management (SIEM), to protect corporate networks. [Computer Business Review, Forbes & Information Age]

US – Police Body Cameras Open to Attack

Josh Mitchell, a consultant at security firm Nuix, analysed cameras from five vendors who sell them to US law enforcement agencies. Presenting at the DEF CON conference last week, he highlighted vulnerabilities in several popular brands that could place an attacker in control of a body camera and tamper with its video. Many of them include Wi-Fi radios that broadcast unencrypted sensitive information about the device. This enables an attacker with a high-powered directional antenna to snoop on devices and gather information including their make, model, and unique ID. An attacker could use this information to track a police officer’s location and find out more about the device that they are using. They might even be able to tell when several police officers are coordinating a raid, he said. Mitchell’s research found that some devices also include their own Wi-Fi access points but don’t secure them properly. An intruder could connect to one of these devices, view its files and even download them, he warned. In many cases, the cameras relied on default login credentials that an attacker could easily bypass. Mitchell contacted the vendors about these vulnerabilities and has been working with them to fix the issues, he said. In the meantime, it should leave police forces thinking hard about security audits for their wearable devices. [Naked Security and WIRED & watch here, Engadget, NewsWeek and GIZMODO]

AU – Data Breaches Report Provides Insight into Data Security Vulnerabilities

Under the Notifiable Data Breaches Scheme [OAIC Guidance], which commenced in February 2018, organisations are required to notify the Office of the Australian Information Commissioner (OAIC) and affected individuals of data breaches where there are reasonable grounds to believe an eligible data breach has occurred (that is, a data breach that is likely to result in serious harm). We now have a better idea of how the Scheme is working, and of the causes of data breaches, with the OAIC’s release on 31 July 2018 of the second quarterly report on data breach notifications received under the Scheme [see PR here & report here — also see the 1st quarterly report here & PR here]. It covers the period between 1 April 2018 and 30 June 2018, the first full quarter of the Scheme’s operation and thus the first full period for which information about data breach notifications is available. During that period, a total of 242 data breach notifications were made to the OAIC, an average of more than two per day (reports of a breach that involved multiple entities were counted as a single notification). Of most interest to entities subject to the Scheme will be the kinds of information involved, the causes of the breaches, and what they suggest are the key areas of concern in improving their own data security. This post reviews these issues. [Clayton Utz Knowledge, Security & Privacy Bytes (Squire Patton Boggs)]

US – Justice Department Releases A-G’s First Cyber-Digital Task Force Report

The Department of Justice recently released its comprehensive assessment of cyber threats in the United States, titled “Report of the Attorney General’s Cyber-Digital Task Force” [Press Release & Fact Sheet]. The Report is the result of the establishment of the Attorney General’s Cyber-Digital Task Force by the Department in February 2018. Attorney General Jeff Sessions directed the Task Force to answer two questions: 1) How is the Department responding to cyber threats? and 2) How can federal law enforcement more effectively accomplish its mission in this important and rapidly evolving area? The Report responds to the first question and is broken into six chapters, each analyzing cyber threats and how the Department counters them. This Report provides an overview of the Department’s detection of ever-changing cyber threats to the United States as well as the tools and methods the Department is utilizing to counter those threats. The Report indicates that in the future the Department will build on the initial findings and provide recommendations to the Attorney General for more means to protect Americans from cyber threats. [Data Privacy Monitor (BakerHostetler)]

US – Anthem $115 Million Data Breach Settlement Approved by Judge

A $115 million settlement reached by Anthem Inc. over a data breach that exposed consumer personal data won the approval of Judge Lucy Koh [wiki here] of the U.S. District Court for the Northern District of California… The approval [8/15/18 see 53 pg PDF here] finalizes one of the largest settlements in a consumer data breach case. The health insurance giant reached the settlement with about 19.1 million Anthem consumers June 23 without admitting any wrongdoing. [The case is In re Anthem, Inc. Data Breach Litig., N.D. Cal., No. 15-md-02617 here] The 2015 breach [see details here & wiki here] of Anthem’s systems affected more than 78 million people, exposing consumers’ Social Security numbers, names, dates of birth, health care ID numbers and other data. The settlement includes a pool of $15 million from which consumers in the class can claim up to $10,000 each for out-of-pocket expenses related to the breach. Class members can also get free credit monitoring services beyond what Anthem has already offered. In addition to the settlement fund, the health benefits company agreed to make changes to its data security procedures, including adopting encryption protocols for sensitive data. [Bloomberg Law, The Recorder (Law.com) and Law360]

Smart Cars and Cities

CA – Sidewalk Toronto Shies Away from Data Privacy, Focuses on Urban Planning at Third Public Roundtable

While the August 14th, 2018 Sidewalk Toronto public roundtable focused on Alphabet subsidiary Sidewalk Labs’ infrastructure plans for Toronto’s waterfront, there was a distinct lack of discussion surrounding the company’s privacy and data governance policies. However, Waterfront Toronto’s vice president of innovation, sustainability and prosperity, Kristina Verner, specified that conversations about data governance and data stewardship are still taking place at smaller committee meetings. “What we’re doing with the digital governance and privacy issues is really pulling that into a slightly different location for the conversation,” said Verner. These different locations include Waterfront Toronto’s public Digital Strategy Advisory Panels — including the one set for August 16th, 2018 — and three public ‘CivicLabs’ on October 3rd, November 7th and December 5th, 2018. Verner added that the CivicLabs will focus on discussions about “privacy, data governance, data residency, intellectual property, shared benefits — all of that sphere that is much more…technical in nature, a little different from the urban planning piece to it.” “It’s just a different venue, but that conversation is still going to be happening very soon,” said Verner. [MobileSyrup, IT World Canada, Financial Post and Toronto Star]


US – University Putting 2,300 Echo Dots in Student Living Spaces

Saint Louis University will “deploy more than 2,300” Echo Dot smart devices throughout student living spaces, including in “every student residence hall room and student apartment on campus” [see SLU notice & FAQ]. Arizona State University put Echo Dots in student spaces last year [see here & here], but SLU’s new initiative seems to be the first time a university has put an Echo Dot in all student living quarters. The SLU Echo Dots will be modified to answer 100-plus SLU-related questions about things like library hours, basketball games, campus events, and university office locations. SLU is using the Amazon Alexa for Business platform, so the Echo devices will be attached to a central SLU system rather than to individual accounts. SLU claims Alexa does not keep recordings of asked questions. The SLU system apparently does not keep personal information on users, and “all use currently is anonymous.” Amazon did not immediately respond to a request for comment on SLU’s use of the devices and the privacy concerns that may come with having a voice-controlled device in student living spaces. [GIZMODO, The Verge, voicebot and Digital Trends. See also: Hackers Found a Way to Make the Amazon Echo a Spy Bug]

AU – Gov’t Plays Down Concerns About Use of Biometrics for Mass Surveillance

Officials from the Department of Home Affairs have sought to assuage concerns that a proposed national facial recognition service could lay the basis for mass surveillance. The government currently has two bills before parliament — the “Identity-matching Services Bill 2018“ [review here] and the “Australian Passports Amendment (Identity-matching Services) Bill 2018” [here] — which are part of creating the legal infrastructure for the new system. The Commonwealth, state and territory governments have endorsed the idea of a national, federated system for facial identification and verification, which could draw on the driver’s licence data held in different Australian jurisdictions as well as other sources of face images including passport and citizenship data. In October 2017, the Council of Australian Governments (COAG) signed the Intergovernmental Agreement on Identity Matching Services (IGA) that committed them to promoting “the sharing and matching of identity information to prevent identity crime, support law enforcement, uphold national security, promote road safety, enhance community safety and improve service delivery, while maintaining robust privacy and security safeguards”. The services are “not intended for mass surveillance,” acting first assistant secretary, Identity and Biometrics Division at the Department of Home Affairs, Andrew Rice, told a federal parliamentary inquiry into the two bills. Earlier this year the Victorian government, which signed the IGA, raised a number of concerns about the proposed federal legislation. The IGA envisages potential private-sector access to the Face Verification Service and Document Verification Service. The document states, however: “The private sector will not be given access to the other Face Matching Services or the Identity Data Sharing Service.” [Computerworld, Biometric Update and Security Document World]

WW – Google Keeps on Tracking

An investigation conducted by the Associated Press found that Google stores users’ location data even when those users have switched off Location History in their account settings. In fact, turning off Location History only stops Google from adding location information to a viewable timeline. The issue affects all iPhone and Android users who run Google Maps on their devices. There is a way to completely turn off location tracking by making changes to the Web and App Activity setting, but it is not easy to find. [CS Monitor: Turned off location history tracking? Google might still be following you | BBC: Google tracks users who turn off location history | Wired: Google Tracks You Even if Location History’s Off. Here’s How to Stop it]

WW – Apple Cannot Monitor Third Party App Data Use

Tim Cook, CEO of Apple, Inc. responded to a letter from the Energy and Commerce Committee relating to information on the microphone functionality of iPhones. The iPhone’s capabilities enable collection of location data even when the phone does not have a SIM card and WiFi services are disabled, but only if location services are enabled. Apple has put in place security measures, such as evaluating developers’ apps to ensure compliance with its privacy guidelines; however, it does not guarantee or monitor compliance of third party app developers with local laws or their own privacy policies. [Tim Cook’s Response Letter to the Committee on Energy and Commerce – the iPhone’s capabilities of collection and use of consumer data and microphone functionality of iPhones]

Telecom / TV

US – Senators Probe NSA’s Deletion of Phone Records

Senators Ron Wyden (D-OR) and Rand Paul (R-KY) have sent a letter [Press Release] to the NSA’s inspector general asking him to look into the agency’s torching of metadata for hundreds of millions of phone calls. “We write to request that you conduct an investigation into the circumstances surrounding, and any systemic problems that may have led to, the deletion by the National Security Agency (NSA) of certain call detail records (CDRs) collected from telecommunications service providers pursuant to Title V of the Foreign Intelligence Surveillance Act (FISA),” the letter begins. That deletion was announced back in June [see reports at: CATO at Liberty, Marea Informative Blog and Hit & Run Blog], one month after the spy agency revealed in a “statistical transparency report“ that it had collected 534 million call details in 2017, a tripling of the number from the previous year. The NSA blamed “technical irregularities” for the receipt and storing of an unspecified amount of phone call data, and said that, since it was not possible to discern between legitimately and illegally gathered details, it was going to “delete all CDRs acquired since 2015.” Wyden and Paul have proven to be two of the very few congressmen and women willing to challenge the powerful intelligence agencies, and in the letter ask eight questions about NSA’s data bonfire, focused on identifying contradictory elements. [The Register, The Washington Times and The Hill]

US Legislation

US – California Bill Takes Aim at Secrecy Surrounding Police Personnel Records

More than 40 years of police secrecy could begin to crumble if California lawmakers pass a new bill allowing the public release of personnel records for law enforcement officers involved in deadly force, on-duty sexual assaults and falsifying evidence. Senate Bill 1421 [see here and/or here], by state Sen. Nancy Skinner, D-Berkeley, is the latest effort to open police records in the name of transparency. Since 1976, California law enforcement officers have been protected by statutes and court rulings — the strictest in the nation — that make it illegal to release virtually all police personnel records, including those involving wrongdoing and disciplinary action. California’s protections were made virtually impenetrable in 2006, when the California Supreme Court ruled in Copley Press v. Superior Court of San Diego County that civilian police commissions could not publicly disclose their findings on police misconduct. As a result, some commissions could no longer gain access to personnel files. Lobbyists for the police said these protections were necessary for officer safety. Specifically, Skinner’s bill would allow for the disclosure of reports, investigations or findings for incidents involving the discharge of a firearm or electronic control weapons, strikes by weapons to the head or neck area or deadly force; incidents of sustained sexual assault by an officer; and findings of dishonesty by an officer. The proposal is scheduled to be heard Thursday by the Assembly Appropriations Committee. It already has been passed by the Senate [see here]. [Orange County Register and at: CATO]

US – Federal Bill Regulates Consent for Apps

H.R. 6547, the Application Privacy Protection and Security Act of 2018 was introduced in the US House of Representatives. If passed, mobile app developers must obtain consent to the terms and conditions prior to collecting personal data, and provide users with a means of notifying the developer that they intend to stop using the app, and requesting the developer to refrain from further processing or sharing of the data, and possibly deleting data collected and stored by the app. [H.R. 6547 – Application Privacy Protection and Security Act of 2018 – US House of Representatives]




July 2018


CA – Canada Expands Its Biometrics Screening Program

As of July 31, 2018, all nationals from countries in Europe, Africa and the Middle East are required to provide biometrics (fingerprints and a photo) if they are applying for a Canadian visitor visa, a work or study permit, or permanent residence. Accurately establishing identity is an important part of immigration decisions and helps keep Canadians safe. For more than 20 years, biometrics have played a role in supporting immigration screening and decision-making in Canada. Canada currently collects biometrics from in-Canada refugee claimants and overseas refugee resettlement applicants, individuals ordered removed from Canada, and individuals from 30 foreign nationalities applying for a temporary resident visa, work permit, or study permit. More than 70 countries are using biometrics in immigration and border management. Canada’s Migration 5 partners – the United Kingdom, Australia, the United States, and New Zealand – have implemented biometric programs; so have the 26 Schengen states in Europe, and other countries around the world like Japan, South Africa and India. [Immigration, Refugees and Citizenship Canada and iPolitics]

US – Amazon Face Recognition Matches 28 Members of Congress With Mugshots

Amazon’s face surveillance technology is the target of growing opposition nationwide, and today, there are 28 more causes for concern. In a test the ACLU recently conducted of the facial recognition tool, called “Rekognition“, the software incorrectly matched 28 members of Congress, identifying them as other people who have been arrested for a crime. The members of Congress who were falsely matched with the mugshot database we used in the test include Republicans and Democrats, men and women, and legislators of all ages, from all across the country. The false matches were disproportionately of people of color, including six members of the Congressional Black Caucus, among them civil rights legend Rep. John Lewis (D-Ga.). These results demonstrate why Congress should join the ACLU in calling for a moratorium on law enforcement use of face surveillance. To conduct our test, we used the exact same facial recognition system that Amazon offers to the public, which anyone could use to scan for matches between images of faces. And running the entire test cost us $12.33 — less than a large pizza. [Free Future Blog (ACLU) and at: Seattle PI, WIRED, NPR and The Washington Post]
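The ACLU notes it used Rekognition's public defaults, which include an 80% similarity threshold for declaring a face match. A face-search response is essentially a list of candidates with similarity scores, so the threshold alone determines how many "matches" get reported. A hypothetical helper (field names modeled on, but not guaranteed to match, Rekognition's `search_faces_by_image` response; all scores below are fabricated):

```python
def matches_at_threshold(face_matches, threshold=80.0):
    """Keep only face-search candidates whose similarity meets the threshold.

    `face_matches` mirrors the FaceMatches list a face-search API returns:
    each entry pairs a Similarity percentage with a Face record.
    """
    return [m for m in face_matches if m["Similarity"] >= threshold]

# Fabricated candidate scores for one probe image
candidates = [
    {"Similarity": 99.1, "Face": {"ExternalImageId": "mugshot-0423"}},
    {"Similarity": 81.5, "Face": {"ExternalImageId": "mugshot-1187"}},
    {"Similarity": 62.0, "Face": {"ExternalImageId": "mugshot-0099"}},
]
assert len(matches_at_threshold(candidates, 80.0)) == 2  # default setting
assert len(matches_at_threshold(candidates, 95.0)) == 1  # stricter setting
```

Amazon's public response to the test emphasized using a much higher confidence threshold for law-enforcement scenarios; the sketch makes clear why that single parameter can be the difference between one candidate and a list of false matches.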

US – Schools Face Civil Liberties Battles in Effort to Adopt Facial Recognition

As schools around the country attempt to deploy new facial recognition functionality as part of their video surveillance systems, the ACLU is challenging those efforts in the name of protecting civil rights. And they’re not alone in their concerns about the controversial student surveillance tactic. As EdTech Strategies recently reported, both Magnolia School District in Arkansas and Lockport City School District in New York recently approved purchases of camera systems that include the ability to identify people captured on camera and track them. In both scenarios, however, the ACLU has objected to the use of facial recognition for several reasons. They’re “vulnerable to hacking and abuse,” asserted the ACLU of Arkansas, and they compromise “students’ privacy.” The national organization stated that once somebody’s facial image is captured by the technology and uploaded into the system planned for New York, the program has the ability to “go back and track that person’s movements around the school” for the previous two months. [T.H.E. Journal; coverage at: CNET News, Narcity, Planet Biometrics; see also: “Hey mom, did you see this? Camps are using facial recognition” and Biometric Update]

US – Lawmakers to Investigate Use and Abuse of Face Recognition Tech

Less than a week after a damaging report [see ACLU blog post here and PRs here, here & here] exposed substantial flaws in facial recognition technology marketed to law enforcement by Amazon, five Democratic lawmakers are calling for an investigation into the commercial and government use, and potential abuse, of the technology. In a letter [PR here] to Gene Dodaro, head of the U.S. Government Accountability Office (GAO), lawmakers raised concerns about the use of facial recognition and its impact on privacy rights, underscoring, in particular, the “disparate treatment of minority and immigrant communities within the United States … we ask that you investigate and evaluate the facial recognition industry and its government use.” The letter was signed by Senators Ron Wyden, Chris Coons, Ed Markey, and Cory Booker, and Jerrold Nadler, the ranking Democrat on the House Judiciary Committee. [GIZMODO and at: The Hill, Healthcare IT News and Techdirt]

UK – Government Has Created an Automated Facial Recognition “Policy Void”

A lack of clear government action has created a UK “policy void” when it comes to using automated facial recognition technology in CCTV analytics, according to a leading cyberlaw academic. Andrew Charlesworth, professor of law, innovation and society at the University of Bristol, called for an informed debate into the use of artificial intelligence (AI) in video surveillance. UK police are increasingly using automated facial recognition on CCTV footage to identify persons of interest. A recent report by Big Brother Watch [PDF & blog post] found that automated facial recognition technology used by police falsely identified 98% of people in UK cases. However, Charlesworth, in a white paper named “CCTV, Data Analytics and Privacy: The Baby and the Bathwater“ said that public debate over the issue had become “distorted”. He warned that the two sides of the argument had become polarised, fuelled by the government’s lack of stringent regulations. The UK Government’s long-awaited biometrics strategy, released in June, was criticised for not being comprehensive enough. Charlesworth’s report was commissioned by Cloud-based video surveillance system company Cloudview. [Verdict and at Security Boulevard]

UK – Consultation on Police Handling of Biometric Data Launched

The Scottish Government wants to introduce additional safeguards to ensure the safe and proportionate use of fingerprints, DNA and facial recognition technology. A public consultation is now underway in response to recommendations made by an Independent Advisory Group on biometrics earlier this year. It asks for views on the creation of a code of practice on the use, storage and disposal of biometric data, to be overseen by a new Scottish Biometrics Commissioner. The arrangements will cover data held by the likes of Police Scotland, the Scottish Police Authority and other bodies involved in law enforcement activity in Scotland. [The Scotsman]

US – Face Recognition ‘Tickets’ Are Coming to Baseball Games

MLB and Clear announced a partnership that will soon let baseball fans enter stadiums using fingerprints, and eventually just their face, instead of tickets. Clear, which offers similar biometric fast-tracking at participating airports, says it will let baseball fans link their Clear and MLB.com accounts. By sharing fingerprint data, visitors can bypass long lines at stadiums. 13 stadiums already use Clear. [GIZMODO]

Big Data / Artificial Intelligence

US – NIST Identifies Challenges of Big Data

The National Institute of Standards and Technology issued an interoperability framework for big data. Challenges it identifies include the ability to infer identity from anonymized datasets by correlating them with apparently innocuous public databases, and shifts in protection requirements and governance as processing roles change and responsible organizations merge or disappear. Where data is stored on, and moved between, multi-tiered storage media, systematic analysis of threat models and development of novel techniques is required. [NIST – Big Data Interoperability Framework: Volume 1 Definitions – NIST Special Publication 1500-1r1]
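The first challenge NIST names, inferring identity by correlating an anonymized dataset with innocuous public data, is the classic linkage attack. A toy sketch with entirely fabricated records, joining on the quasi-identifier triple (ZIP code, birth date, sex) that Latanya Sweeney famously showed uniquely identifies a large majority of Americans:

```python
# "Anonymized" records: names stripped, but quasi-identifiers retained.
anonymized_health = [
    {"zip": "02138", "dob": "1945-07-31", "sex": "F", "diagnosis": "..."},
    {"zip": "60614", "dob": "1980-01-02", "sex": "M", "diagnosis": "..."},
]
# Apparently innocuous public data (e.g. a voter roll) with the same fields.
public_voter_roll = [
    {"name": "J. Doe", "zip": "02138", "dob": "1945-07-31", "sex": "F"},
    {"name": "A. Roe", "zip": "94110", "dob": "1971-05-05", "sex": "M"},
]

def reidentify(anon_rows, public_rows, keys=("zip", "dob", "sex")):
    """Return (name, anonymized_record) pairs whose quasi-identifiers match."""
    index = {tuple(r[k] for k in keys): r["name"] for r in public_rows}
    return [(index[tuple(r[k] for k in keys)], r)
            for r in anon_rows if tuple(r[k] for k in keys) in index]

hits = reidentify(anonymized_health, public_voter_roll)
assert [name for name, _ in hits] == ["J. Doe"]  # record re-identified
```

No field in the "anonymized" dataset is a direct identifier, yet the join still attaches a name to a diagnosis, which is exactly why NIST treats correlation with public databases as a first-class threat rather than an edge case.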

US – Strategy Experts Split Over Effect of Privacy Concerns on Big Data: Survey

In a new survey, a group of the world’s top strategy experts could not agree on the effect privacy concerns will have on how businesses use data. Fifty-two percent disagreed with the statement “concern over consumer privacy will fundamentally limit businesses’ ability to use big data,” while 48% agreed or strongly agreed. The findings come from the MIT SMR Strategy Forum, a new regular feature at MIT SMR where strategy scholars react to a provocative question on strategy development and execution. The forum is led by Joshua Gans of the Rotman School of Management, University of Toronto and Timothy Simcoe of Boston University’s Questrom School of Business. MIT Sloan Management Review

WW – Google’s Approach to Big Data and Artificial Intelligence

Google has unveiled its strategy for the development of artificial intelligence applications. The company will incorporate privacy principles in AI development and use (e.g. notice and consent, privacy safeguards), design systems to be appropriately cautious, test technologies in constrained environments, avoid unfair biases based on sensitive attributes (e.g. race, income, gender, ethnicity), and evaluate likely uses (based on primary purpose, and whether the technology will have significant impact). [AI at Google – Our Principles]

WW – FPF Provides Risk Assessment Framework for Machine Learning

The Future of Privacy Forum assessed the three lines of defense when using machine learning models. The first line focuses on the development and testing of models, the second on model validation and legal and data review, and the third on periodic auditing over time. [FPF – Beyond Explainability: A Practical Guide to Managing Risk in Machine Learning Models]

WW – Key Findings from Value of Artificial Intelligence in Cybersecurity Study

A day seldom passes without exposure to the term artificial intelligence (AI). But when our survey team conceptualized this topic, we were stunned to learn there was little publicly available information documenting end users’ perspectives on the impact of AI on organizations’ cybersecurity efforts. So we’re pleased to share our comprehensive findings and help answer the critical question: what value does AI bring to cybersecurity? The Ponemon Institute 2018 Artificial Intelligence (AI) in Cyber-Security Study, sponsored by IBM Security, includes detailed and high-level cybersecurity discoveries, as well as a comprehensive look at the impact of AI technologies on application security testing; the article presents the top 10 key findings from the study. Security Intelligence

CN – Ethics of Big Data: A Look at China’s Social Credit Scoring System

There is much good to be gained from data science, but the negative side includes concerns over data privacy, risk management and cybersecurity, not to mention valid ethical debates over the fairness of digital divides, open access and the democratic use of public information. Now there is a new system being pioneered in China that has the potential to encompass many of these concerns: the creation of social credit scores by the government for its citizens. Will China’s social credit scores represent a grand technological breakthrough for society or ultimately be an example of ethical quicksand? Beyond traditional concerns over data privacy and cybersecurity, this form of social ranking poses deeper ethical dilemmas. First, the dilemma of “conformity vs. coercion” is central to address. A second ethical dilemma is the issue of “transparency vs. trafficking.” The use of gamification with social status scores means that both your absolute score and position relative to others is important. There are other ethical issues that public social credit systems may present. If unaddressed, these issues can become an ethical quicksand that widens the divide across people through the use of a socially constructed algorithm of “trustworthiness.” Do these social credit systems represent an opportunity or are they ethical quicksand? I wonder what George Orwell would say. Forbes

US – Big Data Is Getting Bigger. So Are the Privacy and Ethical Questions

The next step in using “big data” for student success is upon us and also raises issues around ethics and privacy. Whenever you log on to a wireless network with your cellphone or computer, you leave a digital footprint. Move from one building to another while staying on the same network, and that network knows how long you stayed and where you went. That data is collected continuously and automatically from the network’s various nodes. Now, with the help of a company called Degree Analytics, a few colleges are beginning to use location data collected from students’ cellphones and laptops as they move around campus. Some colleges are using it to improve the kind of advice they might send to students, like a text-message reminder to go to class if they’ve been absent. Others see it as a tool for making decisions on how to use their facilities. Many colleges now collect such data to determine students’ engagement with their coursework and campus activities. Of course, the 24-7 reporting of the data is also what makes this approach seem kind of creepy. My concerns are broad: Just because colleges and companies can collect this information and associate it with all sorts of other academic and demographic data they have on students, should they? How far should colleges and companies go with data tracking? The Chronicle of Higher Education

EU – Ethical Matters Raised by Algorithms and AI: CNIL Report

The Commission nationale de l’informatique et des libertés (CNIL) in France discusses ethical matters raised by algorithms and artificial intelligence. The CNIL proposes that the principles of fairness and vigilance could form part of a new generation of principles and human rights for the digital age; its recommendations include education for all players in the algorithmic chain (designers and professionals), setting up organisational ethics committees, and designating a role to oversee the social responsibility of the company. CNIL – How Can Humans Keep the Upper Hand – The Ethical Matters Raised by Algorithms and Artificial Intelligence


CA – CSE Annual Report Tabled in Parliament

The Annual Report of the Communications Security Establishment Commissioner, the Honourable Jean-Pierre Plouffe, CD, was tabled in Parliament. All of the CSE activities reviewed in 2017–2018 complied with the law. The Commissioner did, however, make four recommendations to promote compliance with the law and strengthen privacy protection. One recommendation related to CSE information sharing with international partners, to ensure an adequate assessment of authorities and privacy protection measures prior to undertaking new sharing activities. A second recommendation related to disclosure of Canadian identity information, requiring client departments to note both lawful authority and a robust operational justification to acquire that information. Two other recommendations dealt with ministerial authorizations: one that CSE should clarify language to reflect the legal protection afforded to solicitor-client communications, and the other that CSE should restore the inclusion of comprehensive information in its request to the Minister for one particular MA, to assist the Minister in making his decision. Office of the CSE Commissioner

CA – Federal Government Supports PIPEDA Changes

The Federal Minister of Innovation, Science and Economic Development responded to recommendations from the Standing Committee on Access to Information, Privacy and Ethics following its review of PIPEDA. The response notes that specific rules are needed for the collection and use of minors’ PI (given recent breaches involving PI obtained from social media), that some GDPR rights and protections can be incorporated into PIPEDA to enhance privacy protections (algorithmic transparency, privacy by design, data portability), and that there are active discussions with the EU Commission to ensure adequacy standing is maintained. Letter to the Chair of the Standing Committee on Access to Information, Privacy and Ethics – Minister of Innovation, Science and Economic Development Committee Recommendations | Minister’s Response

CA – NS Board Conditionally Permits Smart Meter Implementation

The Nova Scotia Utility and Review Board reviewed an application by Nova Scotia Power Inc. for approval of a $133.2 million smart meter project. The utility must permit customers to opt out of smart meters, subject to a cost (TBD) for continuing with non-standard meter service, and must devise a detailed plan for informing customers of the opt-out process. The Board is satisfied that the utility’s data collection will not involve PI (each customer account will be represented only by an identifier), and that security is sufficient (data will be protected by security certificates and end-to-end encryption). Nova Scotia Utility and Review Board – 2018 NSUARB 120 – Decision

CA – Waterfront Toronto, Sidewalk Labs Walk Back Plans In New Deal

After months of talks, Waterfront Toronto and Sidewalk Labs LLC have signed a deal [PR here] that reins in some of the Google affiliate’s plans around its proposed “test bed” for new urban technologies on the city’s lakeshore. Waterfront Toronto released both a new “plan development agreement“ as well as the original “framework agreement“ it had signed last fall with the New York-based Sidewalk, the full text of which had until now been kept secret [W.T. also posted an FAQ]. The new deal walks back or clarifies a number of provisions contained in that original deal, signed after Sidewalk Labs, a unit of Google parent Alphabet Inc., was chosen as the “funding and innovation partner” to develop a five-hectare (12-acre) parcel of land on the waterfront known as Quayside that sits at the end of Parliament Street. Waterfront Toronto and Sidewalk Labs praised the deal as an important milestone as they continue to develop the project. It was approved unanimously by Waterfront Toronto’s board, but only after Toronto developer Julie Di Lorenzo, who has previously publicly questioned the plan, resigned from her seat on the board. The agreement comes after the sudden departure in early July of Waterfront Toronto’s CEO, Will Fleissig, who had been a driving force behind the project. The deal is only a step toward a final “master innovation and development plan,” which the waterfront agency said won’t be signed until next year. Toronto Mayor John Tory said the agreement will allow “the City to consider an innovative new approach to development, housing, public space and mobility in the Quayside District,” and that he was confident Waterfront Toronto and all three levels of government would ensure it proceeds “in the best interests of Toronto residents.” [The Globe and Mail and at: MobileSyrup ]

CA – BCCLA Launches Handbook to Protect Privacy at the Border

The BC Civil Liberties Association (BCCLA) and the Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic (CIPPIC) released the online guide “Electronic Devices Privacy Handbook – a Guide to your Rights at the Border“ [overview, short PDF version] The Handbook helps travellers understand what is known about their data privacy rights at these border areas, best practices for securing digital devices and interacting with border officers, and what to do if they’ve been searched. The handbook is for every person who crosses the Canadian border and the U.S. border through preclearance areas, but has particularly important implications for marginalized populations and professionals carrying sensitive documents. All people with personal information on their devices have vested interests in protecting their data from being seized at the border and shared with Canada’s vast network of coordinating departments and national security partners. [BC Civil Liberties Association and at: The Vancouver Province & The Toronto Star]

CA – OIPC NS Recommends Amendments to PHIA

The OIPC NS released the findings of its review of Nova Scotia’s Personal Health Information Act. PHIA should be amended to permit an executor to determine the collection, use and disclosure of a decedent’s PHI, and to require that security breaches posing a risk of harm, which must be reported to individuals, also be reported to the OIPC. A working group should consider the issues of PHI data matching/linking for research and planning purposes, the disposition and outsourcing of storage of health records (including outside NS), and whether there are sufficient safeguards for genetic data and EHRs. [OIPC NS – Personal Health Information Act – Three Year Review Findings] See also: Health Records: DPA Cyprus Sets Retention at 15 Years

CA – Yukon Privacy Commissioner Worried About City Drone Proposal

Diane McLeod-McKay, the Yukon’s Information and Privacy Commissioner, is concerned about a proposal that could see the City of Whitehorse using drones to enforce certain bylaws. At the July 16 council meeting, one of the recommendations made was to consider purchasing a drone for use patrolling trails. Council heard that other municipalities have used drones in search and rescue, but that they could also be effective in preventing illegal dumping. Council has accepted a new bylaw review document, but that doesn’t mean all of its recommendations will be implemented. McLeod-McKay noted that the city isn’t subject to the Yukon’s ATIPP Act or PIPEDA. “Even if ATIPP or PIPEDA did apply, the lack of transparency around what a drone is recording, at any given time, hinders accountability. It’s difficult to make a complaint when you don’t know what personal information is being collected.” Yukon News

CA – OIPC ‘Following Up’ With Calgary Mall Using Facial Recognition Software

The Office of the Privacy Commissioner of Canada [here] said it is “following up” with Cadillac Fairview – the company that owns the Chinook Centre – after the company disclosed it is testing facial recognition technology in mall directories. News of the software came to light after a shopper saw a window on a directory at the Chinook Centre that showed what appeared to be facial-recognition data, including codes like “gender/inception” and “age/inception.” “Given we are not storing images, we do not require consent,” a statement from the company said.[see here] The agency has reached out to Alberta’s privacy commissioner [here] to discuss the matter as well. To date, the agency has not received any complaints involving the Chinook Centre directories. [Global News and at: CBC News and 660 News]

CA – Court Affirms Expectation of Privacy in Devices Under Repair

A Canadian appeals court has decided in favor of greater privacy protections for Canadians. The case involves the discovery of child pornography by a computer technician who was repairing the appellant’s computer. This information was handed over to the police, who obtained a “general warrant” to image the hard drive and scour it for incriminating evidence [see R v Villaroman, 2018 ABCA 220]. “General warrants” are still a thing in the Crown provinces; these days, they have more in common with All Writs orders than with the general warrants of pre-Revolution days. A “general warrant” is something the government uses when the law doesn’t specifically grant permission for what it would like to do. The appellant’s challenge of the general warrant (rather than a more particular search warrant) almost went nowhere, but this decision grants him (and others like him) the standing to challenge the warrant in the first place. As the court notes, handing a computer over to a technician doesn’t deprive the device’s owner of an expectation of privacy. Standing helps, but ultimately didn’t help the appellant here. The court decides the failure to obtain the proper warrant is indeed a violation, but one not severe enough to trigger suppression of the evidence. The court goes on to note that the failure to follow proper procedures when obtaining the warrant (ultimately the wrong sort of warrant) was negligent, anything but a “trivial” breach of protocol. Even if the officer’s inexperience led to the error, the violation is serious enough for the court to take note of. But this negligence isn’t enough to overcome the inevitable outcome of the search, in the court’s opinion. TechDirt

CA – OIPC NL Directs Healthcare Custodians

The OIPC NL published a newsletter addressing issues pertaining to:

  • the personal representative of a deceased individual;
  • privacy training expectations; and
  • the importance of auditing access.

Custodians should conduct ongoing training programs for employees handling PHI (training new employees, continuing education throughout the employment, and avoid reliance on general external training), and monitor and assess access to PHI (addressing who should conduct audits and when they should be conducted, what information is being assessed, and what areas will need to be audited). [OIPC NL – Safeguard – A Quarterly Newsletter – Volume 2 – Issue 2]

CA – A Cross-Border Perspective on Privacy Class Actions in Canada

This post explores trends in Canadian privacy class actions and points out similarities and differences in the approaches taken in the United States and Canada in these types of lawsuits. Canadian privacy class actions have been on the rise for the last decade. In both Canada and the U.S., privacy class actions largely fall into three categories: 1) claims that challenge a corporation’s business practices (e.g., cookies, targeted advertising); 2) claims that arise from accidental breaches (e.g., lost storage devices); and 3) claims relating to intentional, targeted misconduct (e.g., hacking, employee snooping). In all categories, the size of the classes and the quantum of damages claimed tend to be large. Importantly, however, most cases settle for a fraction of the compensation sought. Generally, plaintiffs must establish some evidence of actual harm and may not simply seek damages for mere fear of identity theft, although no decisions have yet tested the line between harm and mere fear in a trial on the merits. Although moral damages for humiliation or anxiety arising from privacy violations are sometimes awarded, they are nominal—in the range of $2,000–$20,000 per claim. Compared to Canada, many more privacy class actions are commenced in the U.S. Canadian class actions are growing in number, but Canada is still developing its statutory causes of action related to misuses of technology, while data breach privacy class actions in the U.S. are largely founded on statutes such as the Electronic Communications Privacy Act [see here & wiki here] and the Computer Fraud and Abuse Act [see here & wiki here]. Unlike the U.S., Canada has an expansive federal regulatory regime—the Personal Information Protection and Electronic Documents Act [see here & wiki here], which provides a simple administrative procedure for complaints and remedies, arguably making class actions less preferable.
The European Union (EU) General Data Protection Regulation (GDPR), which purports to extend to organizations based outside the EU that offer goods or services to individuals in the EU, or that monitor the online behaviour of individuals in the EU, may impact privacy litigation and force businesses to modify their practices in the U.S. and Canada. Mondaq

CA – Ontario Children’s Lawyer Records Exempt from FIPPA

The Ontario Court of Appeal reviewed a decision of the OIPC ON ordering the Children’s Lawyer for Ontario to disclose records pursuant to FIPPA. The court quashed the IPC order for the Office of the Children’s Lawyer to issue a decision to a father requesting access to his children’s records; the entity is not a government agent (it does not receive direction from, or report to, the Attorney General), and it has a fiduciary duty to child clients to keep information provided confidential (which is separate from solicitor-client privilege). Children’s Lawyer for Ontario v. IPC ON, AG ON and John Doe – 2018 ONCA 559 CanLII – Court of Appeal of Ontario

CA – Canada Amends AML/ATF Regulations

The Regulations Amending Certain Regulations Made Under the Proceeds of Crime (Money Laundering) and Terrorist Financing Act, 2018 were published in the Canada Gazette on June 9, 2018. The Regulations update customer due diligence requirements to permit confirmation of identity from a reliable source (e.g., a prescribed financial entity), and beneficial ownership reporting requirements to include information about the beneficiary’s occupation, and user name if receiving payment online. Regulations Amending Certain Regulations Made Under the Proceeds of Crime (Money Laundering) and Terrorist Financing Act, 2018 – Government of Canada

CA – Canada Spending $500 Million on Cybersecurity Over 5 Years

The Canadian federal government announced its renewed national cybersecurity strategy following a public consultation. National objectives are security and resilience (combatting increased cybercrime and the growing impact of IoT), cyber innovation (investing to address the cyber skills gap), and leadership and collaboration (establishing a national plan and clear focal point for cyber incidents and enhancing public awareness). National Cyber Security Strategy – Public Safety Canada | Press Release

CA – Ontario Survey Not Covered under Research Exemption

An Ontario court reviewed, and upheld, an IPC ON order for Carleton University to disclose records requested pursuant to the Freedom of Information and Protection of Privacy Act, agreeing that a university survey of Jewish students and faculty was not for purely academic purposes. It was market research based on an administrative request to identify areas for improvement for minority students, and there would not be serious adverse consequences if the records were disclosed (survey data was coded to eliminate identification of respondents, and any identifiable information would be exempt from disclosure). Carleton University v. IPC ON and John Doe – Ontario Superior Court of Justice – 2018 ONSC 3696 CanLII


US – Walmart Patents Audio Surveillance Technology to Record Customers and Employees

America’s largest retailer has patented surveillance technology that could essentially spy on cashiers and customers by collecting audio data in stores. The proposal raises questions about how recordings of conversations would be used and whether the practice would even be legal in some Walmart stores. “This is a very bad idea,” Sam Lester, consumer privacy counsel of the Electronic Privacy Information Center in Washington, D.C., told CBS News. “If they do decide to implement this technology, the first thing we would want and expect is to know which privacy expectations are in place.” [Daily Mail]

CA – Canada Tackles Malicious Online Advertising

On July 11, 2018, the Canadian Radio-television and Telecommunications Commission imposed sanctions against the installation of malicious software through online advertising for the first time in its history. This decision was taken under the provisions of the Canadian Anti-Spam Legislation, which came into effect on July 1, 2014. The federal agency issued Notices of Violation [see CRTC PR here & Investigation Summary here] to Datablocks and Sunlight Media for allegedly facilitating the installation of malware through online advertising. The companies are subject to penalties of $100,000 and $150,000, respectively. Among other things, the investigation found that the two companies were not verifying their new customers and allowed payment by cryptocurrency. Although both companies were warned of these weaknesses in their practices in a 2015 report by cybersecurity researchers, and again in 2016 by the CRTC, neither implemented basic safety measures. While this CRTC fine is a first of its kind in Canada, this type of threat is nothing new in the industry. We Live Security Blog


UK – Voter Analytics and Data Protection: Early Findings from the ICO

The role of big data analytics in modern elections is the question the ICO has tackled in its report on voter analytics released this month [see July 10 PR here, the 60 pg PDF report “Democracy Disrupted? Personal information and political influence” here & related progress report here]. For the first time, a DPA has tried to draw the curtain back on the very complicated world of voter analytics, to paint a picture of the range of organizations involved in contemporary elections and of the practices they engage in. There has been a lot of hype about the importance of the “data-driven” election, and recent scholarly work sheds a skeptical light on the extent to which data analytics actually influence election outcomes. Democracy Disrupted does not go there, although there is an accompanying research report from Demos reviewing current and future trends in campaigning technologies. Democracy Disrupted provides a detailed and empirically based description of the various sources of personal data used to profile the electorate, and of how micro-targeting works across a variety of media. Around 40 organizations were the focus of this ongoing inquiry; many other individuals assisted. For privacy professionals, the report raises some intriguing questions about the application of the General Data Protection Regulation to political parties and election campaigns going forward. [IAPP.org and at: ByLine, Information Law Blog (Inksters) and Financial Times]

US – Top Voting Machine Vendor Admits It Installed Remote-Access Software on Systems Sold to States

The nation’s top voting machine maker has admitted in a letter to a federal lawmaker that the company installed remote-access software on election-management systems it sold over a period of six years, raising questions about the security of those systems and the integrity of elections that were conducted with them. In a letter sent to Sen. Ron Wyden (D-OR) in April, Election Systems and Software acknowledged that it had “provided pcAnywhere remote connection software to a small number of customers between 2000 and 2006,” which was installed on the election-management system ES&S sold them. The statement contradicts what the company told me and fact checkers for a story I wrote for the New York Times in February. At that time, a spokesperson said ES&S had never installed pcAnywhere on any election system it sold. “None of the employees, including long-tenured employees, has any knowledge that our voting systems have ever been sold with remote-access software,” the spokesperson said. The company told Wyden it stopped installing pcAnywhere on systems in December 2007, after the Election Assistance Commission [here], which oversees the federal testing and certification of election systems used in the US, released new voting system standards. Motherboard

US – For Sale: Survey Data on Millions of High School Students

At the end of June, three thousand high school students from across the United States trekked to a University of Massachusetts Lowell sports arena to attend an event with an impressive-sounding name: the Congress of Future Science and Technology Leaders. Many students were selected for the event because they had once filled out surveys that they believed would help them learn about colleges and college scholarships. Many had taken a college-planning questionnaire called MyCollegeOptions, or surveys that came with the SAT or the PSAT, tests administered by the College Board. In filling out those surveys, the teenagers ended up signing away personal details that were later sold and shared with the future scientists event. Consumers’ personal details are collected in countless ways these days, from Instagram clicks, dating profiles and fitness apps. The recruiting methods for some student recognition programs give a peek into the widespread and opaque world of data mining for millions of minors — and how students’ profiles may be used to target them for educational and noneducational offers. These marketing programs are generally legal, but the handling of student surveys is receiving heightened scrutiny. In May, the Department of Education issued “significant guidance” [11 pg PDF] recommending that public schools make clearer to students and their parents that surveys that come with the SAT and the ACT, a separate college admissions exam, are optional. Over the last few years, several states have passed laws that might also limit the spread of some student profiles. The laws generally prohibit online educational vendors serving schools from selling students’ personal data or using it for targeted advertising. The New York Times


US – FBI Provides Guidance for Email Scams

The Federal Bureau of Investigation (FBI) released guidance on a growing threat involving requests for money transfers from compromised email accounts. The business email compromise and email account compromise scams target businesses and the real estate sector performing wire transfer payments. Organisations should verify any change in a vendor’s payment type or location, and include a two-step verification process for wire transfer payments (for example, code phrases known only to the legitimate parties). FBI Public Service Announcement – Business Email Compromise: The 12 Billion Dollar Scam

Electronic Records

AU – Privacy Commissioner Report: Health Sector Tops Breaches

The healthcare sector has topped the list for data breaches once again, with the Office of the Australian Information Commissioner releasing its delayed quarterly report into the Notifiable Data Breaches scheme [see PR here & report here], with most caused by malicious conduct and human error. According to the report, 49 notifications of data breaches in healthcare were made from April to 30 June 2018, surpassing the finance sector’s 36 notifications. A total of 242 notifications were received during the quarter. The report shows 59% of data breaches were caused by malicious or criminal attacks (142 notifications), with the majority of those linked to the compromise of credentials such as usernames and passwords. 36% of breaches were the result of human error such as sending emails containing personal information to the wrong recipients. The OAIC said the data breaches do not relate to the My Health Record system [see here & here]. But the stats are another setback to the national health information database as it continues to be buffeted by data privacy concerns. Up to 900,000 health professionals will have access to My Health Record via numerous software systems, creating a substantial “attack surface”, according to former Privacy Commissioner Malcolm Crompton. [Healthcare IT News Au, ABC News, CNET News, The Register and OAIC]

US – OCR Issues Guidance on Software Vulnerabilities and Patching

In its most recent cybersecurity newsletter, the U.S. Department of Health and Human Services’ Office for Civil Rights (OCR) provided guidance regarding identifying vulnerabilities and mitigating the associated risks of software used to process electronic protected health information. The guidance, along with additional resources identified by OCR, are outlined in this post. Privacy & Information Security Law Blog (Hunton Andrews Kurth)

EU Developments

EU – Council of Europe Modernises Convention 108

The Council of Europe has approved amendments to modernise Convention 108. The amendments include requirements to conduct impact assessments to ensure processing is designed to minimise risks to data subjects, and to ensure processing is carried out on the basis of informed, express consent or some other legal basis; data subjects have the same rights afforded under the GDPR, and breaches must be notified where there is a serious risk to data subjects. Modernised Convention 108 – Council of Europe | Comparative table | See Analysis by Graham Greenleaf

EU – Supreme Court of Ireland to Review Facebook Privacy Case

On July 31, 2018, the Supreme Court of Ireland granted Facebook, Inc. leave [see ruling here] to appeal a lower court’s ruling sending a privacy case to the Court of Justice of the European Union (the “CJEU”). Austrian privacy activist Max Schrems challenged Facebook’s data transfer practices, arguing that Facebook’s use of standard contractual clauses failed to adequately protect EU citizens’ data. Schrems, supported by Irish Data Protection Commissioner Helen Dixon, argued that the case belonged in the CJEU, the EU’s highest judicial body. The High Court agreed. Facebook’s request to appeal followed. In granting Facebook leave to appeal, the Supreme Court noted that “it is in the interest of justice” that the Court hear its arguments. The hearing will take place within the next five months. [Privacy & Information Security Law Blog (Hunton) | Coverage at: TechCrunch]

EU – Parliament Calls for Suspension of Privacy Shield

The EU Parliament passed a resolution on the adequacy of the EU-US Privacy Shield, calling on the European Commission to ensure the Shield complies with the GDPR and to suspend it unless US authorities address identified deficiencies by September 1, 2018, including unclear rules for automated decision-making and the processing of HR data, failure to follow the EU model of consent, and the lack of effective judicial redress for EU citizens. EU Parliament – Motion for a Resolution on Adequacy of Protection Afforded by EU-US Privacy Shield

EU – Privacy Shield Under Pressure as Lawyers Back MEPs’ Call for Suspension

The Council of Bars and Law Societies of Europe (CCBE) [comments] – which represents 32 member countries and 13 associate and observer countries – has repeated its concerns over the deal’s suitability and called for an immediate suspension. The intervention comes as MEPs on the EU’s civil liberties and justice committee (LIBE), who called for a ban on the deal if the issues aren’t addressed by September, begin a four-day trip to Washington to discuss Privacy Shield, along with other data protection issues, with the US government. [The Register | Related coverage at: CIO, DBR on Data and Legaltech News | See also: EU Parliament Calls for Suspension of Privacy Shield, above]

EU – Proposed EU Cybersecurity Act Released

The Council of the European Union released a proposal for the future of cybersecurity regulation in Europe. At a time of increased cybersecurity risks, the EU Cybersecurity Act would strengthen the powers of the European Union Agency for Network and Information Security by making it a permanent agency of the European Union. The EU Cybersecurity Act would also create a European cybersecurity certification framework for information and communications technology goods. The goal of the EU Cybersecurity Act is to build cyber resilience and response capabilities within the EU. Harmonizing standards to promote efficiency is also a central theme of the EU’s Digital Single Market strategy. The EU Cybersecurity Act is an output of a broader Cybersecurity Package which was first introduced in 2017 before going through several impact assessments and a comment period. To become law, the proposal will have to be approved by the European Parliament. CyberLex (McCarthyTetrault)

UK – ICO Releases Annual Report

The Information Commissioner’s Office has released its Annual Report for 2018 [PDF here]. Commissioner Elizabeth Denham highlights the following in her foreword to the Report: The ICO…

  • has been involved in producing significant GDPR guidance in the last 12 months and has also run an internal change management process to ensure it is up to the demands placed upon it by GDPR (think: extra staff, new breach reporting functions and helplines);
  • has seen pay levels fall out of step with the rest of the public sector; the UK Government has given the ICO three-year pay flexibility and some salaries have increased;
  • has taken decisive action on nuisance calls and misuse of personal data;
  • began investigations into over 30 organisations in relation to the use of personal data and analytics for political campaigns; and
  • launched a “Why Your Data Matters” campaign – designed to work as a series of adaptable messages that organisations can tailor to inform their own customers of their data rights.

Privacy and Cybersecurity (Dentons)

EU – European Court of Justice Clarifies Who Is a ‘Data Controller’ Under GDPR

The European Court of Justice (ECJ) in Luxembourg rendered a judgment on July 12 [see CJEU Press Release & Judgment of the CJEU] that explains, among other things, what a (joint) data controller is. The judgment concerns the “old” EU Data Protection Directive 95/46/EC, but the relevant provisions in the General Data Protection Regulation (GDPR), Art. 4 and 26, are very similar. The case concerns the Jehovah’s Witnesses community and whether taking notes in the course of door-to-door preaching falls under EU data protection law. The ECJ states that (a) the activity doesn’t fall under the exemptions for religious communities, and that (b) the community is a data controller jointly with its members who engage in this preaching activity. Tech & Sourcing at Morgan Lewis

EU – ECHR Ruling Confirms Freedom of Expression Trumps Right of Erasure

The European Court of Human Rights (“ECHR”) decided on 28 June 2018 that the right to request the erasure of personal data on prior convictions may be trumped by the right to freedom of expression and information. The court confirmed prior case law holding that the public’s legitimate right of access to electronic press archives is protected by the fundamental right of freedom of expression and information, and that limitations to this right must be justified by particularly compelling reasons. Inside Privacy

EU – The eData Guide to GDPR: What is Sensitive Personal Data?

Information on health, race/ethnic origin, sexual orientation, and religious and political beliefs are among a special category of data that have been classified as sensitive personal data under the EU’s GDPR and are given a higher degree of protection. This installment of The eData Guide to GDPR discusses how sensitive personal data is defined, under what conditions it can be processed, and what steps businesses can take to ensure compliance with the GDPR’s special protections of sensitive personal data. Morgan Lewis Insight

EU – Heirs Can Access Facebook Account of Deceased Relatives: German Court

Heirs in Germany have the right to access the Facebook accounts of their deceased relatives, a court ruled Thursday in a landmark privacy decision, holding that a social media account can be inherited in the same way as letters. [Reuters | Additional coverage at: Technology Law Dispatch, Deutsche Welle, Quartz, AFP, Naked Security and GIZMODO]

EU – DPA Brandenburg Advises Caution for Photography

The Brandenburg Data Protection Authority issued guidance on the processing of photos under the GDPR. The taking and publication of photos is permitted under the GDPR (pursuant to data subject consent, a controller’s legitimate interests, or journalistic activity); however, photographers should be careful about photos of large groups of people (notice may be impossible to provide), employees (consent may not be truly voluntary), and existing photo stock (which should comply with prior legal requirements). DPA Brandenburg – Processing of Photos – Legal Requirements Under GDPR

EU – EDPS Comments on Monitoring for Copyright Infringement

The European Data Protection Supervisor commented on a draft resolution for a proposal regarding copyright in the Digital Single Market. According to the EDPS, the draft EU resolution appropriately addresses the obligation for online sharing service providers to monitor their platforms for copyright infringement by not targeting end users who might download or stream uploaded content, and requiring observance of the data minimisation principle; it will be impossible, however, for providers to avoid processing personal data while complying with monitoring and reporting obligations. EDPS – Formal Comments on a Proposal for a Directive of the European Parliament and Council on Copyright in the Digital Single Market

UK – ICO Seeks Views on Age Appropriate Design

The UK ICO is calling for evidence and views on the Age Appropriate Design Code under the Data Protection Act 2018. It is seeking suggestions from information service providers and child development experts to inform the Code’s design, with a focus on the different development stages of children and the websites or applications that children access or are likely to access; specific areas of interest include profiling, geolocation, and strategies used to encourage extended user engagement. The Code will be submitted to the Secretary of State for Parliamentary approval within 18 months of May 25, 2018; the call for evidence closes on September 19, 2018. ICO UK: Blog – Children’s Privacy – Call for evidence | Consultation

WW – Big Tech Companies’ Privacy Policies Not Totally GDPR Compliant: Report

A new report from a consumer protection group indicates that even though privacy policies were revamped right before the GDPR came into effect in late May, “there is still room for significant improvements.” The survey used artificial intelligence to analyze 14 privacy policies at major tech companies, including Google, Facebook, Amazon and Apple. The Recorder (Law.com)

EU – Cloud Security and Due Diligence Checklists

A UK law firm highlights industry best practices from regulators and associations, including risk profiling, monitoring of security controls, and defining access controls for service interfaces and administration systems. Cloud buyers can verify provider compliance through contractual commitments, third-party certification and/or independent testing. [Kemp Law]

Facts & Stats

US – Major Breaches in the First Six Months of 2018

The most serious breaches of the first half of 2018 include the US government acknowledging that Russian hackers gained access to a power utility’s control systems; hackers using phishing attacks to penetrate university systems, private companies, and government agencies around the world, stealing many terabytes of intellectual property; and many instances of organizations misconfiguring data storage, exposing stored information. Wired: The Worst Cybersecurity Breaches of 2018 So Far.

WW – Survey Finds Breach Discovery Takes an Average 197 Days

A global study based on 500 interviews conducted by The Ponemon Institute on behalf of IBM [see PR here, infographic here] finds that the average amount of time required to identify a data breach is 197 days, and the average amount of time needed to contain a data breach once it is identified is 69 days. When it comes to cost containment, the study makes it clear time is of the essence. Companies that were able to contain a breach in less than 30 days saved more than $1 million compared to those that took more than 30 days ($3.09 million versus $4.25 million average total cost). 2018 Cost of a Data Breach Study [PDF] | Security Boulevard, Security Intelligence | Audio interview – 26 min

EU – The GDPR and Blockchain

Blockchain technology has the potential to revolutionise many industries; it has been said that “blockchain will do to the financial system what the internet did to media”. Its transformative capability also extends far beyond the financial sector, including in smart contracts and the storage of health records to name just a few. Notwithstanding its tremendous capabilities, in order for the technology to unfold its full potential there needs to be careful consideration as to how the technology can comply with new European privacy legislation, namely the GDPR. This article explores some of the possible or “perceived” challenges blockchain technology faces when it comes to compliance with the GDPR. The European Commission has recently launched the EU Blockchain Observatory and Forum which is focused on promoting blockchain throughout Europe. The Forum recently ran a series of workshops on the impact of the GDPR on blockchain technology. Inside Privacy (Covington)


CA – Canada 2020 Issues Open Banking Report

On July 5, 2018, Canada 2020, a Canadian think-tank, published its report on open banking [see 10 pg PDF here] following a Policy Lab which brought together various stakeholders to discuss open banking in Canada. “Open Banking” refers to an emerging financial services business model that focuses on the portability and open availability of customer data, including transactional information. The core aim of open banking is to enable consumers to share their financial data between their financial institution and third party providers (and between financial institutions), typically through the use of application programming interfaces (APIs). While still a relatively new concept in Canada, open banking has the potential to transform the financial services sector. The federal government is currently undergoing a review of open banking to assess whether it could have a positive impact on consumers while considering the risks to consumer privacy, data security, and financial stability. The Canada 2020 Policy Lab encouraged stakeholders to share information and to discuss the future of open banking in Canada, and it identified nine broad areas of consensus. CyberLex Blog (McCarthy Tétrault)
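To make the API model concrete, here is a minimal, hypothetical sketch of how a provider might gate transaction sharing on explicit customer consent. The endpoint, token, and scope names are invented for illustration and are not drawn from any Canadian or other open banking standard:

```python
# Hypothetical sketch of a consent-gated open banking API handler.
# All names (TOKENS, TRANSACTIONS, "transactions:read") are illustrative.

TRANSACTIONS = {
    "cust-001": [{"date": "2018-07-01", "amount": -42.50, "desc": "groceries"}],
}

# Access tokens are issued only after the customer explicitly grants a scope.
TOKENS = {
    "tok-abc": {"customer": "cust-001", "scopes": {"transactions:read"}},
    "tok-xyz": {"customer": "cust-001", "scopes": set()},  # consent not granted
}

def get_transactions(token: str):
    """Return the customer's transactions only if the token carries consent."""
    grant = TOKENS.get(token)
    if grant is None or "transactions:read" not in grant["scopes"]:
        raise PermissionError("customer has not consented to share transaction data")
    return TRANSACTIONS[grant["customer"]]
```

The design point is that the third party never holds the customer’s banking credentials; it holds a revocable token whose scopes mirror exactly what the customer agreed to share.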


CA – NL Government Breaking Its Own Laws on Access to Info Requests: OIPC

Newfoundland and Labrador is breaking its own laws by exceeding the legal deadlines for responding to access to information (ATIPP) requests, the information and privacy commissioner says. In a report [ruling], Commissioner Molloy said the government flouts the law based on the volume of work it takes to complete requests, something that would never be tolerated from average citizens. The result is long delays. Molloy’s report examined a case in which the Department of Transportation and Works took 86 business days to respond to an access to information request, when the law says that should happen within 20 business days. It found that over the last fiscal year the department was late on approximately 15 per cent of deadlines and received extensions on another 15 per cent. CBC News


WW – Privacy Concerns After 23andme Shares Genetic Data With Major Drugmaker

Drug giant GlaxoSmithKline is investing US$300 million in the DNA-testing company 23andMe in a deal they say could spark the creation of important new medicines, but one that is also raising privacy concerns. Under the deal, GSK will have exclusive rights for four years to use 23andMe’s DNA database to develop new medicines using human genetics. Both the funding and proceeds will be split equally, with the option of extending the partnership for a fifth year. For more than a decade, 23andMe has been selling saliva-based DNA kits to consumers. The company has more than 5 million users – 80% of whom have checked boxes to consent to participating in medical research as well. Genetics is playing an increasingly important role in the world of drug discovery. Researchers use genetic data to help them understand how diseases begin and which proteins and pathways diseases use to progress. Peter Pitts, the president of the U.S.-based non-profit Center for Medicine in the Public Interest, told Time he’s worried that whenever one organization shares personal data with another organization, there is a risk the information could be misused. Pitts also wonders whether 23andMe customers are entitled to be compensated if the genetic information they paid for is then used to lead to profitable drugs. “Are they going to offer rebates to people who opt in, so their customers aren’t paying for the privilege of 23andMe working with a for-profit company in a for-profit research project?” Pitts asked NBC. 23andMe insisted in its announcement Wednesday that its customers are still in control of their own data. [CTV News, BioNews, Forbes and Scientific American]

US – DTC Genetic-Testing Giants Throw Their Weight Behind Privacy

For years, consumer and privacy advocates have railed against the potential for direct-to-consumer (DTC) genetic testing to go horribly wrong. In what ways? Privacy violations, for one, along with the idea that companies could get rich off patient data, all while freely sharing our most personal information with law enforcement. But news this week suggests solutions for these problems could be on the way. On July 31, the Future of Privacy Forum [along with testing companies 23andMe, Ancestry, Helix, MyHeritage, and Habit] released a set of best practices for the DTC genetic-testing industry, outlining eight key areas and a war chest of possible fixes [see FPF blog post here]. The best practices cover transparency, consent, accountability, security, privacy by design and consumer education, along with data access, integrity, retention and deletion. Recommendations range from providing clear privacy notices of a company’s practices and asking separately for consent to share with third-party organizations to enabling consumers to delete their data, including biological samples. In no way, however, does the document serve as a call to disarm the growing genetic-testing industry. [Healthcare Analytics News, Chicago Tribune, Engadget and GIZMODO]

Health / Medical

US – HHS Releases Interim Guidance on Authorizations for Research

The Department of Health and Human Services (HHS) recently released interim guidance on sufficiency of authorizations for future uses or disclosures of protected health information (PHI) for research purposes. The HIPAA rule permits covered entities and business associates to use or disclose PHI only as permitted by the Privacy Rule or as authorized in writing by the individual or that person’s personal representative. The 21st Century Cures Act, enacted in 2016, sought, in part, to improve accessibility to medical information for research purposes. It mandated HHS issue guidance on how to allow for this improved access while still protecting patients’ rights under HIPAA. HHS recognizes that additional input from the public on this complex question would better help it provide meaningful guidance. Therefore, HHS is inviting comments from the public before issuing final rules. Data Privacy Monitor (BakerHostetler)

US – FDA: Make Sure EHRs Used for Clinical Studies are Secure

The Food and Drug Administration has issued new guidance spelling out its policy for organizations using electronic health record data in FDA-regulated clinical investigations, such as studies of the long-term safety and effectiveness of various drugs. Among other criteria, the EHRs need to contain certain privacy and security controls. EHRs used for clinical investigations should be certified under the Department of Health and Human Services’ Office of the National Coordinator for Health IT’s EHR certification program, which requires products to meet a variety of privacy and security protection requirements for patient data. But if data from EHRs that are not ONC-certified is collected from “foreign” sources – such as from clinical studies conducted outside the U.S. – sponsors need to consider whether such systems also have “certain privacy and security controls in place to ensure that the confidentiality, integrity and security of data are preserved,” the agency says. GovInfo Security

US – Health Data Breach Tally: Lots of Hacks, Fewer Victims

As of July, some 199 breaches affecting 3.9 million individuals had been added to the Department of Health and Human Services’ HIPAA Breach Reporting Tool website, commonly called the “wall of shame.” The website lists health data breaches affecting 500 or more individuals. By comparison, the 2015 cyberattack on Anthem Inc. affected nearly 79 million individuals. Plus, 2015 attacks against Premera Blue Cross, Excellus BlueCross BlueShield, and UCLA Health affected many millions more. Of the breaches added to the wall of shame so far this year, 74 are listed as hacking/IT incidents. Those incidents affected nearly 2.65 million individuals. But other types of breaches have also been added to the tally in the last seven months. Those include 84 “unauthorized access/disclosure” breaches impacting a total of more than 562,000 individuals, with some of the largest of these incidents involving email. Another 37 breaches involved loss or theft; those affected a total of about 672,000 individuals. Of the loss/theft breaches, 28 involved unencrypted devices. Those incidents impacted a total of about 80,000 individuals. The largest breach tied to loss or theft so far this year involved paper/film records. That incident – which, with 582,000 affected, is also the largest breach added to the tally so far this year – was reported in April by the California Department of Developmental Services. GovInfoSecurity

US – Cyberattacks on Health-Care Providers Are Up in Recent Months

Health-care providers and government agencies across the U.S. have seen an increase in cybersecurity breaches in recent months, exposing sensitive data from hundreds of thousands of people as the sector scrambles to find adequate defense mechanisms. The breaches include malware attacks, computer thefts, unauthorized network access and other security breaches, according to a government database that tracks attacks in the health-care sector. Last year’s global WannaCry ransomware attack crippled parts of the U.K.’s National Health Service for days. In a 2015 hack, U.S. health insurance giant Anthem Inc. had about 79 million customers’ personal information exposed. Bloomberg

US – California Bill Requires Security for Health Sensors

AB-2167, an Act relating to digital health feedback systems, was introduced in the California State Assembly and has been engrossed to the Senate. If passed, a manufacturer or operator that sells a device or software application that may be used with a digital health feedback system (an ingestible sensor that collects or sends health information) must implement reasonable security features appropriate to the nature of the device or software application and the information it may collect, contain or transmit. AB-2167 – An Act Relating to Digital Health Feedback Systems – Legislative Assembly of California

US – Relaxing Patient Privacy Protections Will Harm People With Addiction

The nation is in the midst of a staggering opioid epidemic. Over 115 people die from an overdose each day – and all signs indicate that the problem is getting worse. Unfortunately, of the more than 20 million Americans who need treatment for addiction, it’s estimated that only about 7 percent of them will actually receive specialty care. We would expect policymakers and medical providers to do everything possible to increase the number of people entering treatment, not take actions that will discourage individuals from seeking treatment. But unfortunately, that’s exactly what the Overdose Prevention and Patient Safety Act would do. Despite its benevolent title, this legislation, which has already passed the House of Representatives, would jeopardize the confidentiality of substance use treatment and discourage patients from seeking the care they need. [The Hill and coverage at: STAT News, Scientific American and The Journal of Law, Medicine & Ethics]

WW – Mobile Apps Expose Sensitive and Regulated Data

Appthority’s Enterprise Mobile Threat Report uncovers a new variant of the HospitalGown data privacy vulnerability. The report showcases mobile apps that fail to require authentication to a Google Firebase cloud database, exposing the system to data leaks; it recommends implementing user authentication on all database tables and rows to protect against exploitation. Other mitigation steps to reduce risks include prohibiting employees from downloading unsecured apps and performing security reviews on private and public apps. Enterprise Mobile Threat Report – Unsecured Firebase Databases – Exposing sensitive data via thousands of mobile apps – Q-2 2018 – Appthority
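The unsecured databases in question are typically Firebase Realtime Database instances left world-readable. As a hedged illustration (not taken from the Appthority report), a minimal Firebase rules file that requires authentication for all reads and writes looks like this:

```json
{
  "rules": {
    ".read": "auth != null",
    ".write": "auth != null"
  }
}
```

Databases deployed with permissive test-mode rules (e.g., `".read": true`) remain open to anyone who can guess the database URL; finer-grained per-table and per-row restrictions can be expressed by nesting rules under specific paths.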

WW – Insider Health Data Security Threats Bigger Concern than External

Many healthcare professionals are more concerned about insider threats to health data security than external breaches, according to a survey by HIMSS on behalf of SailPoint. There is an acute level of concern about the threats posed by insiders. On a scale of 1 to 10, the mean score for the level of concern of respondents was 8.2. Among respondents who implemented or managed cybersecurity solutions for their organization, 43% said that insider threats were of greater concern than external threats. Another 35% were equally concerned about insider threats and external threats to data security, according to the survey of 101 healthcare professionals. [HealthIT Security, Healthcare Informatics, CISION]

Horror Stories

US – Patient Data Exposed for Months After Phishing Attack On Sunspire

Several employees of Sunspire Health, a nationwide network of addiction treatment facilities, fell victim to a phishing email campaign, which may have exposed personal patient information for about two months. [see notice here] Hackers were able to access some employee email accounts between Mar. 1 and May 4, but officials did not become aware of the cyberattack until sometime between April 10 and May 17. Officials did not give an explanation as to why the discovery took more than a month. The impacted email accounts contained names, dates of birth, Social Security numbers, medical data like diagnoses and treatments, and health insurance information. All patients are being notified and offered a year of free credit monitoring. While officials have notified the U.S. Department of Health and Human Services, the number of patients impacted by the breach hasn’t been posted to the breach reporting tool [here]. [Healthcare IT News and coverage at: Health Data Management]

US – Phishing Attacks Breach Alive Hospice for 1 to 4 months

Two employees of Tennessee-based Alive Hospice fell for phishing attacks, which potentially breached patient data for one to four months. During a review of their email system on May 15, officials discovered unauthorized access to two separate employee email accounts that began in December 2017 for one account and around April 5 for the other. While the breached data varied by patient, it included a vast store of highly sensitive information including: Social Security information, passport numbers, driver’s licenses or state identification cards, copies of marriage and/or birth certificates, financial data, medical histories, IRS PIN numbers, digital signatures – and even security questions and answers. Notification letters were sent to impacted patients on July 13. Healthcare IT News | See also: Phishing attack compromised the data of 1.4 million UnityPoint Health patients; coverage at: SecurityInfoWatch, ISBuzz News and Latest Hacking News

NZ – Allegations 800,000 NZers at Risk of Medical Privacy Breach

Four New Zealand and Australasian healthcare IT companies, Healthlink, Medtech Global, My Practice, and Best Practice Software New Zealand, have jointly contacted the Privacy Commissioner with a claim that the privacy of up to 800,000 Auckland patients has been put at risk. They said primary health organisation (PHO) ProCare Health was putting private information of up to 800,000 Auckland patients into a large database, including patient name, age, address, and all financial, demographic, and clinical information. ProCare Health runs a network of community-based healthcare services, including GPs, throughout Auckland. It strongly denies patient privacy is being compromised. The IT companies said they didn’t know how widespread the data collection was in New Zealand, but it wasn’t acceptable to hold so much identifiable information in one place. In a joint letter to the Privacy Commissioner, the companies said most patients seemed unaware of the ProCare database, as well as potentially some GPs. [The New Zealand Herald coverage at: Tripwire and New Zealand Doctor Online]

Identity Issues

CA – Feds Studying Mobile Passports Despite Privacy Fears

New public opinion research [PDF] published by Immigration, Refugees and Citizenship Canada suggests officials there are considering whether Canadians should be able to renew their passport via a mobile application, as well as what Canadians’ attitudes are towards the idea of using virtual or mobile passports. Through 15 focus groups held across the country earlier this year, participants were asked for their perspectives on what sort of “passport of the future” they would be most interested in using and as with most new technologies, there was general enthusiasm but also a marked wariness about the potential for misuse. Millennials and those over the age of 58 also said they would not be likely to use a mobile passport option. While participants suggested they would be all right with using a passport renewal app or a passport stored on their phone, they were less convinced the ease of use would be worth the security concerns. Convenience seemed to be the biggest motivator overall to consider any move away from the current passport. Mobile passport apps are not yet widespread but South of the border, U.S. Customs and Border Protection has officially endorsed an app called Mobile Passport and it’s being used in 25 American airports so far. Personal data on the app is encrypted and stored by Customs and Border Protection. It’s not clear whether Immigration, Refugees and Citizenship Canada is looking to develop its own app for mobile passports or use the existing one. Global News

CA – Canadian Bankers Push for Federated Approach to Digital ID

The Canadian Bankers Association discusses Canada’s need for a digital identity system. Digital ID can be standardized for use between entities (unlike physical documents), and ensures only one version of an individual’s identity exists, reducing the potential for misinformation, identity theft, or the use of outdated data; Canada should learn from the successes of Estonia and India, ensuring digital ID meets legislative and regulatory requirements for customer identification, and using government as a catalyst to bring digital ID to market. White Paper – Canada’s Digital ID Future – A Federated Approach – Canadian Bankers Association

CA – Health Records: Anonymised PHI Not Compellable

The Supreme Court of Canada reviewed an appeal by the Province of British Columbia regarding disclosure of personal health information to Philip Morris International, Inc. The Supreme Court of Canada found that the anonymization of health data in government databases did not change the nature of the information as data derived from a particular individual’s clinical record, and the relevance of the records to a claim brought on an aggregate basis does not alter that nature. [British Columbia v. Philip Morris International Inc. – 2018 SCC 36 CanLII – Supreme Court of Canada]

Law Enforcement

UK – Police Chief Explains ‘Justice by Algorithm’ Tool

A police chief pioneering new ways of dealing with offenders vigorously defended his force’s pilot of a controversial algorithm-based system for picking suitable candidates. Michael Barton, chief constable of Durham Constabulary, was appearing at the first public evidence-gathering hearing of the Law Society’s Technology and Law Policy Commission on algorithms in the justice system [see here & here]. Durham Constabulary has come under fire after revealing last year that it was testing whether an algorithmic ‘Harm Assessment Risk Tool’ (HART) [see “risk assessment“] could help custody officers identify offenders eligible for a deferred prosecution scheme called Checkpoint, designed to encourage offenders away from criminality. The tool employs advanced machine learning to predict the likelihood that an individual will reoffend in the next two years. Barton said that HART was intended as a decision support tool and would never take the kind of nuanced decisions made by custody officers. The main reason for its use is to ensure that people released under the Checkpoint scheme do not go on to commit serious crimes, he said. ‘We are halfway through the pilot of finding out whether custody officers do better than the algorithm,’ he said, promising that results will be peer-reviewed and published. [The Law Society Gazette and at: WIRED and BBC News]


WW – Polar Flow Fitness App Reveals Location of Users in Military and Intelligence Agencies

The Polar Flow fitness app exposes sensitive information about its users, who include US intelligence employees and military personnel. The Polar Flow Explore function could be used to obtain not only a user’s geolocation data, but also their name and home address. Polar has temporarily suspended the Explore API. Polar is not the first fitness app to expose user data; several months ago, the Strava app was found to be exposing soldiers’ locations and routes. [Threat Post: Polar Fitness App Exposes Location of ‘Spies’ and Military Personnel | Bleeping Computer: Polar App Disables Feature That Allowed Journalists to Identify Intelligence Personnel | Fifth Domain: Polar fitness app broadcasted sensitive details of intelligence and service members | The Register: Fitness app Polar even better at revealing secrets than Strava]

Online Privacy

WW – Low Accuracy in Device Fingerprinting Techniques

Researchers studied the accuracy of fingerprinting smartphone motion sensors. Existing browser fingerprinting techniques (used to track users without cookies) are less effective on mobile platforms; additional features and external auxiliary information can be used to improve accuracy (but are unlikely to uniquely identify devices), and combining multiple classifiers provides better accuracy than current techniques. [Every Move You Make – Exploring Practical Issues in Smartphone Motion Sensor Fingerprinting and Countermeasures – Anupam Das et al. – Carnegie Mellon University]
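The accuracy gain from combining multiple classifiers can be illustrated with a minimal sketch. This is not the paper’s actual code; the classifier outputs and device labels below are hypothetical, and the fusion rule shown is simple majority voting over each classifier’s guessed device ID:

```python
# Hedged sketch: fusing several classifiers' device-ID guesses by majority
# vote, the general idea behind combining classifiers for fingerprinting.
from collections import Counter

def combine_predictions(predictions: list[list[str]]) -> list[str]:
    """predictions[k][i] is classifier k's guessed device ID for sample i;
    return one fused (majority-vote) guess per sample."""
    fused = []
    for sample_votes in zip(*predictions):
        fused.append(Counter(sample_votes).most_common(1)[0][0])
    return fused

# Three hypothetical classifiers (e.g., trained on accelerometer,
# gyroscope, and magnetometer features) guessing between devices A and B:
clf1 = ["A", "B", "A"]
clf2 = ["A", "A", "A"]
clf3 = ["B", "A", "A"]
print(combine_predictions([clf1, clf2, clf3]))  # ['A', 'A', 'A']
```

Even when each individual classifier errs on some samples, the fused vote can be correct on all of them, which is why ensembles tend to outperform any single fingerprinting classifier.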

WW – Google Admits Third-Party Developers Can Read Your Emails

According to the WSJ, developers of third-party apps can read your Gmail messages. The thing is, you gave the application permission to do that. You just don’t remember. Or weren’t paying attention. After long-running complaints from users, Google stopped scanning the contents of Gmail messages to create targeted ads last year. But the company still allows third-party applications to do so. What skeeves so many people out is discovering that this process isn’t all done by computer. Some companies give human developers access to emails. This enables the developer to check if the code they’ve written to scan the text is finding what it’s supposed to scan. Or to know what to scan for in the first place. [Cult of Mac, Google Blog, CNET, CBC News, VentureBeat, The Verge, Digital Trends, Naked Security and The Sydney Morning Herald]

CA – New Zealand Company Violated Rights of Canadians, Says Privacy Commissioner

How far can companies go in using personal information copied from a publicly available website? Not far at all if it involves Canadians who don’t give their consent, according to a decision released by Canada’s privacy commissioner [see Announcement here & report here]. New Zealand’s Profile Technology Ltd. violated the privacy rights of potentially some 4.5 million Canadians by copying the profiles of Facebook users around the globe and posting them on its own website, the office of the federal privacy commissioner has ruled. The company said it merely indexed information publicly accessible on Facebook. It also argued Canadian law didn’t apply. However, the commission said under Canadian law these people had to give their consent because Profile Technology used the information not just for indexing but also to start its own social networking website called the Profile Engine. The OPC has sent its findings to the Office of the Privacy Commissioner of New Zealand, which is considering what options may be available under that country’s laws. [IT World Canada]

US – 3 of 16 Providers Have Sufficient Takedown Processes: EFF

The Electronic Frontier Foundation, an advocacy organization, released its annual report on transparency of online service providers. The Apple App Store, Google Play store and YouTube earned full marks for transparency in reporting government takedown requests based on both legal requests and requests alleging platform policy violations, providing meaningful notice to users of every content takedown and account suspension, providing users with an appeals process to dispute takedowns and suspensions, and limiting the geographic scope of takedowns when possible. Who Has Your Back? Censorship Edition 2018 – Electronic Frontier Foundation | Chart only

US – EFF Files Amicus Brief Supporting Warrant for Border Searches of Electronic Devices

EFF, joined by the ACLU, filed an amicus brief in the U.S. Court of Appeals for the Seventh Circuit arguing that border agents need a probable cause warrant before searching personal electronic devices like cell phones and laptops. The brief was filed in a criminal case involving Donald Wanjiku. In 2015, border agents at Chicago’s O’Hare International Airport searched Wanjiku’s cell phone manually and forensically. Border agents also forensically searched Wanjiku’s laptop and external hard drive. Wanjiku asked the district court in U.S. v. Wanjiku to suppress evidence obtained from the warrantless border searches of his electronic devices, but the judge denied his motion. He then appealed to the Seventh Circuit. In the amicus brief, EFF argued that the Supreme Court’s decision in Riley v. California (2014) supports the conclusion that border agents need a warrant before searching electronic devices because of the unprecedented and significant privacy interests travelers have in their digital data. EFF also cited the Supreme Court’s recent decision in U.S. v. Carpenter (2018), holding that the government needs a warrant to obtain historical cell phone location information, and explained that historical location information can be obtained from a border search of a cell phone. [DeepLinks Blog (Electronic Frontier Foundation)]

Privacy (US)

US – FTC Wants Expanded Authority in Data Security, Privacy

While HHS is the primary federal agency that enforces HIPAA Security and Privacy Rules, the FTC has expanded its enforcement activities in data security and privacy, including taking on now-defunct medical testing firm LabMD over poor data security that led to PHI breaches. The FTC was recently rebuffed by a federal appeals court in its effort to compel LabMD to overhaul its data security program. Despite this setback, the FTC is looking for additional authority from Congress in the privacy and data security area, FTC Chairman Joseph Simons told the House Energy and Commerce Committee’s digital commerce and consumer protection subcommittee on Wednesday. Specifically, the FTC wants the ability to impose civil penalties in privacy and data security cases, authority over nonprofits and common carriers, and authority to issue implementing rules under the Administrative Procedure Act (APA). Currently, the FTC issues rules under the Magnuson-Moss Warranty Act, which is more burdensome than the APA process, Simons noted. [HealthIT Security and at: Imperial Valley News]

US – Judge Rebukes FBI Agent over Improper Stingray Use

A federal judge chastised an FBI agent for improper use of a stingray, also known as a cell-site simulator or IMSI catcher, and an improper search of a cellphone. In April 2016, an FBI agent sought and obtained warrants from a county superior court judge in California to search a suspect’s cellphone and to use a stingray to locate a second suspect. California law does not permit state judges to sign off on warrants for federal agents. Court documents also show that the FBI agent misled the judge about what a stingray does. [Ars Technica: Judge slams FBI for improper cellphone search, stingray use | SC Magazine: Federal Judge scolds FBI agent for improper stingray use]

WW – CIPL Issues Discussion Papers on the Central Role of Accountability

On July 23, 2018, the Centre for Information Policy Leadership at Hunton Andrews Kurth LLP issued two new discussion papers on the Central Role of Organizational Accountability in Data Protection [7 pg PDF notice & overview here]. The goal of these discussion papers is to show that organizational accountability is pivotal to effective data protection and essential for the digital transformation of the economy and society, and to emphasize how its many benefits should be actively encouraged and incentivized by data protection authorities, and law and policy makers around the globe. The first discussion paper [PDF] explains how accountability provides the necessary framework and tools for scalable compliance, fosters corporate digital responsibility beyond pure legal compliance, and empowers and protects individuals. The second discussion paper [PDF] explains why and how accountability should be specifically incentivized, particularly by DPAs and law makers. It argues that given the many benefits of accountability for all stakeholders, DPAs and law makers should encourage and incentivize organizations to implement accountability. Privacy & Information Security Law Blog | see also: CIPL Hosts Special Executive Retreat with APPA Privacy Commissioners on Accountable AI

US – Florida Man Jailed for Failing to Unlock His Phone

What started as a routine traffic stop has quickly escalated into a civil rights case in a Florida courtroom after a man was put behind bars for failing to unlock his phone. William Montanez was given 180 days in jail by a judge after he was asked to unlock two separate phones seized from him by police. Montanez told the court that he couldn’t remember the passwords, so the judge found him in civil contempt and threw him in jail. According to an emergency writ filed by Montanez’s lawyer, he was pulled over by police on June 21 for not properly yielding while pulling out of a driveway. The officers making the stop asked to search his car, which he refused, so the police brought in a drug-sniffing dog. The police got a search warrant for the devices, claiming that they contain evidence of “Possession of Cannabis Less Than 20 grams” and “Possession of Drug Paraphernalia”—both of which Montanez already admitted to, which makes it unclear why the cops still want to search the phone to prove the charges. [Gizmodo coverage at: Fox 13 News, Miami Herald, WPLG Local 10 and Phone Arena]

US – $2 Million FTC Fine for Nonconsensual Posting of PI

A US Court granted the FTC and State of Nevada a permanent injunction against Emp Media Inc. et al for alleged violations of the FTC Act. Website operators are permanently banned from publicly disseminating individuals’ intimate images, name, employer and social media account information, and charging a fee for removal; verifiable express consent must be obtained (after provision of a separate, conspicuous notice), individuals must have the right to revoke consent at any time, and any third party hosting the company’s websites must ensure they are no longer accessible. FTC and State of Nevada v. Emp Media Inc. et al – Order Granting Default Judgment, Permanent Injunction and other Relief – US District Court for Nevada | Press Release

Privacy Enhancing Technologies (PETs)

WW – Privacy Pros Gaining Control of Technology Decision-Making Over IT

New TrustArc research examines how privacy technology is bought and deployed to address privacy and data protection challenges. To understand the different types of privacy and security technologies being used, and by whom, more than 300 privacy professionals in the U.S., EU, UK and Canada were surveyed. The findings show that privacy management technology usage is on the rise across all regions and that privacy teams have significant influence on purchasing decisions for eight of the ten technology categories surveyed. Key findings include: A) privacy tech adoption is approaching the tipping point; B) data mapping, assessment management, and data discovery are among the fastest growing solutions; and C) privacy has a strong influence on purchase decisions across most product categories. [Help Net Security]


WW – Advocates Push for More User Control Over IoT Devices

The IoT Privacy Forum, a think tank, discusses governance and strategies regarding the Internet of Things. The Forum advocates for data minimization, built-in “do not collect” switches (e.g., mute buttons and software toggles), wake words and manual activation for data collection, and mechanisms to make it easy for users to delete their data or revoke consent; only the user should decide if IoT data should be published on social media or indexed by search engines. [Clearly Opaque – Privacy Risks of the Internet of Things – IoT Privacy Forum]

WW – Digital Security Threats from New and Unexpected Sources

Symantec issued volume 23 of its internet security threat report, providing information on 2017 trends in targeted attacks, email spam, ransomware and mobile threats. The report identifies threats including attacks against IoT devices (using commonly used default login names such as admin, guest and supervisor), attacks on mobile devices (using malware in apps related to photography and music), and attacks on supply chain software (by hijacking network traffic and compromising software suppliers directly). [Internet Security Threat Report Volume 23]

US – FTC Asked To Investigate Smart TVs

US Senators Blumenthal and Markey have asked the FTC to investigate privacy policies and practices of smart TV manufacturers. The smart TV manufacturers allegedly collect sensitive information and use it for tailoring advertisements on the basis of viewed and accessed content (e.g., applications, video games and cable shows), without obtaining express consent or notifying the user about such collection or tracking activities. Letter to FTC regarding smart TVs collecting personal data of viewers – Senator Markey and Blumenthal, U.S. Senate | Press Release 


US – Final Report on U.S. Government Policies and Public-Private Frameworks to Address Botnets, Security and Resiliency Challenges Released

The U.S. Department of Commerce and the Department of Homeland Security, through the National Telecommunications and Information Administration (NTIA), have released the final report on enhancing the resilience of the Internet and communications ecosystem against botnets and automated distributed threats [see “Enhancing the Resilience of the Internet and Communications Ecosystem Against Botnets and other Automated, Distributed Threats“]. This report continues the work initiated under Presidential Executive Order 13800, titled “Strengthening the Cyber Security of Federal Networks and Critical Infrastructure“. The report aims to build upon consensus on various governmental and private initiatives and new approaches for the government either to adopt or to encourage the development of a more resilient ecosystem that can more effectively defend against threats and attacks by botnets. These attacks are expected to gain in both scale and complexity over time as vectors for attack (both end user devices and Internet of Things endpoints) proliferate. The final report does not differentiate between threats from nation states, cybercriminals or other actors; it observes that developing better cooperation and countermeasures within the ecosystem will generally be effective against all threats regardless of the threat origin. The final report was delayed from its originally scheduled May 11 deadline; it was released in late May 2018, along with a number of other reports relating to cybersecurity and linked to the Presidential Executive Order. A full list and links to the released reports is available [DBR on Data].

WW – Malware Attacks Have Doubled In First Half of 2018

The “malware boom” of 2017 has shown no signs of stopping through the first half of 2018, according to a new report from security company SonicWall. The company’s Capture Labs threat researchers recorded 5.99 billion malware attacks during the first two quarters of the year. At the same point in 2017 SonicWall logged 2.97 billion malware attacks [“2018 SonicWall Cyber Threat Report“]. On a month-to-month basis in 2018, malware volume remained consistent in the first quarter before dropping to less than 1 billion per month across April, May, and June. These totals were still more than double that of 2017, the report said. The study shows ransomware attacks surging in first six months of 2018, with 181.5 million ransomware attacks identified for the period. That marks a 229 percent increase over this same timeframe of 2017. [Information Management, SonicWall Blog, Tarsus Today]

US – US CERT Issues Best Practice to Reduce Phishing Risks

US-CERT advises users to verify unsolicited calls, visits or emails from individuals asking about employees or internal company information (but not to use contact information provided by the individual), to check website URLs for spelling variations or domain changes, and not to provide personal, financial or company information in emails (unless assured of the person’s authority to have the information). [Security Tip ST04-014 – Avoiding Social Engineering and Phishing Attacks – US-CERT]
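The advice to check URLs for spelling variations can be partially automated. A minimal sketch, assuming a hypothetical trusted-domain list (`example.com`, `mybank.com` below are placeholders, not from US-CERT’s tip), flags domains within a small edit distance of a trusted domain, a common typosquatting pattern:

```python
# Hedged sketch: flag lookalike domains close to, but not equal to,
# a trusted domain (one heuristic for "spelling variations" in URLs).

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                   # deletion
                           cur[j - 1] + 1,                # insertion
                           prev[j - 1] + (ca != cb)))     # substitution
        prev = cur
    return prev[-1]

# Hypothetical allow-list of domains the organization trusts.
TRUSTED = {"example.com", "mybank.com"}

def looks_like_typosquat(domain: str, max_dist: int = 2) -> bool:
    """True if domain is suspiciously close to (but not in) the trusted set."""
    return any(0 < edit_distance(domain, t) <= max_dist for t in TRUSTED)

print(looks_like_typosquat("examp1e.com"))  # True: one character swapped
print(looks_like_typosquat("example.com"))  # False: exact trusted match
```

Real mail and web filters use richer signals (homoglyphs, registration age, reputation feeds); edit distance alone is only a first-pass heuristic.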

US – NIST Releases Security Assessment Requirements

NIST issued a publication for assessing security requirements for controlled unclassified information. Recommended controls include those under security categories such as access control, awareness/training, audit/accountability, configuration management, identification/authentication, incident response, maintenance (of connections/systems), media protection, personnel security, physical protection (escort/monitor visitors), risk and security assessments, system/communications protection and system/information integrity. [NIST – Assessing Security Requirements for Controlled Unclassified Information – NIST Special Publication 800-171A | Press Release]

EU – EU Commission Amends Draft ICT Certification Framework

The EU Commission amended its proposal for a regulation concerning cybersecurity and ENISA, the European Union Agency for Network and Information Security. Certificates issued under the framework will be valid in all EU countries, making it easier for companies to carry out their business across borders, certification will be voluntary (unless otherwise specified by EU or Member State law), and companies seeking certification will be evaluated against three assurance levels (basic, substantial, high). EC – Proposed Regulation on ENISA and ICT Cybersecurity Certification | Press Release

WW – Companies Overwhelmed by Data Collection: Survey

Gemalto’s fifth annual Data Security Confidence Index surveyed IT decision makers in organizations worldwide about data security mechanisms in place for compliance with GDPR. The study presents the status of organizations in protecting data collected from users, including that only 35% effectively analyze collected data and 65% are unable to analyze or categorize stored user data, while the collection of user data from sources such as apps and connected devices is only expected to increase in the future. Gemalto – Data Security Confidence Index

WW – 45% of US Companies Fell Victim to Phishing in 2017

Wombat Security Technologies, a security technology company, issued its 2018 report on phishing. The results are based on reported attacks from information security professionals; and analysis of simulated phishing attacks in more than 16 industries. The company reports that corporate phishing templates are the most frequently used by attackers (44%), with the most successful being corporate email improvements (89%); to combat phishing attacks, organizations train end users on how to identify and respond to suspicious email, and use email/spam filters, advanced malware analysis, and URL wrapping. State of the Phish 2018 – Wombat Security Technologies


CA – Controversial Gunshot Detector Technology Approved by Toronto Police

In an effort to curb gun violence, the Toronto Police Services Board (TPSB) has requested the city fund a motion to double the amount of public CCTV cameras and introduce a controversial audio recording technology called “ShotSpotter“ [wiki] that provides police with real-time shooting locations. The system is already in use in more than 90 cities in the U.S., including Louisville, Cincinnati and Chicago. The system uses microphones to detect and locate gunfire, and automatically informs police. According to its privacy policy, ShotSpotter said its devices only record and provide police with audio beginning two seconds before a gunshot has been fired and ending four seconds after. The effectiveness of the technology, however, is up for debate. The idea of using the ShotSpotter technology and increasing surveillance cameras raises questions about privacy. ShotSpotter’s privacy policy says it “does not have the ability to listen to indoor conversations” and does not have the ability to “overhear normal speech or conversations on public streets.” The company said there have been “three extremely rare ‘edge cases’” (out of 3 million incidents detected in the past 10 years) in which a human voice was overheard. City council will meet Monday and make the decision as to whether to approve the new measures. [Global News, The Toronto Star]

UK – GCHQ Spy Agency Given Illegal Access to Citizens’ Data

The British government broke the law by allowing spy agencies to amass data on UK citizens without proper oversight from the Foreign Office, the Investigatory Powers Tribunal has ruled [see Judgment here]. GCHQ, the UK’s electronic surveillance agency, was given vastly increased powers to obtain and analyse citizens’ data after the 9/11 terrorist attacks in 2001, on the condition that it agreed to strict oversight from the foreign secretary. The Foreign Office on several occasions gave GCHQ an effective “carte blanche” to demand data from telecoms and internet companies, which could include visited websites, location information and email contacts. Monday’s ruling is the second from the IPT in a case brought by Privacy International [see PI’s July 23/18 PR here], the campaign group, on the harvesting and sharing of citizens’ data by British spy agencies. The UK government is currently seeking to convince the EU that it should be considered an “adequate” country for data transfer purposes after it leaves the bloc next March. On Monday, the tribunal updated its initial ruling [October 2016 – see here & PI’s PR here] to say that laws protecting UK citizens’ data had not been followed in full until October 2016, not November 2015 as it had previously concluded. A government spokesperson, speaking on behalf of the Foreign Office and GCHQ, said the unlawful requests for citizens’ data referred to in the tribunal’s judgment on Monday had since been replaced and were no longer in force. [Financial Times, The Register, Silicon UK, BBC News, Bloomberg, Computing and The Times]

EU – Statewatch Launches New Observatory of Centralised Big Brother Database

This Observatory covers the so-called “interoperability” of EU JHA databases, which in reality will create a centralised EU state database covering all existing and future JHA databases – through combining biometrics and personal data in a single search. The intention is to bring together in one place the biometrics of millions – non-EU citizens now and EU citizens later – directly linked to the Common Identity Repository with personal details. The European Data Protection Supervisor says that the measure would mark a “point of no return” with all the inherent dangers that over time function creep will build up a highly detailed personal file attached to biometrics. For example, when the EU-PNR (Passenger Name Record) comes into effect, it will contain details of all travellers in and out of the EU and inside the EU as well. [Statewatch (London)]

WW – Does Your Phone Secretly Listen To You, Two-Year Study Says No

It’s the smartphone conspiracy theory that just won’t go away: Many, many people are convinced that their phones are listening to their conversations to target them with ads. Vice recently fueled the paranoia with an article that declared “Your phone is listening and it’s not paranoia,“ a conclusion the author reached based on a 5-day experiment where he talked about “going back to uni” and “needing cheap shirts” in front of his phone and then saw ads for shirts and university classes on Facebook. Some computer science academics at Northeastern University had heard enough people talking about this technological myth that they decided to do a rigorous study to tackle it. They ran an experiment involving more than 17,000 of the most popular apps on Android to find out whether any of them were secretly using the phone’s mic to capture audio. The apps included those belonging to Facebook, as well as over 8,000 apps that send information to Facebook. They found no evidence of an app unexpectedly activating the microphone or sending audio out when not prompted to do so. On the other hand, the strange practice they started to see was that screenshots and video recordings of what people were doing in apps were being sent to third party domains. In other words, until smartphone makers notify you when your screen is being recorded or give you the power to turn that ability off, you have a new thing to be paranoid about. The researchers will be presenting their work at the Privacy Enhancing Technology Symposium Conference in Barcelona next month. [Gizmodo, Business Insider and BGR]

US Government Programs

CA – Canadian Pot Investors Are Being Banned From Entering the U.S.

Sam Znaimer is a Vancouver, Canada-based venture capitalist who has been investing in everything from tech to telecommunications for more than 30 years. Recently, he put more than $100,000 into legal American cannabis companies. In May, when he attempted to drive across the border, he was flagged for a secondary inspection and questioned for four hours. “To my shock and horror, I was told that I was deemed to be inadmissible to the United States because I was assisting and abetting in the illicit trafficking of drugs,” Znaimer said. “They never asked whether I had consumed marijuana, the only thing that they’re interested in is that I’ve been an investor in U.S.-based cannabis companies.” Marijuana in some form is legal in 30 states and Washington D.C., but it’s still outlawed by the U.S. federal government. American immigration attorney Len Saunders said he’s seen at least a dozen cases like Znaimer’s at the Blaine land crossing as well as airports in Vancouver and Edmonton over the past few months. In the prior 15 years that he’s practiced law on the border, he’d never seen one. CBS News See also: How the tech behind bitcoin could safeguard marijuana sales data

CA – OPC Warns Canadians to Keep Data Secure When Crossing the Border

The OPC is warning citizens to be aware that their digital devices can be searched — and civil liberties advocates say every precaution must be taken. The commissioner’s updated guidelines on privacy at airports and borders advise that officers on both sides of the border can search your devices and ask for passwords. The guidelines include new advice on searches conducted at “preclearance” sites, where U.S. border officials can do searches on Canadian ground, part of an act passed in late 2017. They come following the release of a new U.S. Customs and Border Protection directive [see PR here] on searches of electronic devices, which clarifies previous search rules. It also includes updates on electronic searches for people going back through Canadian customs. Meghan McDermott, staff counsel at the BC Civil Liberties Association, said that due to the new powers of customs officers at preclearance sites and more detailed abilities for U.S. border patrols, she recommends taking every precaution to ensure your data is secure and protected should a search take place: “the best guarantee is to not even bring your device at all, but if you do bring a device, you can use a burner phone [see here] or substitute. One of the other things people can do is to delete all the apps and documents and texts as well.” [Toronto Star]

US Legislation

US – California Enacts Comprehensive Privacy Rules

Effective January 1, 2020, organizations must comply with individual requests to provide categories of personal information collected and shared, stop selling personal information (services cannot be refused and prices cannot be increased as a result), delete personal information, and provide their information in a portable format; the Attorney General can impose civil penalties for violations and there is a private right of action for breaches resulting from reckless behavior. [AB 375 – The California Consumer Privacy Act of 2018 – State of California]

US – Tech Companies Cool Toward California Consumer Privacy Act

On the heels of the EU’s General Data Protection Regulation, California lawmakers passed a tough new privacy law, the California Consumer Privacy Act, which is designed to give consumers more control over their personal information. Under the act, which goes into effect Jan. 1, 2020, consumers will be able to request details on how their personally identifiable information (PII) is used and how it is collected. The question now for California—and those state governments watching—is whether companies will embrace the California Consumer Privacy Act or will they find loopholes to skirt the law. California’s tech companies, usually out on the front line of innovation and new ideas, are soundly against the state’s new privacy law and are expected to fight for changes before the law goes into effect. The bill was pushed through too quickly, they say, and it is too vague. Yet, supporters of the bill point out, these same companies already have groundwork in place because of GDPR. Many large companies still have a long way to go in finishing the technical aspects of GDPR, and now California companies need to be ready for CCPA a year and a half later. [Security Boulevard] See also: California Consumer Privacy Act: What you need to know now | Key Takeaways from the California Consumer Privacy Act of 2018 | Out of the pot and into the fire? What the heck happened in California?! | California’s privacy law a commendable step toward national standard | New California Consumer Privacy Act increases the risk of additional data breach class actions

Workplace Privacy

US – Judge: No ‘Risk of Harm’ From Fingerprint Scan Time Clocks

A federal judge [Manish S. Shah, U.S. District Court for the Northern District of Illinois] has kicked back to Cook County court [Illinois] a class action lawsuit accusing manufacturer Rexnord of violating an Illinois state privacy law by requiring employees to scan their fingerprints when using employee punch clocks to track work hours. The underlying complaint was brought by former Rexnord Industries employee Salvador Aguilar, who said the company violated the Illinois Biometric Information Privacy Act [see here] through its use of a fingerprint-based time clock system [see Rexnord policy here]. According to Aguilar, he never signed a written release allowing the company to collect or store his fingerprint. Further, he said the company never fully explained why it was keeping his fingerprint data and how long it would retain the information. Although Aguilar and his attorneys originally filed his complaint in Cook County, Rexnord removed the suit to federal court. The company then moved to have it dismissed for failure to state a claim. However, in an opinion issued July 3, U.S. District Judge Manish Shah remanded [see here] the matter because he said the federal court lacked jurisdiction in the case. [Cook County Record]



10-30 June 2018


US – Police Use of Facial Recognition With License Databases Spur Privacy Concerns

Thirty-one U.S. states now allow law-enforcement officials to access license photos to help identify potential suspects. Roughly one in every two American adults—117 million people—are in the facial-recognition networks used by law enforcement. Police in Maryland used a cutting-edge facial recognition program last week to track down a robbery suspect, marking one of the first such instances of the tactic to be made public. In the process of identifying a possible suspect, investigators said they fed an Instagram photo into the state’s vast facial recognition system, which quickly spit out the driver’s license photo of an individual who was then arrested. This digital-age crime-solving technique is at the center of a debate between privacy advocates and law-enforcement officials: Should police be able to use facial recognition software to search troves of driver’s license photos, many of which are images of people who have never been convicted of a crime? [Wall Street Journal]

US – 150,000 People Tell Amazon: Stop Selling Facial Recognition Tech to Police

On Monday afternoon, civil rights, religious, and community organizations [took] their demand that Amazon stop providing face surveillance technology to governments, including police departments, to the company’s headquarters in Seattle. The groups delivered over 150,000 petition signatures, a coalition letter signed by nearly 70 organizations representing communities nationwide, and a letter from Amazon shareholders. Monday’s action is a part of a nationwide campaign to stop the spread of face surveillance technology in government before it is unleashed in towns, cities, and states across the country. Documents obtained by the ACLU reveal Amazon is aggressively marketing its Rekognition face surveillance tool to law enforcement in the United States, and even helping agencies deploy it. Among other capabilities, the technology provides governments the ability to rewind backwards in time to see where we’ve been, who we’ve been with, and what we’ve been doing. [ACLU and at: Mashable, CNN Tech, Planet Biometrics and GeekWire]

US – School Facial Recognition System Sparks Privacy Concerns

New York’s Lockport City School District has committed to purchase the facial and object recognition software from Ontario-based firm SN Technologies, as part of a $3.8m security update using a grant provided by the 2014 Smart School Bond Act (SSBA). The district wants to be a model of security, but it has privacy and civil rights advocates up in arms. In a letter to the New York State Education Department (NYSED), the New York Civil Liberties Union protested the purchase, disputing the accuracy of facial recognition systems and voicing privacy concerns [NYCLU Blog post here]. Student images are part of students’ biometric records and classified as personally identifiable information under New York state law, said the NYCLU. It added that because student images would be stored for 60 days in the SN Technologies system, schools could use the images to analyse students’ movements and interactions. Lockport won’t be the first school district in the US to use facial recognition technology. Arkansas’ Magnolia School District is also spending $287,000 on similar systems, according to reports. [NakedSecurity and at: Security Info Watch and Lockport Union Sun & Journal]

WW – Biometric Driver ID Market Expected to Grow to US$ 25 Billion by 2022

Biometric driver identification systems are being used to prevent unauthorized access to vehicles. The automobile industry is increasingly adopting biometric identification systems to ensure the security of the car. Manufacturers are offering various biometric technologies for authentication, such as facial and fingerprint recognition, voice analysis, iris-based in-car biometrics, and hand geometry. Biometric identification systems are also being developed with advanced features such as behavior-based algorithms to ensure performance and safety. This Research Report Insights report discusses key prospects for growth of the global biometric driver identification system market during the forecast period, 2017-2022, offering pragmatic insights to lead market players towards devising & implementing informed strategies. True Industry News

US – FaceFirst Launches Biometric Shoplifter Alert System for Retailers

L.A.-based FaceFirst has launched a new facial recognition solution for security surveillance aimed at the retail market. Dubbed “Sentinel-IQ”, the platform is designed to identify known shoplifters and criminals, and to send an alert to administrators the moment such individuals are detected by the surveillance system. And it’s available in multiple deployment configurations including a SaaS-based setup that allows it to run on almost any HD camera with a compatible processor. Sentinel-IQ’s ability to identify criminals can only be as effective as the databases upon which it relies, and FaceFirst offers Watchlist as a Service solutions for this purpose. And the company has a track record, with its facial recognition surveillance technology having previously seen some heavy duty deployments including an airport security implementation in Panama and a CCTV deployment for police in the Indian city of Bengaluru. Now, with facial recognition becoming ever more mainstream, FaceFirst could find more interest than ever in this technology from the retail sector at which Sentinel-IQ is aimed. [Find Biometrics]


CA – Federal Bill Expands OPC Enforcement Powers

Bill C-413, amending PIPEDA in relation to the Office of the Privacy Commissioner of Canada’s enforcement abilities, had its first reading in the House of Commons. If passed, the bill would allow the OPC to order organizations that contravened PIPEDA to take any reasonable action to ensure compliance, and to decide not to conduct investigations where not necessary or reasonably practicable; fines of up to $30 million could be imposed for knowing or reckless violations, considering the nature and gravity of the violation, the organization’s resources and size, the number of affected individuals, and the mitigation measures taken. [Bill C-413 – An Act to Amend PIPEDA (Compliance with Obligations) – Parliament of Canada Bill Status | Bill Text]

CA – Federal Government Launches Consultations on National Data Strategy

The Trudeau government will take fresh steps towards equipping the country for the rapidly advancing era of big data. The Minister of Innovation, Science, and Economic Development Navdeep Bains announced that the federal government would be launching a series of consultations regarding a national data strategy [see PR here]. According to the Ministry of Innovation, Science, and Economic Development, the consultations will take the form of several roundtable discussions [see here] that will be held over the summer in cities across Canada, with businesses, educational institutions, and private citizens invited to participate. Whether the target is businesses or government, however, not every privacy expert believes Canada’s current data standards are an issue. Halifax-based internet, technology, and privacy lawyer David Fraser called the data gathering policies employed by tech giants such as Google and Facebook nothing more than “simple reality”: “The reason Facebook has information on 28 million Canadians is because 28 million Canadians choose to use Facebook.” [ITWorld Canada; see also: MobileSyrup and iPolitics | The Globe and Mail | National Post]

CA – Apply Privacy Laws to Canadian Political Parties, Committee Recommends

The House of Commons’ ethics committee unanimously recommended sweeping changes to Canada’s privacy regime, including bringing in strict data protection rules similar to those recently adopted by the European Union. The committee’s recommendations [see report notice here & 56 pg PDF report here] can be grouped into three broad categories. First, they suggest applying Canada’s privacy laws to federal political parties, as well as increasing transparency around how political actors use big data to target voters or advertising. Second, the committee restated earlier recommendations to increase the power of the federal privacy commissioner, giving the office enforcement powers such as levying fines and seizing companies’ documents in the course of an investigation. Finally, and perhaps most consequentially, the committee recommended the Liberals urgently move to mirror the strict privacy framework recently adopted by the European Union, the General Data Protection Regulation (GDPR). Taken together, the measures would represent a significant shift in Canada’s privacy regime. [Toronto Star and at: CBC News, iPolitics, The Canadian Press (via NP) and The Globe and Mail]

CA – Poll: 72% Majority Want Stronger Privacy Rules for Political Parties

According to an Innovative Research Group poll, people in Canada overwhelmingly support greater privacy standards for political parties, which are currently not subject to any federal privacy legislation. Only 3% of those polled support the status quo policy of fewer privacy requirements for political parties. The law that governs the privacy practices of businesses in Canada (PIPEDA) [see here, OPC info here & wiki here] does not currently apply to political parties. Bill C-76 [the Elections Modernization Act — see PR here & Text here], the government’s current proposal to amend our elections laws, only proposes one change to this: requiring that parties publish a privacy policy. C-76 does not put any limitations or requirements on how individuals’ data is handled once collected. Key findings from the polling include: 1) A large majority – 72% – supported changing the law so that political parties follow the same privacy rules as private companies; 2) Only 3% of those polled supported the status quo policy of fewer restrictions for political parties; 3) Extending PIPEDA to political parties has broad support across partisans from all parties; and 4) 65% of respondents are concerned about the possibility of private companies collecting personal information about Canadians and using it in an attempt to influence the next election – of those that followed the issue closely, 80% were concerned. [Open Media and also Elections Canada ‘blind’ to how political parties could use – or abuse – personal information and HuffPost Canada and The Globe and Mail]

CA – OPC Issues New PIPEDA Guidance on Inappropriate Data Practices

The OPC released a critical interpretation document [PR here] intended to guide how companies subject to PIPEDA will be allowed to collect, use and disclose personal information, as viewed from the perspective of the reasonable person. The guidance on inappropriate data practices is intended to offer interpretation on s. 5(3) of PIPEDA, which requires that organizations may collect, use or disclose personal information only for purposes that a “reasonable person would consider appropriate in the circumstances.” The OPC will begin to apply the guideline on July 1, 2018. Recognizing that any evaluation of an organization’s information practices under this subsection will necessarily require both contextual analysis and a review of the particular facts, the OPC has nonetheless established six “no-go zones” of behaviour that are completely offside PIPEDA and are essentially prohibited. The current no-go zones described in the guideline are as follows: 1) Collection, use or disclosure that is otherwise unlawful; 2) Profiling or categorization that leads to unfair, unethical or discriminatory treatment contrary to human rights law; 3) Collection, use or disclosure for purposes that are known or likely to cause significant harm to the individual; 4) Publishing personal information with the intended purpose of charging individuals for its removal; 5) Requiring passwords to social media accounts for the purposes of employee screening; and 6) Surveillance by an organization through audio or video functionality of the individual’s own device. [Canadian Lawyer Magazine]

CA – Canada’s Rape-Shield Law Can’t Be Used to Prevent an Accused from Mounting Defence, Ont. Court Rules

Canada’s so-called rape-shield law, which aims to protect sexual-assault complainants from unfair and irrelevant scrutiny of their sex lives, cannot be used to prevent an accused from mounting a reasonable defence, Ontario’s top court ruled [see R. v. R.V. here]. The court acknowledged the critical importance of protecting complainants from questioning about their sexual activity when that activity does not form the subject matter of the charge. “Notwithstanding these powerful considerations, there are times when such questioning must be permitted,” the Appeal Court said. “This is one of those cases where a proper balancing requires that such questioning be permitted.” In October 2016, Judge Robert Gee convicted R.V. after upholding the earlier ruling as binding on him. Both those decisions were in error, the Appeal Court said. The higher court said the pre-trial judge was wrong in finding that R.V.’s attempt to question the teen amounted to a “fishing expedition” despite knowing exactly what the cross-examination would have entailed. [CBC]

CA – OIPC ON Annual Report Celebrates 30 Years

2017 was a milestone year for the OIPC Ontario, which proudly celebrated 30 years of service on behalf of all Ontarians. The OIPC released its 2017 Annual Report, Thirty Years of Access and Privacy Service [see PR here], in which the OIPC calls for a number of legislative changes to enhance both access to information and protection of privacy in Ontario. Among the recommendations is a call to expand the IPC’s oversight to include Ontario’s political parties. Political parties collect and use personal information to target individuals in specific and unique ways. These increasingly sophisticated big data practices raise new privacy and ethical concerns, and the need for greater transparency is evident. Subjecting Ontario’s political parties to privacy regulation and oversight will help to address the privacy, ethical and security risks associated with how political parties collect and use personal information. The OIPC also tabled the following recommendations in this year’s report: 1) Enact legislation that provides a strong, government-wide big data framework; 2) Ensure smart city initiatives are privacy protective; 3) Implement an MOU for police services that adopt the use of the Philadelphia Model; and 4) Amend Ontario’s access laws to affirm the IPC’s power to compel the production of records. [IPC and at: The Canadian Press (via CTV)]

CA – OIPC SK Annual Report Emphasizes Privacy Breach Risk Reduction

“Reducing the Risk” is the title of the OIPC SK Commissioner’s 2017-2018 annual report [see PR here]. In the report, Ron Kruzeniski [IPC] reflects on the progress and accomplishments of his team during the past year and his hopes for the upcoming year, and provides recommendations to reduce the risk of future privacy breaches. Recommendations for organizations to reduce risk were broken down into four sections [Prevention (p.14), Specific Controls (p.15), Policies (p.16) & Monitoring and Taking Action (p.18)] and include things like mandatory annual privacy training for all staff, and for staff to sign confidentiality agreements at least once a year. The report urges people to use complex passwords, not to let co-workers use their computers if it means they will have access to information they shouldn’t, and to use email encryption. The office has experienced an increase in the number of reviews, investigations and consultations, resulting in more files being opened [from 182 in 2014-2015 to 345 in 2017-2018]. Kruzeniski also repeated the office’s recommendations from last year’s report [see here] to make amendments to The Health Information Protection Act, which the Ministry of Health has yet to implement. [Leader Post and at: CBC News]

CA – CSIS Risks Privacy of Innocent People Despite Scathing Court Ruling

In a report made public, the Security Intelligence Review Committee, a federal spy watchdog, said the Canadian Security Intelligence Service has failed to ensure it doesn’t illegally hold on to sensitive information about innocent people. It also expresses concern that CSIS lacks the ability to make the necessary changes, two years after a scathing court ruling about its practices [2017-2018 SIRC Annual Report – see PR here]. An October 2016 Federal Court decision [see redacted Ruling here & Summary here] said CSIS broke the law by keeping and analyzing electronic data about people who were not actually under investigation. The report noted that CSIS has since destroyed most of the metadata in question. But it found the spy service was “still dealing with the implications” of the court decision when it comes to handling information about third parties. In a statement [see here], Public Safety Minister Ralph Goodale said he takes the matter “very seriously,” and a full review of such cases is underway. [Penticton Herald and at: The Globe and Mail and CBC News]

CA – OPC Funding Research on Public Wi-Fi ‘Privacy Leakage’, Smart Cities

The office of Canada’s privacy commissioner has announced it will fund research into privacy risks related to public Wi-Fi hotspots through its 2018 to 2019 contributions program. The project will assess privacy policies, measure personal information leakage to hotspot operators, and identify issues such as potential attack opportunities for malicious users. Research and analysis from the report will culminate in a public hotspot privacy report card and presentation of recommendations. Eight other projects will receive funding, as well. Among them is a project that examines the potential privacy impact for children when parents share their personal information on social networks. There are also studies on the privacy implications of smart cities in Canada, as well as children’s smart toys. Funding for the projects ranges from $21,155 to $74,110 CAD. betakit

CA – OIPC AB Issues Guidelines in Light of Post Election Paper-Shredding

AB OIPC wants to see more in-depth training for government workers who deal with freedom of information and privacy requests. The office also wants the government to close a loophole that allows some public bodies to avoid being subject to the Alberta government’s records management program. The recommendations are contained in two new reports, released Tuesday. The first report [20 pg PDF], written by senior information and privacy manager Chris Stinner, examined the government’s FOIP request tracking system. The office’s second investigation [18 pg PDF – by senior information and privacy manager Elaine LeBuke] centred on two access to information requests made to the Balancing Pool in 2016 and 2017. [Edmonton Journal and at: Alberta OIPC]

CA – Liberal Backbencher Tables Bill to Give Privacy Commissioner More Power

On June 20, Liberal backbencher Nathaniel Erskine-Smith introduced a bill [Bill C-413 – see here & Text here] to give “new powers” to Canada’s privacy commissioner, allowing the office to hold social media companies and others to account for breaking the law. [The bill aims to] allow the commissioner to make orders, impose fines, conduct audits and carry out investigations into suspected breaches of the Personal Information Protection and Electronic Documents Act. Under his proposed legislation, when companies are found in violation of the law and aren’t taking steps to comply with it thereafter, hefty financial penalties would ensue. Fines could range from $15 million to $30 million, depending on the offence. Unlike the EU’s GDPR, which encompasses acts of negligence, Erskine-Smith’s bill only captures “intentional conduct” — groups that have acted recklessly towards the law. Some provincial privacy commissioners technically have “more power” than their federal counterpart. For instance, B.C.’s representative Michael McEvoy has the authority to make orders and issue fines of up to $100,000. To that end, a company operating in B.C. would be subject to stronger privacy regulations than a company operating in Ontario. Erskine-Smith’s bill was adopted following question period and will be addressed when the House returns in the fall. [iPolitics]

CA – Complainants in Intimate Images Cases Don’t Get Automatic Publication Ban

In a recent Nova Scotia Supreme Court advisory, there is a stipulation stating adults will not be able to count on a publication ban when they come forward in cases of cyberbullying and the non-consensual sharing of intimate images. On June 22, the Supreme Court issued advice for lawyers on how they should handle the relatively new Intimate Images and Cyber-Protection Act. Adults will be able to request a publication ban on their name, but will have to go through an application process. In 2017, the Intimate Images and Cyber-Protection Act replaced the Cyber-Safety Act, which was deemed unconstitutional. Though it was released last year, the new law is not in effect. In the meantime, the Supreme Court has released the advisory to instruct lawyers as to how to implement the law. The stipulation about adults having their names used as the default position, while minors remain unnamed, is bringing up concerns. [CBC News]


CA – Common Sense Finds Social Media Privacy Matters To Teens

New research from nonprofit org Common Sense Media shows that nine out of 10 teens think it’s important that sites clearly label what data they collect and how it will be used. The research follows recent blunders by big social media companies that have unnerved young users and their parents, including the scandal surrounding political consulting firm Cambridge Analytica harvesting raw data from up to 87 million Facebook profiles unbeknownst to the users. The majority, 69% of teens and 77% of parents, responded that it is “extremely important” for sites to ask permission before selling or sharing their personal information. The vast majority, 97% of parents and 93% of teens, also agree that it is at the very least moderately important. Very few people surveyed think that sites do a good job of explaining what they do with users’ information. Only 36% of teenagers and 25% of parents agree that social networking sites and apps actually do a good job of explaining what they do with users’ data. On top of that, most parents and teens are concerned about ad targeting by social media sites, with 82% of parents and 68% of teens saying they are at least “moderately” worried that those sites already use their data to allow advertisers to target them with ads. Many of those surveyed have already taken action, with 79% of teens saying they have changed their privacy settings on a social networking site to limit what they share with others. Parents are also concerned, with 86% changing their own privacy settings. Despite these concerns, 30% of parents and 57% of teens reported never reading the terms of service, with 66% of parents and 65% of teens saying it’s because they are not interested in what those privacy terms have to say. Parents of teens are far more concerned about bots on social media, with 85% saying that they are moderately to extremely concerned about the fake accounts’ influence online.
Teens are less concerned, with 72% reporting they are moderately to extremely concerned. This new data also comes on the heels of GDPR rolling out in Europe on May 25, only a few days after the survey was completed. One of the changes with the new EU data privacy and security legislation is that countries can choose at what age someone is considered a child online. In Italy, Germany and Ireland, for example, the cut-off ranges from ages 13 to 16. A number of social apps have already responded to the changes, including WhatsApp which changed its required age of use to 16 all across Europe. [kidscreen]

CA – Canadian Businesses Not Guarding Private Information Carefully: Survey

The results of a government-commissioned survey reveal that a staggering 94% of Canadian companies now collect basic contact information like names, phone numbers and email addresses from their customers. Opinions, evaluations, and comments are collected by 29% of businesses, financial information like credit card numbers by 25%, and identity documents (even social insurance numbers) are collected by 21%. 15% tracked “purchasing habits.” Once they have it in hand, 73% of businesses store this information on-site in electronic form, which the survey notes is “a shift from previous years” when storing information on paper was the most popular method. The research was conducted late last fall by Phoenix Strategic Perspectives, and involved 1,014 Canadian businesses, the vast majority of which were small or medium-sized. The survey was commissioned by the Office of the Privacy Commissioner of Canada. There was a mixture of good and bad news when it came to the security of customers’ personal data. [Global News]


CA – Government of Canada Mandates HTTPS, HSTS

Effective June 27, 2018, all Canadian government websites should implement HTTPS for web connections. The government of Canada has issued an Information Technology Policy Implementation Notice (ITPIN) directing all “departments” to implement Transport Layer Security and migrate to HTTPS. The Notice is effective as of June 27th. All departments, agencies and organizations in the Canadian government that are not subject to the Policy on Management of Information Technology are advised to abide by the ITPIN. Canadian departments are to implement safeguards that ensure their services are only offered via a secure connection. [Hashed Out]
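To illustrate one of the safeguards such a mandate typically involves, the sketch below checks an HTTP Strict Transport Security (HSTS) response header. This is a hypothetical compliance check, not the government's actual tooling; the one-year `min_age` threshold is an assumption based on common HSTS practice.

```python
def parse_hsts(header_value):
    """Parse a Strict-Transport-Security header value into a directives dict."""
    directives = {}
    for part in header_value.split(";"):
        part = part.strip()
        if not part:
            continue
        name, _, value = part.partition("=")
        # Valueless directives (e.g. includeSubDomains) are stored as True.
        directives[name.strip().lower()] = value.strip('"') or True
    return directives

def hsts_is_acceptable(header_value, min_age=31536000):
    """Return True if the HSTS max-age is at least min_age (default: one year)."""
    max_age = parse_hsts(header_value).get("max-age")
    try:
        return int(max_age) >= min_age
    except (TypeError, ValueError):
        # Missing or malformed max-age fails the check.
        return False

# Example: a one-year policy covering subdomains passes; a short one fails.
print(hsts_is_acceptable("max-age=31536000; includeSubDomains"))  # True
print(hsts_is_acceptable("max-age=300"))                          # False
```

A fuller check would also confirm that requests to the plain-HTTP origin are redirected to HTTPS before the header is evaluated.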

EU Developments

EU – LIBE Wants Privacy Shield Axed by September If US Doesn’t Act

Yet more pressure on the precariously placed EU-US Privacy Shield [see here, here & wiki here]: The European Union parliament’s civil liberties committee [LIBE – here] has called for the data transfer arrangement to be suspended by September 1 unless the US comes into full compliance. Though the committee has no power to suspend the arrangement itself, it has amped up the political pressure on the EU’s executive body, the European Commission. In a vote late yesterday the LIBE committee agreed [see PR here] the mechanism as it is currently being applied does not provide adequate protection for EU citizens’ personal information. The LIBE committee says it wants US authorities to act upon privacy scandals such as the Facebook-Cambridge Analytica debacle without delay — and, if needed, remove companies that have misused personal data from the Privacy Shield list. MEPs also want EU authorities to investigate such cases and suspend or ban data transfers under the Privacy Shield where appropriate. The EU parliament as a whole is also due to vote on the committee’s text on Privacy Shield next month, which — if they back the LIBE position — would place further pressure on the EC to act. Though only a legal decision invalidating the arrangement can compel action. [TechCrunch and at: Out-Law (Pinsent Masons), ITPro, EURACTIV and The Register]

EU – Parliament Advocates Blockchain Ledger Technology

The EU Parliament issued an opinion on blockchain technology. Blockchains shift control over daily interactions with technology to users, provide transparency through their immutability, and permit decoupling of user identities from tracking the movement of goods; issues to consider include that, with enough effort, it can still be possible to connect transactions to particular parties, and that the ledger’s immutability may compromise a user’s right to be forgotten. [European Parliament – How Blockchain Technology Could Change Our Lives: Report | Press Release]
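The immutability the opinion weighs against the right to be forgotten can be illustrated with a toy hash chain (a simplified sketch, not any production ledger): each block records the hash of its predecessor, so altering or erasing an earlier entry breaks every later link.

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents, which include the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def build_chain(transactions):
    """Link each transaction to its predecessor via the predecessor's hash."""
    chain, prev = [], "0" * 64  # genesis predecessor is all zeros
    for tx in transactions:
        block = {"tx": tx, "prev": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def chain_is_valid(chain):
    """Recompute every link; any tampering invalidates all later links."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = build_chain(["a->b: 5", "b->c: 2"])
print(chain_is_valid(chain))   # True
chain[0]["tx"] = "a->b: 500"   # tamper with the earliest record
print(chain_is_valid(chain))   # False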

UK – ICO Guidance on Data Protection by Design and Default

The UK’s Information Commissioner’s Office issued guidance on data protection by design and default under the GDPR. Data protection by design and default should begin at the time of the determination of the means of processing, the time of processing, and the initial phase of any system, service, product or process; organisations should make data protection an essential part of the core functionality of processing systems and services, practice data minimisation, and provide individuals with tools to determine how their data is used and whether the organisation properly enforces its policies. [UK ICO – Data Protection by Design and Default]

UK – ICO Seeks Views on How Kid-Friendly Websites Should Be Designed

The UK Information Commissioner’s Office is crowdsourcing ideas for the code that will govern how websites and apps aimed at under-16s are designed [see Commissioner’s Blog post here]. The ICO, which must publish a statutory code on age-appropriate design as part of the Data Protection Act – has today acknowledged this fine balancing act as it called for opinions on the code [see here]. The ICO is seeking views [consultation closes September 19] on how websites and apps should be designed to take into account children’s rights and needs, from industry, online service providers, academics and children’s advocacy services. Separately, the ICO said it plans to run a direct consultation with children, parents and guardians – an effort to emphasise the importance it is putting on the opinions of those who are going to be affected by the code. [The Register]

UK – ICO Penalizes Failure to Protect Against Ransomware

The UK Information Commissioner’s Office issued a monetary penalty notice against the British and Foreign Bible Society for violations of the Data Protection Act. The Society failed to take preventative measures to ensure the security of the personal data of its supporters and protect its network from ransomware attacks, including by changing default credentials, restricting access rights, and using network segmentation; the unauthorized access to sensitive information could be used for fraudulent activities and identity theft. [ICO UK – Monetary Penalty Notice – The Bible Society]

Facts & Stats

CA – Canada Revenue Agency Logs 2,338 Privacy Breaches in 2 Years

The personal, confidential information of over 80,000 individual Canadians held by the Canada Revenue Agency may have been accessed without authorization over the last 21 months, according to government documents made public. But while the number of potential privacy breaches may be eye-popping, the CRA is downplaying the seriousness of most of them. Government documents tabled in the House of Commons outline privacy breaches across all government departments and agencies since mid-September 2016. The CRA has experienced the most privacy breaches, recording a total of 2,338 in the 21-month time span. There have been dozens of cases involving unauthorized access over the last 21 months, and 24 of them were considered serious enough to notify the Office of the Privacy Commissioner. [Global News and at: Narcity]

WW – Data Breaches Decline in 2018

According to Risk Based Security’s Q1 2018 Data Breach QuickView Report [see PR here, see 30 pg PDF here or download here], following year-over-year increases in the number of publicly reported data breaches, the first three months of 2018 saw a respectable decline. But while the numbers look good, they may reflect a change in criminal targeting and goals rather than an indication that cyber-criminals are waving white flags. According to the report, the number of breaches disclosed in the first three months of this year declined to 686, compared to 1,444 breaches reported in the same year-ago period. Still, the number of records exposed was high: more than 1.4 billion. It seems, for the period, a shift from targeting files for theft to mining cryptocurrencies could explain the turn of events. [Security Boulevard]


US – Free Credit Freezes Are Coming

Thanks to a new federal law [Economic Growth, Regulatory Relief, and Consumer Protection Act – signed by POTUS May 24 – see S.2155 here & wiki here], soon you can get free credit freezes and year-long fraud alerts. When the law takes effect in September, Equifax, Experian and TransUnion must each set up a webpage for requesting fraud alerts and credit freezes. The FTC will also post links to those webpages on IdentityTheft.gov. And if you’re in the military, there’s more. Within a year, credit reporting agencies must offer free electronic credit monitoring to all active duty military. Here’s what to look forward to when the law takes effect on September 21st [FTC and at: Cuna.org and All Things Finreg]

CA – Class-Action Lawsuits Filed Against Bank of Montreal, CIBC’s Simplii

Law firms Siskinds LLP and JSS Barristers say [see PR here] they have filed in the Ontario Superior Court of Justice proposed class-action lawsuits against Bank of Montreal and CIBC’s direct banking division Simplii Financial over recently disclosed cybersecurity breaches impacting up to 90,000 customers. They are alleging the institutions failed to establish robust security measures to protect clients’ sensitive information. Simplii and BMO warned in May that “fraudsters” may have accessed certain personal and financial information of some of its customers, up to 40,000 clients and 50,000 clients, respectively. [CTV News]


CA – Best Practices: Calculating FOI Request Fees in Ontario

The Ontario OIPC issued guidance on calculating fees for access requests.

The IPC outlined when entities can charge fees for responding to access requests, including manual record searches, preparing records for disclosure, shipping costs, locating and copying records, photocopies, and records provided on CD-ROM. Fees cannot be charged for associated legal costs, third-party processing costs, registered mail, employee overtime in responding to requests, or restoring records to their original state. [IPC ON – Fees, Fee Estimates and Fee Waivers – June 2018]


WW – Investigative Strategy of Police Prompts Debate on DNA Privacy Rights

A new investigative technique [genetic genealogy] that American police have been using to comb through the genetic family trees of potential suspects in unsolved crimes has prompted debate in Canada about privacy rights. Josh Paterson, executive director for the B.C. Civil Liberties Association, warned that positive results don’t necessarily justify the process. “The fact of one story or a handful of stories seemingly going in a positive way doesn’t take away our concern for the potential of misuse for these kinds of tools,” he said. Even in cases where a website warns users that their genetic information may be shared with police, Paterson said, it means someone’s third cousin may be consenting on their behalf. In Canada, there are strict rules for good reason around the use of genetic information in the National DNA Data Bank, which limits samples to individuals convicted of certain crimes and regulates their use by police. In contrast, he said American detectives appear to be fishing for suspects through genealogy sites that store genetic information. “They’re basically throwing a net in the sea and asking these companies what they might come back with,” he said. On the other hand, Eike-Henner Kluge, a professor of philosophy at the University of Victoria with an interest in biomedical and information ethics, said there are cases where privacy rights can be breached if there’s a threat of harm to others, and unsolved murders may be one of them. “Any right is subject to the equal and competing rights of others,” Kluge said in an email. “This is also recognized in the classic legal statement, ‘Your right to swing your arms ends just where the other man’s nose begins.’” It’s unclear if Canadian law enforcement are using the same techniques. [The Star and at: Infosurhoy, GenomeWeb and Connecticut Law Tribune. Additional coverage at: Science (Vol. 360, Issue 6393, pp. 1078-1079), Science News, Here & Now (Audio – WBUR) and MediaPost Communications]

Health / Medical

CA – Ontario to Let Companies Access Database of Patient Health Records

The government of Ontario announced Project Spark, an initiative to make healthcare data more accessible to healthcare professionals, researchers, companies, and the people of Ontario themselves. So there’s reason to be excited, and a bit nervous. The government of Ontario has accumulated a vast, central database of its citizens’ electronic health records that in other healthcare systems might be fragmented among various doctor’s offices, health maintenance organizations, and medical labs. While the people of Ontario won’t have to contribute additional data to Project Spark — the government isn’t going to come knocking with cheek swabs for genetic tests — it does turn them and their medical histories into commodities. Commodities that could bring about medical breakthroughs but could also share more personal details than they may want to give. If Project Spark, or any other holder of big data repositories, is about to open for business, it needs to take extra care in advance. Ontario only gets one shot to do this right. Project Spark will have to invest in the right kind of digital infrastructure before kicking into high gear. [Futurism and at: QUARTZ and Canadian Reviewer]

CA – Health Information Breach Notification Obligations under Alberta’s Health Information Act

Commencing August 31, 2018, Alberta’s Health Information Act will require custodians of personal health information to give notice of any health information security breach that presents a risk of harm to an individual. The security breach obligations under the HIA join an increasing number of Canadian statutory regimes that impose information security breach reporting and notification obligations. Custodians subject to the HIA should assess their readiness to comply with the security breach obligations, and make appropriate changes to prepare for compliance. [Borden Ladner Gervais, Lexology]

US – Walmart Wins Patent for Medical Records Stored on Biometric Blockchain

Walmart has been awarded a patent for a system that would store a person’s medical information in a blockchain database and allow first responders to retrieve it in the event of an emergency. The patent, issued by the U.S. Patent and Trademark Office, describes three key parts to the system: a wearable device in which the blockchain is stored; a biometric scanner for an individual’s biometric signature; and an RFID scanner to scan the wearable device, ideally a bracelet or wrist band. According to the patent, first responders would scan the device to access an encrypted private key. They would decrypt that using the biometric identifier and, with a second public key, retrieve the victim’s records. Walmart has been revving up its focus on healthcare. The retail giant has touted the idea of “optimized networks” to improve consumer price and cost transparency while steering patients to providers with better performance ratings. [Planet Biometrics]

CA – OHIP Billings Should Not Be Public Because ‘Doctors Are Different’

The names of high-billing doctors should not be made public, lawyers for the Ontario Medical Association and two other doctor groups have told the Ontario Court of Appeal. “Doctors are different. Why are they different? Because they do not have a contract with government,” lawyer Linda Galessiere, acting for a group of physicians described as “affected third-party doctors,” argued. Others paid from the public purse — including lawyers, consultants and contractors — have actual contracts with government, she said, but with doctors, it is simply legislation that mandates their OHIP payments come from the public treasury, not contracts, Galessiere argued. The contract between the government and the Ontario Medical Association (OMA) is only about the value of specific fees doctors can charge OHIP, she said. Galessiere said physician-identified billings are public in British Columbia, Manitoba and New Brunswick because governments in those provinces passed legislation forcing disclosure. She said that if the Ontario government wants disclosure, then it can also introduce legislation. The doctors and the OMA are appealing a ruling made a year ago by the Ontario Divisional Court that upheld an order by the Information and Privacy Commissioner of Ontario (IPC) [The Star and see IPC Blog here and Order here] to release physician-identified billings of the 100 highest-paid doctors.

US – OCR to Distribute Enforcement Funds to Victims of HIPAA Violations

OCR will seek comments on establishing a way to distribute funds collected from Health Insurance Portability and Accountability Act (HIPAA) enforcement actions to individuals harmed by the underlying incident [see here]. This would fulfill a long-awaited and overdue requirement included in the Health Information Technology for Economic and Clinical Health (HITECH) Act, which required OCR to issue regulations about this methodology within three years of HITECH’s 2009 enactment date. This advanced notice of proposed rulemaking will be released sometime in November 2018. [Data Privacy Monitor]

Horror Stories

US – Equifax Agrees to Cybersecurity Requirements; Former Employee Charged with Insider Trading

Equifax has agreed to comply with security requirements put in place by financial regulators from eight US states. The requirements are a response to the massive data breach that compromised information belonging to more than 147 million individuals. In a related story, a former Equifax employee has been charged with insider trading. Sudhakar Reddy Bonthu, who was one of the Equifax employees orchestrating the company’s public response to the breach, allegedly profited from making trades prior to the breach’s disclosure. [NY Times: 8 States Impose New Rules on Equifax After Data Breach | SC Magazine.com: Equifax agrees to cybersecurity regulations set forth by 8 U.S. States | Reuters: U.S. charges former Equifax manager with insider trading | CNet: Former Equifax exec charged with insider trading following data breach | Justice.gov: Charges filed against second defendant for insider trading related to the Equifax data breach]

EU – Irish DPA Finds Against Yahoo in Massive Email Breach

The Irish Data Protection Commissioner has found against Yahoo for a 2014 data breach that affected 500m people and 39m EU citizens. However, the watchdog’s offices said that it will issue no fine or other punitive measure, largely because the events took place before the introduction of the GDPR, which came into force last month. Instead, the DPC has ordered Yahoo to update its data processing systems. Yahoo’s European headquarters are in Dublin. The breach was reported to the DPC in September 2016. It involved the unauthorised copying and taking, “by one or more third parties”, of material contained in approximately 500 million user accounts from Yahoo in 2014. It is the largest breach which has ever been notified to and investigated by the DPC. [Independent and at: Bloomberg, Reuters and SiliconRepublic]

CA – Data Breach Defendant Must Hand Over Computer Forensics Reports: Court

Casino Rama, located near Lake Simcoe, had its computer system hacked in 2016, when a significant amount of information on vendors, employees and customers was stolen. Now facing a class-action lawsuit over the breach, it has lost its bid to prevent plaintiffs from getting their hands on part of a computer forensics investigation report. The casino claimed the report was protected by litigation privilege or solicitor-client privilege. Justice Benjamin Glustein of the Ontario Superior Court of Justice ruled [June 6, 2018 – see 10 pg PDF here] that if the computer forensics reports were subject to solicitor-client privilege or litigation privilege, “then the defendants waived privilege to the extent that the Mandiant Reports address the size and scope of the prospective class. A party cannot disclose and rely on certain information obtained from a privileged source and then seek to prevent disclosure of the privileged information relevant to that issue.” [Canadian Underwriter]

Identity Issues

AU – Australians to Soon Get MyGovId Single Government Identity

The first of several pilot programs using a beta version of a myGovID will begin in October, the Australian government confirmed, after the Digital Transformation Agency (DTA) revealed last month it had pencilled in the date for delivery of its first Govpass pilot. In a statement, Minister for Human Services and Minister Assisting the Prime Minister for Digital Transformation Michael Keenan said having 30 different log-ins for government services is “not good enough”, and it is anticipated the single log-in will allow Australians to access almost all government services by 2025. “Think of it as a 100-point digital ID check that will unlock access to almost any government agency through a single portal such as a myGov account,” he said. “The old ways of doing things, like forcing our customers to do business with us over the counter, must be re-imagined and refined.” Citizens will need to establish a digital identity before being able to use it across services, the minister explained. [ZDNet]

CA – Mogo Survey: 86% Believe Risk of Identity Fraud Is Growing

A recent survey conducted by Maru/Blu on behalf of Mogo Finance Technology Inc. [here] revealed that 86% of Canadians believe they are increasingly at risk of identity theft and identity fraud – yet only 24% of respondents currently have identity fraud protection. The survey, which included more than 1,500 participants, revealed the following: 1) 86% of Canadians believe that in today’s digital world, they are increasingly at risk of identity theft and identity fraud; 2) While Canadians know the risk, only 24% have some sort of identity fraud protection solution; 3) 85% of Canadians believe that if they are a victim of identity theft or fraud, it will have an impact on their financial life; and 4) 35% of Canadians know someone who has been a victim of identity fraud. [PR Newswire]

EU – Plans to Include Fingerprints in Identity Cards Unjustified and Unnecessary

The European Commission has published a proposal calling for the mandatory inclusion of biometrics (two fingerprints and a facial image) in all EU Member States’ identity cards. The demands to include fingerprints are an unnecessary and unjustified infringement on the right to privacy of almost 85% of EU citizens, as explained in an analysis published by civil liberties organisation Statewatch. The foreseen rules would not oblige Member States to introduce any kind of national identity card and do not require the establishment of any kind of database, either at EU or national level. However, national governments may well take the opportunity provided by the introduction of biometrics into ID cards to establish national databases. An appetite may then develop for linking them up under the EU’s ongoing “interoperability” initiative, which foresees bringing together all existing and future EU databases and the establishment of a giant, EU-level ‘Central Identity Repository’ which, in its first phase, will hold the biometric and biographical data of almost all “third-country nationals” who enter the EU. Proposals currently under discussion foresee this being extended in the future to include national databases holding information on EU citizens [see 12 pg PDF here]. [Statewatch]

WW – ID Management Study Finds Unfettered Access to Sensitive Information

A data risk report assessed 130 organizations to help them understand where sensitive and classified data reside in their IT environments, and how much is exposed and vulnerable. Assessments were performed in more than 50 countries and across 30+ industries, including: insurance; financial services; healthcare; pharma and biotech; manufacturing; retail; utilities and energy; construction; IT and computer software; education; and local, state and regional governments. The study provides recommendations to mitigate key data exposure issues, namely stale user accounts (spot inactive users and govern active user accounts), toxic permissions (remove global access and restrict user access to relevant data), and password issues (set expiration dates for passwords and use multifactor authentication). [Data Under Attack – 2018 Global Data Risk Report – Varonis]

Law Enforcement

CA – Report Calls for Changes to Edmonton Police’s Use of Street Checks

A report examining the Edmonton Police Service’s use of street checks has recommended the force increase its diversity, monitor for inappropriate stops and initiate a public dialogue around the practice sometimes referred to as carding. The 300-plus page report was released by the Edmonton Police Commission, which oversees the Edmonton police force and is comprised of city councillors and members of the community. The commission announced the review in July, shortly after Black Lives Matter Edmonton obtained street-check data from the police force through a Freedom of Information request. The group released a report that found people who were black or Indigenous were more likely to be subjected to street checks than individuals who were white. [The Globe and Mail]

CA – Ontario Cops Push Access to Private Surveillance Footage

The St. Thomas police service is among a growing number of Ontario police forces that want to tap into home and business video surveillance systems to help fight crime. Police are encouraging home and business owners in St. Thomas to voluntarily identify their video surveillance locations in the community, so they can be mapped and stored on an internal database. Homeowners and businesses can register [see here] their information on the St. Thomas police website. And if there’s a crime in their community, police may come and ask if they can view their video. While police think it could help solve and deter crimes in the community, the trend disturbs former Ontario privacy commissioner Ann Cavoukian. She is worried about homeowners handing over videos that could include images of their neighbours and others who have no idea the information is being shared. She is also concerned about how easily police could obtain the information. [CBC News]

Online Privacy

CA – NEB Plan to Monitor Social Media En Masse “Alarming”

The National Energy Board’s plan to hire a security firm to monitor “vast amounts” of social media chatter may seem like the simple aggregation of publicly available data but actually raises a host of privacy concerns, says a prominent digital security and human rights researcher. Ron Deibert, director of the Citizen Lab at the University of Toronto’s Munk School of Global Affairs, has written an open letter asking the Calgary-based NEB to clarify exactly why it wants to accrue all this data and how it plans to use and share the information. In a recently posted request for information, the NEB — which is responsible for regulating pipelines and other energy infrastructure in Canada — says it is only looking to monitor publicly available data in accordance with existing privacy laws in order to identify potential risks or threats. But Deibert says many Canadians don’t realize just how much of their information could be considered public and the extent to which their online activity can be tracked. “Many of these companies have technologies and tools that enable them to gather up a lot of information that they would consider to be public information but is much deeper and far more revealing than what is posted publicly on a Facebook page,” he said. Social media platforms are constantly changing, he added, and it’s not always clear what defines public versus private data. The NEB has received Deibert’s letter and “will provide a response in due course.” [CBC News]

WW – ICANN Appeals Court Decision to Minimize WHOIS Data Collection

ICANN has appealed [see PR here & 37 pg PDF Text here] a decision made by a German court last month over the information that should be collected on domain registrants. The German court’s decision [see 6 pg PDF here] was the latest development in a situation that has left many registrars unclear on what approach to take on WHOIS data in order to comply with the EU’s General Data Protection Regulation. The court ruled that while EPAG [located in Bonn, here], which is a subsidiary of the world’s second largest domain registrar, Tucows, has a contractual obligation to collect data to prevent misuse, it’s not required to collect the additional data ICANN wants it to collect, e.g. administrative and technical contact data. ICANN argues that while the court ruled that EPAG was only required to collect data on the domain holder, it didn’t rule whether collecting technical and administrative contact data contravened the GDPR. It is asking the court to order EPAG to collect the additional data requested or face a penalty of 250,000 EUR. [Indivigital and at: The Register, World Trademark Review, Domain Name Wire]

EU – German Authorities: Tracking and Profiling Cookies Require Opt-In Consent

The Conference of German Data Protection Authorities released a position paper on the applicability of the German Telemedia Act (TMA) after 25 May 2018. The Position Paper clearly states that tracking and profiling cookies now require informed prior opt-in consent. The Position Paper has received a great deal of criticism. [Technology Law Dispatch]

WW – Facebook Quiz App Leaked Data on ~120M Users For Years

Facebook’s historical app audit [see Zuckerberg’s announcement here] conducted in the wake of the Cambridge Analytica data misuse scandal has already suspended around 200 apps. But you do have to question how much the audit exercise is, first and foremost, intended to function as PR damage limitation for Facebook’s brand — given the company’s relaxed response to a data abuse report concerning a quiz app [NameTests.com] with ~120M monthly users, which it received right in the midst of the Cambridge Analytica scandal. Despite Facebook being alerted about the risk posed by the leaky quiz app in late April — via its own data abuse bug bounty program — it was still live on its platform a month later. Self-styled “hacker” Inti De Ceukelaire went hunting for data abusers on Facebook’s platform after the company announced a data abuse bounty on April 10 [read De Ceukelaire’s account here] and quickly realized the company was exposing Facebook users’ data to “any third-party that requested it”. NameTests was displaying the quiz taker’s personal data (such as full name, location, age and birthday) in a javascript file — thereby potentially exposing the identity and other data of logged-in Facebook users to any external website they happened to visit. He also found it was providing an access token that allowed it to grant even more expansive data access permissions to third-party websites — such as to users’ Facebook posts, photos and friends. He reckons people’s data had been publicly exposed since at least the end of 2016. De Ceukelaire found that NameTests would still reveal Facebook users’ identity even after its app was deleted. [TechCrunch and at: Medium, The Register, CNET, The Verge and GIZMODO]
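The mechanism behind this kind of leak is worth spelling out: because the quiz site served personal data as an executable script rather than as JSON, the browser's same-origin policy did not protect it, and any page that included the script could read the values. A minimal Node.js simulation of the pattern (the data, global variable name and values are invented for illustration):

```javascript
// Simulating the leak pattern: a site serves user data as JavaScript.
// Any third-party page can load such a URL with <script src="...">, since
// script loads are exempt from the same-origin policy; once the script
// runs, the attacker's page simply reads the global it defined. (A plain
// JSON response at the same URL would not be readable cross-origin.)
const leakedScriptBody =
  'globalThis.NT_userData = { name: "Jane Doe", age: 28 };';

// What happens inside the attacker's page when the <script> tag executes:
eval(leakedScriptBody);

// The attacker can now read (and exfiltrate) the visitor's personal data.
const stolen = globalThis.NT_userData;
console.log(stolen.name); // prints "Jane Doe"
```

De Ceukelaire's report also describes an access token served the same way, which is the more damaging half of the leak; the simulation covers only the script-inclusion pattern.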

WW – Facebook Patents System That Can Use Your Phone’s Mic to Monitor You

Facebook has patented a system that can remotely activate the microphone on someone’s phone using inaudible signals broadcast via a television. The patent application describes a system where an audio fingerprint embedded in TV shows or ads, inaudible to human ears, would trigger the phone, tablet or long-rumoured smart speaker to turn on the microphone and start recording “ambient audio of the content item”. The recording could then be matched to a database of content to allow Facebook to identify what the individual was watching – like Shazam for TV, but without the individual choosing to activate the system. The patent positions the technology as a way for broadcasters to know exactly who is watching their TV shows or ads and for how long. Privacy experts are concerned about the intrusion into people’s homes, particularly as the ambient audio recording would likely catch snippets of people’s private conversations without their knowledge. Such a system could also give Facebook a better understanding of people’s social connections as it would show the social network which people were meeting up in real life. Facebook was quick to downplay [see here] the patent filing. [The Guardian and at: Mashable, Ars Technica, Fortune, Naked Security, New York Times and Engadget and also The Verge: No, Facebook did not patent secretly turning your phone mics on when it hears your TV and at: GIZMODO Australia]

US – Groups ask FTC to Probe Facebook’s Nudging Users for Max Data

Consumers Union, the advocacy division of Consumer Reports, which helmed a study of Facebook in the wake of the Cambridge Analytica third-party sharing fiasco that led to congressional hearings and increased scrutiny, said it is calling for an FTC investigation [see CU PR here, CR report here & 8 pg PDF letter here] …The Consumer Reports study is being released at the same time as a Norwegian Consumer Council report, “Deceived by Design” [see PR here & 44 pg PDF Report here], looking at the pop-up privacy boxes announcing companies’ new privacy policies in Europe in the wake of the enhanced privacy framework — General Data Protection Regulation or GDPR — adopted by the EU in May. Consumer Watchdog and [seven] other groups are also calling on the FTC to investigate Google based on the NCC findings [see PR here & 3 pg PDF letter here]. Jeff Chester, executive director of the Center for Digital Democracy [here], said that almost two dozen organizations in Europe are part of a letter-writing campaign to seven different regulatory jurisdictions. [Multichannel News and at: Consumer Reports, The Hill and Compliance Week]

US – Facebook Gives Lawmakers the Names of Firms It Gave Deep Data Access

In a major data dump, Facebook handed Congress a ~750-page document with responses to the 2,000 or so questions it received from US lawmakers sitting on two committees in the Senate and House back in April. Facebook repeats itself a distressing number of times. TextMechanic‘s tool spotted 3,434 lines of duplicate text in its answers — including Facebook’s current favorite line to throw at politicians, where it boldly states: “Facebook is generally not opposed to regulation but wants to ensure it is the right regulation”, followed by the company offering to work with regulators like Congress “to craft the right regulations”. The document includes the full list of 52 companies Facebook has now provided to US lawmakers — though it admits the list might not actually be comprehensive, writing: “It is possible we have not been able to identify some integrations, particularly those made during the early days of our company when our records were not centralized. It is also possible that early records may have been deleted from our system”. Last month the New York Times revealed that Facebook had given device makers deep access to data on Facebook users and their friends, via device-integrated APIs. [TechCrunch and at: BankInfo Security]

US – Facebook Releases Privacy Safeguards After Pressure from Advertisers

Facebook is installing new controls it says will better inform its members about the way companies are targeting them with advertising, the latest step to quell a public outcry over the company’s mishandling of user data. Starting July 2, Facebook for the first time will require advertisers to tell its users if a so-called data broker supplied information that led to them being served with an ad. Data brokers are firms that collect personal information about consumers and sell it to marketers and other businesses. Facebook has also set up new procedures for the handling of names of potential customers supplied by data brokers. Advertisers seeking to upload lists of these prospects onto Facebook’s platform will first have to promise that the data vendor obtained any legally required consent from those consumers. Facebook says the new policies will create more transparency for its users and require more accountability from advertisers. The new policies are the second big push by Facebook this year to shore up its policy regarding data brokers. On March 28, Facebook moved to banish data brokers from its platform as part of efforts to burnish its image. But the company quickly softened its stance after big marketers threatened to pull their ad dollars from Facebook, according to three people familiar with the decision. Advertisers said the restrictions on data brokers would hurt their ability to aim their ads at customers most likely to buy their products. Details of advertisers’ pushback, and Facebook’s retreat, have not been previously reported. [Reuters]

WW – Apple Cracks Down on Apps Sharing Info on Users’ Friends

Apple Inc. changed its App Store rules last week to limit how developers harvest, use and share information about iPhone owners’ friends and other contacts. The move cracks down on a practice that’s been employed for years. Developers ask users for access to their phone contacts, then use it for marketing and sometimes share or sell the information — without permission from the other people listed on those digital address books. On both Apple’s iOS and Google’s Android, the world’s largest smartphone operating systems, the tactic is sometimes used to juice growth and make money. Sharing of friends’ data without their consent is what got Facebook Inc. into so much trouble when one of its outside developers gave information on millions of people to Cambridge Analytica, the political consultancy. Apple has criticized the social network for that lapse and other missteps, while announcing new privacy updates to boost its reputation for safeguarding user data. The iPhone maker hasn’t drawn as much attention to the recent change to its App Store rules, though. [Bloomberg News, adage.com]

US – Google to Fix Location Data Leak in Google Home and Chromecast

Google plans to fix a privacy issue that affects its Google Home and Chromecast devices. An authentication vulnerability allows attackers to obtain location data for the devices by tricking users into opening a link while connected to the same Wi-Fi network as a vulnerable device. Google is scheduled to release the fix next month. [krebsonsecurity.com: Google to Fix Location Data Leak in Google Home, Chromecast | www.tripwire.com: Google’s Newest Feature: Find My Home]

Other Jurisdictions

AU – Experts Call for Kids’ Data Protection in Australia

Australia will inevitably need to follow other countries in legislating against the collection of data about children from the internet, a data privacy protection expert warns. Dylan Collins, the chairman of the kids’ digital media company TotallyAwesome, believes the internet was designed for adults and many services are struggling to adapt to the extraordinary number of youngsters logging on every day. “Pretty much everything is based around capturing personal data and monetising it in some form,” the Irish entrepreneur said. “That’s just not safe or appropriate for six, seven or eight year olds.” In recent years, the US, Europe and China have created so-called “zero-data environments” which prohibit companies from collecting data on people under a set age – ranging between 13 and 16. “It’s probably inevitable that something similar will come to Australia in the not too distant future,” Mr Collins said. He predicted that over the next five to seven years there will be a universal right for children to have access to the internet without being tracked. [Australian Associated Press]

Privacy (US)

US – Supreme Court: Warrant Needed to Access Cell Site Location Data

The US Supreme Court has ruled that law enforcement must obtain a warrant to collect a suspect’s cell site location information (CSLI). In a 5-4 decision, Chief Justice John Roberts wrote in the majority opinion that “when the Government tracks the location of a cell phone it achieves near perfect surveillance, as if it had attached an ankle monitor to the phone’s user.” The ruling does not overturn the “third-party doctrine,” a legal precedent that found that people have no “reasonable expectation of privacy” regarding information collected by a third party, nor does it cover real-time tracking. [Supremecourt.com: Carpenter V. United States: Certiorari to the United States Court of Appeals for the Sixth Circuit (PDF) | Wired.com: The Supreme Court Just Greatly Strengthened Digital Privacy
| SCmagazine.com: Supreme Court rules government generally needs warrant for long-term surveillance using location data | ZDnet.com: Supreme Court says police need a warrant for historical cell location records | Ars Technica: Supreme Court rules: Yes, gov’t needs warrant to get cellphone location data]

US – Analysis: SCOTUS “Carpenter v. United States” a Big Win for Privacy

Over 40 years ago, the Supreme Court outlined what has come to be known as the “third-party doctrine” – the idea that the Fourth Amendment does not protect records or information that someone voluntarily shares with someone or something else. On June 22 in “Carpenter v. United States” [see here & 119 pg PDF text here], in an opinion written by Chief Justice John Roberts and joined by Justices Ruth Bader Ginsburg, Stephen Breyer, Sonia Sotomayor and Elena Kagan, the Supreme Court ruled that, despite this doctrine, police will generally need to get a warrant to obtain cell-site location information, a record of the cell towers (or other sites) with which a cellphone connected. …Roberts characterized the case as involving two, potentially conflicting lines of the Supreme Court’s precedent. The first involves whether someone like Carpenter can expect to have his whereabouts kept private [the so-called reasonable expectation of privacy test – wiki here]. The second line of precedent is the third-party doctrine [see wiki here]. Roberts emphasized that today’s ruling “is a narrow one” that applies only to historical cell-site location records. He took pains to point out that the ruling did not “express a view on” other privacy issues, such as obtaining cell-site location records in real time, or getting information about all of the phones that connected to a particular tower at a particular time. He acknowledged that law-enforcement officials might sometimes still be able to obtain cell-site location records without a warrant – for example, to deal with emergencies such as “bomb threats, active shootings, and child abductions.” And in a footnote, he also left open the possibility that law-enforcement officials might not need a warrant to obtain cell-site location records for a shorter period of time than the seven days at issue in Carpenter’s case – which might allow them to get information about where someone was on the day of a crime, for example.
But what law-enforcement officials do not have, he wrote in closing, is “unrestricted access to a wireless carrier’s database of” cell-site location information. Justice Anthony Kennedy dissented from the ruling, in an opinion joined by Justices Clarence Thomas and Samuel Alito [starting at pg 28 here]. Alito filed a lengthy dissent, joined by Thomas, in which he stressed that, as originally understood, the Fourth Amendment would not have applied at all to the methods that law-enforcement officials use to obtain documents [starting at pg 72 here]. Thomas also wrote alone to suggest that the court should reconsider its use of the “reasonable expectation of privacy” test, complaining that it “has no basis in the text or history of the Fourth Amendment” [starting at pg 51 here]. …the most interesting separate dissent of the day came from Justice Neil Gorsuch [starting at pg 99 here], who specifically agreed with what he described as the majority’s “implicit but unmistakable conclusion that the rationale” for the third-party doctrine is wrong. Gorsuch would scrap both the third-party doctrine and the “reasonable expectation of privacy” test and focus instead on whether someone has a property interest (even if not a complete one) in the records at issue. But here, he pointed out, the court does not have any information on this question, because Carpenter didn’t make this argument in the lower courts. [SCOTUSblog and at: Lawfare Blog, DeepLinks Blog (EFF), Inside Privacy (Covington), The Volokh Conspiracy, Ars Technica, The New York Times, CNET and WIRED | Neil Gorsuch Joins Sonia Sotomayor in Questioning the Third-Party Doctrine and at: Cato at Liberty Blog, Hot Air, Slate, Washington Examiner and The Originalism Blog]

US – Eleventh Circuit LabMD Decision Potentially Limits FTC’s Remedial Powers

The Eleventh Circuit has issued its decision in LabMD v. FTC, a closely watched case in which LabMD challenged the FTC’s authority to regulate the data security practices of private companies. The Court of Appeals declined to decide that issue, instead finding that the FTC’s order requiring LabMD to implement certain data security reforms was unenforceable because it lacked specificity. The court’s decision may nevertheless impact many of the FTC’s consent orders. It is not yet clear how the FTC will respond to this decision. The Commission might seek rehearing en banc or appeal the decision to the Supreme Court in order to address some of the questions left unanswered by the Eleventh Circuit’s opinion. If the decision stands, however, it could affect the viability of some of the Commission’s remedial powers. Many of the consent orders that the FTC has required companies to adopt—particularly those involving data security but also some related to other issues—have included broad prophylactic remedies that are similarly premised on a reasonableness standard. [Inside Privacy and at: Ward PLLC Blog, Data Security Law Blog (Patterson Belknap), BNA on Data, Data Privacy Monitor (Baker Hostetler), Mayer Brown, Health IT Security and Law360 | FTC Rebuked in LabMD Case: What’s Next for Data Security?]

US – Federal Appeals Court Throws Out FTC’s LabMD Ruling

A US federal appeals court has thrown out the Federal Trade Commission’s (FTC’s) ruling requiring LabMD to revamp its security policies and practices, saying that the FTC’s order is unenforceable. The FTC filed the complaint against the medical testing company in 2013, following a series of breaches that compromised patient data. LabMD challenged the FTC’s ruling on the grounds that the agency lacked the authority to regulate how the company handled consumer data, filing a petition for review in 2016; a federal appeals court then granted a stay of the FTC’s order. files.consumerfinance.gov: Dwolla Consent Order (PDF) | healthitsecurity.com: Court Dismisses FTC Order on LabMD’s Data Security Lapses | media.ca11.uscourts.gov: Petition for Review of a Decision of the FTC.

US – FTC Hitting the Road for Ideas on Privacy & Regulating Tech

The FTC announced plans to embark on a cross-country listening tour to gauge how academics and average Web users believe the U.S. government should address digital-age challenges that include the rise of artificial intelligence and recent data-collection mishaps [see PR here]. The tour includes 15 or more public sessions in a series of cities that have yet to be announced. The hearings are expected to touch on topics like the agency’s “remedial authority” to address privacy and security abuses, the potential risks posed by big data, and the commission’s tools to enforce antitrust laws as media, tech and telecom companies gobble each other up or seek to enter new lines of business [see comments topics here]. The public outreach will begin in September and continue into January 2019, the agency said. It could presage tougher scrutiny of Silicon Valley in response to complaints that the FTC has been too soft on tech giants and the ways they collect, swap and manipulate personal information about billions of people. [The Washington Post and at: The Hour, The Hill, Multichannel News, USA Today and The National Law Journal]

US – Court Rules No Privacy for Cellphone With 1-2-3-4 Passcode

A man serving 18 years in prison in South Carolina for burglary was rightfully convicted in part because he left his cellphone at the crime scene and a detective guessed his passcode as 1-2-3-4 instead of getting a warrant, the state Supreme Court ruled. Lawyers for Lamar Brown argued detectives in Charleston violated Brown’s right to privacy by searching his phone without a warrant. After storing the cellphone in an evidence locker for six days in December 2011, the detective guessed right on Brown’s easy passcode, found a contact named “grandma” and was able to work his way back to Brown. The justices ruled in a 4-1 decision that Brown abandoned his phone at the Charleston home and made no effort to find it. The law allows police to look at abandoned property without a court-issued warrant allowing a search. [The Associated Press]

US – Amazon, Microsoft, Uber Oppose California Consumer Privacy Act

Amazon, Microsoft, and Uber have made large contributions to a group attempting to prevent a privacy act from becoming law in California. As per state disclosure records, the three tech giants join a number of other well-known companies, including Facebook, Google, AT&T, and Verizon, which are all working against the proposed California Consumer Privacy Act by donating to the Committee to Protect California Jobs (CPCJ). Amazon and Microsoft recently donated $195,000 each to the Committee, while Uber has offered up $50,000. Facebook, Google, AT&T, and Verizon, on the other hand, have all contributed $200,000, though after Mark Zuckerberg faced tough questions from Congress about Facebook’s privacy practices, Facebook has pledged to withdraw support from the group. According to CPCJ spokesperson Steven Maviglio, tech giants are not the only ones opposed to the legislation: “Credit unions, grocers, and car manufacturers are among the many recent additions to the coalition and are the tip of the iceberg.” [Digital Trends and at: engadget, Techwire, The Verge, PYMNTS, Morgan Lewis Law Flash, Bloomberg BNA and Media Post]


US – Build Privacy Controls Into IoT Devices Now: Report

Limiting the cyber security risks of Internet of Things devices has long been a plea by experts. But lawmakers, regulators and manufacturers need to pay equal attention to sealing off the privacy risks of sharing data through so-called smart devices, according to a new report from the University of California’s Center for Long-Term Cybersecurity and the IoT Privacy Forum. Policymakers should take steps to regulate the privacy effects of the IoT before mass sensor data collection becomes ubiquitous, rather than after, the authors say. Omnibus privacy legislation can help regulate how data is handled in the grey areas between sectors and contexts. At the same time, makers of IoT products and services should employ a variety of standard measures to provide greater user management and control, as well as more effective notification about how personal data is captured, stored, analyzed, and shared. “The IoT has the potential to diminish the sanctity of spaces that have long been considered private, and could have a ‘chilling effect’ as people grow aware of the risk of surveillance,” the report says. “Yet the same methods of privacy preservation that work in the online world are not always practical or appropriate for the personal types of data collection that the IoT enables.” [Clearly Opaque: Privacy Risks of the Internet of Things | IT World Canada]

US – Cybersecurity: Advocates Push For Internet of Things Standards

EPIC responded to the Consumer Product Safety Commission’s request for comments on potential safety issues and hazards associated with internet-connected consumer products. The Consumer Product Safety Commission should develop mandatory privacy and security standards (e.g. certification before devices can be sold, vulnerability disclosure policies, system outage resiliency, mechanisms for consumers to delete their data), require IoT manufacturers to conduct PIAs (to examine data flows and flag potential hazards), and remove products from the marketplace where baseline requirements are not implemented. [Comments of EPIC to the Consumer Product Safety Commission on IoT and Consumer Product Hazards]

US – MIT Frequency Hopping Transmitter Could Help Secure IoT

Researchers at MIT have developed technology that could be used to help secure Internet of Things (IoT) devices. A frequency-hopping transmitter scatters data packets onto different, random radio frequency channels. [Eurekalert.org: Novel transmitter protects wireless devices from hackers | SC Magazine.com: MIT researchers develop frequency-hopping transmitter that fends off attackers | v3.co.uk: MIT researchers develop transmitter to prevent hackers from attacking IoT devices]
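The core idea behind frequency hopping can be illustrated with a short simulation. This is an assumption-laden sketch, not MIT's actual transmitter design (which hops at the hardware level, packet by packet): here a sender and receiver derive the same pseudo-random channel sequence from a shared seed, so each packet lands on an unpredictable channel and an eavesdropper camped on any single channel sees only fragments of the stream. The channel count and all function names are illustrative.

```python
import random

NUM_CHANNELS = 80  # illustrative, e.g. 80 x 1 MHz channels in a 2.4 GHz band


def hop_sequence(shared_seed: int, num_packets: int) -> list:
    """Channel index used for each successive packet, derived from a shared seed."""
    rng = random.Random(shared_seed)
    return [rng.randrange(NUM_CHANNELS) for _ in range(num_packets)]


def transmit(packets, shared_seed):
    """Pair each packet with the channel it is sent on."""
    channels = hop_sequence(shared_seed, len(packets))
    return list(zip(channels, packets))


def receive(on_air, shared_seed):
    """Receiver regenerates the same hop sequence and keeps matching packets."""
    expected = hop_sequence(shared_seed, len(on_air))
    return [pkt for (ch, pkt), exp in zip(on_air, expected) if ch == exp]


packets = [f"pkt-{i}" for i in range(5)]
on_air = transmit(packets, shared_seed=42)

# The legitimate receiver recovers everything...
assert receive(on_air, shared_seed=42) == packets

# ...while an attacker locked onto one channel intercepts only the packets
# that happen to land there.
seen_by_attacker = [pkt for ch, pkt in on_air if ch == 0]
```

The security comes from the attacker not knowing the hop sequence; in a real system the sequence would be derived from a cryptographic key rather than a bare PRNG seed, and the hopping would happen fast enough that a jammer cannot follow it.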


CA – Businesses Unprepared for Mobile Workplace Data Breaches: Study

While Canadian businesses are continuing to embrace workplace mobility, they aren’t implementing proper data protection policies and training, according to new findings from the Shred-it Security Tracker [see PR here & report here]. The study, conducted by Ipsos, found that nearly 90 percent of C-Suite Executives (C-Suites) and half of Small Business Owners (SBOs) reported their employees are able to work off-site in some capacity. Further, more than two-thirds of businesses said they believe that the trend towards working remotely will only increase over the next five years. That said, 82 percent of C-Suites and 63 percent of SBOs said they feel that they are more susceptible to data breaches when employees work off-site. …Additionally, Shred-it found that out of all age groups, millennials (18-34) are less effective at implementing safe data protection practices than Generation X (35-55) and baby boomers (55+). [MobileSyrup]

WW – 86% of CXOs Say Remote Workers Increase Chances of Breach

The majority of C-Suite executives and small business owners (SBOs) agree cyber security risks increase with remote workers, according to Shred-it’s State of the Industry Report, released Wednesday [see here]. Shred-it’s report unveils information security risks currently threatening businesses and features survey results conducted by Ipsos. When studying the cause of cybersecurity breaches, 47% of CXOs and 42% of SBOs cited accidental loss or employee negligence as the top reason, according to the report. “The study’s findings clearly show that seemingly small habits can pose great security risk and add up to large financial, reputational and legal risks,” said Shred-it vice president Monu Kalsi in the press release [see here]. The report found 86% of business executives agreed data breaches are more likely to occur when employees are working out of office. While CXOs do have security plans in place for these occurrences, only 35% of SBOs currently have a policy for storing or deleting confidential data remotely, and 54% of SBOs have no policy whatsoever, said the report. [TechRepublic and at: CNBC, Infosecurity Magazine and Insurance Business]


EU – 60 NGOs Join Call to Halt Mandatory Communications Data Collection

UK-based Privacy International, Liberty, and Open Rights Group have joined more than 60 non-governmental organisations, community groups and academics across Europe in calling for a halt to the collection of communications data [see 4 pg PDF letter here]. The groups have filed complaints to the European Commission calling for EU governments to stop requiring companies to store all communications data. Despite the two major rulings by the CJEU in 2014 and 2016, which made blanket and indiscriminate retention of personal data unlawful, the groups said the majority of EU member states have yet to stop this form of surveillance. The groups say it is clear that current data retention regimes in Europe violate the right to privacy and other fundamental human rights. Complaints have been filed in 11 EU member states: Belgium, the Czech Republic, France, Germany, Ireland, Italy, Poland, Portugal, Spain, Sweden and the UK. [Computer Weekly and at: Infosecurity Magazine, Forbes, The Register]

CN – China to Mandate Car-Tracking Chips from 2019: Report

Tracking devices will soon be fitted to cars registered in China [ostensibly] in an effort to tackle the country’s notorious congestion and pollution problems. Starting in July, the country will begin fitting cars with radio-frequency identification (RFID) tags at registration time. Although the scheme won’t be compulsory at first, it looks likely it will become mandatory for new cars starting from 2019. The program will be run by the Traffic Management Research Institute, which is part of the country’s Ministry of Public Security. This has raised fears it could be another plank in the country’s growing surveillance apparatus, which includes the recently-introduced social credit scheme and more widespread use of facial recognition technology. [CarAdvice and at: The Wall Street Journal, The Verge, BusinessInsider, Futurism and SiliconANGLE News]

Telecom / TV

US – Verizon, AT&T to End Location Data Sales to Brokers

Verizon and AT&T have pledged to stop providing information on phone owners’ locations to data brokers, stepping back from a business practice that has drawn criticism for endangering privacy. The data has apparently allowed outside companies to pinpoint the location of wireless devices without their owners’ knowledge or consent. Verizon said that about 75 companies have been obtaining its customer data from two little-known California-based brokers that Verizon supplies directly — LocationSmart and Zumigo. Verizon became the first major carrier to declare it would end sales of such data to brokers that then provide it to others. It did so in a June 15 letter to Sen. Ron Wyden, an Oregon Democrat who has been probing the phone location-tracking market. AT&T followed suit Tuesday after The Associated Press reported the Verizon move. Neither company said they are getting out of the business of selling location data. Verizon and AT&T are the two largest U.S. mobile carriers in terms of subscribers. [KSFY and at: CNET, Ars Technica and TechCrunch and also CBC – US Phone Companies Limit Sharing Of Location Data, While Canadian Carriers Insist They Already Do]

US Government Programs

US – NSA Deletes Hundreds of Millions of Call Records Over Privacy Violations

The NSA unfortunately has a long history of violating privacy rules, although this time the agency might not be entirely to blame. The NSA is deleting hundreds of millions of call and text message data records (collected since 2015) after learning of “technical irregularities” that led to receiving records it wasn’t supposed to obtain under the USA Freedom Act. General counsel Glenn Gerstell said in an interview that “one or more” unnamed telecoms had responded to data requests for targets by sending logs that included not just the relevant data, but records for people who hadn’t been in contact with the targets. As it was “infeasible” to comb through all the data and find just the authorized data, the NSA decided to wipe everything. The deletions began on May 23rd. It’s not certain when the purge ends, but this is all metadata, not the content of the calls and messages themselves. A spokesperson also told the NYT that it didn’t include location data, as the Freedom Act doesn’t allow gathering that information under this collection system. The companies involved have “addressed” the cause of the problem for data going forward, the NSA said. While the step shows that the NSA is willing to err on the side of caution, it continues a streak of privacy violations at the agency since its bulk phone data collection fell under the Foreign Intelligence Surveillance Act in 2004. It also illustrates the problem with keeping such large-scale monitoring in check. The system depends on both the NSA and telecoms strictly honoring the law, and all it takes is a mistake to create a serious privacy breach. [Engadget | The NSA and the USA Freedom Act and at: CSO Online, The Verge, The New York Times, The Associated Press, Tech Republic, and GIZMODO]

US Legislation

US – Legislation: California Enacts Comprehensive Privacy Rules

AB 375, the California Consumer Privacy Act of 2018, has been approved by the Legislature and signed by the Governor. Effective January 1, 2020, organizations must comply with individual requests to provide the categories of personal information collected and shared, stop selling personal information (services cannot be refused and prices cannot be increased as a result), delete personal information, and provide their information in a portable format; the Attorney General can impose civil penalties for violations, and there is a private right of action for breaches resulting from reckless behavior. [AB 375 – The California Consumer Privacy Act of 2018 – State of California]

US – California Data Privacy Bill Becomes Law

California Governor Jerry Brown has signed the California Consumer Privacy Act of 2018. Taking effect on January 1, 2020, the law will give California residents the right to know what data companies collect about them and how that information is shared. Consumers will also have the authority to prohibit companies from selling their data. The bill bears similarities to the EU’s GDPR, which went into effect in late May. The bill’s passage has prompted the withdrawal of a state ballot initiative that would have accomplished many of the same things. One of the differences is that the ballot initiative would have prohibited companies from denying services to consumers who choose not to have their data stored and tracked; the bill allows companies to charge consumers varying rates for service depending on the level of data sharing they have chosen. [Wired: California Unanimously Passes Historic Privacy Bill | money.com: California passes strictest online privacy law in the country | Fortune: California Passes Groundbreaking Consumer Data Privacy Law With Fines for Violations | Mercury News: California data privacy bill signed to head off ballot initiative]



20 May–09 June 2018


CA – Canada Will Make Foreign Visitors Pay for Biometrics Collection

Details have emerged about the expansion of a program for collecting fingerprints and facial images from foreign nationals visiting Canada. The program previously applied only to refugee claimants, asylum seekers, and visa applicants from countries considered to present a heightened risk of ID document fraud. The previously announced expansion from 30 to roughly 150 countries will strengthen border security and immigration systems, Immigration Minister Ahmed Hussen said. Applicants will have to pay a CAD$85 fee to cover the cost of the program. It will apply to visitors from Europe, the Middle East, and Africa as of July 31, and to those from Asia, the Asia-Pacific region, and the Americas as of December 31. It only applies to those between 14 and 79 years old, and there are several exemptions, such as for U.S. citizens on work or student visas. [Biometrics Update and at: Digital Journal, CBC News, Business in Vancouver and One World Identity and also U of T researchers developing tool to jam facial recognition software and at Naked Security, The Toronto Star and Digital Journal]

US – Facial Recognition Product Should Not Be Sold to Government

A coalition of consumer and privacy advocacy, labor and legal groups wrote to Amazon.com about their Rekognition product. Amazon is providing product and consultation support to government customers for its Rekognition product, which can identify people in real-time by instantaneously searching databases containing tens of millions of faces; privacy advocates are concerned that Amazon does not restrict government use of the product, which could be used to identify certain vulnerable groups and minorities. [Letter to Amazon.com Regarding Rekognition – American Civil Liberties Union et al.]

US – JetBlue Will Test Facial Recognition for Boarding

Jetblue will test facia