1-15 October 2018

Biometrics

US – Feds Force Suspect to Unlock an Apple iPhone X With Their Face

A child abuse investigation unearthed by Forbes [PDF] includes the first known case in which law enforcement used Apple Face ID facial recognition technology to open a suspect’s iPhone. That’s by any police agency anywhere in the world, not just in America. It happened on August 10, when the FBI searched the house of 28-year-old Grant Michalski, a Columbus, Ohio, resident who would later that month be charged with receiving and possessing child pornography [see August 24 DoJ PR]. With a search warrant in hand, a federal investigator told Michalski to put his face in front of the phone, which he duly did. That allowed the agent to pick through the suspect’s online chats, photos and whatever else he deemed worthy of investigation. Whilst the feds obtained a warrant, and appeared to have done everything within the bounds of the law, concerns remain about the use of such tactics. “Traditionally, using a person’s face as evidence or to obtain evidence would be considered lawful,” said Jerome Greco, staff attorney at the Legal Aid Society. “But never before have we had so many people’s own faces be the key to unlock so much of their private information.” Thus far, there’s been no challenge to the use of Face ID in this case or others. But Fred Jennings, a senior associate at Tor Ekeland Law, said they could come thanks to the Fifth Amendment, which promises to protect individuals from incriminating themselves. [Forbes | Additional coverage at: Naked Security (Sophos), The Verge and Ars Technica]

Canada

CA – Draft Guidance Released Regarding Mandatory Breach Reporting Under PIPEDA

On September 17, 2018, the Office of the Privacy Commissioner of Canada (OPC) released draft guidance regarding PIPEDA’s new mandatory security and privacy breach notification requirements, which come into force on November 1, 2018. This guidance contains helpful information regarding how and when to report breaches of security safeguards to the OPC, the corresponding notice that must be provided to individuals, and record-keeping obligations associated with such breaches. Of particular note, this guidance provides the following key pieces of information and clarification:

  • Not all breaches must be reported to the OPC. Only those breaches that create a “real risk of significant harm” to an individual are the subject of mandatory reporting obligations;
  • Reporting should commence as soon as possible once the organization determines that a breach creates a real risk of significant harm;
  • The obligation to report resides with the organization in control of the personal information that is the subject of the breach;
  • A report made to the OPC must contain information regarding the date of the breach, the circumstances of the breach, the personal information involved, and the number of individuals affected;
  • When a breach creates a real risk of significant harm, the individuals whose personal information was the subject of the breach must also be notified of the breach;
  • If a breach may also be mitigated or the risk of harm reduced via notification of other government institutions or organizations, then notification of these bodies must also occur; and
  • The obligation to maintain records regarding breaches is not limited to only those breaches that are reportable to the OPC.
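For organizations building breach-response tooling, the obligations above can be sketched as data plus decision logic. This is an illustrative sketch only: the field names and the simplified risk test are assumptions for the example, not the OPC’s report schema or the statutory “real risk of significant harm” test, which weighs the sensitivity of the information and the probability of misuse case by case.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Breach:
    """Hypothetical breach record; fields mirror the report contents the draft guidance lists."""
    occurred_on: date
    circumstances: str
    personal_info: list          # categories of personal information involved
    individuals_affected: int
    sensitive_data: bool         # sensitivity of the information
    misuse_probable: bool        # probability the information will be misused

def real_risk_of_significant_harm(b: Breach) -> bool:
    # Crude proxy for the statutory factors (sensitivity and probability of
    # misuse); a real assessment is contextual, not a boolean formula.
    return b.sensitive_data and b.misuse_probable

def obligations(b: Breach) -> dict:
    rrosh = real_risk_of_significant_harm(b)
    return {
        "report_to_opc": rrosh,       # report as soon as feasible once RROSH is found
        "notify_individuals": rrosh,  # affected individuals must also be notified
        "keep_record": True,          # records are required even for non-reportable breaches
    }
```

Note that `keep_record` is unconditionally `True`, reflecting the guidance’s point that record-keeping is not limited to breaches reportable to the OPC.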

The draft guidance includes a PIPEDA breach report form, which can be used by organizations to report security and privacy breaches to the OPC following the effective date of the breach notification requirements. The draft guidance and breach report form are consultation documents, and as such, the OPC invited stakeholders to provide feedback on both documents by October 2, 2018. The final versions of both documents will be published in time for November 1, 2018. [Mondaq]

CA – OPC Seeks Federal Court Determination on Key Issue for Canadians’ Online Reputation

The Office of the Privacy Commissioner of Canada (OPC) is turning to the Federal Court to seek clarity on whether Google’s search engine is subject to federal privacy law when it indexes web pages and presents search results in response to queries of a person’s name. The OPC has asked the court to consider the issue in the context of a complaint involving an individual who alleges Google is contravening PIPEDA [OPC guidance] by prominently displaying links to online news articles about him when his name is searched. The complainant alleges the articles are outdated, inaccurate and disclose sensitive information about his sexual orientation and a serious medical condition. By prominently linking the articles to his name, he argues Google has caused him direct harm. Google asserts that PIPEDA does not apply in this context and that, if it does apply and requires the articles to be de-indexed, it would be unconstitutional. Following public consultations, the OPC took the view [see position paper] that PIPEDA provides for a right to de-indexing – which removes links from search results without deleting the content itself – on request in certain cases. This would generally refer to web pages that contain inaccurate, incomplete or outdated information. However, there is some uncertainty in the interpretation of the law. In the circumstances, the most prudent approach is to ask the Federal Court to clarify the law before the OPC investigates other complaints into issues over which the office may not have jurisdiction if the court were to disagree with the OPC’s interpretation of the legislation. A Notice of Application [see here], filed today in Federal Court, seeks a determination on the preliminary issue of whether PIPEDA applies to the operation of Google’s search engine. In particular, the reference asks whether Google’s search engine service collects, uses or discloses personal information in the course of commercial activities and is therefore subject to PIPEDA. 
It also asks whether Google is exempt from PIPEDA because its purposes are exclusively journalistic or literary. While Google has also raised the issue of whether a requirement to de-index under PIPEDA would be compliant with s. 2(b) of the Charter, the OPC has decided not to refer this issue to the Court at this stage. The Charter issue may not need to be addressed depending on how the reference questions are answered. The Charter issue is also highly fact based and would require an assessment of the facts of the complaint, making it inappropriate for a reference. Investigations into complaints related to de-indexing requests will be stayed pending the results of the reference. The Privacy Commissioner’s office will also wait until this process is complete before finalizing its position on online reputation. [Office of the Privacy Commissioner of Canada | Coverage at: Will Canadians soon have the ‘right to be forgotten’ online? Here’s what you need to know | Privacy czar asks Federal Court to settle ‘right to be forgotten’ issue | Privacy watchdog asks Federal Court to rule on Google de-indexing question]

CA – B.C. Political Parties Face Personal Data Collection Investigation

How British Columbia’s political parties harvest and use personal information from social media will be subject to an Office of the Information and Privacy Commissioner investigation within the next month, Commissioner Michael McEvoy said Sept. 28 in his comments in Vancouver to B.C. Information Summit 2018 delegates. McEvoy said reviews of how parties use information have already led to auditing in the United Kingdom, where he has assisted the work of that country’s information commissioner, his B.C. predecessor. “That is something we are going to be doing in British Columbia,” he said. “Politicians realize that uses, misuses and abuses of data in a personal context can change elections,” University of Victoria political science professor Colin Bennett [here] said. “Political affiliation is something that should only be captured with individual consent.” He said political parties “are the major organizations that fall between the cracks of a privacy regime that is either federal or provincial or is corporate or government.” Political parties identifying their voter bases can vacuum up personal information shared on social media. And that can start with something as simple as an election voters’ list readily available to political parties. Bennett said use of the list is excluded from no-phone-call regulations of the Canadian Radio-television and Telecommunications Commission designed to prevent nuisance calls. As well, Bennett explained, parties are not covered by federal anti-spam legislation. He said the proposed federal Election Modernization Act [Bill C-76 here] sections supposed to deal with privacy are “basic and incomplete.” Further, Bennett said, parties do have privacy policies but those are vague and don’t necessarily mesh with each other. Other speakers said greater oversight is needed over how Canadian political parties collect and use voters’ personal information. [Kamloops Matters]

US – U.S.-Mexico-Canada Pact Covers Data Privacy, Local Storage Rules

The U.S., Canada, and Mexico would have to adopt data protection measures under a deal aimed at replacing the North American Free Trade Agreement. Those measures should include provisions on data quality, collection restrictions, and transparency, according to text of the U.S.-Mexico-Canada Agreement released by the U.S. Trade Representative’s Office. Under the deal, governments would have to publish information on how businesses can comply with the rules and the remedies that individuals can pursue. The agreement reflects an increased awareness of data protection issues following the EU’s adoption of new privacy rules and the Cambridge Analytica scandal involving Facebook Inc. data. It would direct the three countries’ governments to exchange information on data protection policies and work together to promote digital trade. The agreement also would ban rules requiring data to be stored locally and prohibit restrictions on data flows for business purposes. Lawmakers in all three countries must approve the deal for it to take effect. Tech industry groups supported the pact’s digital trade and data privacy provisions. [Bloomberg BNA | See also: Key takeaways from the new U.S.-Mexico-Canada Agreement]

CA – USMCA Falls Short on Digital Trade, Data Protection and Privacy: Geist

The United States-Mexico-Canada Agreement (USMCA) is more than just an updated version of the North American Free Trade Agreement. With the inclusion of a digital trade chapter, the deal sets a new standard for e-commerce that seems likely to proliferate in similar agreements around the world. The chapter raises many concerns, locking in rules that will hamstring online policies for decades by restricting privacy safeguards and hampering efforts to establish new regulation in the digital environment. For example, the USMCA includes rules that restrict data localization policies that can be used to require companies to store personal information within the local jurisdiction. Jurisdictions concerned about lost privacy in the online environment have increasingly turned to data localization to ensure their local laws apply. These include the Canadian provinces of British Columbia and Nova Scotia, which have data localization requirements to keep sensitive health information at home that may be jeopardized by the agreement. It also bans restrictions on data transfers across borders. That means countries cannot follow the European model of data protection that uses data transfer restrictions as a way to ensure that the information enjoys adequate legal protections. In fact, countries could find themselves caught in a global privacy battle in which Europe demands limits on data transfers while the USMCA prohibits them. The chapter fails to reflect many global e-commerce norms, and its restrictions on policy flexibility for key privacy issues may quietly become established as the go-to international approach. [The Washington Post | Experts say USMCA frees Canadian data — but with unknown risks]

Consumer

WW – Privacy Advocates Face Negative Stereotyping Online

New research from HideMyAss! has revealed that people around the world perceive privacy advocates as untrustworthy, paranoid, male loners with something to hide despite their own views towards privacy. [PR, blog post & report] The security software firm partnered with Censuswide to survey 8,102 people from the UK, US, France and Germany to compile its new report. Even though two fifths of those surveyed (41%) agreed that privacy is an indispensable human right, 80% believed their online history could be accessed without their knowledge by governments, hackers, police and partners. The research also highlighted a general apathy towards protecting privacy as more than one in five admitted they take no action to protect it. Of those who do take action, 78% rely on some form of password protection as their main privacy measure. More than half (56%) of respondents claim to never share their password with anyone and 22% do not save passwords on their browsers or devices. HideMyAss! also found that while there is overwhelming support for people using the Internet privately for legal activities (74%), 26% of respondents believe that people who aren’t willing to divulge what they do online have something to hide, with 24% expecting them to be untrustworthy and more than a fifth (22%) of the opinion they are more likely to have a criminal record. When it comes to the particular traits of privacy advocates, respondents said they could be paranoid (52%), loners (37%) or people partial to spying on their neighbours (36%). [TechRadar]

E-Government

US – DOJ Releases “Best Practices for Victim Response and Reporting of Cyber Incidents,” Version 2.0

On September 27, 2018, the U.S. Department of Justice Computer Crime and Intellectual Property Section (CCIPS) Cybersecurity Unit released Version 2.0 of its “Best Practices for Victim Response and Reporting of Cyber Incidents” [PDF]. Originally issued in 2015, the updated guidance seeks to help organizations better equip themselves to be able to respond effectively and lawfully to cyber incidents. The updated version distills insights from private and public sector experts, incorporating new incident response considerations in light of technical and legal developments in the past three years. While the guidance is designed mostly to be applicable to small- and medium-sized businesses, it may be useful to larger organizations as well. Similar to Version 1.0 [PDF] (see previous analysis here), the updated guidance is divided into several parts, advising companies on steps to take before, during, and after a cybersecurity incident. While the document is not intended to have any regulatory effect, the guidance is a useful tool for organizations seeking to make sure their data security policies align with today’s best practices. [Privacy & Data Security Blog (Alston & Bird)]

Electronic Records

CA – Clinical Trial Data Not Quite Confidential: Federal Court

On July 9, 2018, the Federal Court released its decision ordering Health Canada to provide the results of certain clinical trials, including participant level datasets, to an American researcher: Doshi v Canada (Attorney General), 2018 FC 710 [PDF]. Health Canada requires researchers to sign a standard confidentiality agreement in order to release clinical trial data for the purpose of research. On the basis of the researcher’s refusal to sign the standard confidentiality agreement, Health Canada unsuccessfully attempted to keep confidential the requested reams of clinical trial data. At issue was the interpretation of subsection 21.1(3) of the Protecting Canadians from Unsafe Drugs Act (“Vanessa’s Law”) [Overview & FAQ]. The case is interesting not only because it was the first time the court was called upon to apply Vanessa’s Law, but also because the court was required to decide other important ancillary issues, such as the confidential nature of clinical trial data and the bearing such nature may have on freedom of expression under section 2(b) of the Canadian Charter of Rights and Freedoms. In light of administrative law principles concerning the exercise of discretionary powers, Justice Grammond held that it was unreasonable for Health Canada to impose a confidentiality requirement as a condition for the disclosure of the data requested by Dr. Doshi (para 87). Following the Federal Court decision, Health Canada indicated that it is working on regulations to publicly release a large amount of information in clinical trial reports for a wide range of medications. Stakeholders should watch out for new developments on this front. [CyberLex Blog (McCarthy Tetrault)]

EU Developments

EU – CNIL Publishes Initial Assessment on Blockchain and GDPR

Recently, the French Data Protection Authority (“CNIL“) published its initial assessment of the compatibility of blockchain technology with the EU General Data Protection Regulation (GDPR) and proposed concrete solutions for organizations wishing to use blockchain technology when implementing data processing activities [see 11 pg PDF in French]. The CNIL made it clear that its assessment does not apply to (1) distributed ledger technology (DLT) solutions and (2) private blockchains. In its assessment, the CNIL first examined the role of the actors in a blockchain network as a data controller or data processor. The CNIL then issued recommendations to minimize privacy risks to individuals (data subjects) when their personal data is processed using blockchain technology. In addition, the CNIL examined solutions to enable data subjects to exercise their data protection rights. Lastly, the CNIL discussed the security requirements that apply to blockchain. The CNIL made a distinction between the participants who have permission to write on the chain (called “participants”) and those who validate a transaction and create blocks by applying the blockchain’s rules so that the blocks are “accepted” by the community (called “miners”). According to the CNIL, the participants, who decide to submit data for validation by miners, act as data controllers when (1) the participant is an individual and the data processing is not purely personal but is linked to a professional or commercial activity; and (2) the participant is a legal person and enters data into the blockchain. According to the CNIL, the exercise of the right to information, the right of access and the right to data portability does not raise any particular difficulties in the context of blockchain technology (i.e., data controllers may provide notice of the data processing and may respond to data subjects’ requests for access to their personal data or data portability requests).
However, the CNIL recognized that it is technically impossible for data controllers to meet data subjects’ requests for erasure of their personal data when the data is entered into the blockchain: once in the blockchain system, the data can no longer be rectified or erased. The CNIL considered that the security requirements under the GDPR remain fully applicable in the blockchain.  In the CNIL’s view, the challenges posed by blockchain technology call for a response at the European level. The CNIL announced that it will cooperate with other EU supervisory authorities to propose a robust and harmonized approach to blockchain technology. [Privacy & Information Security Law Blog (Hunton Andrews Kurth) with coverage at: JDSUPRA and PaymentsCompliance]
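The erasure problem the CNIL identifies follows from how blocks are chained: each block commits to the hash of its predecessor, so altering or deleting data recorded in an earlier block invalidates every later block. A minimal sketch of that mechanism (a toy hash chain for illustration, not any production blockchain protocol):

```python
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    """Each block's hash commits to both its data and the previous block's hash."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

# Build a three-block chain from a fixed genesis value.
chain = []
prev = "0" * 64  # genesis
for data in ["tx: alice->bob", "tx: bob->carol", "tx: carol->dan"]:
    h = block_hash(prev, data)
    chain.append({"prev": prev, "data": data, "hash": h})
    prev = h

def valid(chain) -> bool:
    """Recompute every link; any rewritten block breaks the chain from there on."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(prev, block["data"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

assert valid(chain)
chain[0]["data"] = ""   # attempt to "erase" personal data from the first block
assert not valid(chain)  # the tamper is detectable: later links no longer verify
```

This is why, as the CNIL notes, honouring an erasure request would mean rewriting and re-validating the entire chain from the altered block onward, which the consensus rules of a deployed blockchain are designed to prevent.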

Facts & Stats

WW – Data Breaches Compromised 4.5 Billion Records in the First Half of 2018

According to the latest figures from the Gemalto Breach Level Index, 4.5 billion records were compromised in just the first six months of this year [PR, infographic & download report]. The US comes out the worst, with 3.25 billion records affected and 540 breaches — an increase of 356% in the last month and 98% over the same period in 2017. A total of six social media breaches accounted for over 56% of total records compromised. Of the 945 data breaches, 189 (20% of all breaches) had an unknown or unaccounted number of compromised data records. Europe was well behind America, seeing 36% fewer incidents, but there was a 28% rise in the number of records breached, indicating the growing severity of attacks. The United Kingdom was the worst hit in its region, suffering 22 data incidents. [Information Age | Disclosure laws lead to spike in reported data breaches: Gemalto | A Massive Bump In Data Breaches Is Stoking Bot-Driven Attacks On Retailers | What Drives Tech Internet Giants To Hide Data Breaches Like The Google+ Breach]
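Treating the cited figures as rounded values, a quick arithmetic check (a sketch derived only from the numbers quoted above, not from the Gemalto report itself) shows how the headline numbers relate:

```python
# Figures as cited: 4.5B records total in H1 2018; 3.25B of those in the US;
# six social media breaches accounting for 56% of all records.
total_records = 4.5e9
us_records = 3.25e9
social_share = 0.56

us_share = us_records / total_records           # ≈ 72% of all breached records
social_records = social_share * total_records   # ≈ 2.5 billion records
print(f"US share: {us_share:.0%}; social media records: {social_records / 1e9:.2f}B")
```

In other words, the US alone accounts for roughly seven in ten of the records breached worldwide in the period, and the six social media incidents alone for about 2.5 billion records.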

Finance

CA – More Than a Dozen Federal Departments Flunked Credit Card Security Test

The Canada Revenue Agency, the RCMP, Statistics Canada and more than a dozen other federal departments and agencies have failed an international test of the security of their credit card payment systems. Altogether, half of the 34 federal institutions authorized by the banking system to accept credit-card payments from citizens and others have flunked the test — risking fines and even the revocation of their ability to accept credit and debit payments. Those 17 departments and agencies continue to process payments on Visa, MasterCard, Amex, the Tokyo-based JCB and China UnionPay cards, and federal officials say there have been no known breaches to date. These institutions all fell short of a global data-security standard, PCI DSS (“Payment Card Industry Data Security Standard”), established by five of the big credit-card firms and meant to foil fraud artists and criminal hackers bent on stealing names, numbers and codes for credit and debit cards. Federal departments must self-assess against the standard annually. CBC News obtained the briefing note, to the deputy minister of Public Services and Procurement Canada (PSPC), under the Access to Information Act. The document suggests the main culprit is Shared Services Canada (SSC), the federal IT agency created in 2011 that operates and maintains data systems for 13 of the 17 non-compliant institutions. Eleven of the 13 SSC clients who fell short of the credit card security standard say the agency itself has not fixed the security problems. The institutions that failed the credit card security checks are: Health Canada, RCMP, Industry Canada, Transport Canada, National Research Council, Canada Border Services Agency, Natural Resources Canada, Immigration Refugees and Citizenship, Statistics Canada, Fisheries and Oceans, Canada Revenue Agency, Canada Food Inspection Agency and Library and Archives Canada, all of which depend on SSC for their IT. 
The Library of Parliament, National Defence, the National Film Board of Canada and the Canadian Centre for Occupational Health and Safety are also non-compliant, but are responsible for the security of their own IT systems. [CBC News]

FOI

CA – Bowing to Pressure, Feds Urge Senate to Change Access to Information Bill

After pushback from Indigenous groups and the information commissioner, the federal government is backing down on a number of changes proposed to the Access to Information Act that critics have called “regressive,” including the part of Bill C-58 that required access-to-information requesters to describe a document’s time period, subject, and type. Witnesses had warned that level of detail, particularly with First Nations attempts to get land-claim records, would limit access to records where such detail is not known and almost certainly lead to departments denying requests. Information commissioner Caroline Maynard also successfully convinced the government to give her order-making power when the bill reaches royal assent and is formally approved, rather than a year after the bill becomes law, as it’s currently written. Critics have also raised alarms about adding the ability for government departments and agencies to decline “vexatious,” or overly broad, requests. At a Senate committee Oct. 3, Treasury Board President Scott Brison closed the door on removing that power from the bill, noting the government had already accepted changes from the House Ethics Committee to address fears it would limit access and “address any concerns” of “inappropriate” use. The House passed the changed bill in December 2017. Now, agencies won’t be able to give a request that label unless they have approval from the information commissioner at the beginning of the process. The Access to Information Act lets Canadians pay $5 to request government documents, but critics for years have said it’s dysfunctional, too slow, and allows for big loopholes that limit the information released. [The Hill Times]

CA – Privileged Records and Access to Information Reviews: When to Produce?

Solicitor-client privilege is intended to foster candid conversation between a client and legal counsel in order to ensure that the client receives appropriate legal advice and can make informed decisions. It protects the solicitor-client relationship. By comparison, litigation privilege attaches to records that are created for the dominant purpose of preparing for litigation. It offers protection for clients to investigate and prepare their case. Both privileges are vital to an effective legal system. Enter access to information legislation. Legislation in each Atlantic province provides some form of exception to disclosure for privileged records. In New Brunswick, see The Right to Information and Protection of Privacy Act, SNB 2009, c R-10.6 at s 27 [here]; in Newfoundland and Labrador, see Access to Information and Protection of Privacy Act, 2015, SNL 2015 c A-1.2 at s 30 [here]; in Nova Scotia, see Freedom of Information and Protection of Privacy Act, SNS 1993, c 5 at s 16 [here]; and in Prince Edward Island, see Freedom of Information and Protection of Privacy Act, RSPEI 1988, c 15.01 at s 25 [here]. But a public body’s application of access to information legislation is overseen by a statutory office in every jurisdiction. What happens when the public body’s application of the exception for privileged records is challenged? That question gave rise to the Supreme Court of Canada’s well-known decision in Alberta (Information and Privacy Commissioner) v University of Calgary [here] In that case, a delegate of the Alberta Information and Privacy Commissioner issued a notice to the University to produce records over which the University had claimed solicitor-client privilege. The majority of the Court agreed with the University and determined that the University was not obligated to produce solicitor-client privileged records to the delegate for review. The University of Calgary decision received a great deal of attention when it was released. 
But little attention has been paid to the Majority’s closing comments regarding the appropriateness of the Alberta OIPC’s decision to seek production of records over which solicitor-client privilege was claimed. The Supreme Court emphasized that “even courts will decline to review solicitor-client documents to ensure that privilege is properly asserted unless there is evidence or argument establishing the necessity of doing so to fairly decide the issue” [see note 2 at para 68 here]. The Court was mindful of the fact that the University had identified the records in accordance with the practice in civil litigation in the province, and found that in the absence of evidence to suggest that the University had improperly claimed privilege, the delegate erred in determining that the documents had to be reviewed. While civil litigation practice can – and does – vary from province to province, should you find yourself in a position where the Commissioner is seeking review of records over which you have claimed solicitor-client or litigation privilege, the Supreme Court’s commentary and the Alberta approach may provide a means by which to have the Commissioner resolve the claim without risking privilege and requiring production of the records in issue. [Mondaq]

Genetics

WW – How Researchers Are Using DNA to Create Images of People’s Faces

Advancements in facial recognition and DNA sequencing technology have allowed scientists to create a portrait of a person based on their genetic information [A process called DNA phenotyping – wiki]. A study published last year and co-authored by biologist Craig Venter [wiki], CEO of San Diego-based company Human Longevity, showed how the technology works. The research team took an ethnically diverse sample of more than 1,000 people of different ages and sequenced their genomes. They also took high-resolution, 3D images of their faces and measured their eye and skin color, age, height and weight. This information was used to develop an algorithm capable of working out what people would look like on the basis of their genes. Applying this algorithm to unknown genomes, the team was able to generate images that could be matched to real photos for eight out of ten people. The success rate fell to five out of ten when the test was restricted to those of a single race, which narrows facial differences. The authors of the paper said the research has ‘significant ethical and legal implications on personal privacy, the adequacy of informed consent, the potential for police profiling and more’. Researchers have already produced images of faces based on genetic material or genome. For example, earlier this year, investigators in Washington State unveiled an image of a suspect created from DNA in the 30-year-old murder case of young Victoria (BC)-area couple Tanya Van Cuylenborg, 18, and Jay Cook, 20. [coverage here] And in Calgary in February police released a high-tech image they said was a likeness of the mother of a baby girl found dead in a dumpster on Christmas Eve. [CTV News]

Health / Medical

US – Fitbit Data Leads to Arrest of 90-Year-Old in Stepdaughter’s Murder

On Saturday, 8 September, at 3:20 pm, Karen Navarra’s Fitbit recorded her heart rate spiking. Within 8 minutes, the 67-year-old California woman’s heart beat rapidly slowed. At 3:28 pm, her heart rate ceased to register at all. She was, in fact, dead. Two pieces of technology have led the San Jose police to charge Ms. Navarra’s stepfather, Anthony Aiello, with allegedly having butchered her. Besides the Fitbit records, there are also surveillance videos that undercut Aiello’s version of the events. When police compared the dead woman’s Fitbit data with video surveillance from her home, they discovered that Aiello’s car was still there at the point when her Fitbit lost any traces of her heartbeat. Later, police found bloodstained clothing in Aiello’s home. If Aiello turns out to be guilty, he certainly won’t be the first to learn a harsh lesson in how much of the quotidian technology that surrounds us these days can be used to contradict our version of events. One example was in April 2017, when a murder victim’s Fitbit contradicted her husband’s version of events. In another case, we’ve seen pacemaker data used in court against a suspect accused of burning down his house. The title of a paper by Nicole Chauriye says it all: Wearable devices as admissible evidence: Technology is killing our opportunity to lie. [Naked Security (Sophos) | Coverage at: The Mercury News, The New York Times, The Independent and Los Angeles Times]

US – Despite Patient Privacy Risks, More People Use Wearables for Health

Despite the patient privacy risks that collecting health data on insecure wearable devices could pose, the number of US consumers tracking their health data with wearables has more than doubled since 2013, according to the Deloitte 2018 Survey of US Health Care Consumers [PR – also see blog post]. The use of wearables and other tools for measuring fitness and health improvement goals jumped from 17 percent in 2013 to 42% in 2018. Of those who used wearables in the past year, 73 percent said they used them consistently. Sixty percent of the 4,530 respondents said they are willing to share PHI generated from wearable devices with their doctor to improve their health. 51% of respondents are comfortable using an at-home test to diagnose infections before seeing a doctor. More than one-third (35%) of respondents said they are interested in using a virtual assistant to identify symptoms and direct them to a caregiver. Close to one-third (31%) are interested in connecting with a live health coach that offers text messaging for nutrition, exercise, sleep, and stress management. “For health systems that are collecting this information, it is important that they safeguard the privacy of that information,” Sarah Thomas, managing director of Deloitte’s Center for Health Solutions, told HealthITSecurity.com. “If it is about their personal health, then it is clear that the information needs to be safeguarded and subject to HIPAA” [wiki here] she added. [HealthIT Security | Additional coverage at: Health Populi, For The Record and Patient Engagement HIT]

WW – Study Finds Medical Records Are Breached Worryingly Often

A new study by two physicians from Massachusetts General Hospital has concluded that breaches of people’s health data are alarmingly frequent and large in scale. Writing in the Journal of the American Medical Association [Temporal Trends and Characteristics of Reportable Health Data Breaches, 2010-2017], Dr Thomas McCoy Jr and Dr Roy Perlis state that 2,149 breaches comprising a total of 176.4 million records occurred between 2010 and 2017. Their data was drawn from the US Health and Human Services Office for Civil Rights breach database [last 24 months here & archive of earlier breaches], where all breaches of American patient records must be reported under US law. With the exception of 2015, the number of breach events increased every year during that period. Paper and film-based information was the most commonly compromised type of medical record, with 510 breaches involving 3.4 million records, but the frequency of this type of breach went down across the study period. The largest share of breached records – 139.9 million – came from infiltration of network servers storing electronic health records (EHRs), and the frequency of hacking-based breaches went up during the study period. The majority of breaches occurred due to the actions of health care providers, though compromised systems at health plan companies accounted for more total records infiltrated. The authors write that “Although networked digital health records have the potential to improve clinical care and facilitate learning [in] health systems, they also have the potential for harm to vast numbers of patients at once if data security is not improved.” [IFLScience! | Additional coverage at: Reuters and Healthcare Informatics]

US – Eight Healthcare Privacy Incidents in September

Eight privacy incidents at healthcare organizations captured public attention last month. While media outlets reported on the following breaches in September, healthcare organizations experienced breaches as early as 2014. Here are the eight incidents presented in order of number of patients affected: 1) The Fetal Diagnostic Institute of the Pacific in Honolulu notified 40,800 patients about a potential data breach after it fell victim to a ransomware attack in June; 2) Blue Cross Blue Shield of Rhode Island notified 1,567 members that an unnamed vendor responsible for sending members’ benefits explanations breached their personal health information; 3) An employee at Kings County Hospital’s emergency room stole nearly 100 patients’ private information and sold it through an encrypted app on his phone; 4) Claxton-Hepburn Medical Center in Ogdensburg, N.Y., terminated an undisclosed number of employees after hospital officials identified breaches of patient health information during a recent internal investigation; 5) Reliable Respiratory in Norwood, Mass., discovered unusual activity on an employee’s email account in July, which may have allowed hackers to access an undisclosed number of patients’ protected health information; 6) Independence Blue Cross in Pennsylvania notified an undisclosed number of plan members about a potential compromise of their protected health information after an employee uploaded a file containing personal data to a website that was publicly accessible for three months; 7) Nashville, Tenn.-based Aspire Health lost some patient information to an unknown cyberattacker who gained access to its internal email system in September, federal court records filed Sept. 25 show; and 8) Lutheran Hospital in Fort Wayne, Ind., canceled all remaining elective surgeries Sept. 18 after its IT team discovered a computer virus on its systems. [Becker’s Hospital Review]

Horror Stories

WW – Google Exposed User Data, Feared Repercussions of Disclosing to Public

Google exposed the private data of hundreds of thousands of users of the Google+ social network and then opted not to disclose the issue this past spring, in part because of fears that doing so would draw regulatory scrutiny and cause reputational damage, according to people briefed on the incident and documents reviewed by The Wall Street Journal. As part of its response to the incident, the Alphabet Inc. unit on Monday announced [see blog post] a sweeping set of data privacy measures that include permanently shutting down all consumer functionality of Google+. A software glitch in the social site gave outside developers potential access to private Google+ profile data, including full names, email addresses, birth dates, gender, profile photos, places lived, occupation and relationship status, between 2015 and March 2018, when internal investigators discovered and fixed the issue. A memo prepared by Google’s legal and policy staff and shared with senior executives warned that disclosing the incident would likely trigger “immediate regulatory interest” and invite comparisons to Facebook’s leak of user information to data firm Cambridge Analytica. Chief Executive Sundar Pichai was briefed on the plan not to notify users after an internal committee had reached that decision. The question of whether to notify users went before Google’s Privacy and Data Protection Office, a council of top product executives who oversee key decisions relating to privacy. In weighing whether to disclose the incident, the company considered “whether we could accurately identify the users to inform, whether there was any evidence of misuse, and whether there were any actions a developer or user could take in response. None of these thresholds were met here,” a Google spokesman said in a statement. During a two-week period in late March, Google ran tests to determine the impact of the bug, one of the people said.
It found 496,951 users who had shared private profile data with a friend could have had that data accessed by an outside developer. Some of the individuals whose data was exposed to potential misuse included paying users of G Suite, a set of productivity tools including Google Docs and Drive. G Suite customers include businesses, schools and governments. In its contracts with paid users of G Suite apps, Google tells customers it will notify them about any incidents involving their data “promptly and without undue delay” and will “promptly take reasonable steps to minimize harm.” That requirement may not apply to Google+ profile data, however, even if it belonged to a G Suite customer. [The Wall Street Journal | Google exposed data for hundreds of thousands of users | Google+ shutting down after data leak affecting 500,000 users | Google+ Is Shutting Down After a Security Bug Exposed User Info | Google did not disclose security bug because it feared regulation, says report | Laughing at the Google+ bug? You’re making a big mistake | Here’s how to quickly check if you have a Google+ account — and delete it]

Online Privacy

WW – Instagram Prototypes Handing Your Location History to Facebook

Instagram has been spotted prototyping a new privacy setting that would allow it to share your location history with Facebook. That means your exact GPS coordinates collected by Instagram, even when you’re not using the app, would help Facebook to target you with ads and recommend relevant content. The geo-tagged data would appear to users in their Facebook profile’s Activity Log, which includes creepy daily maps of the places you’ve been. This commingling of data could upset users who want to limit Facebook’s surveillance of their lives. A Facebook spokesperson tells TechCrunch that “To confirm, we haven’t introduced updates to our location settings. As you know, we often work on ideas that may evolve over time or ultimately not be tested or released. Instagram does not currently store Location History; we’ll keep people updated with any changes to our location settings in the future.” That effectively confirms Location History sharing is something Instagram has prototyped, and that it’s considering launching but hasn’t yet. Delivering the exact history of where Instagram users went could assist Facebook in targeting them with local ads across its family of apps. If users are found to visit certain businesses, countries, neighborhoods, or schools, Facebook could use that data to infer which products they might want to buy and promote them. It could even show ads for restaurants or shops close to where users spend their days. Just yesterday, we reported that Facebook was testing a redesign of its Nearby Friends feature that replaces the list view of friends’ locations with a map. Pulling in Location History from Instagram could help keep that map up to date. [TechCrunch | Facebook tests Snapchat-like map for Nearby Friends]

WW – Google’s New Chrome Extension Rules Improve Privacy and Security

Google has announced several rules aimed at making Chrome extensions safer and more trustworthy. Many extensions request blanket access to your browsing data, but you’ll soon have the option to whitelist the sites they can view and manipulate, or opt to grant an extension access to your current page with a click. That feature is included in Chrome 70, which is scheduled to arrive later this month and includes other privacy-focused updates.  Developers can no longer submit extensions that include obfuscated code. Google says 70% of malicious and policy-violating extensions use such code. More easily accessible code should speed up the review process too. Developers have until January 1st to strip obfuscated code from their extensions and make them compliant with the updated rules. Additionally, there will be a more in-depth review process for extensions that ask you for “powerful permissions”, Google says. The company is also more closely monitoring those with remotely hosted code. Next year, developers will need to enable two-step verification on their Chrome Web Store accounts. Google also plans to introduce an updated version of the extensions platform manifest, with the aim of enabling “stronger security, privacy and performance guarantees.” Google says half of Chrome users actively employ extensions, so the changes could make browsing the web more secure for millions of people. [engadget – additional coverage at: TechCrunch, CNET News and VentureBeat]

US – Tim Cook Chides Tech Companies for Collecting Personal Data – But Apple Does It Too (Opinion)

Apple CEO Tim Cook took aim at the tech industry’s privacy practices. In an interview with Vice News, he said, “The narrative that some companies will try to get you to believe is, ‘I’ve got to take all your data to make my service better.’ Well, don’t believe that. Whoever’s telling you that, it’s a bunch of bunk.” Is this a case of the pot calling the kettle black? Apple has cultivated and established a reputation for concern over privacy. There’s a privacy webpage that lists the steps the company takes to safeguard user information and what it refrains from doing. And then there’s the legal privacy policy page that lists the things Apple can and does do with your information. Reading it is enlightening. The page, updated May 22, 2018, “covers how we collect, use, disclose, transfer, and store your personal information.” The details are important. The main one is the first definition: “Personal information is data that can be used to identify or contact a single person.” Is information about a person, such as activities on a website, personal in the sense of being able to identify an individual? No, but it can be associated with personal information to become useful. According to Goldman Sachs analyst Rod Hall, Google pays Apple $9 billion a year to remain Safari’s default search engine [coverage]. At the very least, there is a financial incentive for Apple to allow Google access to all the search information.
Here is a partial list of “non-personal information” that Apple collects, according to its posted terms: a) Occupation, language, ZIP code, area code, unique device identifier, the URL where your browser was previously, your location and time zone when you used the Apple product; b) product name and device ID; c) details of how you use Apple services, including search queries; d) data stored in Apple log files includes “Internet protocol (IP) addresses, browser type and language, internet service provider (ISP), referring and exit websites and applications, operating system, date/time stamp, and clickstream data”; and e) Apple and its partners “may collect, use, and share precise location data, including the real-time geographic location of your Apple computer or device.”  Perhaps Apple is more concerned with privacy than other companies. Certainly, there’s been no news of a Facebook-style fiasco. Don’t necessarily assume that means you get real privacy. [Inc.com] Coverage at: Apple’s Tim Cook: ‘Don’t believe’ tech companies that say they need your data  | ‘It’s a Bunch of Bunk.’ Apple CEO Tim Cook on Why Tech Firms Don’t Need All Your Data—and Why Apple Expelled Alex Jones | Apple’s Tim Cook is sending a privacy bat-signal to US lawmakers | Apple chief says firm guards data privacy in China | Tim Cook: Don’t Get Hung Up on Where Apple Stores iCloud Data | Tim Cook to talk consumer privacy and data ethics at European data protection conference later this month

WW – Privacy Search Engine DuckDuckGo Searches Up 50% in a Year

Privacy-focused search engine DuckDuckGo [wiki] has just announced it’s hit 30 million daily searches a year after reaching 20M — a year-on-year increase of 50% [see traffic stats]. Hitting the first 10M daily searches took the search engine a full seven years, and then it was another two to get to 20M. DDG’s search engine offers a pro-privacy alternative to Google search that does not track and profile users in order to target them with ads. Instead it displays ads based on the keyword being searched for at the point of each search — dispensing with the need to follow people around the web, harvesting data on everything they do to feed a sophisticated adtech business, as Google does. Google, by comparison, handles at least 3 billion searches daily. This year DDG expanded beyond its core search product, launching a tracker blocker that addresses wider privacy concerns by helping web users keep more of their online activity away from companies trying to spy on them for profit. [TechCrunch | Privacy: A Business Imperative and Pillar of Corporate Responsibility | DuckDuckGo, the privacy-focused search engine, grows daily searches by 50% to 30 million]

Other Jurisdictions

CA – APEC Cross-Border Privacy Rules Enshrined in U.S.-Mexico-Canada Trade Agreement

On September 30, 2018, the U.S., Mexico and Canada announced a new trade agreement (the “USMCA”) aimed at replacing the North American Free Trade Agreement. Notably, the USMCA’s chapter on digital trade will require the U.S., Canada and Mexico to each “adopt or maintain a legal framework that provides for the protection of the personal information of the users,” including key principles such as: limitations on collection, choice, data quality, purpose specification, use limitation, security safeguards, transparency, individual participation and accountability. Article 19.8(2) directs the Parties to consider the principles and guidelines of relevant international bodies, such as the APEC Privacy Framework [overview here] and the OECD Recommendation of the Council concerning Guidelines Governing the Protection of Privacy and Transborder Flows of Personal Data, and Article 19.8(6) formally recognizes the APEC Cross-Border Privacy Rules (the “APEC CBPRs”) [here] within their respective legal systems. In addition, Article 19.14(1)(b) provides that “the Parties shall endeavor to cooperate and maintain a dialogue on the promotion and development of mechanisms, including the APEC Cross-Border Privacy Rules, that further global interoperability of privacy regimes.” The USMCA must still pass the U.S. Congress, the Canadian Parliament, and the Mexican Senate. [Privacy & Information Security Law Blog (Hunton Andrews Kurth) | Additional coverage at: Womble Bond Dickinson via National Law Review, The Washington Post, Michael Geist Blog, Private Internet Access blog | Data localization concerns in USMCA may be overblown]

Privacy (US)

US – FTC Continues to Enforce EU-U.S. Privacy Shield

The U.S. Federal Trade Commission (FTC) recently settled enforcement actions [PR] against four companies accused of misleading consumers about their participation in the European Union-United States Privacy Shield framework [see here, here & wiki here], which allows companies to transfer consumer data from EU member states to the United States in compliance with EU law. These collective actions demonstrate the FTC’s ongoing commitment under new Chairman Joseph Simons to enforce U.S. companies’ filing obligations with the U.S. Department of Commerce as part of their efforts to comply with the Privacy Shield. These actions are also consistent with a recent statement [coverage here] by Gordon Sondland, U.S. Ambassador to the European Union, that the U.S. is complying with EU data protection rules. Key Takeaways:

  • The FTC will continue to hold companies accountable for the promises they make to consumers regarding their privacy policies, including participation in the Privacy Shield;
  • Companies participating in the Privacy Shield should re-evaluate their privacy procedures and policies regularly to ensure compliance with the various requirements of the Privacy Shield framework;
  • Once a company initiates the Privacy Shield certification process, it must complete that process to claim participation in the Privacy Shield framework; and
  • Companies looking to participate in the Privacy Shield or a similar privacy program should consult counsel to ensure the program is the best option for their particular business needs.

[Dechert LLP Blog | FTC continues aggressive enforcement of Privacy Shield | Additional coverage at: Privacy & Information Security Law Blog (Hunton Andrews Kurth), Privacy and Cybersecurity Perspectives (Murtha Cullina), Legal News Line]

US – Google Faces Mounting Pressure from Congress Over Google+ Privacy Flaw

In March, Google discovered a flaw in its Google+ API that had the potential to expose the private information of hundreds of thousands of users. Officials at Google opted not to disclose the vulnerability to its users or the public for fear of bad press and potential regulatory action [in an internal memo first reported here]. Now, lawmakers are asking to see those communications firsthand. Republican leaders from the Senate Commerce Committee are demanding answers from Google CEO Sundar Pichai about a recently unveiled Google+ vulnerability, requesting the company’s internal communications regarding the issue in a letter [PR & PDF]. Some of the senators’ Democratic counterparts on the committee reached out to the Federal Trade Commission to demand that the agency investigate the Google+ security flaw, saying in a letter [3 pg PDF here] that if agency officials discover “problematic conduct, we encourage you to act decisively to end this pattern of behavior through substantial financial penalties and strong legal remedies.” Google has until October 30th to respond to the senators’ inquiries, just weeks before Pichai is scheduled to testify in front of the House Judiciary Committee following the November midterm elections. An exact date for that hearing has yet to be announced. [The Verge | Senators demand Google hand over internal memo urging Google+ cover-up | Senators Demand Memo Behind Google+ Privacy Debacle Cover-Up | Google Draws Bipartisan Criticism Over Data Leak Coverup | Senator Blumenthal Wants FTC To Investigate Google Over Data Leak | Google+ vulnerability comes under fire in Senate hearing | Google facing scrutiny from Australian regulator over Google+ data breach | Google+ Glitch Revelation Sparks German Probe | U.S., European regulators investigating Google glitch]

US – Privacy Advocates Tell Senators What They Want in a Data Protection Law

Privacy advocates and tech giants like Google, Amazon and Apple all want a federal privacy law. But while tech companies essentially want a federal privacy bill to be a ceiling that would limit how far states could go with their own privacy rules, privacy advocates want it to be more of a floor that states can build on. During the Oct 10 hearing before the Senate Committee on Commerce, Science and Transportation, privacy advocates stressed the need for a federal privacy law that could work in tandem with state laws instead of preempting them. Representatives included Andrea Jelinek, the chair of the European Data Protection Board [statement]; Alastair Mactaggart, the advocate behind California’s Consumer Privacy Act [statement]; Laura Moy, executive director of the Georgetown Law Center on Privacy and Technology [statement]; and Nuala O’Connor, president of the Center for Democracy and Technology [statement]. [CNET News | Privacy Groups Urge Congress To Create New National Privacy Law | CDD to Senate: Privacy Legislation Should Be Tough, Comprehensive, Enforceable | Lawmakers Push to Rein In Tech Firms After Google+ Disclosure | Senator calls for FTC investigation into Google+ data exposure]

US – Facebook Accused of Violating Children’s Privacy Law

Several US groups advocating public and children’s health have urged the FTC to take action against social media giant Facebook for allegedly violating children’s privacy law. The 18-member group led by the Campaign for a Commercial-Free Childhood (CCFC) has filed a complaint asserting that Facebook’s Messenger Kids, a controversial messaging application for children as young as five, collects kids’ personal information without obtaining verifiable parental consent [PR & Complaint]. Messenger Kids is the first major social platform designed specifically for young children, but the complaint argues that Facebook’s parental consent mechanism does not meet the requirements of the Children’s Online Privacy Protection Act (COPPA) because any adult user can approve any account created in the app and “even a fictional ‘parent’ holding a brand-new Facebook account could immediately approve a child’s account without proof of identity.” The complaint further accused Facebook of disclosing data to unnamed third parties for “broad, undefined business purposes.” In January, the CCFC, on behalf of the advocacy groups, sent Facebook CEO Mark Zuckerberg a letter signed by over 100 experts and advocates asking him to remove Messenger Kids from its platform. Critics have been skeptical of Facebook’s Messenger Kids security measures in protecting children’s privacy, and have been pushing for its closure since its debut last year [see CCFC petition]. [Financial Express]

Privacy Enhancing Technologies (PETs)

WW – Blockchain’s Role as a Privacy Enhancing Technology

Many of us hear the word “blockchain” [wiki & beginner’s guide], mentally file it under “something to do with Bitcoin,” and then swiftly move on. But there is more to this new technology than the cryptocurrencies. Top of mind is blockchain’s potential to enable greater data privacy and data security, says Florian Martin-Bariteau, who runs the University of Ottawa’s Blockchain Legal Lab [here], a research team investigating the practical uses of the technology — and the legal issues those uses raise. He’s also on a panel at the forthcoming CBA Access to Information and Privacy Law Symposium in Ottawa (Oct. 19 and 20) that will compare uses of blockchain in other industries. “The blockchain technology is actually a protocol for information or asset exchange, and an infrastructure for data storage and management,” he says. “It is literally a chain of blocks of information which are interlinked in a secure way.” It was conceived as a kind of secure spreadsheet — a way to timestamp documents in a ledger that could not be edited or tampered with.  Martin-Bariteau describes it as a digital notary system. The technology has since developed to become “a secure, immutable database shared by all parties in a distributed network.” Its utility where privacy is an issue is plain to see.  But part of the attraction of blockchain — the notion that data can’t be edited, altered or erased — is also part of the challenge it creates. For example, in the European Union and elsewhere, GDPR compliance includes the right to erasure. This has enormous implications for any system that requires registered users as part of its design. Martin-Bariteau is clear about the risks involved. “You need to be very careful about the information you register on an immutable ledger,” he notes. 
“You want to avoid including any personal information, so you need to design your implementation, or advise your clients to design it, in a way that it can use personal information without storing it.” [CBA National and see also: CNIL Publishes Initial Assessment on Blockchain and GDPR]
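Martin-Bariteau's "digital notary" description, and his advice to use personal information without storing it on the ledger, can be illustrated with a minimal sketch. This is not any production blockchain API; the `Ledger` class and its methods are hypothetical, and the point is only the pattern: each block records a SHA-256 fingerprint of a document plus the hash of the previous block, so tampering anywhere breaks the chain, while the document itself (and any personal data in it) never touches the ledger.

```python
import hashlib
import json
import time

def digest(data: bytes) -> str:
    """SHA-256 fingerprint as a hex string."""
    return hashlib.sha256(data).hexdigest()

class Ledger:
    """Append-only, hash-chained ledger (illustrative only).

    Stores document digests, never the documents themselves, so no
    personal information is registered on the immutable chain.
    """

    def __init__(self):
        self.blocks = []

    def append(self, document: bytes) -> dict:
        prev = self.blocks[-1]["block_hash"] if self.blocks else "0" * 64
        block = {
            "index": len(self.blocks),
            "timestamp": time.time(),      # the "notary" timestamp
            "doc_digest": digest(document),  # fingerprint only, not the data
            "prev_hash": prev,               # link to the previous block
        }
        # Seal the block: its hash covers every field above.
        block["block_hash"] = digest(json.dumps(block, sort_keys=True).encode())
        self.blocks.append(block)
        return block

    def verify(self) -> bool:
        """Recompute every hash; any edit to any block breaks the chain."""
        for i, b in enumerate(self.blocks):
            body = {k: v for k, v in b.items() if k != "block_hash"}
            if digest(json.dumps(body, sort_keys=True).encode()) != b["block_hash"]:
                return False
            if i > 0 and b["prev_hash"] != self.blocks[i - 1]["block_hash"]:
                return False
        return True

    def notarized(self, document: bytes) -> bool:
        """Check whether this exact document was ever registered."""
        d = digest(document)
        return any(b["doc_digest"] == d for b in self.blocks)
```

Because only digests go on the chain, deleting the off-chain original is one pragmatic way to reconcile an immutable ledger with GDPR-style erasure: what remains is a fingerprint that is not itself the personal data (though low-entropy inputs should be salted before hashing to keep it that way).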

RFID / Internet of Things

US – NIST Seeks Public Comment on Managing Internet of Things Cybersecurity and Privacy Risks

The U.S. Department of Commerce’s National Institute of Standards and Technology recently announced that it is seeking public comment on Draft NISTIR 8228, Considerations for Managing Internet of Things (“IoT”) Cybersecurity and Privacy Risks (the “Draft Report”). The document is to be the first in a planned series of publications that will examine specific aspects of the IoT topic. The Draft Report identifies three high-level considerations with respect to the management of cybersecurity and privacy risks for IoT devices as compared to conventional IT devices: (1) many IoT devices interact with the physical world in ways conventional IT devices usually do not; (2) many IoT devices cannot be accessed, managed or monitored in the same ways conventional IT devices can; and (3) the availability, efficiency and effectiveness of cybersecurity and privacy capabilities are often different for IoT devices than for conventional IT devices. The Draft Report also identifies three high-level risk mitigation goals: (1) protect device security; (2) protect data security; and (3) protect individuals’ privacy. Comments are due by October 24, 2018 [download the NIST Comment Template for submitting your comments]. [Privacy & Information Security Law Blog (Hunton Andrews Kurth)]

Security

WW – Two-Thirds of Data Security Pros Looking to Change Jobs

Nearly two-thirds of security pros are looking to leave their current jobs. That is one of the findings of a new study on IT security trends by staffing firm Mondo [PR & report], which says that 60% of these workers can be easily hired away. Lack of growth opportunities and job satisfaction are tied as the top reasons to leave a job, according to the survey. The study found several other top reasons why IT security experts leave a job. They include: 1) Unhealthy work environment (cited by 53%); 2) Lack of IT security prioritization from C-level or upper management (cited by 46%); 3) Unclear job expectations (cited by 37%); and 4) Lack of mentorship (cited by 30%). To help retain IT security experts, the study recommends that organizations offer the following benefits, based on responses from security pros: 1) Promoting work-life balance; 2) Taking worker security concerns seriously; 3) Sponsorship of certifications or courses; 4) Increased investment in emerging tech; and 5) CISO leadership/defined ownership of security needs. Mondo gathered this data by surveying more than 9,000 IT security professionals and decision-makers. [Information Management]

Smart Cars / Cities

WW – Google’s Plans for First Wired Urban Community Raise Data-Privacy Concerns

A unit of Google’s parent company Alphabet is proposing to turn a rundown part of Toronto’s waterfront into what may be the most wired community in history — to “fundamentally refine what urban life can be.” [see overview here] Sidewalk Labs [here] has partnered with a government agency known as Waterfront Toronto [here] with plans to erect mid-rise apartments, offices, shops and a school on a 12-acre site — a first step toward what it hopes will eventually be an 800-acre development. But some Canadians are rethinking the privacy implications of giving one of the most data-hungry companies on the planet the means to wire up everything from streetlights to pavement. And some want the public to get a cut of the revenue from products developed using Canada’s largest city as an urban laboratory. “The Waterfront Toronto executives and board are too dumb to realize they are getting played,” said former BlackBerry Chief Executive Jim Balsillie, who also said the federal government is pushing the board to approve it. “Google knew what they wanted. And the politicians wanted a PR splash and the Waterfront board didn’t know what they are doing. And the citizens of Toronto and Canada are going to pay the price,” Balsillie said. Julie Di Lorenzo, a prominent Toronto developer who resigned from the Waterfront Toronto board over the project [see coverage], said data and what Google wants to do with it should be front and center in the discussions. She also believes the government agency has given the Google affiliate too much power over how the project develops. “How can (Waterfront Toronto), a corporation established by three levels of democratically elected government, have shared values with a limited, for-profit company whose premise is embedded data collection?” Di Lorenzo asked. Bianca Wylie, an advocate of open government, said it remains deeply troubling that Sidewalk Labs still hasn’t said who will own data produced by the project or how it will be monetized.
Google is here to make money, she said, and Canadians should benefit from any data or products developed from it. “We are not here to be someone’s research and development lab,” she said, “to be a loss leader for products they want to sell globally.” Ottawa patent lawyer Natalie Raffoul said the fact that the current agreement leaves ownership of data issues for later shows that it wasn’t properly drafted and means patents derived from the data will default to Google. [The Seattle Times]

Surveillance

US – That Sign Telling You How Fast You’re Driving May Be Spying on You

According to recently released US federal contracting data, the Drug Enforcement Administration will be expanding the footprint of its nationwide surveillance network with the purchase of “multiple” trailer-mounted speed displays “to be retrofitted as mobile License Plate Reader (LPR) platforms.” The DEA is buying them from RU2 Systems Inc., a private Mesa, Arizona company. [For overviews of LPRs see EFF.] Two other, apparently related contracts show that the DEA has hired a small machine shop in California, and another in Virginia, to conceal the readers within the signs. An RU2 representative said the company providing the LPR devices themselves is a Canadian firm called Genetec. The DEA expects to take delivery of its new license plate-reading speed signs by October 15. The DEA launched its National License Plate Reader Program in 2008; it was publicly revealed for the first time during a congressional hearing four years after that. The DEA’s most recent budget describes the program as “a federation of independent federal, state, local, and tribal law enforcement license plate readers linked into a cooperative system, designed to enhance the ability of law enforcement agencies to interdict drug traffickers, money launderers or other criminal activities on high drug and money trafficking corridors and other public roadways throughout the U.S.” What is a game-changing crime-fighting tool to some is a privacy overreach of near-existential proportion to others. License plate readers, which can capture somewhere in the neighborhood of 2,000 plates a minute, cast an astonishingly wide net that has made it far easier for cops to catch serious criminals. On the other hand, the indiscriminate nature of the real-time collection, along with the fact that it is then stored by authorities for later data mining, is highly alarming to privacy advocates. [QUARTZ | How roadside speed signs in the U.S. could be tracking you using Canadian-made tech]

 

+++

16–30 September 2018

Biometrics

US – Use of Facial-Recognition Technology Fuels Debate at Seattle School

RealNetworks is offering schools a new, free security tool. Secure, Accurate Facial Recognition — or SAFR, pronounced “safer” — is a technology that the company began offering free to K-12 schools this summer. It took three years, 8 million faces and more than 8 billion data points to develop the technology, which the company says can identify a face with near-perfect accuracy. The software is already in use at one Seattle school, and RealNetworks is in talks to expand it to several others across the country. But as the technology moves further into public spaces, it’s raising privacy concerns and calls for regulation — even from the technology companies that are inventing the biometric software. Privacy advocates wonder if people fully realize how often their faces are being scanned, and advocates and the industry alike question where the line is between the benefits to the public and the cost to privacy. “There’s a general habituation of people to be tolerant of this kind of tracking of their face,” said Adam Schwartz, a lawyer with digital privacy group Electronic Frontier Foundation. “This is especially troubling when it comes to schoolchildren. It’s getting them used to it.” School security is a serious issue, he agreed, but he said the benefits of facial recognition in this case are largely unknown, and the damage to privacy could be “exceedingly high.” Clare Garvie, an associate at the Center on Privacy and Technology at Georgetown Law Center, finds the lack of transparency into how the technology is being used and the lack of federal laws troubling. Garvie was on a team that conducted a widespread study that found 54% of U.S. residents are in a facial-recognition database accessible by law enforcement [see PR here & study report here] — usually in the form of a driver’s license photo. “It is unprecedented to have a biometric database that is composed primarily of law-abiding citizens,” Garvie said.
“The current trajectory might fundamentally change the relationship between police and the public,” she said. “It could change the degree to which we feel comfortable going about our daily lives in public spaces.” Alessandro Acquisti [here & here], a professor of information technology and public policy at Carnegie Mellon University, pointed out that facial recognition can be used for good — to combat child trafficking — and for bad — to track law-abiding citizens anywhere they go. That doesn’t mean it’s neutral, he said. Anonymity is becoming scarcer with the proliferation of photos on social media and the technology that can recognize faces. [Seattle Times, See also: Are You on Board with Using Facial Recognition in Schools? | Is Facial Recognition in Schools Worth the High Price?]

Big Data / Analytics

WW – ‘Predictive Policing’: Law Enforcement Revolution or Spin on Old Biases?

Los Angeles has been put on edge by the LAPD’s use of an elaborate data collection centre, a shadowy data analysis firm called Palantir, and predictive algorithms to try to get a jump on crime. Los Angeles isn’t the only place where concerns are flaring over how citizens’ data is collected and used by law-enforcement authorities. Police forces across the U.S. are increasingly adopting the same approach as the LAPD: employing sophisticated algorithms to predict crime in the hope they can prevent it. Chicago, New York City and Philadelphia use similar predictive programs and face similar questions from the communities they are policing, and even legal challenges over where the information is coming from and how police are using it. A sophisticated program called PredPol, short for predictive policing, is used to varying degrees by 50 police forces across the United States. The genesis of the program came from a collaboration between LAPD deputy chief Sean Malinowski and Canadian Jeff Brantingham, an anthropology professor at UCLA. Canadian police forces are very aware of what their U.S. counterparts are doing, but they are wary of jumping in with both feet due to concerns over civil liberties issues. Sarah Brayne, a Canadian sociologist, spent two years inside the LAPD studying its use of predictive policing. She says the LAPD has been using predictive policing since 2012, and crunching data on a wide range of activities — from “where to allocate your resources, where to put your cars, where to put your personnel, to helping investigators solve a crime. And even for some risk management, like tracking police themselves, for performance reviews and different accountability reasons.” But PredPol is just one of the police systems that community watchdogs are concerned about. The Rampart division of the LAPD uses another program to pinpoint individuals who are at risk of committing crimes in the future. This is known as person-based predictive policing. 
… The program is called Los Angeles Strategic Extraction and Restoration (LASER). At the moment it generates a list of approximately 20 “chronic offenders” that is updated monthly. LAPD documents show how LASER gives people specific scores, which increase with each police encounter. You get five points if you are a gang member. Five points if you are on parole or probation. Five points for arrests with a handgun. And one point for every “quality” police contact in the past two years, which includes what the LAPD calls “Field Interviews.” In Canada, field interviews are called “carding,” referring to the cards police use to record information about the people they have stopped — even when there are no grounds to think they’ve committed an offence. On the chronic offender bulletin there are names, addresses, scores ranging from six to 28, dates of birth and gang affiliations (Crazy Riders, Wanderers, 18th Street, and so on). The police try to track down the people on the bulletin and hand-deliver an “At Risk Behaviour” letter to each one — if they can find them. Officers are given instructions to contact the offenders on the list every month “to check their status” and to remind them to use the community services. They are also encouraged to door-knock on adjacent residences to “spark interest and gather info.” [CBC News]
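As reported, the LASER score is a simple additive point formula. A minimal sketch of that formula, assuming handgun arrests are weighted per arrest (the reporting does not say whether that criterion is flat or per arrest); the function and parameter names are hypothetical, not from LAPD documents:

```python
def laser_score(gang_member, on_parole_or_probation,
                handgun_arrests, quality_contacts_2yr):
    """Illustrative LASER-style risk score, per the criteria CBC reported:
    5 points for gang membership, 5 for being on parole or probation,
    5 per arrest involving a handgun (assumed per-arrest), and 1 per
    "quality" police contact (field interview) in the past two years."""
    score = 0
    if gang_member:
        score += 5
    if on_parole_or_probation:
        score += 5
    score += 5 * handgun_arrests
    score += quality_contacts_2yr
    return score

# A gang member on probation with one handgun arrest and six field
# interviews would score 5 + 5 + 5 + 6 = 21, inside the reported
# six-to-28 range seen on the chronic offender bulletin.
print(laser_score(True, True, 1, 6))  # → 21
```

The sketch makes concrete why advocates worry: every additional police contact, however minor, raises the score, which in turn drives more contact.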

CA – Q&A: Data Ownership Conundrum in the Data Driven World

Modern society is increasingly reliant upon data and driven by data gathering and data analytics. This leads to many questions that need to be unraveled relating to privacy, data rights and smart cities. One person well-placed to tackle these issues is Teresa Scassa [University of Ottawa law professor & fellow at the Waterloo-based Centre for International Governance Innovation]. In her latest research paper, Data Ownership, Scassa describes how in most jurisdictions the ownership of data is often based in copyright law or protected as confidential information. In Europe, database protection laws also play a role. However, there are limitations and major areas where laws fall short. For example, “Copyright protection requires a human author. Works that are created by automated processes in which human authorship is lacking cannot, therefore, be copyright protected. This has raised concerns that the output of artificial intelligence processes will not be capable of copyright protection,” warns Scassa. To discuss these important issues further, Digital Journal recently asked Teresa Scassa the following questions: 1) How important has data become for businesses?; 2) Are consumers too willing to provide personal data?; 3) How concerned should people be about what is done with personal data?; 4) How about data security issues. How secure is most personal data that is held by companies?; and 5) How are new technologies, like artificial intelligence, affecting data privacy? [Digital Journal] In a follow up interview, Teresa Scassa discusses data privacy laws, considering the recent changes affecting Europe and the possible implications for the U.S. [here]

Canada

CA – OPC Publishes Draft Guidelines for Mandatory Breach Reporting

On September 17, 2018, the Office of the Privacy Commissioner of Canada (OPC) published draft guidelines on mandatory breach reporting under the “Personal Information Protection and Electronic Documents Act” (PIPEDA). The guidelines are intended to assist organizations in meeting their breach reporting and record-keeping obligations under PIPEDA’s mandatory breach reporting regime, which comes into force on November 1, 2018. Organizations have until October 2, 2018 to provide feedback on these draft guidelines. In April 2018, the federal government published the Breach of Security Safeguards Regulations setting out the requirements of the new regime, and announced that the Regulations would come into force on November 1, 2018. Under the new regime, organizations will be required to notify the OPC and affected individuals of “a breach of security safeguards” involving personal information under the organization’s control where it is reasonable in the circumstances to believe that the breach creates a “real risk of significant harm” to affected individuals. Other organizations and government institutions must also be notified where such organization or institution may be able to mitigate or reduce the risk of harm to affected individuals. Organizations must also keep and maintain records of all breaches of security safeguards regardless of whether they meet the harm threshold for reporting. Failure to report a breach or to maintain records as required is an offence under PIPEDA, punishable by a fine of up to C$100,000. Unfortunately for stakeholders, much of the information in the draft guidelines is simply a reiteration of the legal requirements as set out in PIPEDA and the Regulations.
However, the draft guidelines provide additional guidance in certain areas, including: 1) Who Is Responsible for Reporting a Breach?; 2) When Does a Breach Create a Real Risk of Significant Harm?; 3) Form of Report; and 4) What Information Must Be Included in a Breach Record? [Business Class (Blakes) Additional coverage at: BankInfo Security]

CA – Upcoming Canadian Breach Notification Requirements Still in Flux

Canada’s national breach notification requirements come into force November 1st, meaning companies experiencing a data breach will soon have new reporting obligations. These requirements were created in 2015 by the Digital Privacy Act, which amended the Personal Information Protection and Electronic Documents Act (PIPEDA), Canada’s main privacy statute. In April 2018, in preparation for the national implementation of the new law, the federal government issued Regulations under PIPEDA that establish detailed requirements regarding the content and methodology of breach notifications to the Office of the Privacy Commissioner of Canada (OPC) and affected individuals. After those Regulations were issued, the OPC continued to receive requests for further clarity and guidance regarding the breach notification requirements under PIPEDA and the Regulations. In response, the OPC announced that it would issue further guidance (“What You Need To Know About Mandatory Reporting Of Breaches Of Security Safeguards”) on breach notification and reporting. On September 17th, the OPC invited public feedback on the draft guidance. The OPC will accept feedback until October 2, 2018. Comments can be sent to OPC-CPVPconsult2@priv.gc.ca and must be either in the body of the email or attached as a Word or PDF document. The OPC will publish the final guidance soon after the October 2nd deadline to ensure guidance is in place when the amendment becomes effective in November. … the OPC’s September 17th announcement indicates there is still uncertainty around what exactly will be required of companies that experience a breach. Companies that hold or control information on Canadian residents have one more opportunity to impact the final requirements or pose questions for clarity in the OPC’s guidance, and should submit their views before the October 2nd deadline.
[Eye on Privacy (SheppardMullin) and at: BankInfo Security]

CA – OPC Denounces Slow Progress on Fixing Outdated Privacy Laws

Federal Privacy Commissioner Daniel Therrien’s annual report to Parliament was tabled. [see here, Commissioner’s Message here & 103 pg PDF here] It outlines the work of the Office of the Privacy Commissioner of Canada (OPC) as it relates to both the Personal Information Protection and Electronic Documents Act (PIPEDA), Canada’s federal private sector privacy law, and the Privacy Act, which applies to the federal public sector. It covers important initiatives over the last year, including key investigations, work on reputation and privacy, new consent guidance as well as work on national security and Bill C-59 [here]. In his report, Therrien also reiterated calls for the government to increase his office’s resources. “My office needs a substantial budget increase to keep up our knowledge of the technological environment and improve our capacity to inform Canadians of their rights and guide organizations on how to comply with their obligations,” he says. “Additional resources are also needed to meet our obligations under the new breach reporting regulations that come into force in November.” [see here] Under the regulations, companies will be required to report all privacy breaches presenting a real risk of significant harm. While imperfect, Therrien calls the regulations “a step in the right direction.” As breach notification regulations come into force on the private sector side, serious concerns have also emerged about the federal government’s ability to prevent, detect and manage privacy breaches within its own institutions. An OPC review of privacy breach reporting by federal government institutions found thousands of breaches occur annually, and while some go unreported, others likely go entirely unnoticed at many institutions. Therrien [also] warns privacy concerns are reaching crisis levels and is calling on the federal government to take immediate action by giving his office new powers to more effectively hold organizations to account.
“Unfortunately, progress from government has been slow to non-existent … There’s no need to further debate whether to give my office new powers to make orders, issue fines and conduct inspections to ensure businesses respect the law. It’s not enough for the government to ask companies to do more to live up to their responsibilities. To increase trust in the digital economy, we must ensure Canadians can count on an independent regulator with the necessary tools to verify compliance with privacy law. If my Office had order making powers, our guidelines would be more than advice that companies can choose to ignore. They would become real standards that ensure real protection for Canadians.” Therrien says. [Office of the Privacy Commissioner of Canada Also see the OPC’s “Alert” Key lessons for public servants from the 2017-18 Annual Report Coverage: Canada’s privacy laws ‘sadly falling behind’ other countries: Privacy commissioner | Privacy commissioner slams ‘slow to non-existent’ federal action in light of major data breaches | Watchdog says Ottawa moving too slowly on privacy threats | Watchdog slams government’s ‘slow to non-existent’ action to protect Canadians’ privacy | Time of ‘self-regulation’ is over, privacy czar says in push for stronger laws]

CA – ‘Right to Be Forgotten’ Could Trigger Battle Over Free Speech in Canada

A push by some for a “right to be forgotten” for Canadians is setting up what could be a landmark battle over the conflict between privacy and freedom of expression on the internet. In his annual report issued September 27 [PR, Report, Commissioner’s Message & 103 pg PDF], Privacy Commissioner Daniel Therrien served notice he intends to seek clarity from the Federal Court on whether existing laws already give Canadians the right to demand that search engines remove links to material that is outdated, incomplete or incorrect, a process called “de-indexing.” Following a round of consultations he launched in 2016, Therrien concluded in a draft report earlier this year that Canadians do have that right under PIPEDA. Google disagrees — and warns that a fundamental charter right is being threatened. [Section 2(b) — expression & press freedom; wiki here, Charter here, guidance here] “The right to be forgotten impinges on our ability to deliver on our mission, which is to provide relevant search results to our users,” said Peter Fleischer [here], Google’s global privacy counsel. “What’s more, it limits our users’ ability to discover lawful and legitimate information.” University of Ottawa law professor Michael Geist [also blog posts here & here], who specializes in internet and e-commerce law, said: “Given the complexity, given the freedom of expression issues that arise out of this, I think the appropriate place is within Parliament to explicitly go through the policy process and decide what’s right for Canada on this.” Internet lawyer Allen Mendelsohn [blog posts here & here] worries about the “slippery slope” implied in a right to be forgotten. With no easy answers on how to move forward, he said it’s Parliament’s duty to debate the concept and decide on appropriate standards. “Parliament represents the people, and if the will of the people think this is a good thing to do, then there’s no good reason why they shouldn’t go ahead and do it,” he said.
Google argues that freedom of expression is a fundamental human right. While the European court upheld the right to be forgotten, Chile, Colombia and the U.S. have all rejected it. According to Peter Fleischer, “As the privacy commissioner considers translating the European model to Canada, it will also have to confront the challenges of how to balance one person’s right to privacy with another’s right to know, and whether the European right to be forgotten would be consistent with the rights outlined in Canada’s Charter of Rights and Freedoms, which assures Canadians ‘freedom of thought, belief, opinion and expression, including freedom of the press and other media of communication.’” [CBC News | Privacy watchdog to seek ruling on ‘right to be forgotten’]

CA – Liberals Won’t Put Political Parties Under Privacy Laws

The Liberal government will not accept a recommendation — endorsed by MPs from the three major parties on the Access to Information, Privacy and Ethics Committee [see here & report here, also 56 pg PDF] — to develop a set of privacy rules for political parties or bring them under existing laws. Instead, under the Liberals’ electoral rule changes, parties will simply have to post a privacy policy online. Bill C-76 [here] does not allow for any independent oversight, however, to ensure parties are actually following their policies. Because they’re specifically exempted from federal privacy laws, parties are also not required to report if they’ve been hacked or suffered a data breach involving sensitive information about Canadians. The decision means federal political parties can continue to collect, store and use the personal information of Canadian citizens without limitations, laws or independent oversight. Federal Privacy Commissioner Daniel Therrien — along with his counterparts at the provincial and territorial levels — issued a joint statement calling on all levels of government to put some form of restrictions on parties’ data operations — an increasingly crucial aspect of electioneering in Canadian politics [see PR here & Joint Resolution here]. In exempting political parties from privacy laws, Canada is largely an outlier. The United Kingdom, New Zealand, and much of the European Union subject parties to privacy rules. [Toronto Star coverage at: Toronto Star Editorial | Political parties excused from privacy laws: Why Albertans’ personal information is at risk]

CA – Buyers’ Privacy Top Priority, Says Ontario’s Online Pot Retailer

Ontario’s government-run cannabis retailer is assuring its future customers that their privacy is the top priority. A recent report ranked privacy and data security among the top demands of Canadian marijuana consumers, with one in five listing it as the most important feature [see Deloitte’s 2018 cannabis report, PR]. Critics have raised concerns about how Ontario Cannabis Store (OCS) [here] customers’ data will be used and stored after the online delivery service launches on Oct. 17. There are worries the data may be stored in the United States, where American border agents could access it and ban travellers from entering the U.S. for using a drug that’s illegal there under federal law. The OCS this week announced it’s taking steps to safeguard customers’ privacy and keep their buying history confidential. Ensuring data is stored within Canada and other privacy considerations were key factors in deciding to partner with Shopify, the Ottawa-based e-commerce platform. All information collected will be deleted after it’s held for a minimum time, and no information will be sold to third parties, the company says. While dispensaries across the country are getting ready to open their doors on Oct. 17 — when Canada becomes the second country in the world to legalize recreational marijuana — Ontario residents will be able to legally buy pot only through a government-run delivery service. However, new Ontario Premier Doug Ford has rejected the government monopoly on cannabis sales — a model set up under the previous Liberal government — and storefront pot sales are to begin on April 1. [The London Free Press]

CA – TREB CEO Concerned About Homeowner Privacy, Security

The Toronto Real Estate Board is “pressing ahead” with the Competition Bureau’s demand to make home sales data available on realtors’ password-protected websites, but that doesn’t mean the board’s concerns around privacy are gone. In his first interview since the Supreme Court of Canada refused in August to hear TREB’s seven-year fight [read Competition Bureau PR here & TREB PR here] to keep the numbers under wraps – effectively forcing them to be made public – the board’s chief executive officer John DiMichele told The Canadian Press, “the element of privacy in our opinion hasn’t been settled completely yet.” DiMichele is particularly concerned because he claims to have seen evidence of brokers’ remarks about homeowners being posted online, information that is not included in the home sales data feed TREB had to make available to realtors. DiMichele wouldn’t reveal how he discovered such violations [and he did not] discuss in detail what kind of action will be taken against anyone who is caught posting unauthorized information or home sales data without password protections – conditions mandated in a Competition Tribunal ruling [5 pg PDF here] that came into effect recently, after the Competition Bureau argued that TREB’s refusal to release the data was anti-competitive and stifled innovation. In early September, the board sent cease-and-desist letters to real estate companies warning it will revoke data access and TREB memberships or bring legal action against members it believes are violating its user agreement by posting sales numbers online “in an open and unrestricted fashion.” [The Globe & Mail Additional coverage at: The Toronto Star]

Consumer

WW – Yes Facebook is Using Your 2FA Phone Number to Target You With Ads

Facebook has confirmed it does in fact use phone numbers that users provided it for security purposes to also target them with ads. Specifically, phone numbers handed over for two-factor authentication (2FA), a security technique that adds a second layer of authentication to help keep accounts secure. Facebook’s confession follows a story Gizmodo ran on research carried out by academics at two U.S. universities [Northeastern University and Princeton University], whose study [see Investigating sources of PII used in Facebook’s targeted advertising – 18 pg PDF here] demonstrated that the company uses pieces of personal information that individuals did not explicitly provide it to, nonetheless, target them with ads. Some months ago Facebook did say that the spamming of users with Facebook notifications sent to the number they provided for 2FA was a bug. “The last thing we want is for people to avoid helpful security features because they fear they will receive unrelated notifications,” Facebook then-CSO Alex Stamos wrote in a blog post at the time, apparently not thinking to mention the rather pertinent side-detail that the company is nonetheless happy to repurpose the same security feature for ad targeting. [TechCrunch coverage at: DeepLinks Blog (EFF), The Mercury News and Tom’s Hardware]

Facts & Stats

CA – Federal Workers Cited 3,075 Times for Lapses in Document Security

Office workers at Public Services and Procurement Canada were cited 3,075 times last year for failing to lock up documents, USB keys and other storage devices containing sensitive information, says a new security report. And six of those employees were found to be chronic offenders during a “security sweep” at the department in 2017-2018, with each of them leaving confidential material unsecured at least six times over the 12-month period, according to a June 2018 briefing note obtained by CBC News under the Access to Information Act. [CBC News]

WW – Cyber Crime’s Toll: $1.1 Million in Losses and 1,861 Victims per Minute

Every minute more than $1.1 million is lost to cyber crime and 1,861 people fall victim to such attacks, according to a new report [Evil Internet Minute 2018] from threat management company RiskIQ [see PR, Blog Post & Infographic]. Despite the best efforts of organizations to guard against external cyber threats, spending up to $171,000 every 60 seconds, attackers continue to proliferate and launch successful campaigns online, the study said. Attacker methods range from malware to phishing to supply chain attacks aimed at third parties. Their motives include monetary gain, large-scale reputational damage, politics and espionage. One of the biggest security threats is ransomware. The report said 1.5 organizations fall victim to ransomware attacks every minute, with an average cost to businesses of $15,221. [Information Management]

FOI

CA – N.S. Premier Calls Election Promise to Increase OIPC Powers “a Mistake”

In 2013, Stephen McNeil said that if he became premier, he would “expand the powers and mandate of the Office of the Information and Privacy Commissioner, particularly through granting her order-making power.” At the time he responded to a report by the Centre of Law and Democracy [12 pg PDF] that recommended a complete overhaul of the province’s freedom-of-information policy, writing: “If elected Premier, I will expand the powers and mandate of the Review Officer, particularly through granting her order-making power.” Nearly five years later and with no follow-through on that commitment, he says the pledge was a “mistake.” He said that he thinks the office is functioning “properly” the way it is and that it has all the power it needs. But experts say that McNeil’s failure to institute meaningful reforms in government transparency five years after taking office indicates a larger failure to take government transparency seriously. Catherine Tully, the province’s current privacy commissioner, has issued her own calls to update the legislation, including giving her order-making power. She has said that legislation written in 1993 is outdated for the current digital world. [Global News]

US – Privacy Group Sues Archives for Kavanaugh Surveillance Records

The Electronic Privacy Information Center [EPIC] has filed a federal Freedom of Information Act lawsuit seeking records related to U.S. Supreme Court nominee Brett Kavanaugh’s involvement in the George W. Bush administration’s government surveillance programs between 2001 and 2006, during enactment of the Patriot Act and while the administration was conducting warrantless surveillance for counter-terrorism purposes. [see announcement here & 21 pg PDF claim here] The group alleged that Kavanaugh said in 2006 Senate testimony on his nomination to the U.S. Court of Appeals for the District of Columbia Circuit that he didn’t know anything about the warrantless wiretapping program, which was carried out in secret until 2005. His White House email communications and records related to the program have not been made available to the public, the group alleged. [Bloomberg BNA]

Genetics

WW – Please Don’t Give Your Genetic Data to AncestryDNA as Part of Their Spotify Playlist Partnership

Ancestry, the world’s largest for-profit genealogy company, has announced a new partnership with Spotify to create playlists based on your DNA. The partnership combines Spotify’s personalized recommendations with Ancestry’s patented DNA home kit data to give users recommendations based on both their Spotify habits and their ancestral place of origin. A ThinkProgress investigation last year found that buried in their terms of service, Ancestry claims ownership of a “perpetual, royalty-free, worldwide license” that may be used against “you or a genetic relative” as the company and its researchers see fit. Upon agreeing to the company’s terms of service, you and any genetic relatives appearing in the data surrender partial legal rights to the DNA, including any damages that Ancestry may cause unintentionally or purposefully. At the same time, maybe their mission isn’t all that different from Spotify’s, who’ve spent the last few years preaching the Big Data gospel in their aim to deliver the most highly-personalized experience to users through data collection. However you feel about data privacy, the Ancestry partnership feels like another big move for Spotify, who have continued to partner with auto manufacturers, telecom behemoths, video providers and more in recent months. [SPIN coverage at: Jezebel, Quartzy, Complex and Campaign]

Health / Medical

US – Congress Urged To Align 42 CFR Part 2 with HIPAA Privacy Rule

The Partnership to Amend 42 CFR Part 2 is urging Congress to include the Overdose Prevention and Patient Safety Act (HR 6082), which would align 42 CFR Part 2 with the HIPAA Privacy Rule, in compromise opioid legislation that the House and Senate are considering. HR 6082 would allow the sharing of information about a substance abuse patient without the patient’s consent. The House passed its comprehensive opioid crisis legislation (HR 6) [here & 9 pg PDF overview here] in June, while the Senate just passed its legislation (S 2680). The two chambers are working on compromise legislation that they hope to pass before the mid-term elections. Currently, 42 CFR Part 2 prevents providers from sharing any information on a patient’s substance abuse history unless the patient gives explicit consent. The Partnership to Amend 42 CFR Part 2 wants current law to be amended because, it argues, the stricter confidentiality requirements have a negative effect on medical treatment of individuals undergoing treatment for addiction. It emphasized its case in a Sept. 18 letter to the Senate and House majority and minority leaders. Not everyone in healthcare favors changing 42 CFR Part 2. The American Medical Association (AMA) has come out against the effort to change current law [arguing in a letter sent to Congress – coverage here] that amending 42 CFR Part 2 would discourage addicted individuals from seeking treatment out of concern that their addiction treatment information will be shared without their permission. [HealthIT Security]

Horror Stories

CA – Proposed Class Action Lawsuit Launched After Alleged NCIX Data Breach

Kipling Warner, a Vancouver software engineer, has launched a proposed class action lawsuit in the wake of an alleged data breach involving personal information belonging to former customers of bankrupt computer retailer NCIX. [The issue is being investigated by the RCMP and the BC OIPC – see here]. The notice of civil claim filed in B.C. Supreme Court [here] says he gave the company his name and address along with his debit and credit card details in the course of purchasing computer products. He is seeking to certify a lawsuit against NCIX and the company tasked with auctioning off the computer firm’s old equipment. Warner claims NCIX failed to properly encrypt the information of at least 258,000 people, and he claims the auctioneer failed to take “appropriate steps to protect the private information on its premises.” Warner is suing for losses including damage to credit reputation, mental distress, “wasted time, frustration and anxiety” and time lost “engaging in precautionary communication” with banks, credit agencies and credit card companies. His lawyer, David Klein [here], told CBC that customers dealing with a technology company would expect anyone who comes into contact with their information to take steps to ensure confidentiality. The provincial privacy act says organizations doing business in British Columbia have a duty to protect the personal information entrusted to them. The federal regulation says personal information that is “no longer required to fulfil the identified purposes should be destroyed, erased or made anonymous.” The proposed class action lawsuit says millions of customers could be affected. [CBC News]

US – Uber Agrees to $148M Settlement With States Over Data Breach

Uber will pay $148 million and tighten data security after the ride-hailing company failed for a year to notify drivers that hackers had stolen their personal information, according to a settlement, announced Wednesday, reached with all 50 states and the District of Columbia after a massive data breach in 2016 [here]. [see California AG PR here, Illinois AG PR here, Alaska AG PR here, New York AG PR here & New Mexico AG PR here] Instead of reporting [the breach], Uber hid evidence of the theft and paid ransom to ensure the data wouldn’t be misused. “This is one of the most egregious cases we’ve ever seen in terms of notification; a yearlong delay is just inexcusable,” Illinois Attorney General Lisa Madigan [wiki here] told The Associated Press. “And we’re not going to put up with companies, Uber or any other company, completely ignoring our laws that require notification of data breaches.” Uber, whose GPS-tracked drivers pick up riders who summon them from cellphone apps, learned in November 2016 that hackers had accessed personal data, including driver’s license information, for roughly 600,000 Uber drivers in the U.S. The company acknowledged the breach in November 2017, saying it paid $100,000 in ransom for the stolen information to be destroyed. The hack also took the names, email addresses and cellphone numbers of 57 million riders around the world. The settlement requires Uber to comply with state consumer protection laws safeguarding personal information and to immediately notify authorities in case of a breach; to establish methods to protect user data stored on third-party platforms and create strong password-protection policies. The company also will hire an outside firm to conduct an assessment of Uber’s data security and implement its recommendations. The settlement payout will be divided among the states based on the number of drivers each has. [The Washington Post coverage at: TechCrunch, PYMNTS, The Wall Street Journal and engadget]

US – Wendy’s Faces Lawsuit for Unlawfully Collecting Employee Fingerprints

A class-action lawsuit has been filed in Illinois against fast food restaurant chain Wendy’s accusing the company of breaking state laws in regards to the way it stores and handles employee fingerprints. The lawsuit was filed on September 11, in a Cook County court [here], according to a copy of the complaint obtained by ZDNet. [The case is: Martinique Owens and Amelia Garcia v. Wendy’s International LLC, et al., Case No. 2018­-ch-­11423, in the Circuit Court of Cook County — complaint here.] The complaint centers on Wendy’s practice of using biometric clocks that scan employees’ fingerprints when they arrive at work, when they leave, and when they use the Point-Of-Sale and cash register systems. Plaintiffs, represented by former Wendy’s employees Martinique Owens and Amelia Garcia, claim that Wendy’s breaks state law — the Illinois Biometric Information Privacy Act (BIPA) [here] — because the company does not make employees aware of how the company handles their data. Wendy’s does not inform employees in writing of the specific purpose and length of time for which their fingerprints were being collected, stored, and used, as required by the BIPA, nor does it obtain a written release from employees with explicit consent to obtain and handle the fingerprints in the first place. Nor does it provide a publicly available retention schedule and guidelines for permanently destroying employees’ fingerprints after they leave the company, plaintiffs said. The class-action also names Discovery NCR Corporation [here], which is the software provider that supplies Wendy’s with the biometric clocks and POS and cash register access systems used in restaurants. Plaintiffs said they believe NCR may hold fingerprint information on other Wendy’s employees. [ZDNet coverage at: Top Class Actions, The Daily Dot, Human Capital (HRD), Gizmodo and Biometric Update]

WW – Facebook Forces Mass Logout After Breach

Facebook logged 90 million users out of their accounts after the company discovered that hackers had been exploiting a flaw in Facebook code that allowed them to steal Facebook access tokens and take over other people’s accounts. The stolen tokens could also be used to access apps and websites linked to the Facebook accounts. The hackers exploited a trio of flaws that affected the “View As” feature, which lets users see how their profiles appear to other people. Facebook has fixed the security issue; it has also reset the access tokens for 90 million accounts. Facebook became aware of the issue on September 16, when it noticed an unusual spike in people accessing Facebook. [newsroom.fb.com: Security Update | Wired.com: The Facebook Security Meltdown Exposes Way More Sites Than Facebook | Wired: Everything We Know About Facebook’s Massive Security Breach | eWeek: Facebook Data Breach Extended to Third-Party Applications | – ZDnet: Facebook discloses network breach affecting 50 million user accounts |  krebsonsecurity: Facebook Security Bug Affects 90M Users | The Register: Facebook: Up to 90 million addicts’ accounts slurped by hackers, no thanks to crappy code]

WW – Facebook Says Big Breach Exposed 50 Million Accounts to Full Takeover

Facebook Inc said [notice & details here] that hackers stole digital login codes allowing them to take over nearly 50 million user accounts in its worst security breach ever given the unprecedented level of potential access, adding to what has been a difficult year for the company’s reputation. It has yet to determine whether the attacker misused any accounts or stole private information. It also has not identified the attacker’s location or whether specific victims were targeted. Its initial review suggests the attack was broad in nature. Chief Executive Mark Zuckerberg described the incident as “really serious” in a conference call with reporters [see transcript]. His account was affected along with that of Chief Operating Officer Sheryl Sandberg, a spokeswoman said. The vulnerability had existed since July 2017, but the company first identified it on Tuesday after spotting a “fairly large” increase in use of its “view as” [here] privacy feature on Sept. 16, executives said. “View as” allows users to verify their privacy settings by seeing what their own profile looks like to someone else. The flaw inadvertently gave the devices of “view as” users the wrong digital code, which, like a browser cookie, keeps users signed in to a service across multiple visits. That code could allow the person using “view as” to post and browse from someone else’s Facebook account, potentially exposing private messages, photos and posts. The attacker also could have gained full access to victims’ accounts on any third-party app or website where they had logged in with Facebook credentials. Facebook fixed the issue. It also notified the U.S. Federal Bureau of Investigation, Department of Homeland Security, Congressional aides and the Data Protection Commission in Ireland, where the company has European headquarters. 
Facebook reset the digital keys of the 50 million affected accounts, and as a precaution temporarily disabled “view as” and reset those keys for another 40 million that have been looked up through “view as” over the last year. About 90 million people will have to log back into Facebook or any of their apps that use a Facebook login, the company said. [Reuters See also: Facebook Security Bug Affects 90M Users | Facebook’s spam filter blocked the most popular articles about its 50m user breach | Here’s what to do if you were affected by the Facebook hack | Facebook Says Three Different Bugs Are Responsible For The Massive Account Hacks | Facebook warns that recent hack could have exposed other apps, including Instagram, Tinder, and Spotify | Facebook Faces Class Action Over Security Breach That Affected 50 Million Users | Facebook Could Face Up to $1.63 Billion Fine for Latest Hack Under the GDPR | Facebook could be fined up to $1.63 billion for a massive breach which may have violated EU privacy laws | Until data is misused, Facebook’s breach will be forgotten]
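The mass logout follows directly from how token revocation works: invalidating an account's access tokens leaves every existing session unauthenticated, so each device must log in again. A toy sketch (hypothetical illustration only, not Facebook's actual system) uses a per-account "generation" counter; bumping the counter stales every outstanding token for that account in one step:

```python
import secrets

class TokenStore:
    """Toy model of access-token revocation. Resetting an account bumps
    its generation counter, so every token issued earlier stops validating
    and the user is forced to re-authenticate (i.e., log back in)."""

    def __init__(self):
        self._tokens = {}      # token -> (account_id, generation at issue time)
        self._generation = {}  # account_id -> current generation counter

    def issue(self, account_id):
        # Issue a random token bound to the account's current generation.
        token = secrets.token_hex(16)
        self._tokens[token] = (account_id, self._generation.get(account_id, 0))
        return token

    def is_valid(self, token):
        # A token is valid only if its generation matches the current one.
        entry = self._tokens.get(token)
        if entry is None:
            return False
        account_id, gen = entry
        return gen == self._generation.get(account_id, 0)

    def reset_account(self, account_id):
        # Bump the generation: all previously issued tokens become stale.
        self._generation[account_id] = self._generation.get(account_id, 0) + 1
```

On this model, "resetting the digital keys of the 50 million affected accounts" corresponds to calling `reset_account` for each one, after which every linked app session fails validation until the user signs in again.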

Internet / WWW

EU – Report Warns of Smart Home Tech Impact on Children’s Privacy

Dr. Veronica Barassi of Goldsmiths, University of London, leads the Child Data Citizen research project, and submitted a report on “Home Life Data and Children’s Privacy“ to the Information Commissioner’s Office (ICO), arguing that data collected from children by home automation devices is both personal data and “home life data,” which is made up of family, household, biometric and highly contextual data. She calls for the ICO to launch a review of the impact of home life data on children’s privacy, and to include the concept in future considerations. [Biometric Update coverage at: TechCrunch]

Law Enforcement

CA – RCMP’s Ability to Police Digital Realm ‘Rapidly Declining’

Privacy watchdogs have warned against any new encryption legislation. A note tucked into the briefing binder prepared for RCMP Commissioner Brenda Lucki when she took over the top job earlier this year, obtained by CBC News, may launch a renewed battle between the national police service and privacy advocates. “Increasingly, criminality is conducted on the internet and investigations are international in nature, yet investigative tools and RCMP capacity have not kept pace. Growing expectations of policing responsibilities and accountability, as well as complexities of the criminal justice system, continue to overwhelm the administrative demands within policing” [says the memo]. Encryption of online data has been a persistent thorn in the RCMP’s side. “Approximately 70% of all communications intercepted by CSIS and the RCMP are now encrypted. 80 organized crime groups were identified as using encryption in 2016 alone,” according to the 274-page [briefing binder]. Lucki’s predecessor lobbied the government for new powers to bypass digital roadblocks, including tools to get around encryption and warrantless access to internet subscriber information. Some critics have noted that non-criminals — journalists, protesters and academics, among others — also use encryption tools online and have warned any new encryption legislation could undermine the security of financial transactions and daily online communication. Ann Cavoukian …called the RCMP’s push for more online policing power “appalling.” … “I guess we should remind them that we still live in a free and democratic society where people have privacy rights, which means that they should be in control of their personal information … If you’re a law abiding citizen, you get to decide how your information is used and to whom it’s disclosed. The police have no right to access your personal information online, unless of course they have a warrant” she said. [CBC News]

Online Privacy

US – Facebook Scolds Police for Using Fake Accounts to Snoop on Citizens

In a September 19 letter, addressed to Memphis Police Department Director Michael Rallings, Facebook’s Andrea Kirkpatrick, director and associate general counsel for security, scolded the police for creating multiple fake Facebook accounts and impersonating legitimate Facebook users as part of its investigations into “alleged criminal conduct unrelated to Facebook.” Facebook’s letter was sent following a civil rights lawsuit filed by the American Civil Liberties Union (ACLU) of Tennessee that accused the MPD of illegally monitoring activists to stifle their free speech and protests. The lawsuit claimed that Memphis police violated a 1978 consent decree that prohibits infiltration of citizen groups to gather intelligence about their activities. After two years of litigation, the city of Memphis had entered into a consent decree prohibiting the government from “gathering, indexing, filing, maintenance, storage or dissemination of information, or any other investigative activity, relating to any person’s beliefs, opinions, associations or other exercise of First Amendment rights.” Before the trial even began over the ACLU’s lawsuit last month, US District Judge Jon McCalla issued a 35-page order agreeing with the plaintiffs, but he also ruled that police can use social media to look for specific threats: a ruling that, one imagines, would condone the use of fake profiles during undercover police work… but not the illegal surveillance of legal, Constitutionally protected activism. The ACLU lawsuit uncovered evidence that Memphis police used a fake “Bob Smith” account to befriend and gather intelligence on Black Lives Matter activists. According to the Electronic Frontier Foundation (EFF), Facebook deactivated “Bob Smith” after the organization gave it a heads-up. Then, Facebook went on to identify and deactivate six other fake accounts managed by Memphis police. [Naked Security (Sophos)]

WW – Google Promises Chrome Changes After Privacy Complaints

Google, on the defensive from concerns raised about how Chrome tracks its users, has promised changes to its web browser. Complaints in recent days involve how Google stores data about browsing activity in files called cookies and how it syncs personal data across different devices. Google representatives said there’s nothing to be worried about but that they’ll be changing Chrome nevertheless. In a recent blog post [Zach Koch, Chrome Product Manager said – here] that it will add new options and explanations for its interface and reverse one Chrome cookie-hoarding policy that undermined people’s attempts to clear those cookies. [CNET News Coverage of complaints at: Bloomberg (video), CNBC, WIRED, TechCrunch, Forbes and Popular Mechanics]

WW – Privacy and Anonymity in the Modern World — CyberSpeak Podcast

On this episode of the CyberSpeak with InfoSec Institute podcast [YouTube here], Lance Cottrell, chief scientist at Ntrepid, talks about the evolution of privacy and anonymity on the Internet, the impact of new regulations and laws, and a variety of other privacy-related topics. In the podcast, Cottrell and host Chris Sienko discuss:

  • What about the early Internet drove you to focus on online anonymity and security? (1:45)
  • Do the early privacy tools and concepts hold up in today’s environment? (3:50)
  • When did it become apparent that fraudsters and phishers were taking over the Internet? (5:00)
  • What are some of the most effective social engineering attacks being used? (8:10)
  • Have you ever been scammed or phished? (11:35)
  • Why is online anonymity important? (14:50)
  • What are some examples of privacy and security issues while traveling? (20:50)
  • How will GDPR and California’s new privacy law affect anonymity and privacy? (23:25)
  • What would be your dream privacy regulation or law? (24:55)
  • What are your thoughts on privacy certifications? (28:50)
  • What’s the future of online privacy and anonymity? (29:40)

[Security Boulevard]

Privacy (US)

US – In Senate Hearing, Tech Giants Push Lawmakers for Federal Privacy Rules

A recent hearing at the Senate Commerce Committee [here] with Apple, Amazon, Google and Twitter, alongside AT&T and Charter, marked the latest in a string of hearings in the past few months. This time, privacy was at the top of the agenda. The problem, lawmakers say, is that consumers have little of it. Lawmakers at the hearing noted that the U.S. was lagging behind Europe’s new GDPR privacy rules and California’s recently passed privacy law, which goes into effect in 2020, and they were edging toward introducing their own federal privacy law. Here are the key takeaways: 1) Tech giants want new federal legislation, if not just to upend California’s privacy law; 2) Google made “mistakes” on privacy, but evades China search questioning; and 3) Startups might struggle under GDPR-ported rules, companies claim …Committee chairman, Sen. John Thune (R-SD) said [here] that the committee won’t “rush through” legislation, and will ask privacy advocates for their input in a coming hearing. [Watch the full hearing here and read witness statements: Len Cali of ATT – 6 pg PDF here; Andrew DeVore of Amazon – 5 pg PDF here; Keith Enright of Google – 6 pg PDF here & 3pg PDF here; Damien Kieran of Twitter 5 pg PDF here; Guy (Bud) Tribble of Apple 2 pg PDF here; and Rachel Welch of Charter Communications 5 pg PDF here. TechCrunch Coverage: During Senate Hearing, Tech Companies Push for Lax Federal Privacy Rules | Tech Execs Offer Senate Help Writing a Toothless National Privacy Law | US privacy law is on the horizon. Here’s how tech companies want to shape it | Here’s why tech companies are in favor of *federal* regulation | Google confirms Dragonfly project in Senate hearing, dodges questions on China plans | Google confirms secret Dragonfly project, but won’t say what it is]

US – EFF Opposes Industry Efforts to Have Congress Roll Back State Privacy Protections

The Senate Commerce Committee is holding a hearing on consumer privacy [here & PR here], but consumer privacy groups like EFF were not invited. Instead, only voices from big tech and Internet access corporations will have a seat at the table. In the lead-up to this hearing, two industry groups (the Chamber of Commerce and the Internet Association) have suggested that Congress wipe the slate clean of state privacy laws in exchange for weaker federal protections. EFF opposes such preemption, and has submitted a letter to the Senate Commerce Committee to detail the dangers it poses to user privacy. Current state laws across the country have already created strong protections for user privacy. Our letter identifies three particularly strong examples from California’s Consumer Privacy Act, Illinois’ Biometric Privacy Act, and Vermont’s Data Broker Act. If Congress enacts weaker federal data privacy legislation that preempts such stronger state laws, the result will be a massive step backward for user privacy. … The companies represented at Wednesday’s hearing rely on the ability to monetize information about everything we do, online and elsewhere. They are not likely to ask for laws that restrain their business plans. [DeepLinks Blog (Electronic Frontier Foundation)]

US – NTIA Seeks Comment on New Approach to Consumer Data Privacy

The U.S. Department of Commerce’s National Telecommunications and Information Administration (NTIA) issued a Request for Comments on a proposed approach to consumer data privacy designed to provide high levels of protection for individuals, while giving organizations legal clarity and the flexibility to innovate [see PDF]. The Request for Comments is part of a transparent process to modernize U.S. data privacy policy for the 21st century. In parallel efforts, the Commerce Department’s National Institute of Standards and Technology is developing a voluntary privacy framework [here & here] to help organizations manage risk; and the International Trade Administration is working to increase global regulatory harmony. The proposed approach focuses on the desired outcomes of organizational practices, rather than dictating what those practices should be. With the goal of building better privacy protections, NTIA is seeking comment on the following outcomes: 1) Organizations should be transparent about how they collect, use, share, and store users’ personal information; 2) Users should be able to exercise control over the personal information they provide to organizations; 3) The collection, use, storage and sharing of personal data should be reasonably minimized in a manner proportional to the scope of privacy risks; 4) Organizations should employ security safeguards to protect the data that they collect, store, use, or share; 5) Users should be able to reasonably access and correct personal data they have provided; 6) Organizations should take steps to manage the risk of disclosure or harmful uses of personal data; and 7) Organizations should be accountable for the use of personal data that has been collected, maintained or used by their systems. Comments are due by October 26, 2018 [Newsroom (National Telecommunications and Information Administration) coverage at: Multichannel News, Reuters, CBS News and engadget]

US – NTIA Seeks Comment on New, Outcome-Based Privacy Approach

The U.S. Department of Commerce’s National Telecommunications and Information Administration (NTIA) [here] issued a Request for Comments [4 pg PDF Federal Register post — also here & PR here] on a new consumer privacy approach that is designed to focus on outcomes instead of prescriptive mandates. The RFC presents an important opportunity for organizations to provide legal and policy input to the administration, and comments are due October 26. The RFC proposes seven desired outcomes that should underpin privacy protections: 1) Transparency, 2) control, 3) reasonable minimization (of data collection, storage length, use, and sharing), 4) security, 5) access and correction, 6) risk management, and 7) accountability. According to the RFC, the outcome-based approach will provide greater flexibility, consumer protection, and legal clarity. Additionally, the RFC describes eight overarching goals for federal action on privacy: 1) Regulatory harmonization; 2) Legal clarity while maintaining the flexibility to innovate; 3) Comprehensive application; 4) Risk and outcome-based approach; 5) Interoperability; 6) Incentivize privacy research; 7) FTC enforcement; and 8) Scalability. The NTIA is seeking comments on the listed outcomes and goals, as well as other issues such as if the FTC needs additional resources to achieve the goals. [Chronicle of Data Protection (Hogan Lovells) coverage at: Multichannel News, Reuters, CBS News and engadget]

US – SEC Brings First Enforcement Action for Violation of ID Theft Rule

On September 26, 2018, the SEC brought its first ever enforcement action [PR] for violations of Regulation S-ID (the “Identity Theft Red Flags Rule”), 17 C.F.R. § 248.201 [here & here also guidance here], in addition to violations of Regulation S-P, 17 C.F.R. § 248.30(a) (the “Safeguards Rule”) [see here & here]. Regulation S-ID and Regulation S-P apply to SEC-registered broker-dealers, investment companies, and investment advisers, and require those entities to maintain written policies and procedures to detect, prevent and mitigate identity theft, and to safeguard customer records and information, respectively. The SEC’s action against Voya Financial Advisors (“Voya”) cements the SEC’s focus on investment adviser and broker-dealer cybersecurity compliance, both in terms of its examination program—which referred the matter to Enforcement—as well as the Division of Enforcement’s Cyber Unit, which investigated and resolved the matter with Voya. The SEC’s enforcement action against Voya arose out of an April 2016 “vishing” intrusion (voice phishing) that allowed one or more persons impersonating Voya representatives to gain access to personal identifying information of approximately 5,600 of Voya’s customers. The SEC’s action against Voya was resolved through a settled administrative order, in which Voya neither admitted nor denied the SEC’s findings, but agreed to engage and follow the recommendations of an independent compliance consultant for two years, certify its compliance with the consultant’s recommendations, and pay a $1 million fine. Voya was also enjoined from future violations of Regulation S-P or Regulation S-ID and was censured by the SEC. The SEC noted that, in reaching the settlement, it considered the remedial actions that Voya promptly undertook following the attack. [Privacy & Data Security (Alston & Bird) and at: Reuters, Infosecurity Magazine, Business Record, InvestmentNews and Law 360]

US – Google Releases Framework to Guide Data Privacy Legislation

Google released a set of privacy principles [3 pg PDF & blog post here] to guide Congress as it prepares to write legislation aimed at governing how websites collect and monetize user data. The framework largely consists of privacy principles that Google already abides by or could easily bring itself into compliance with. It calls for allowing users to easily access and control the data that’s collected about them and requiring companies to be transparent about their data practices. The set of proposals is designed to be a baseline for federal rules regarding data collection. Google appears to be the first internet giant to release such a framework, but numerous trade associations have published their own in recent weeks. The industry has gotten on board with the idea of a national privacy law in the weeks since California passed its own strict regulations aimed at cracking down on data collection and increasing user control. Internet companies have universally opposed the measure and have begun pushing Congress to establish a national law that would block states from implementing their own. [The Hill coverage at: AdWeek Coverage at: Charter: Parity Is Key to Online Privacy Protection | In Reversal, IAB Says Congress Should Consider Privacy Legislation]

Surveillance

US – Revealed: DoJ Secret Rules for Targeting Journalists With FISA Court Orders

Revealed for the first time are the Justice Department’s rules for targeting journalists with secret FISA court orders. The documents [PDF] were obtained as part of a Freedom of Information Act lawsuit brought by Freedom of the Press Foundation and Knight First Amendment Institute at Columbia University. While civil liberties advocates have long suspected secret FISA court orders may be used (and abused) to conduct surveillance on journalists, the government—to our knowledge—has never acknowledged they have ever even contemplated doing so before the release of these documents today. [These DOJ] FISA court rules are entirely separate from—and much less stringent—than the rules for obtaining subpoenas, court orders, and warrants against journalists as laid out in the Justice Department’s “media guidelines,” which were strengthened in 2015 after scandals involving surveillance of journalists during the Obama era. The DOJ only must follow its regular FISA court procedures (which can be less strict than getting a warrant in a criminal case) and get additional approval from the Attorney General or Assistant Attorney General. FISA court orders are also inherently secret, and targets are almost never informed that they exist. The documents raise several concerning questions: 1) How many times have FISA court orders been used to target journalists?; 2) Why did the Justice Department keep these rules secret — even their very existence — when the Justice Department updated its “media guidelines” in 2015 with great fanfare? and 3) If these rules can now be released to the public, why are the FBI’s very similar rules for targeting journalists with due process-free National Security Letters still considered classified? And is the Justice Department targeting journalists with NSLs and FISA court orders to get around the stricter “media guidelines”? [Freedom of the Press Foundation coverage at: The Intercept]

CA – Cameras on School Buses Are an Option, Says N.L. Privacy Commissioner

The privacy commissioner of Newfoundland and Labrador says the English School District has the right to put cameras on school buses. The issue came up last week when CBC News reported on allegations of sexual assault on a school bus in Western Newfoundland … [where] a teenaged boy has been charged and faces three counts in relation to incidents involving two alleged victims. The family of one of the alleged victims — an eight-year-old girl — is calling on the school district to install cameras on school buses. … “The school district has the ability to put cameras on school buses. They have lots of cameras in many schools across the province,” information and privacy commissioner Donovan Molloy told CBC’s Corner Brook Morning Show [listen here]. School board CEO Tony Stack has said cameras would only be considered as a “last resort” due to privacy reasons. But Privacy Commissioner Molloy says there’s nothing in the law that says cameras are not allowed. He did say, however, that other measures should be attempted first, such as assigned seating to separate younger and older students, and the use of student monitors, which is permitted under the law. Molloy emphasized that the Office of the Information and Privacy Commissioner has not forbidden the use of cameras on school buses. At the same time he cautioned that he is not advocating for such a change, because constant surveillance may do more harm than good, taking away children’s sense of independence. [CBC News see also: Teenage boy charged with sexual assaults after incidents on school bus | Renewed Calls for Cameras After Alleged School Bus Sexual Assault | North Shore parent starts petition over safety concerns for children riding school buses | School Bus Cameras Not a Cure-All, says Privacy Commissioner]

CA – Maps Show All Secret Surveillance Cameras Spying on Canadians

Canadian police agencies have taken part in the increasingly intense law enforcement protocols that have become common across North America and Europe. The most controversial of these efforts, of course, is public surveillance. While Canada’s public surveillance system is less famous than those in the United States and United Kingdom, it does exist. Road cameras are the most well-known and there are potentially thousands of them across the country, all of which are regularly if not constantly monitored. The cameras are designed to catch traffic violations, but they can also be used as a method of public surveillance more broadly, according to Wired. The cameras, of course, also capture activity on sidewalks and public open spaces. According to the Office of the Privacy Commissioner of Canada, Canadian law enforcement agencies “increasingly view it as a legitimate tool to combat crime and ward off criminal activity—including terrorism … however, they present a challenge to privacy, to freedom of movement and freedom of association.” [see here] While the locations of the cameras are (now) public information, most Canadians are unaware that authorities have placed them so extensively in every Canadian city. To give you a sense of the scope of road surveillance in Canada, we’ve compiled these maps, which depict the exact locations of road cameras in every major Canadian city, including Vancouver, Calgary, Edmonton, Winnipeg, Toronto, Ottawa and Montreal. [MTL Blog coverage at: CBC News]

US Legislation

US – California Approves Bills Tightening Security, Privacy of IoT Devices

Gov. Jerry Brown has signed two bills, Assembly Bill 1906 and Senate Bill 327, that could make manufacturers of Internet-connected devices more responsible for ensuring the privacy and security of California residents. Brown’s office announced the signings on September 28. Each bill specified that it could become law only if the other was also signed; both take effect in about 15 months, on Jan. 1, 2020. Senate Bill 327 is the older of the two and was introduced in Feb. 2017 by state Sen. Hannah-Beth Jackson [wiki here], but as currently amended, the senator told Government Technology, is “pretty much a mirror” of AB 1906, introduced in January by Assemblywoman Jacqui Irwin [wiki here] … Both require manufacturers of connected devices to equip them with a “reasonable security feature or features” that are appropriate to their nature and function, and the information they may collect, contain or transmit — and are designed to protect the device and its information from “unauthorized access, destruction, use, modification or disclosure.” The bills also specify that if such a device has a “means for authentication outside a local area network,” that will be considered a reasonable security feature if either the preprogrammed password is unique to each device made; or the device requires a user to create a new “means of authentication” before initial access is granted. The question of what defines a “reasonable security feature or features” is one of several that industry groups cited in their opposition to AB 1906. 
In a statement provided to GT, the CMTA [California Manufacturers and Technology Association] said the bills are an attempt to “create a cybersecurity framework by imposing undefined rules on California manufacturers,” but instead create a loophole allowing imported devices to “avoid implementing any security features.” This, it said, makes the state less attractive to manufacturers, less competitive and increases the risk of cyberattacks. The Entertainment Software Association, in opposition to SB 327, said existing law already requires manufacturers to set up “reasonable privacy protections appropriate to the nature of the information they collect.” [Government Technology; see also: California governor signs country’s first IoT security law | Hey, Alexa, California’s New IoT Law Requires Data Protections]
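The two statutory compliance paths described above (a preprogrammed password unique to each device, or forcing the user to create a new credential before initial access) can be sketched in code. This is purely an illustrative sketch, not legal advice or an implementation mandated by the bills; the `ConnectedDevice` class and its attributes are hypothetical names chosen for the example.

```python
# Illustrative sketch of SB-327/AB-1906's two "reasonable security feature"
# paths for devices reachable from outside a local area network.
# All names here are hypothetical; this is not statutory language.

import secrets


class ConnectedDevice:
    """Hypothetical IoT device model illustrating both compliance paths."""

    def __init__(self, serial: str):
        self.serial = serial
        # Path 1: the preprogrammed password is unique to each device made,
        # e.g. generated per unit at manufacture rather than a shared default.
        self.preprogrammed_password = secrets.token_urlsafe(12)
        self.user_password = None

    def first_login(self, supplied: str, new_password: str) -> bool:
        """Path 2: require the user to create a new means of authentication
        before initial access is granted."""
        if supplied != self.preprogrammed_password:
            return False
        if not new_password or new_password == self.preprogrammed_password:
            # Refuse to let the factory credential survive first login.
            return False
        self.user_password = new_password
        return True


d1, d2 = ConnectedDevice("SN-001"), ConnectedDevice("SN-002")
# Per-device uniqueness (path 1): no two units share a factory password.
assert d1.preprogrammed_password != d2.preprogrammed_password
# Forced credential change (path 2): first access requires a fresh password.
assert d1.first_login(d1.preprogrammed_password, "a-new-strong-passphrase")
```

Either property on its own would satisfy the bills' safe-harbor wording for out-of-network authentication; the sketch above shows both for contrast.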

US – Amendments to the California Consumer Privacy Act of 2018

Amendments to California’s expansive Consumer Privacy Act of 2018 [AB – 375 here] include new provisions that may significantly impact the timing of enforcement and provide exemptions for large amounts of personal data regulated by other laws. Because the Act was hastily passed [in June, 2018] … it was expected that the Act would undergo significant amendments before it enters into effect on January 1, 2020. The first amendments were passed by the California State Legislature on August 31, 2018, in the form of SB-1121, and Governor Brown [signed it into law September 23, 2018 – see here]. While SB-1121 is labeled as a “technical corrections” bill designed to address drafting errors, ambiguities, and inconsistencies in the Act, in fact, it creates new provisions in addition to those already contained within the Act. One notable provision of the Bill is that it grants a six-month grace period from the date the California AG issues regulations or July 1, 2020, whichever is earlier, before enforcement actions can be brought. Another key effect of the Bill is that it fully exempts data that is regulated by the Gramm-Leach-Bliley Act, the California Financial Information Privacy Act, HIPAA, the California Confidentiality of Medical Information Act, the clinical trials Common Rule, and the Driver’s Privacy Protection Act from the privacy requirements of the Act. However, these industries are still subject to the privacy provisions of the Act if they engage in activities falling outside of their applicable privacy regulations (except for the health care industry, which remains exempt as to all data if it treats all data as PHI). As we previously predicted, the Act will continue to evolve before its January 1, 2020 effective date. While the current Bill attempts to clarify the Act, it does not address all of the ambiguities and uncertainties. We anticipate further changes and guidance regarding the Act and will continue to monitor the latest developments. 
[Security & Privacy Bytes (Squire Patton Boggs) Additional coverage at: Privacy and Cybersecurity Perspectives (Murtha Cullina), Workplace Privacy Report (Jackson Lewis), Privacy & Data Security (Alston & Bird) and Data Privacy Monitor (BakerHostetler)]

US – California Consumer Privacy Act: What to Expect

This is the fourth installment in Hogan Lovells’ series [here] on the California Consumer Privacy Act [see installment 1 here, installment 2 here and installment 3 here]. It discusses litigation exposure that businesses collecting personal information about California consumers should consider in the wake of the California Legislature’s passage of the California Consumer Privacy Act of 2018 (CCPA). [AB – 375 here] For several years, the plaintiffs’ bar increasingly has relied on statutes like the Confidentiality of Medical Information Act, Cal. Civ. Code § 56 et seq. [here], and the Customer Records Act, Cal. Civ. Code § 1798.81, et seq. [here], to support individual and classwide actions for purported data security and privacy violations. The CCPA creates a limited private right of action for suits arising out of data breaches. At the same time, it also precludes individuals from using it as a basis for a private right of action under any other statute. Both features of the law have potentially far-reaching implications and will garner the attention of an already relentless plaintiffs’ bar when it goes into effect January 1, 2020. [This post covers] what you need to know [under two headings]: 1) The CCPA Provides a Limited Private Right of Action for Data Breach Suits; and 2) Plaintiffs Likely Will Argue the CCPA Provides a Basis for Unfair Competition Law Claims. [Chronicle of Data Protection (Hogan Lovells)]

Workplace Privacy

WW – Many Employee Work Habits Seem Innocent but Invite Security Threats

While most employees are generally risk averse, many engage in behaviors that could lead to security incidents, according to a new report from Spanning Cloud Apps LLC [here], a provider of cloud-based data protection. [see Trends in U.S. Worker Cyber Risk-Aversion and Threat Preparedness here] The company surveyed more than 400 full-time U.S. employees, and found that more than half (55%) admitted to clicking links they didn’t recognize, while 45% said they would allow a colleague to use their work computer and 34% were unable to identify an insecure e-commerce site. The results paint a picture of a workforce that has a general understanding of security risks, but is underprepared for the increasing sophistication and incidence of ransomware and phishing attacks, the report said. Employees would rather be “nice” than safe, the study said. Of workers with administrative access, only 35% responded that they would refuse to allow a colleague to access their device. And they like to shop from work, with more than 52% saying they shop online from their work computer. Workers are underprepared for sophisticated phishing emails. When presented with a visual example, only 36% correctly identified a suspicious link as being the key indicator of a phishing email, the study said. [Information Management; additional coverage at: BetaNews]
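The “suspicious link” indicator the study refers to is typically a mismatch between the domain shown in a link’s visible text and the domain the underlying href actually points to. The short check below illustrates that heuristic; the function name and sample URLs are hypothetical examples for illustration, not drawn from the report.

```python
# Minimal sketch of the classic phishing-link indicator: the visible link
# text names one domain while the href resolves to a different one.
# Function and sample values are illustrative only.

from urllib.parse import urlparse


def link_looks_suspicious(display_text: str, href: str) -> bool:
    """Flag a link whose visible text names a domain that differs from
    the host the href actually targets."""
    href_host = urlparse(href).hostname or ""
    shown = display_text.strip().lower()
    # Only meaningful when the display text itself looks like a URL/domain.
    shown_host = urlparse(shown if "://" in shown else "https://" + shown).hostname or ""
    if not shown_host or "." not in shown_host:
        return False  # plain words like "Click here" can't be compared
    # Treat subdomains of the shown domain as matching.
    return shown_host != href_host and not href_host.endswith("." + shown_host)


assert link_looks_suspicious("www.mybank.com", "https://mybank.example.ru/login")
assert not link_looks_suspicious("www.mybank.com", "https://www.mybank.com/login")
```

A check like this catches only the domain-mismatch pattern; real phishing defenses layer it with sender verification, reputation lookups, and user training.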

 

+++

 

01-15 September 2018

Canada

CA – Case on Privacy Rights Headed To SCC

An Ontario case — Tom Le v. Her Majesty The Queen — headed to the Supreme Court of Canada [see docket] will focus on whether guests in a backyard have a reasonable expectation of privacy in police searches.  The Supreme Court decision could have wide-ranging effects for people who currently do not have standing to challenge a search or detention when they are an invited guest on a property. According to Emily Lam, a partner at Kastner Law and one of the lawyers representing Le: “Richer people can essentially purchase their privacy, in terms of building taller fences, walls, gates. People in poverty don’t have that same ability. If privacy rights are linked to ownership and control, that means that people living communally or in social housing might not get the same privacy protection as more affluent people.” The case, which will be heard by the SCC on Oct. 12, revolves around Le, who in 2012 was visiting friends in a fenced backyard at the Atkinson Housing Co-operative, a subsidized housing complex in Toronto. The factum said that police did a “walk-through” of the common area around the edge of the backyard, looking for two people who were neither Le nor his friends.  The police “started questioning the young men in the backyard, asking who they were, if they lived there, and what was going on.” When questioned, Le ran and two officers tackled him to the ground nearby. The police found a gun, cash and 13 grams of crack cocaine on Le’s person and in his bag, the factum said (Le was subsequently convicted and the conviction was upheld on appeal). At the Court of Appeal, Justice Peter Lauwers [here] dissented, writing: “The police entry was an unlawful trespass and this tainted everything that followed. I doubt that the police would have brazenly entered a private backyard and demanded to know what its occupants were up to in a more affluent and less racialized community” [also see earlier news coverage here].  
Samara Secter [here], an associate at Addario Law Group and one of the lawyers representing Le, says the case “is really about the competing interests of community policing versus privacy rights.” [Law Times News and CBC News]

CA – Ontario Sued for $32 Million-Plus for Alleged Wrongful Retention of Exonerated Suspects’ DNA Test Results

On Sept. 14, Kirk Baert and Jody Brown, of Koskie Minsky LLP in Toronto, moved in Ontario Superior Court to certify a class proceeding for $32 million-plus, alleging that the government is wrongfully retaining — rather than destroying — DNA test results in the Ontario Centre of Forensic Sciences that were obtained from potential suspects who were exonerated by the DNA samples they volunteered to give police during criminal investigations during the past 18 years. They allege the government committed torts and statutory and Charter privacy violations, and seek to appoint Micky Granger as the representative plaintiff. Granger’s statement of claim describes him as a “migrant worker” who voluntarily provided a bodily sample to the Ontario Provincial Police (OPP) during their investigation into an unspecified violent crime that occurred in Bayham, Ont., in 2013. He seeks a declaration that the Ontario Ministry of Community Safety and Correctional Services, which operates and oversees the Ontario Centre of Forensic Sciences (OCFS), has unlawfully stored and retained the class members’ DNA results, including DNA profiles, contrary to s. 487.09(3) [see here] of the Criminal Code [here] which requires that “bodily substances that are provided voluntarily by a person and the results of forensic DNA analysis shall be destroyed or, in the case of results in electronic form, access to those results shall be permanently removed, without delay after the results of that analysis establish that the bodily substance referred to in paragraph 487.05(1)(b) was not from that person.” The statement of claim contends that certain OCFS officials still retain access to such results stored in a database. The claim for damages includes $2 million in punitive damages. [The Lawyer’s Daily]

CA – Understanding Ontario’s Civil Privacy Rights: Reasonable Doubt

On August 23, 2018, the Supreme Court of Canada announced that it would not hear an appeal from the Toronto Real Estate Board (TREB) [PR here] in TREB’s long-running legal battle against the federal Commissioner of Competition [PR here]. As a result of this decision, TREB can no longer prevent the dissemination of listing and sold price data for properties in Toronto.  One of TREB’s primary (and ultimately unsuccessful) arguments against the release of this data related to what TREB characterized as the privacy interests of the individual purchasers and sellers of property. Specifically, TREB argued that the Personal Information Protection and Electronic Documents Act (PIPEDA) [here], which prohibits companies from distributing the personal information of their customers without their customers’ consent, applied to prevent the distribution of this data. In dismissing this argument, the Federal Court of Appeal, which made the decision that TREB was seeking to appeal further to the Supreme Court, noted that purchasers and sellers had consented to the distribution of this information when they signed their respective agreements with TREB agents and brokerages. The court also noted that the way in which TREB had raised this issue made it appear to be after-the-fact justification for anticompetitive behaviour, rather than a legitimate concern on the part of TREB. [Now Toronto, The Globe and Mail, CBC News, Global News and Toronto Star]

CA – OMA Turns to Supreme Court to Stop Release of Names of Highest Paid MDs

The Ontario Medical Association [here] has announced plans to ask the Supreme Court of Canada to hear an appeal of a lower court decision [here] to make the names of top billing doctors public. The association represents the province’s 28,100 practising doctors. In a written statement [here], OMA president Dr. Nadia Alam said: “Physician billings constitute private, personal information. Privacy is an important and fundamental right in Canada that is protected by legislation and the Charter of Rights and Freedoms.” If such information is to be made public, it should be up to the provincial Legislature to do so, she said. Reporting billings without context would provide an incomplete and sometimes misleading picture of physician pay structure. In 2016, the Information and Privacy Commissioner of Ontario ruled [see blog post here & order PO-3617 here] in the Toronto Star’s favour in ordering the release of names of doctors paid the most from the publicly funded Ontario Health Insurance Plan. The OMA twice appealed — first to the Ontario Divisional Court, then to the province’s Court of Appeal [read decision here] — losing both times. The case originated in 2014 when the Star submitted a freedom-of-information request to Ontario’s health ministry for information on top billers. [The Toronto Star]

CA – Preparing for Compliance with New Privacy Consent Guidelines

Commencing January 1, 2019, the Privacy Commissioner of Canada will begin enforcing [the May 2018] “Guidelines for obtaining meaningful consent” [see here], which impose new requirements for private sector organizations to obtain legally valid privacy consents. The Guidelines criticize “the use of lengthy, legalistic privacy policies” that too often make individual control enabled by consent “nothing more than illusory”, and explain that the requirements and best practices summarized in the Guidelines are intended to “breathe life” into the ways that consent is obtained… Compliance with the Guidelines will likely require many organizations to revise their privacy policies/notices and adjust some of their personal information practices and procedures. The Guidelines identify seven principles for private sector organizations to follow to obtain meaningful consent including: 1) Emphasize key elements;  2) Allow individuals to control the level and timing of detail;  3) Provide individuals with clear options to say “yes” or “no”;  4) Be innovative and creative;  5) Consider the consumer’s perspective;  6) Make consent a dynamic and ongoing process; and  7) Be accountable.  
The Guidelines also provide guidance regarding issues related to consent, including: 1) Form of Consent; 2) Consent and Children; 3) Appropriate Purposes; 4) Withdrawal of Consent; and 5) Other Obligations. The Guidelines are generally consistent with previously issued guidance — for example: “Ten Tips for a Better Online Privacy Policy and Improved Privacy Practice Transparency” (October 2013); “Interpretation Bulletin: Form of Consent” (March 2014); Guidelines for Online Consent and Frequently Asked Questions for Online Consent (May 2014); Ten Tips for Communicating Privacy Practices to Your App’s Users (September 2014); and Interpretation Bulletin: Openness (August 2015) — but impose new requirements for the form and content of privacy policies/notices, and for providing individuals with clear and easily accessible choices for the collection, use or disclosure of their personal information beyond what is necessary for requested products and services. The Guidelines will likely be a key enforcement tool for the PIPEDA Compliance Directorate, which was established in 2018 to investigate PIPEDA complaints by individuals and complaints initiated by the Privacy Commissioner of Canada. [Privacy Bulletin (BLG)]

CA – Commissioner Seeks Feedback on Breach Reporting Guidance

The Office of the Privacy Commissioner of Canada (OPC) is inviting public feedback on draft guidance to help businesses comply with new mandatory breach reporting requirements under the federal private sector privacy law.  Amendments to the Personal Information Protection and Electronic Documents Act (PIPEDA) to create new provisions requiring organizations to report breaches of security safeguards will come into force November 1, 2018. Prior to coming into force, Innovation, Science, and Economic Development undertook two public consultations and the final regulations were published in the Canada Gazette in April 2018. The OPC has developed guidance and a breach reporting form to help organizations meet their new obligations under the law. [News and announcements]

CA – Political Parties Need Privacy Rules, Watchdogs Say

A joint statement [see PR here & Joint Resolution here] from federal, provincial and territorial watchdogs released Monday morning [says] political parties should not be allowed to collect and use Canadians’ private information without rules or oversight …There are currently no laws restricting how federal political parties collect, store and use Canadians’ private information, even as those parties are increasingly reliant on sophisticated data operations to win elections.  British Columbia is the only jurisdiction at the provincial level that has concrete rules around how parties can amass private information on voters.  In June, MPs on the House of Commons’ privacy and access to information committee unanimously endorsed a recommendation to subject federal parties to privacy laws, and called for more transparency on how parties use big data and analytics. The recommendation was endorsed by MPs from all three major parties on the committee. [The Toronto Star, The Canadian Press (via G&M)]

CA – Federal Court Refuses to Authorize Abusive “Fishing Expedition” by Canada Revenue Agency

The recent Federal Court decision in Canada (National Revenue) v. Hydro-Québec, 2018 FC 622 [summary] made a strong statement against an interpretation of the CRA’s powers that would allow virtually unlimited invasions of taxpayer privacy. The decision considered the scope of the CRA’s power to compel information about unnamed taxpayers from third parties under section 231.2 of the “Income Tax Act” (and the analogous section 289 of the “Excise Tax Act”). In this context, the decision held that the Court will both strictly interpret the CRA’s powers, and exercise its discretion in appropriate cases, to protect taxpayers from unjustified intrusions by the government and to prevent abusive “fishing expeditions”. This case highlights the CRA’s attempt to construe its powers in the broadest possible terms. The Court found the CRA’s request was “a full-fledged fishing expedition”, of “unprecedented magnitude”, of “practically unlimited scope” and “a complete lack of consideration for the invasion of privacy and the consequences for all taxpayers involved in the request.” Not only did the CRA interpret its own powers in s. 231.2(3) as practically limitless, but its interpretation of the Court’s discretion was so narrow as to render the judicial involvement useless, making the protection of taxpayers “deceptive in practice.” The force with which the Court rejected the self-serving interpretation advanced by the CRA should be encouraging for taxpayers. The case serves as an important reminder that despite its considerable powers, the CRA is not entitled to act outside the bounds of law and it is the courts, not the CRA, that interpret the law. [Thorsteinssons Blog; additional coverage at: Financial Post]

Encryption

WW – Controversy Erupts Over Five Eyes Countries’ Statement on Encryption

Are Canada, the U.S. and other members of the Five Eyes intelligence alliance preparing to sacrifice online privacy to increase security? Are the five countries about to increase pressure on telecom and software companies to install ways of defeating encryption?  Yes, if you believe privacy advocates after seeing a communique issued last week by security and public safety ministers following their annual meeting in Australia [who] …”agreed to the urgent need for law enforcement to gain targeted access to data, subject to strict safeguards, legal limitations, and respective domestic consultations” [communique] No, if you believe a spokesperson for Public Safety Ralph Goodale. In an email Scott Bardsley, the minister’s senior communications advisor, noted the statement also says the Five “have no interest or intention to weaken encryption mechanisms,” and that any action on the ministers’ statement “will adhere to requirements for proper authorization and oversight, and to the traditional requirements that access to information is underpinned by warrant or other legal process.” The communique …does include a separate Statement of Principles on Access to Evidence and Encryption, [part of which reads] ”The increasing use and sophistication of certain encryption designs present challenges for nations in combatting serious crimes and threats to national and global security,” the ministers “encourage information and communications technology service providers to voluntarily establish lawful access solutions to their products and services that they create or operate in our countries. Should governments continue to encounter impediments to lawful access to information necessary to aid the protection of the citizens of our countries, we may pursue technological, enforcement, legislative or other measures to achieve lawful access solutions.”  Some Five Eyes countries are hotter on the issue than others. 
In June, Australia introduced legislation that would force tech companies to give access to customer encrypted data to its security agencies. In 2016 the U.K. government of the day talked about legislation giving the Home Secretary the power to force telcos to remove or disable end-to-end encryption.  Bardsley pointed out a 2017 committee report [76 pg PDF] called for “no changes to the lawful access regime for subscriber information and encrypted information be made.” In response to that report Public Safety Minister Goodale said that while encryption poses challenges to law enforcement and intelligence agencies the government doesn’t believe in a legislative solution. It is looking at other solutions. [IT World Canada]

EU Developments

UK – Top European Court Says British Spies Broke Human Rights Rules With Their Mass Surveillance Tactics

British spy agencies broke human rights by conducting mass surveillance without proper oversight or safeguards, the European Court of Human Rights has ruled. According to the court, the spies were able to find out far too much about people’s habits and contacts, by examining their online activities. It also said the surveillance had an illegally chilling effect on the free press, by monitoring journalists’ communications. The case was brought to the court a couple of years back by more than a dozen human rights groups, including Amnesty International and the American Civil Liberties Union, who were frustrated that the revelations of NSA whistleblower Edward Snowden had not sufficiently reined in the U.K.’s GCHQ intelligence agency. The human rights groups did not get what they wanted from the U.K.’s intelligence services watchdog, the Investigatory Powers Tribunal, which said GCHQ’s use of NSA-intercepted data had been illegal, but became legal when people found out about it, thanks to Snowden. So the groups turned to the European Court of Human Rights.  The court said Thursday that GCHQ’s mass surveillance scheme was not intrinsically illegal, but its design broke two crucial elements of the European Convention on Human Rights: Article 8, the part that guarantees privacy; and Article 10, which guarantees freedom of expression.  The spies infringed on people’s privacy rights because there wasn’t enough oversight or safeguards regarding how data was selected for surveillance. They infringed on free-expression rights because the system did not include proper safeguards for protecting confidential journalistic material—effectively limiting what the press can do without the authorities finding out.  However, the court ruled that GCHQ had not broken European human rights law by using data gathered by U.S. spies, as the safeguards around those procedures were sufficient. It also threw out a set of complaints about the U.K. Investigatory Powers Tribunal being insufficiently independent and impartial.  This is the first time the European Court of Human Rights has dealt with a case involving intelligence sharing. It has however examined cases involving mass surveillance before, and this ruling is in line with those earlier rulings—the court seems to take a somewhat more permissive stance on the issue than the Court of Justice of the European Union, which has repeatedly stamped down on the practice because it is indiscriminate. [Fortune]

EU – Google Blasts French Bid to Globalize Right to Be Forgotten

Google shot down efforts by France’s privacy watchdog [CNIL] to globalize the so-called right to be forgotten, telling European Union judges that the regulator “is out on a limb.”  In a hearing at the EU Court of Justice, Google said extending the scope of the right all over the world was “completely unenvisagable.” Such a step would “unreasonably interfere” with people’s freedom of expression and information and lead to “endless conflicts” with countries that don’t recognize the right to be forgotten. “The French CNIL’s global delisting approach seems to be very much out on a limb,” Patrice Spinosi, a French lawyer who represents Google, told a 15-judge panel at the court in Luxembourg. It is in “utter variance” with recent judgments. The hearing will help judges to clarify the terms of the EU tribunal’s landmark 2014 ruling that forced the search engine to remove links to information about a person on request if it’s outdated or irrelevant. Google’s arguments received some support from newspapers, who have often battled the search engine in Europe on other issues. Removing links globally gives too much power to private companies, such as Google, to decide “what pieces of news the public should find or not,” the World Association of Newspapers and News Publishers told the French court in a 2016 letter. Newspapers get dozens of requests every day to remove information from online archives by claiming a “right to be forgotten,” it said. Microsoft Corp. and groups like the Internet Freedom Foundation [here] and the Wikimedia Foundation [here] intervened in Tuesday’s hearing, as well as legal representatives for France, Ireland, Greece, Austria and Poland. Lawyers for the U.K. didn’t show up. While the right to be forgotten concerns all search engines, Google’s dominance in Europe means the company has taken center stage in the wake of the 2014 EU ruling [see here & wiki here]. An advocate general at the EU court is scheduled to deliver an advisory opinion on Dec. 11. 
[Bloomberg and at: Politico, Business Insider, TechCrunch, and GIZMODO]

EU – When Do Organisations Need to Carry Out a Data Protection Impact Assessment? German Authorities Provide Guidance

The German data protection authorities (German DPAs) have jointly released a list of processing activities [4 pg PDF] that are subject to a data protection impact assessment (DPIA) [see Article 35 GDPR here]. DPIAs help identify, assess and minimise the data protection risks of a project in which personal data are processed; in particular, broader risks to the rights and freedoms of individuals resulting from the processing must be assessed and mitigated through appropriate countermeasures.  The List provides 16 examples of the areas that the German DPAs consider to constitute “high risk” processing activities, giving organisations a first overview of the various use cases for DPIAs. However, the List is not exhaustive and is subject to future revisions; the fact that a process is not mentioned in the List does not necessarily mean that a DPIA is not required. Other member states have also released their lists; for example, the list of the ICO can be accessed here. [Technology Law Dispatch (ReedSmith)]

EU – German Court Issues GDPR Ruling on Data Subject’s Consent for Persons Under Custodianship

On 16 July 2018, the District Court of Gießen, Germany, ruled that a custodian’s representation rights also cover consent to data processing activities related to the person under custodianship. Under the EU General Data Protection Regulation (GDPR) [here], the processing of personal data is, in principle, prohibited unless there is a legal basis for such processing. Pursuant to Art. 6 para. 1 lit. a) GDPR, one possible legal basis is the data subject’s consent [see here]. However, the legitimacy of a declaration of consent may be in doubt if the data subject lacks the capacity to give consent. [All About IP Blog (Mayer/Brown)]

UK – Sir Cliff Richard v the BBC: a Landmark Case on Privacy Rights

In “Sir Cliff Richard OBE v the BBC and SYP” [see 122 pg PDF here & overview here], Mann J addresses the question of whether a suspect in a criminal investigation has a right to privacy, either under Article 8 of the European Convention on Human Rights (ECHR) [see wiki here] as against a public body, in this case South Yorkshire Police (SYP) or under the tort which is developing out of Article 8 jurisprudence as against non‑public authorities, in this case the BBC.  The question as to whether there is a right to privacy, more formally, ‘a reasonable expectation of privacy’, of a suspect in a criminal investigation (Murray v Express Newspapers plc) [see here] has not previously been judicially determined. Hence, the court’s ruling on this important subject represents, at least until any appeal is decided, a landmark decision in human rights jurisprudence. [This essay is a thorough discussion of how Mann J came to his decision] [Lexology; 8 pg PDF version here]

Finance

US – Credit Freezes Will Be Fee-Free Starting September 21

After Sept. 21, all of the three major consumer credit bureaus will be required to offer free credit freezes to all Americans and their dependents [Equifax, Experian and TransUnion]. A credit freeze – also known as a “security freeze” – restricts access to your credit file, making it far more difficult for identity thieves to open new accounts in your name. Maybe you’ve been holding off freezing your credit file because your home state currently charges a fee for placing or thawing a credit freeze, or because you believe it’s just not worth the hassle. If that accurately describes your views on the matter, this post may well change your mind. Currently, many states allow the big three bureaus to charge a fee for placing or lifting a security freeze. But thanks to a federal law enacted earlier this year [see S.2155 here & coverage here & here], after Sept. 21, 2018 it will be free to freeze and unfreeze your credit file and those of your children or dependents throughout the United States. If you’d like to go ahead with freezing your credit files now, this Q&A post from the Equifax breach explains the basics, and includes some other useful tips for staying ahead of identity thieves. Otherwise, check back here later this month for more details on the new free freeze sites. [Krebs on Security; additional coverage at: KOMO News]

Genetics

US – EFF Urges Gov. Brown to Sign Sensible California Bill Imposing Stricter Requirements for DNA Collection from Minors

When the San Diego police targeted black children for DNA collection without their parents’ knowledge in 2016, it highlighted a critical loophole in California law. The California State Legislature recently passed a new bill, A.B. 1584, to ensure that law enforcement cannot stop-and-swab youth without either judicial approval or the consent of a parent or attorney. The bill, introduced by Assemblymember Lorena Gonzalez Fletcher, is now on Gov. Jerry Brown’s desk. EFF has strongly supported this bill from the start and now urges the governor to sign the bill into law.  California’s existing DNA collection law, Proposition 69, attempts to place limitations on when law enforcement can collect DNA from kids, but SDPD found a gaping loophole in the law and crafted a policy to take advantage of that loophole. Under Proposition 69, law enforcement can collect DNA from minors only in extremely limited circumstances. That includes after a youth is convicted or pleads guilty to a felony, or if they are required to register as a sex offender. But here’s the loophole: this only applies to DNA that law enforcement seizes for inclusion in statewide or federal databases. That means local police departments have been able to maintain local databases not subject to these strict limitations.  A.B. 1584 will fix this loophole by requiring law enforcement to obtain a court order, a search warrant, or the written consent of both the minor and their parent, legal guardian, or attorney before collecting DNA directly from the minor. In cases where law enforcement collects a minor’s DNA with proper written consent, A.B. 1584 also requires law enforcement to provide kids with a form for requesting expungement of their DNA sample. Police must make reasonable efforts to promptly comply with such a request. Police must also automatically expunge after two years any voluntary sample collected from a minor if the sample doesn’t implicate the minor as a suspect in a criminal offense. 
[DeepLinks Blog (EFF), ACLU San Diego and Courthouse News Service]

Health / Medical

CA – Yukon IPC Advises on Security Audits

The Yukon Information and Privacy Commissioner is reminding healthcare custodians of their obligation to conduct security audits pursuant to the Health Information Privacy and Management Act. Custodians must identify the information management practices to be audited every two years, determine how current policies and procedures measure up against minimum standards, and address gaps within a specified period; audit documentation should be maintained and can be voluntarily submitted to the IPC. [Yukon IPC – News Release – Reminder to Custodians About Duty to Audit]

CA – Health Care Worker Wins Lawsuit for Being Wrongfully Accused of Accessing Patient Records

An east-central Alberta woman feels vindicated after winning a wrongful termination case against a medical centre society where she worked as a receptionist. The woman claimed she was terminated without just cause and publicly humiliated. Red Deer Judge Andreassen agreed and awarded her $25,600 in compensation [$15,000 in punitive damages and $10,600 in compensatory damages].  The Consort and District Medical Centre Society, months after terminating Sherri Galloway, claimed she violated privacy laws by viewing confidential patient medical records. Judge Andreassen, however, not only ruled there was no evidence to back up the board’s claims but also slammed their actions. In his ruling on this civil matter in Red Deer provincial court he wrote: “In effect the (board) publicly accused (Galloway) of distributing the confidential records of people from this tight-knit community, thus accusing (Galloway) of a whole new level of reprehensible behaviour … a marked departure from ordinary standards of decent behaviour. The conduct of the Defendant (the board) was deliberate, motivated by the search for evidence to criticize (Galloway) and vindicate (the board), took place over a lengthy period, and was known by (the board) to be deeply personal to (Galloway).” Galloway fought for two years to prove she did nothing wrong. The board suspended Galloway on Feb. 2, 2016. On Feb. 11 the board terminated her without cause.  Judge Andreassen said the board accused Galloway of accessing patient records but didn’t have evidence of her doing so. He added the board started the privacy investigation without any allegations of a privacy breach being made against Galloway or even any reason to suspect there were privacy breaches. The board had 30 days after the June 5 ruling to file an appeal but did not. [Red Deer Advocate]

Horror Stories

WW – Spy App for Parents Leaks Millions of Sensitive Records of Customers and Targets Online

Mobile spyware maker mSpy accidentally leaked millions of personal and sensitive records of users and targets online. The software-as-a-service offering bills itself as the “ultimate monitoring software for parental control,” letting customers spy on the mobile devices of their children or partners. According to a report by cybersecurity expert Brian Krebs, security researcher Nitish Shah alerted him to an open online database, unprotected by any password, that allowed anyone to look up up-to-the-minute mSpy records for both customers and targeted mobile devices. The exposed database contained millions of records including passwords, text messages, call logs, contacts, notes and even location data covertly collected from phones running mSpy. It also included the username, password and private encryption key of every mSpy customer who logged into the site or purchased an mSpy license over the past six months; the private encryption key allows anyone to view and track details of the mobile device running the software. Apple iCloud usernames, authentication tokens, references to iCloud backup files, as well as WhatsApp and Facebook messages uploaded from mobile devices running mSpy, could be viewed as well. Other exposed records included transaction details of mSpy licenses purchased over the past six months, such as customer name, email address, mailing address and amount paid. mSpy user logs, including browser and Internet address information of people visiting the mSpy website, were also listed in the database. As for mSpy’s response to the incident, Shah said he attempted to alert the company to his findings but was reportedly ignored and blocked by the firm’s support team. [Cyware]

Law Enforcement

WW – Apple Launches Global Law Enforcement Web Portal for Data Access

Apple [in a letter, dated Sept. 4, from Apple General Counsel Kate Adams to U.S. Sen. Sheldon Whitehouse (D-RI), according to a report by Reuters here] is launching a portal that law enforcement agents can use to file requests for data, track their historical requests and be granted access to information if Apple deems the cause worthy. However, the company has stressed that the portal won’t interfere with its commitment to protect its customers, vowing that law enforcement organisations will only be able to access data that Apple sees fit to provide them with. The new portal will allow police forces to submit requests, which will be assessed by Apple’s legal teams. Apple will also create a dedicated team to train law enforcement officers around the world, alongside online training covering how to submit data requests and how Apple handles such demands. Apple’s policies state that anyone requesting information must request access only to the data they specifically need or believe may help their investigation, rather than asking for everything. [IT Pro and at: Naked Security (Sophos) and CNET News]

WW – Ungagged Google Warns Users About FBI Accessing Their Accounts

Dozens of people say they’ve received an email from Google informing them that the FBI has been sniffing around for information on their accounts. Now that a gag order has been lifted, the company is able to “disclose the receipt of the legal process” to any affected users, Google said. The gag orders that often accompany FBI information requests keep organizations such as Google, Microsoft, Facebook and Apple from disclosing the order for a given period of time. Any email provider worth its salt nowadays issues transparency reports, and the biggest companies have called for increased transparency in government surveillance requests. The emails lack specific details about whatever the FBI was investigating, though they did contain a case number that corresponded to a sealed case on PACER [here]. Some of the recipients have a hunch regarding what it’s all about. In threads on Reddit, Twitter, and Hack Forums, conjecture is that the FBI was looking for information on people associated with LuminosityLink: an easy-to-use remote access Trojan (RAT) that Europol snuffed out in February, following a UK-led dragnet in September 2017 in which over a dozen law enforcement agencies in Europe, Australia and North America went after hackers linked to the tool [details here & here]. Buying LuminosityLink doesn’t necessarily brand somebody a cybercrook. Its marketing had a split personality: it was sold both as a legitimate tool for Windows admins and as a cheap, easy-to-use, multi-purpose pocket knife with a slew of malware tools you could flip out. While it’s not unusual for a gag order to be subsequently lifted, it is perhaps unusual for the FBI to try to track down every person who purchased software that may not be considered illegal. [Naked Security (Sophos), Motherboard and Daily Mail]

Online Privacy

CA – Researchers Scrutinize Apps for Undisclosed Ties to Advertisers, Analytics Firms

If you want to better understand how an app or a service plans to use your personal information, its privacy policy is often a good place to start. But a recent study [funded by the Office of the Privacy Commissioner of Canada] found there can be a gap between what’s described in that privacy policy and what the app actually collects and shares. An analysis by University of Toronto researchers found hundreds of Android apps that disclosed the collection of personal information for the app developer’s own purposes — but, at the same time, didn’t disclose the presence of third-party advertising or analytics services that were collecting the personal information, too. …To generate revenue, app developers often embed software code, known as ad libraries [see this FTC discussion], allowing them to display ads within their app. Because they want to make the ads relevant to individual users, ad libraries often want specific information about those users. For those more familiar with the cookies that track your online browsing habits, Lisa Austin [here], a U of T law professor and one of the study’s co-authors, says that on mobile devices “you’re being tracked through these ad libraries and these analytics libraries in a very similar way.” The researchers have been working on a software project called AppTrans, with the goal of making undisclosed data collection practices more transparent. The software looks for evidence of data collection that isn’t spelled out in a privacy policy by comparing the policy’s language with an analysis of the app’s code, in part using machine learning — artificial intelligence — to automatically scour privacy policies. Of the 757 apps analyzed, the researchers found nearly 60 per cent collected more information than stated in their privacy policies.
For Austin, it’s also an example of how artificial intelligence can be used to society’s benefit, in spite of very legitimate concerns about algorithmic bias and automated decision-making run amok. “This is technology that we can use to to make the digital world more transparent,” she said. “And that’s a real win.” [CBC News]
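The comparison at the heart of the study — data collection observed in an app versus collection disclosed in its policy — can be illustrated with a toy sketch. Everything here is hypothetical (the function name, the library names, the naive substring matching); the real AppTrans pipeline uses machine learning over policy text and an analysis of the app's code, not simple string lookups:

```python
# Toy illustration: flag embedded third-party libraries that are never
# mentioned in the app's privacy policy. Hypothetical names throughout.

def undisclosed_collectors(policy_text, detected_libraries):
    """Return the detected libraries that the policy never names.

    policy_text: full text of the app's privacy policy.
    detected_libraries: dict mapping an embedded library's name to the
        data types it was observed collecting (e.g. via code analysis).
    """
    policy = policy_text.lower()
    return {
        lib: data_types
        for lib, data_types in detected_libraries.items()
        if lib.lower() not in policy  # naive check; real tools use ML/NLP
    }

# Hypothetical example: the policy discloses the developer's own
# collection but never names the embedded ad/analytics libraries.
policy = "We collect your email address and location to provide the service."
detected = {
    "AdLibX": ["location", "advertising id"],
    "AnalyticsCo": ["device id", "usage events"],
}

flagged = undisclosed_collectors(policy, detected)
print(sorted(flagged))  # → ['AdLibX', 'AnalyticsCo']
```

The gap the researchers describe is exactly this set difference: both hypothetical libraries collect data, yet neither appears anywhere in the disclosed policy.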

WW – Business Travelers Highlight Public WiFi Security Risk

The latest research by global travel management company Carlson Wagonlit Travel indicates that the majority of business travelers have grave reservations about the safety of their data when using public WiFi networks. Public WiFi security has long concerned security professionals – but those concerns are now shared by the travelers who access public WiFi hotspots in airports and other stopover points (such as coffee shops) during their global travels. According to the research, 72% of travelers in the Asia Pacific region were not confident about the safety of employer data during their trips. U.S. travelers were the most sanguine, with 46% confident that public WiFi security was adequate, while European travelers were least confident, with only 27% unworried about their data security when using public WiFi networks. Of the 2,000 global business travelers surveyed, 65% were less than confident about public WiFi security. The top three concerns were physical theft (or simple loss) of devices, exposing company data to prying eyes while working on their devices, and – by far the greatest concern – being hacked while using public WiFi. These worries were echoed in other issues the travelers voiced: many were concerned about cyber security while using email or even opening company documents. [Anti-Corruption Digest]

Privacy (US)

US – Tech Industry Group Calls for a National Privacy Framework

The Internet Association (IA), a trade group of some 40 major internet and technology firms, called for the establishment of a national privacy framework [see PR here] anchored by six privacy principles: 1) Transparency; 2) Controls; 3) Access; 4) Correction; 5) Deletion; and 6) Portability.  In describing the context for the principles, the IA noted that its members comply with a range of strong federal privacy, data security, consumer protection, and anti-discrimination laws. Coupled with state laws and the self-regulatory principles that govern how they do business, this “patchwork” leads to inconsistent experiences for individuals. Accordingly, a new, comprehensive national framework would create more “consistent privacy protections that bolster consumers’ privacy and ease compliance for companies.”  Further, the IA identified key components of a National Privacy Framework: a) Fostering privacy and security innovation; b) A national data breach notification law; c) Technology and sector neutrality; d) A performance standard-based approach; e) A risk-based framework; and f) A modern and consistent national framework for individuals and companies.  The U.S. Senate Committee on Commerce, Science, & Transportation will hold a hearing examining consumer privacy protection on September 26, 2018.  [DBR on Data]

US – Chamber Proposes Curbs on Consumer Lawsuits Over Data Privacy

The U.S. Chamber of Commerce released a set of policy positions on internet privacy [Press Release, Issue Brief], including a proposal that companies be shielded from lawsuits if they violate laws governing how they collect and use data on their customers.  The group’s proposal, which would also override stricter state regulations such as those put in place by California, is all but certain to anger consumer advocates who have argued that people should be able to control data about themselves. In addition to calling for the preemption of state law “to provide certainty and consistency to consumers and businesses alike,” the 10-point framework says consumers should not be given a right to sue “for privacy enforcement, which would divert company resources to litigation that does not protect consumers.” In lieu of a consumer right to sue, the Chamber suggested a “reasonable opportunity for businesses to cure deficiencies in their privacy compliance practices before government takes punitive action.” The plan also urges companies to be transparent about their data “collection, use and sharing” while calling for “privacy innovation” and saying that protections “should be considered in light of the benefits provided and the risks presented by data.” [Bloomberg News at: Axios and The Hill]

US – Big Tech Calls On Congress for Privacy Regulation, Pushing Back On State Mandates

Google, Microsoft, Facebook and IBM are lobbying the Trump administration for a federal privacy law in a bid to overrule California’s newly minted state privacy bill [California Consumer Privacy Act of 2018 – AB-375 here]. While big tech views the majority of regulations as a threat, privacy is a simpler issue. California’s Consumer Privacy Act of 2018 boils down to having a granular-level understanding of where sensitive data lives and how efficiently it can be accessed. …Big tech wants to change a lot of the language in California’s bill, if not remove it completely. Among the key targets are provisions on where information is stored, how quickly businesses need to respond to a consumer’s data request, and steep fines, according to Callum Corr, data analytics specialist at ZL Technologies [here], in an interview with CIO Dive. No text has been revealed yet, making the tech industry’s specific desires unknown. …Organizations are trying to find a compromise of sorts. The U.S. Chamber of Commerce, Internet Association and Information Technology Industry Council are making efforts to craft voluntary standards in place of legal mandates. The National Institute of Standards and Technology (NIST) announced a collaborative project modeled after its Cybersecurity Framework. The evolving framework is to “provide an enterprise-level approach” for aiding organizations in developing privacy strategies; the first public workshop takes place in October.  The framework is a voluntary tool for organizations to model their programs on, and big tech is encouraged to participate, said Naomi Lefkovitz, senior privacy policy advisor and lead for the project at NIST, in an interview with CIO Dive. The program allows organizations to “pick your outcomes,” which helps optimize privacy procedures in a collaborative manner. [CIO Dive]

CA – Stolen Federal Device Wasn’t Encrypted, Violating Government Rules

The federal government now says the device with personal information on 227 employees of Infrastructure Canada that was reported stolen last month was an unencrypted USB key. News of the theft was first revealed Sept. 13. A Public Services and Procurement Canada (PSPC) [here] employee notified Ottawa police of the theft August 20, and then told their government supervisor the next day, Rania Haddad, a PSPC spokesperson, said in an email. The statement didn’t detail what was on the device, but on Sept. 17 PSPC said it was a USB key which, contrary to government rules, wasn’t encrypted. “An internal investigation is underway to examine why and how this happened and identify measures to ensure this does not happen again.” Deputy Minister Marie Lemay sent an email Sept. 7 to affected staff saying that “no banking or social insurance information was affected. However, your name, personal record identifier (PRI), date of birth, home address and salary range may have been on the stolen device.” …So far no incident has been reported involving malicious use of the stolen information. …The federal privacy commissioner has also been notified. PSPC hasn’t explained why it took 17 days for employees to be notified. [IT World Canada, Global News, Ottawa Citizen and CBC News]

Security

US – Consumers Have Most Confidence in Physician’s Health Data Security

A full 87% of consumers surveyed by Rock Health [see results here] said they had confidence in the health data security of their physician, but that number dropped to 68% for pharmacies and 60% for health insurance companies.  Confidence in data security correlated closely with consumers’ willingness to share their health data: 86% of consumers said they were willing to share their health data with their physician, but only 58% were willing to share with health insurance companies and 52% with pharmacies.  Those numbers dropped significantly for other organizations, according to the survey of close to 4,000 consumers: 47% of respondents had confidence in the health data security of research institutions, 35% in pharmaceutical companies, 26% in government organizations, and 24% in tech companies. These results closely correlate with willingness to share health data. [HealthIT Security]

WW – Vendors Could Be the Weakest Link in Many Cyber Defense Strategies

The growing frequency and intensity of cyberattacks, combined with new data privacy and security regulations, has made cybersecurity a top priority for most organizations. But while there is plenty of attention paid to a firm’s own data, many forget that their partners and suppliers hold a wealth of information on them.  Information Management recently spoke with Jessica Ortega, a web security research analyst at SiteLock, about the risks that may be posed by a firm’s vendor partners. These are the questions she answered:

1) What level of responsibility do organizations have to ensure the cyber hygiene of their supply chain partners?;

2) In what ways could vendors potentially put the data of a client organization at risk, or endanger the client’s compliance status?;

3) What questions should an organization ask a potential vendor to evaluate that company’s cybersecurity practices and cyber hygiene?;

4) Once an organization has started to work with a vendor, what can be done to ensure the security of that vendor’s data?; and

5) Have any of your clients learned about vendor cybersecurity the hard way? Can you provide some lessons that our readers can take away from those mistakes? [Information Management]

WW – Targeted Ransomware on the Rise

Targeted ransomware attacks have been gathering pace and size, part of a trend toward stealthier and more sophisticated ransomware attacks – attacks that are individually more lucrative, harder to stop and more devastating for their victims than attacks that rely on email or exploits to spread. And the criminals do it in a way that’s hard to stop and easy to reproduce: they rely on tactics that can be repeated successfully, commodity tools that are easily replaced, and ransomware that makes itself hard to analyse by staying in its lane and cleaning up after itself. Targeted attacks can lock small businesses out of critical systems or bring entire organisations to a grinding halt, as a recent SamSam attack against the city of Atlanta showed. For every Atlanta-style attack that hits the headlines, many more go unreported. Attackers don’t care whether victims are big organisations or small ones; all that matters is how vulnerable they are. All businesses are targets, not just the ones that make the news. This post considers the anatomy of a targeted attack, looks at some real ransomware — including Dharma, SamSam and BitPaymer — and gives some recommendations for responding.  [Naked Security]

Surveillance

US – Privacy Concerns Cancel Planned Wi-Fi Kiosks in Seattle

The plan to install kiosks with free Wi-Fi and create bus stops with Wi-Fi in Seattle was stopped by Seattle Mayor Jenny Durkan. The mayor’s spokesperson told KIRO 7 she has concerns about privacy.  The transit advertising company Intersection proposed a plan to the city in 2016 that would pay the City of Seattle between $97 million and $167 million for exclusive access for Link Wi-Fi kiosks in Seattle for the next 20 years, according to the company. Intersection’s proposal said the city could make another $100 million in ad revenue during that same period. The kiosks are already installed in New York City and London. Intersection also says it only asks for an email address to log on, not your name or other identifying information. Jen Hensley, president of Link at Intersection, says the kiosks do not collect any phone or wireless data. They do have video cameras on the devices to help maintain them and protect against vandalism; Link says the cameras are similar to what is used in an ATM. Intersection owns the footage, not the city, and for law enforcement to obtain it would require a subpoena or court order. The video is destroyed after seven days, according to the company. The ACLU applauded Mayor Durkan’s decision to scrap the plan. [KIRO 7 Seattle, The Urbanist and GeekWire]

US Government Programs

US – NSA Metadata Program “Consistent” With 4th Amendment, Kavanaugh Argued

During the second-to-last day of hearings before the Senate Judiciary Committee, Sen. Patrick Leahy (D-Vt.) had an interesting exchange over recent privacy cases with the Supreme Court judicial nominee, Judge Brett Kavanaugh.  Opening their six-minute tête-à-tête, Leahy began by asking Kavanaugh about what he wrote in November 2015 in a case known as “Klayman v. Obama” [805 F.3d 1148 (2015) here]. The complaint argued that the National Security Agency’s telephone metadata program (“Section 215”) [see overviews here & here & wiki here], which gathered records of all incoming and outgoing calls for years on end, was unconstitutional.  [In his concurring opinion, Kavanaugh argued] that even if the Section 215 metadata program was a search, it should be considered “reasonable” in the name of national security. “The Fourth Amendment allows governmental searches and seizures without individualized suspicion when the Government demonstrates a sufficient ‘special need’—that is, a need beyond the normal need for law enforcement—that outweighs the intrusion on individual liberty,” he wrote. “Examples include drug testing of students, roadblocks to detect drunk drivers, border checkpoints, and security screening at airports.” Responding to Leahy, Kavanaugh said, “I was trying to articulate what I thought based on precedent at the time, when your information went to a third party and when the government went to a third party, the existing privacy Supreme Court precedent was that your privacy interest was essentially zero. The opinion by Chief Justice Roberts this past spring in the “Carpenter” case [119 pg PDF here & legal coverage here] is a game changer.” “Do you think if Carpenter had been decided you would have written the concurrence you did in Klayman?” Leahy asked. “I don’t see how I could have,” Kavanaugh said.
While he didn’t come right out and say it, Leahy seemed to be probing whether Kavanaugh subscribes to what many legal scholars call the “mosaic theory.” This is the notion that, while a series of discrete surveillance or near-surveillance actions may individually be legal, there comes a point when those actions, aggregated over a long enough period of time, become an unreasonable search in violation of the Fourth Amendment.  But when Kavanaugh addressed whether or not he agreed with the mosaic theory, he was measured in his answer, seeming to suggest that he disagreed with his DC appeals court colleagues on this point.  The Senate Judiciary Committee is expected to vote on his nomination on September 17. [Ars Technica and at: The Washington Times]

US – Federal Court Says NSA PRISM Surveillance Good and Legal Because the Gov’t Said It Was Good and Legal

Three years after its inception, a prosecution [United States of America v. Yahya Farooq Mohammad, case # 3:15-cr-385 – N.D. Ohio Sep. 12, 2018] involving possibly unlawful FISA-authorized surveillance, hints of parallel construction, and a very rare DOJ notification of Section 702 evidence has reached a (temporary) dead end. The defendants challenged the evidence on multiple grounds — many of which weren’t possible before the Snowden leaks [wiki here] exposed the breadth and depth of the NSA’s domestic surveillance.  The federal judge presiding over the case — which involved material support for terrorism charges — has declared there’s nothing wrong with anything the NSA or FISA Court did, so long as the surveillance was authorized and possibly had something to do with national security. …The court says these more-recent exposures are no reason to upset the precedential apple cart.  So, to add this all up: leaked documents from 2013 onward, exposing routinely-abused programs that massively expanded following the 2008 FISA Amendments Act, mean nothing when stacked up against a 2005 case predating the NSA’s admissions of surveillance abuse and the exposure of the FBI’s backdoor searches of domestic communications. Furthermore, the court declares — based on documents provided by the government directly to the court, but not to the defendants (in ex parte hearings) — the FISA-authorized surveillance was on the up-and-up because the government provided documents declaring the FISA-authorized surveillance was on the up-and-up. As for the Fourth Amendment challenge to Section 702 surveillance generally, the court says there really are no Fourth Amendment issues, as the amendment does not apply to “aliens in foreign territory.” The court goes even further, suggesting the collection of communications outside of the country does not even require a warrant, even if it “inadvertently” sweeps up Americans’ communications during the process. [TechDirt]

US Legislation

US – House Bill Would Create Financial Data Breach Notification Standard

A bill introduced Sept. 7 by Rep. Blaine Luetkemeyer, R-Mo., of the House Subcommittee on Financial Institutions and Consumer Credit [here & wiki here], aims to create a national standard for financial institutions to notify consumers of data security breaches [“Consumer Information Notification Requirement Act” – see notice here & 5 pg PDF text here]. The legislation would amend the Gramm-Leach-Bliley Act [145 pg PDF here, FTC here & wiki here] to require financial institutions to issue breach notices “in the event of unauthorized access that is reasonably likely to result in identity theft, fraud, or economic loss.” Banks would be covered, along with non-banking financial institutions “to the extent appropriate and practicable,” according to the bill’s language.  The bill does not appear to have companion legislation in the Senate, and its chances for becoming law in the short term are unlikely in the current session of Congress. [MeriTalk]

US – California Passes Bill That Regulates Security for Internet of Things Devices

The California State Legislature recently passed a first-of-its-kind bill on Internet of Things (IoT) security titled SB-327 Information Privacy: Connected Devices and sent it to the governor for his signature. The bill introduces regulations for all connected devices sold in the United States. A quick read-through shows the bill leaves a lot to be desired. Specific guidelines are not established, and many features that need to be included in a bill centered around security are not present. For example, manufacturers should be required to perform a security audit on components purchased from overseas. Despite not being complete, this legislation is a step toward much-needed oversight of security measures. Manufacturers like Google and Amazon place strong security protocols on their products, but even these can be broken by a determined hacker or via a weak link in a connected system. A bill like this will place pressure on American manufacturers to ensure all connected devices provide device-level protection against attacks. A connected device is defined as any device that connects to the Internet and has an IP or Bluetooth address. [Digital Trends,The Washington Post]

+++

16-31 August 2018

Canada

CA – Mandatory Data Breach Response Obligations Effective November 1, 2018

Every organization subject to Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA) must act now to ensure it’s ready to comply with the Digital Privacy Act’s [here] new mandatory data breach response requirements as of November 1, 2018 – or face significant non-compliance consequences [also see coverage]. Basic data breach risk management planning, including steps to reduce the risk of breaches in the first place and action plans to ensure readiness when breaches do occur, is key to ensuring compliance in this evolving legal landscape. But complying with these new obligations won’t happen overnight: the new record-keeping, reporting and notification rules are strict and onerous, and the advance preparation necessary to reduce the associated liability and reputational risks when a breach does occur requires time and coordination of external expertise and internal stakeholders. This post explores five key areas to focus on when preparing to comply with the Digital Privacy Act’s new mandatory data breach response requirements: 1) Understand the New Obligations – Well; 2) Deal With Third Party Contractor Risks; 3) Deal With Employee Risks; 4) Record-Keeping and Reporting Requirements; and 5) Protect Your Legal Privilege. [Insights (McInnes Cooper)]

CA – Privacy Laws Prevent Disclosure of Sexual Misconduct by University Staff

Administrators at the University of Manitoba want to re-examine the provincial privacy laws that they say prevent them from sharing details about the sexual misconduct of past employees with potential new employers. Last year, jazz professor Steve Kirby retired from the university after an internal investigation report said he repeatedly made inappropriate sexual comments and unwanted sexual contact with a female student. Kirby was subsequently hired by the Berklee College of Music in Boston, but was fired after the U of M complainants told Berklee administration about the sexual harassment allegations [CBC Report]. The university says Manitoba’s Freedom of Information and Protection of Privacy Act [FAQ] and labour laws prohibit sharing internal investigations with other potential employers. This interpretation of the law is sparking student concern that professors could continue the behaviour with other students. University of Manitoba law professor Karen Busby, who has researched sexual assault policies across Canada, says Manitoba institutions are bound by the same privacy laws that exist in every province and territory. “It’s really hard for people to understand that, but privacy law is clear that a disciplinary matter is a private matter and the employer is not free to disclose disciplinary matters to the media, to future employers, to other employees, unless they need to know for compelling and safety reasons,” she said. [CBC News and at: The Globe and Mail]

Electronic Records

US – CISOs Unite to Improve IT Security in Healthcare Supply Chain

Healthcare CISOs have set up a council to develop, recommend, and promote security best practices to bolster IT security throughout the healthcare supply chain [Press Release]. Founding members of the Provider Third Party Risk Management Council include CISOs from Allegheny Health Network, Cleveland Clinic, University of Rochester Medical Center, University of Pittsburgh Medical Center, Vanderbilt University Medical Center, and Wellforce/Tufts University. Healthcare organizations rely on a plethora of vendors of all sizes for support, including processing and maintaining data, providing analytics, and performing operational tasks. Vendor security is one of the biggest risks for healthcare organizations and one of the biggest sources of frustration for CISOs. To address this challenge, the council is working with the Health Information Trust Alliance (HITRUST) to improve third-party vendor security. The HITRUST Common Security Framework (CSF) will serve as the security standard for the council. [Health IT Security, HelpNetSecurity, Healthcare Informatics and Health Data Management]

Encryption

US – Feds Want to Wiretap Facebook Messenger Voice Calls

The U.S. federal government is trying to force Facebook to break the encryption of its Messenger app in a lawsuit that’s under seal. The government wants to be able to intercept Messenger voice calls in its investigations, but Facebook is reportedly contesting the government’s request. On August 14, the judge in the Messenger case heard the government’s arguments to hold Facebook in contempt of court over the company’s refusal to break the Messenger app’s encryption. The government wishes to carry out a surveillance request in a case involving a criminal group of undocumented immigrants. Facebook claimed in court that the Messenger app uses end-to-end encryption for all voice calls, which means the company itself can’t intercept those calls and neither can the government. The only way for Facebook and the government to intercept those conversations would be for Facebook to cripple or remove the end-to-end encryption between all users. Alternatively, either Facebook or the government would need to hack the users’ devices in order to obtain the conversations that are automatically decrypted locally by the application. Both Facebook and the U.S. government have refused to comment on this case. [Tom’s Hardware, Reuters, The Verge and Fortune]

US – Tech Industry Told ‘Privacy Is Not Absolute’ and End-to-End Encryption ‘Should Be Rare’

An international network of intelligence agencies has told the tech industry that ‘privacy is not an absolute’ and that the use of end-to-end encryption ‘should be rare’. The statements were made in a joint communiqué and statement of principles following a meeting of the so-called Five Eyes nations – the US, UK, Canada, Australia and New Zealand. The statement on privacy contains a veiled threat to tech companies that they may face legislation if they don’t take steps to ensure that they can allow access to ‘appropriate government authorities.’ The documents acknowledge the importance of encryption, but effectively argue that end-to-end encryption should not be made routinely available for messaging. [9 to 5 Mac and at: AppleInsider, CSO Online, SiliconRepublic, IT Pro and The Register]

EU Developments

EU – Top Human Rights Court Denies Right to Be Forgotten in Old Murder Case

On June 28, 2018, the European Court of Human Rights decided that Germany had correctly denied two individuals their “right to be forgotten” requests in connection with press archives relating to a 1991 murder [in “M.L. and W.W. v. Germany” – see Press Release]. The two individuals were convicted of the murder of a well-known German actor. They were released from prison in 2008 and brought actions against a German radio station and a weekly magazine asking that articles and radio interviews relating to the 1991 murder be removed from their website archives. The matter reached the German Supreme Court, which held that the interests of the public in having access to the information outweighed the interference with the plaintiffs’ privacy rights. The two individuals then sued Germany before the European Court of Human Rights (ECtHR) arguing that Germany had violated their privacy rights under Article 8 of the European Convention on Human Rights. The ECtHR found that the German Supreme Court had correctly applied the balancing test relating to right to be forgotten claims. Although the Court analyzed extensively the CJEU’s Google Spain case law, the ECtHR’s finding is based solely on Article 8 of the European Convention on Human Rights, which provides for a broad right to privacy. The ECtHR said that the availability of the press articles on the 1991 murder created an interference with the plaintiffs’ privacy rights under Article 8, and that consequently a right to be forgotten request of this type can potentially be made under the European Convention. However, the Court then pointed out that the privacy right under Article 8 had to be balanced against freedom of expression and freedom to access information under Article 10 of the European Convention. [Chronicle of Data Protection (Hogan Lovells) and at: Inforrm’s Blog]

EU – Privacy Shield on Shaky Ground

One of the most pressing privacy and data protection issues is the uncertain fate of Privacy Shield, the framework governing the flow of data between the EU and the U.S. for commercial purposes. The Trump Administration has been given an ultimatum: comply with Privacy Shield, or risk a complete suspension of the EU-U.S. data sharing agreement. In a letter dated July 26, EU commissioner for justice Věra Jourová warned U.S. commerce secretary Wilbur Ross that the threat of suspending the EU-U.S. Privacy Shield system would incentivize the U.S. to comply fully with the terms of the agreement. But Jourová’s urging that Ross “be smart and act” in appointing senior personnel to oversee the data sharing deal is hardly new. The July letter closely echoes a European Parliament (EP) resolution passed just three weeks earlier, and the European Commission (EC) voiced similar sentiments in its review of the Privacy Shield Framework last September. Further adding to the chorus of voices raising concerns about Privacy Shield compliance are tech and business groups, which jointly called for the nomination of a Privacy Shield ombudsperson in an Aug. 20 letter. In addition to admonishing the EC’s failure to hold the U.S. accountable thus far, the EP resolution calls for a suspension of Privacy Shield if the U.S. has not fully complied by Sept. 1, though no such suspension has yet been announced. It also expresses serious concerns regarding the U.S.’s recent adoption of the Clarifying Lawful Overseas Use of Data (Cloud) Act and the legislation’s potential conflict with EU data protection laws. With the General Data Protection Regulation (GDPR) having come into effect on May 25, 2018, the EP considers the EC in contravention of GDPR Article 45(5), which requires the EC to repeal, amend, or suspend an adequacy decision, to the extent necessary, once a third country no longer ensures an adequate level of data protection; on the EP’s reading, suspension should stand until U.S. authorities comply with the agreement’s terms.
The immediate tug-of-war between the U.S. and the EU on the validity of Privacy Shield will signal quite a bit about the strength of the EU’s convictions and the future of global privacy legislation. [Lawfare Blog and at: The Register, Cloud Tech, ComputerWeekly and Computer Business Review]

Filtering

WW – Google Employees Push Back Over Plans to Build a Censored Search Engine for China

A war is raging inside Google over the company’s plans to launch a censored version of its search engine in China [see The Intercept]. Thousands of its employees reportedly oppose the move [see The Intercept here]. A letter condemning the plan has been circulated on Google’s internal communication systems, signed by more than 1,400 employees, according to the New York Times [and The Intercept here], which first reported the uproar. According to the letter [see GIZMODO], Project Dragonfly, as the secret operation is known, raised “urgent moral and ethical questions” and the signatories asked Google’s leadership to be more transparent on the move. Earlier this week, Brandon Downey, a former Google engineer who says he worked on an earlier version of its censored Chinese search platform, published an essay criticizing the plans. The app would conform to the Chinese government’s strict censorship rules and remove content on sensitive topics such as political dissidents, free speech, democracy, human rights, and peaceful protest. After its existence was made public, a source at Google said it was unclear if the product would ever get the green light. Google CEO Sundar Pichai said the company is “not close” to launching a search product in China but added that it was very interested in the market and that the company is “exploring many options,” according to sources speaking to CNBC. [Vice News, Media Post, Vox, Naked Security and Financial Times; see also: A majority of Google employees are content with offering a censored search engine in China]

Finance

US – Fintech Apps: Consumer Privacy Concerns Remain High

Nearly one-third of U.S. banking consumers use online and mobile fintech apps to help manage their money, according to a new survey by The Clearing House [see PR, key findings here]. But those users are concerned about data privacy and want more control over the financial data their apps can access, says David Fortney, the organization’s executive vice president. The survey asked app users: “What’s your level of comfort sharing data with the fintech apps?” And virtually all “had some level of concern or discomfort,” Fortney says in an interview with Information Security Media Group [listen here]. So who would consumers trust as custodians of their data? The research shows they would trust financial institutions. Fortney also discusses: 1) The types of data that consumers are most and least comfortable providing to fintech apps; 2) Demographic trends in fintech app privacy and security; and 3) How to get consumers to feel safer using fintech apps. [GovInfo Security and at: Bankrate.com, FinExtra Blog and Financial Regulation News]

FOI

CA – Nova Scotia re-launches FOIPOP website after 152 days of being offline

A 152-day saga came to an end on September 5 as the Nova Scotia government brought its Freedom of Information and Protection of Privacy (FOIPOP) website back online after it was revealed in April that a data breach had exposed social insurance numbers, birth dates and personal addresses to the general public. However, the new website does not currently have the same features its predecessor did. “Only publicly released access to information requests are available on the site. The site does not host any personal information and is not connected to the case management system,” said a press release announcing the launch. Any releases made since April 1 will soon be available on the site. With the service at least partially restored, the remainder of this post includes everything we know about the breach, the website and what has happened behind the scenes, detailed through internal emails, briefing documents and reports obtained through FOIPOP requests. [Global News]

CA – OIPC AB Permits Utility Regulator to Disregard Request

The OIPC AB responded to a request by the Alberta Energy Regulator to disregard an access request pursuant to section 55(1) of the Freedom of Information and Protection of Privacy Act. An email request by landowners was vexatious because the purpose of their request was not to obtain access but to use their email request, copied to dozens of unrelated email addresses, to have a public platform to insult and degrade the regulator and its staff; the landowners may clearly confirm in writing and using non-abusive language that they want access to information already specified by the regulator. [OIPC AB – Request for Authorization to Disregard an Access Request under section 55(1) of the Freedom of Information and Protection of Privacy Act – Alberta Energy Regulator]

Genetics

US – FPF Best Practices for Consumer Genetic Testing Services

The Future of Privacy Forum issued best practices for the use of genetic data generated by consumer genetic and personal genomic testing services. Consumer genetic testing services (e.g., Ancestry, MyHeritage) must obtain express consent for collection, analysis and marketing of genetic data (parental consent is required for consumers under 18), and secure data through encryption, data use agreements and access controls; genetic data may be disclosed to law enforcement without consent only where required by valid legal process. [FPF – Privacy Best Practices for Consumer Genetic Testing]

US – 23andMe Says Privacy-Loving Customers Need to Opt Out of its Data Deal With GlaxoSmithKline

Customers of 23andMe (the genetics testing company) need to be aware of how the company is using data that users may have earlier consented to give without anticipating its newer initiatives. One new tie-up was a particular point of interest at TechCrunch’s massive Disrupt show in San Francisco. 23andMe CEO and co-founder Anne Wojcicki was asked a series of questions about 23andMe’s pact with pharmaceutical giant GlaxoSmithKline, which announced in July that it acquired a $300 million stake in 23andMe. As part of the four-year deal, GSK gains exclusive rights to mine 23andMe’s customer data to develop drug targets. 23andMe has for the last three-and-a-half years been sharing insights with GSK and six other pharmaceutical and biotechnology firms. Now, GSK alone will be able to access the aggregated and wholly anonymized customer information. 23andMe customers have expressed some chagrin about the deal, and Wojcicki’s appearance today might not assuage them. The reason: she underscored that 23andMe customers aren’t being asked to opt in to this data-sharing agreement, but rather, they are being told they can opt out via email. To people who treasure their privacy, that’s not enough. [TechCrunch and at: Medium]

Health / Medical

US – Unsecured Medical Record Systems and Devices Put Patient Lives at Risk

A team of physicians and computer scientists at the University of California has shown that it is easy to modify medical test results remotely by attacking the connection between hospital laboratory devices and medical record systems. These types of attacks might be more likely used against high-profile targets, such as heads of state and celebrities, than against the general public. But they could also be used by a nation-state to cripple the United States’ medical infrastructure. Dubbed Pestilence, the attack is solely proof-of-concept and will not be released to the general public. While the vulnerabilities the researchers exploited are not new, this is the first time that a research team has shown how they could be exploited to compromise patient health. These vulnerabilities arise from the standards used to transfer patient data within hospital networks, known as the Health Level Seven standards, or HL7 [see wiki]. Essentially the language that allows all devices and systems in a medical facility to communicate, HL7 was developed in the 1970s and has remained untouched by many of the cybersecurity advances made in the last four decades. [Science Daily]
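The core weakness the researchers exploited is that HL7 v2 messages are plain pipe-delimited text with no built-in authentication or integrity checks, so anyone positioned between a lab device and the records system can rewrite a result without breaking the format. The Python sketch below illustrates the general idea with a simplified, hypothetical OBX (observation) segment; it is not the researchers' Pestilence tool, and the field layout is abbreviated for clarity.

```python
# Simplified, hypothetical OBX (observation) segment: a glucose result of 95 mg/dL.
message = "OBX|1|NM|2345-7^GLUCOSE^LN||95|mg/dL|70-99|N"

def set_value(obx_segment: str, new_value: str) -> str:
    """Rewrite OBX-5, the observation value, in a pipe-delimited segment."""
    fields = obx_segment.split("|")
    fields[5] = new_value  # field 5 holds the result value
    return "|".join(fields)

tampered = set_value(message, "300")
print(tampered)  # nothing in the format marks this as altered
```

Because the protocol carries no signature or checksum over the message, the tampered segment is byte-for-byte as valid as the original, which is why the researchers argue for encrypting and authenticating HL7 traffic in transit.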

WW – Insider Threats Account for Almost 1/3 of Healthcare Breaches

A Protenus breach study, in collaboration with DataBreaches.net, examined breaches in the healthcare sector from April to June 2018. The majority of insider incidents were committed by first-time offenders, who are more than 30% likely to commit a second offence within three months and a third offence within a year; almost three-quarters of offenders snooped into patient records belonging to a family member. [Q2 2018 Breach Barometer – Protenus]

CA – OIPC NS Finds Multiple Violations by Pharmacist

An OIPC NS report investigated breaches of personal health information in the provincial Drug Information System pursuant to the Personal Health Information Act. Organizations authorized to access the provincial drug information system were not sufficiently monitoring their staff’s access (lookups without user notes and not associated with dispensing activity were not audited), which allowed a pharmacist to snoop into the PHI of 46 individuals in an EHR database, including her doctor, co-workers, and her child’s teachers. [OIPC NS – Investigation Report IR18-01 – Drug Information System Privacy Breaches, Department of Health and Wellness]

Horror Stories

CA – Air Canada Resets 1.7 Million Accounts After App Breach

Air Canada has been forced to issue a password reset for all 1.7 million users of its Android, iOS and BlackBerry mobile app after up to 20,000 accounts were compromised by hackers last week. According to this alert, the company detected “unusual login behaviour” between August 22 and 24, after which it blocked further access. For the 20,000 people believed to be directly affected by the breach, two types of data were put at risk: a) Name, email address, telephone number, and Air Canada Aeroplan account number; and b) Potentially also passport number, NEXUS number (a system allowing rapid crossing of some borders), Known Traveler number, gender, birth date, nationality, passport expiration date, country of issuance, and country of residence. Credit card numbers were encrypted and were not compromised. Passwords associated with the company’s Aeroplan points program were also not at risk, but users should still monitor transactions, Air Canada said. Arguably, caution dictates that passengers should cancel their passports and buy new ones. If customers wish to go down this route, it’s hard to see how Air Canada won’t be expected to reimburse that cost. [Naked Security and at: CBC News, CTV News, The Canadian Press, The Register and BBC News]

Identity Issues

CA – OIPC SK Reluctantly Finds Driver’s License Data Not PI

This OIPC SK report reviewed the Saskatchewan Government Insurance’s response to a request for records pursuant to the Freedom of Information and Protection of Privacy Act. An insurance body properly withheld some parts of an internal privacy breach report that might sway jury members in impending legal proceedings; however, it must disclose driver’s licence details, which the OIPC SK believes should, but does not, fall under the definition of PI in the FOI legislation (such data should be protected as it can be used for fraud and identity theft). [OIPC SK – Review Report 146/2017 – Saskatchewan Government Insurance]

CA – OIPC SK Stands by Current Victim Identity Law

Saskatchewan’s privacy commissioner Ronald Kruzeniski does not believe the legislation behind the Regina police’s policy on naming murder victims needs to be changed. The option was not ruled out by Justice Minister Don Morgan to settle a difference in the way he and the Regina Police Service interpret the Local Authority Freedom of Information and Protection of Privacy Act (LAFOIP) [PDF] which came into effect in January. Regina Police Chief Evan Bray has decided to release the names of murder victims on a case-by-case basis [watch video]. Police will only release names in situations where it will help an investigation, to protect someone’s health or safety, after the first court appearance of someone charged in the crime or if it’s in the public interest. Minister Morgan argued the chief is taking the wrong approach in his interpretation and explained the starting point should be that names are released except in rare cases, like pending next-of-kin notification or if it would compromise an ongoing investigation. Kruzeniski said regardless of which direction is taken, it still ends up being a case-by-case determination. He believes you could draft the policy either way and end up with the same result. “It’s hard to put a real good definition on public interest and I think you have to rely on the police chief at the time to say ‘When I hear the summary of the facts, I think it’s in the public interest to release it,’” said Kruzeniski. [News Talk 980 CJME, The Canadian Press and see also opinion: The killing of someone, who shall remain nameless]

IN – Edward Snowden on Aadhaar Privacy: “The system is already going bad”

“You’ll be tracked, you’ll be monitored, you’ll be recorded in a hundred different ways and not by UIDAI [Unique Identification Authority of India], but by the Aadhaar number they created that is being used by every other company and every other group in society,” Edward Snowden said during a recent ‘Talk Journalism’ event. A video of Snowden’s live stage interview via Google Hangouts was published on YouTube recently [29:13 min – at Talk Journalism event on August 11 at Hotel Fairmont, Jaipur]. “The biggest crime behind this system is that it’s being used for things that are unrelated to what the Government is paying for. If you want to open an account, buy a train ticket, more and more of these services are demanding an Aadhaar number. Not just the number, they are demanding that you show them the physical card. This is creating a systemisation of society, of the public and this was not the intention of the programme,” Snowden said. He called for criminal penalties on companies that ask for a person’s Aadhaar number for a service that the Govt is not paying for. Snowden concluded his comments on Aadhaar by saying that the system is already going bad and that the privacy of Indians (digital or otherwise) is not adequately protected. [Digit and at: International Business Times, All India Roundup and SocialPost]

Law Enforcement

CA – Police Chiefs Push for New Data-Sharing Treaty With U.S.

Canada’s police chiefs are pressing the Trudeau government to sign a new electronic data-sharing agreement with the United States to overcome hurdles in the fight against crimes ranging from fraud to cyberterrorism. But the government and the federal privacy commissioner say more consultation and study are needed to ensure appropriate protection of personal information before taking such a step. The Canadian Association of Chiefs of Police recently passed a resolution [see pg 7 of PDF] urging the federal government to negotiate an updated sharing agreement with the U.S. They say cross-border access to information is one of the most pressing issues for law enforcement agencies. The chiefs see an opportunity for a virtual leap forward following Washington’s passage of the Clarifying Lawful Overseas Use of Data (CLOUD) Act [H.R.4943 & wiki]. The new law allows the U.S. to sign bilateral agreements with other countries to simplify the sharing of information on criminal justice matters, as long as signatories have proper safeguards in place. The Liberal government has conducted consultations on cybersecurity, but it has yet to address some key questions about how to ensure police and spy agencies have access to information that will help them solve crimes in the digital realm without trampling on privacy or charter rights. A spokeswoman for privacy commissioner Daniel Therrien said that while the watchdog has not yet studied the police chiefs’ proposal, alternative arrangements to the current international legal assistance process should not undermine privacy protections in Canadian law. [CTV News]

Location

US – Lawsuit Over Google’s Sneaky Location Tracking Could Be a Game-Changer

The Associated Press revealed that Google continues to collect location data from users even when “Location History” is disabled in its options. The company was unapologetic, but did change its location policy. Now, a Californian named Napoleon Patacsil has filed a lawsuit [5:18-cv-05062 – here & here – read complaint] against Google in federal court and requested a judge grant the case class-action status so that other Google users could join. If the suit is granted class-action status, practically every breathing American could potentially join in as a Plaintiff. The suit accuses Google of violating California’s privacy laws on three counts. It cites section 637.7 of the penal code, which “prohibits the use of an electronic tracking device to determine the location or movement of a person.” The second count builds on the first, and claims Google violated the plaintiff’s reasonable expectation of privacy. This claim goes on to say that “Google engaged in true tracking of location history deceptively and in direct contradiction of the express instructions of Plaintiff and the members of the Class.” The third count goes further, saying that the Plaintiff’s “solitude, seclusion, right of privacy, or private affairs” were violated “by intentionally tracking their location.” The suit claims that Google has caused harm to its users “because they disclosed sensitive and confidential location information, constituting an egregious breach of social norms” and were the victims of an “intrusion into their private affairs.” On the same day that Patacsil filed his lawsuit, activists from the Electronic Privacy Information Center sent a letter [3 pg PDF here] to the Federal Trade Commission encouraging it to investigate Google for potentially violating a consent decree it signed with the agency in 2011. [GIZMODO and at: Reuters, Ars Technica, BGR, Tom’s Guide and Courthouse News Service | Napoleon Patacsil et al. v. Google, Inc. – Class Action Complaint – United States District Court Northern District of California, San Francisco/Oakland Division]

Online Privacy

US – Facebook Users Are Changing Their Social Habits amid Privacy Concerns

The Pew Research Center has released the results [also see 3 pg PDF] of a survey that shows many Facebook users have changed how they interact with the site over the past year. The center found that 54% said they had adjusted their privacy settings, 42% had taken a break from the platform for at least several weeks and 26% said they deleted the Facebook app from their phone in the past year. In all, 74% of those surveyed had taken at least one of those actions over the past 12 months, though it’s unclear if that’s a typical rate or a response to recent privacy-related scandals. Pew also found a difference between older and younger users. While 44% of Facebook users between 18 and 29 years old said they deleted the app sometime in the last year, only 12% of users 65 years of age or older said they had done the same. Similarly, while around 64% of users aged 18 to 49 said they had changed their privacy settings, only 33% of users 65 years old or older said they’d done so. Pew didn’t find any major differences between Democrats and Republicans. [engadget and at: The Washington Post and Bloomberg]

WW – Apple App Store Data Privacy Policy Changes

Apple’s new privacy policy for its Apple App Store takes effect on October 3, 2018. After that date, developers will have to submit privacy policies for new apps and updates before they can be submitted for distribution. To prevent surreptitious policy changes, developers will be permitted to edit policies only when they submit a new version of the app. The privacy policies must include clear information about what data are collected; how the data are collected; how the data are stored; what is done with the data; and how users can revoke their consent and demand that their data be deleted. Apple also requires that the policy promise that any third-party entities with which the data are shared abide by the same rules.

  • ZDNet: Apple looks to plug App Store privacy hole with new personal data policy
  • Computerworld: Apple insists developers ramp up their privacy commitments
  • apple.com: 5.1.1 Data Collection and Storage

WW – Google Selling 2FA Security Keys

Google is now selling its USB and Bluetooth Titan FIDO-based security keys for two-factor authentication (2FA). Google has been using the keys internally; last month, the company said that since their adoption more than eight months ago, none of its employees’ accounts has been phished.

  • The Verge: Google’s in-house security key is now available to anyone who wants one
  • CNet: You can buy Google’s $50 set of Titan security keys now
  • Bleeping Computer: Google’s FIDO Based Titan Security Key Now Available for $50 USD

Other Jurisdictions

AU – Australian Commissioner Provides Final Guidance on Access

The New South Wales Information Privacy Commissioner has issued final guidance on patient access requests. The IPC issued a draft version in June 2018. Providers must respond to an access request within 45 days and generally provide access to an individual’s health information upon request, subject in some cases to identity verification and a reasonable fee; exceptions include where providing access would pose a serious threat to the health of the requester or others, have an unreasonable impact on others’ privacy, or there have been repeated, unreasonable requests. [IPC New South Wales – Access to Health Information: Fact Sheet for Health Care Providers | Checklist for Private Sector Staff]

Privacy (US)

US – More States Appoint ‘Chief Privacy Officers’ to Protect People’s Data

In this age of hackers and cybercriminals, every state has a top security official focused on preventing breaches and protecting the vast amounts of data it collects. Now, a growing number also are hiring a top official to make sure that the privacy of residents’ personal data is protected as well. Many large companies have employed ‘chief privacy officers’ for years, but they were rare in state government. A decade ago, there were only a few; today, at least eight states have them — Arkansas, Indiana, Kentucky, Ohio, South Carolina, Utah, Washington and West Virginia, according to the National Association of State Chief Information Officers [NASCIO]. Arkansas hired its first in June. States collect reams of confidential information from residents. Chief privacy officers are tasked with ensuring that state agencies safeguard that information and comply with privacy regulations. That means state employees who handle data must know how to protect sensitive information when they use or share it. Chief privacy officers typically create statewide privacy policies that apply to every agency and require that staffers be trained. They meet regularly with state agencies’ privacy teams and evaluate new technology to make sure it doesn’t conflict with privacy protections. Some also offer services to consumers to educate them about protecting their privacy. State chief privacy officers work closely with chief information security officers, who oversee cybersecurity. Chief privacy officers also must make sure their efforts don’t impede the public’s right to know. A lot of data collected by states isn’t private; it’s public information that should be accessible to anyone. [Stateline Blog (The Pew Charitable Trusts)]

Security

AU – Want to Hack the WA Government? Try ‘Password123’

A staggering 60,000 of 234,000 active accounts at a range of WA government agencies were potentially at risk of a dictionary attack due to their weak passwords, a review by WA Auditor General Caroline Spencer has found. For the report [see notice, report summaries & video here & PDF report] the Auditor General’s office obtained encrypted password data from 23 Active Directory environments across 17 agencies. Using a selection of password dictionaries, it found that tens of thousands of users had chosen weak passwords, including “Password123” (1,464 accounts), “password1” (813), “password” (184), “password2” (142) and “Password01” (118). The auditor also assessed the information security controls surrounding key business applications at five government agencies. All five “had control weaknesses with most related to poor information security and policies and procedures”. Earlier this year the WA government transformed the Office of the Government Chief Information Officer into the Office of Digital Government [here] and moved it to the Department of the Premier and Cabinet [here]. In its response to the audit report, DPC said that the move would help “ensure that ICT performance, data sharing and cyber security are strengthened”. [Computerworld]
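The audit technique described above can be sketched in a few lines: hash each dictionary candidate the way the directory stores passwords, then look for stored hashes that match. This is a minimal illustration, not the auditor’s actual tooling; the account names are invented, and SHA-256 is used as a portable stand-in for the hash Active Directory really stores (an MD4 digest of the UTF-16LE password).

```python
import hashlib

# Common weak passwords the WA audit flagged.
COMMON_PASSWORDS = ["Password123", "password1", "password", "password2", "Password01"]

def hash_password(password: str) -> str:
    # Stand-in for the directory's one-way hash (AD actually uses MD4 over
    # UTF-16LE; SHA-256 keeps this sketch portable across Python builds).
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

def audit_accounts(account_hashes: dict[str, str], dictionary: list[str]) -> dict[str, str]:
    """Return accounts whose stored hash matches a dictionary password."""
    lookup = {hash_password(p): p for p in dictionary}
    return {acct: lookup[h] for acct, h in account_hashes.items() if h in lookup}

# Hypothetical directory dump: two of three accounts use weak passwords.
accounts = {
    "alice": hash_password("Password123"),
    "bob": hash_password("correct horse battery staple"),
    "carol": hash_password("password1"),
}
weak = audit_accounts(accounts, COMMON_PASSWORDS)
print(weak)  # {'alice': 'Password123', 'carol': 'password1'}
```

Because the comparison is done hash-to-hash, an auditor never needs to recover plaintext passwords, only to show which accounts fall to a known dictionary.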

US – Augusta University Health Exposed 417K Records Due to Phishing Attacks

Once again, a medical data breach has exposed thousands of patients. This time, the victims are primarily residents of the state of Georgia. Augusta University Health reportedly suffered a data breach due to multiple phishing attacks over the past year. Regrettably, the breach has exposed around 417,000 records [see AUH notice & FAQ]. Sophisticated phishing attacks targeted Augusta University in two separate instances. The first incident took place on September 10-11, 2017. Initially, the university suspected that the breach exposed only a “small number of internal email accounts”; however, this year it determined that those accounts exposed approximately 417,000 records. The second phishing attack happened on July 11, 2018, with a much smaller scope. The breached data includes explicit personal information about the patients, as well as their medical and health records. In some cases, the breach of financial records and Social Security numbers is also suspected. [Latest Hacking News and at: DARK Reading, Atlanta Journal Constitution, Healthcare IT News and SC Magazine]

US – 1000 GAO Cybersecurity Recommendations Remain Unaddressed

Since 2010, the Government Accountability Office (GAO) has made over 3,000 recommendations [see some here] to agencies aimed at addressing cybersecurity shortcomings across a range of action areas. However, as of this month, about 1,000 have not been implemented. Until these shortcomings are addressed, federal agencies’ information and systems will be increasingly susceptible to the multitude of cyber-related threats that exist. There is much work to do to protect the public by both government and the private sector. The GAO has been examining federal efforts on several cybersecurity fronts, including protecting Americans’ privacy, protecting critical infrastructure such as telecommunications and financial markets, and protecting the federal government’s own operational IT systems, such as those that are essential to the day-to-day workings of government. Urgent actions are needed to address several cybersecurity challenges facing the nation. The risks to IT systems supporting the federal government and the nation’s critical infrastructure are increasing as security threats continue to evolve and become more sophisticated. These risks include escalating and emerging threats from around the globe, steady advances in the sophistication of attack technology, the emergence of new and more destructive attacks, and insider threats from disaffected or careless employees. The GAO has identified a range of critical cyber challenges [see report to Congress] facing the federal government today and critical actions needed now to address them. GAO has had information security on its High Risk List since 1997 and will continue to track it as part of that list, which identifies programs that need concentrated attention from Congress and the Administration. [The Hill]

Surveillance

WW – Five Eyes Countries Want Tech Companies’ Help to Access Encrypted Communications

The countries known as the Five Eyes – the US, the UK, Canada, Australia, and New Zealand – have issued a joint statement suggesting that unless tech companies help law enforcement access communications protected by end-to-end encryption, they “may pursue technological, enforcement, legislative or other measures to achieve lawful access solutions.”

  • CNET: US and intelligence allies take aim at tech companies over encryption
  • NextGov: Five Eyes Intel Alliance Urges Big Tech to Help Break Encrypted Messages
  • Infosecurity-magazine: Five Eyes Talk Tough on Encryption Backdoors
  • Homeaffairs.gov.au: Statement of Principles on Access to Evidence and Encryption

Workplace Privacy

CA – Ontario Law Limits Employee Screening

McCarthy Tetrault examines Ontario’s Bill 113, the Police Record Checks Reform Act, 2015, effective November 1, 2018. The law firm says that employers cannot obtain non-conviction information except for a vulnerable sector check, or use/disclose the results of a check other than for the original purpose or as authorized by law; an individual’s written consent must be obtained before the check is conducted and must specify the check being consented to. [New Rules Regarding Police Record Checks: Employers Take Note – Jessica Wuergler, Associate, McCarthy Tetrault]

CA – OIPC AB Finds Significant Harm from Disclosure of Employee PI

The Office of the Information and Privacy Commissioner of Alberta was notified by La Coop fédérée of an unauthorised disclosure of personal information, pursuant to the Personal Information Protection Act. Three employees of the organisation clicked on malicious links in phishing emails and provided their authentication information (allowing the hacker access to PI in their inboxes); mitigation steps were taken (retaining an external consultant, extensive monitoring, refresher training). However, the breach was deliberate, and the compromised information (salary, bonus considerations, benefit plan types) can be used for identity theft and fraud. [OIPC AB – Breach Notification Decision – P2018-ND-083 – La Coop fédérée]

US – California Employers Must Get Applicant OK for Background Check

California employers, lenders, and landlords must obey the tougher of two privacy laws and inform applicants before investigating their background, the state Supreme Court ruled Aug. 20 [see “Connor v. First Student Inc” & opinion summary]. The 7-0 decision affects the thousands of credit, employment, and housing decisions made daily in California under two laws. One of them requires prior notice and authorization before certain types of background investigative reports are ordered. The other covers more consumer-oriented information that doesn’t require advance disclosure or consent. The justices upheld a 2015 lower court ruling [see discussion] that school bus transportation company First Student Inc., part of FirstGroup plc, failed to adequately notify and obtain consent from former Laidlaw International Inc. bus drivers and aides before it conducted background checks on 54,000 workers. The reports were ordered after First Student bought Laidlaw in 2007. [Bloomberg BNA and at: The Record (Law.com) and At The Lectern (Horvitz & Levy)]

CA – Liberals Consider ‘Right to Disconnect’ Outside Work Hours: Report

The federal Liberals are considering whether a reshaping of federal labour standards should include giving workers the right to ignore their job-related emails at home. The idea of putting into law a “right to disconnect” [wiki] is one of several policy areas the Liberals identify as meriting further study in a new report [“What we heard: Modernizing federal labour standards” – see PDF]. The report which provides results of a year-long consultation on changes to the federal labour code showed a split between employer and labour groups over whether the Liberals should set rules for workers in federally-regulated industries. That includes employees in transport, banking and telecommunications, and could also influence provincial labour laws. Labour groups argued that a legal right to turn off work devices, or workplace policies to limit the use of work-related devices when not at the office, would improve rest and not bite into family time. Employers were more cautious, telling federal officials that some companies need employees available to be on call after hours. And some employees choose to stay connected because they don’t work a traditional “9-to-5” workday. At least one employer group called any government action a “legislative over-step,” the report said. [National Post and at: The Canadian Press]

 

+++

 

01-15 August 2018

Biometrics

CA – OIPCs to Investigate Use of Facial Recognition at Calgary Malls

The privacy commissioners of Alberta and Canada are launching investigations into the use of facial recognition technology, without the public’s consent, in at least two malls in Calgary. A notice posted to the OIPC AB website says the investigation will look to determine, “what types of personal information are being collected, whether consent for collection or notice of collection is required or would be recommended, for what purposes personal information is collected, whether the data is being shared with other businesses, law enforcement or third parties, and what safeguards or security measures are in place to protect personal information.” Alberta’s privacy commissioner, Jill Clayton, opened the investigation based on the level of public interest. A similar notice was also posted to the Office of the Privacy Commissioner of Canada (OPC) website. The provincial and federal privacy offices will co-operate with each other. The owner of the malls, Cadillac Fairview, said the software was also running in other malls across Canada. The company said the cameras in the mall directories are used to better understand traffic flow and they “do not record or store any photo or video content.” [CBC News. Sources report Cadillac Fairview has confirmed it has suspended use of the system and has promised to “co-operate fully with the investigations” – see The Globe and Mail, CBC News]

AU – Australian Governments Continue Expanding Use of Facial Recognition

The Australian state of Western Australia is planning to trial facial recognition technology for enforcing bans on purchasing alcohol by certain individuals. A trial will be conducted in the Pilbara region, where high rates of violence have been blamed on alcohol consumption, with vendors informed by the system when a person has been banned from purchasing alcohol as a consequence of intoxicated driving or domestic abuse. Racing and Gaming Minister Paul Papalia said he would like the Scantek system, which is currently used to scan drivers’ licenses, to integrate facial recognition or something similar in the future. Australia’s Department of Home Affairs will begin loading driver’s license images into its new biometric database within months, and police are being trained on the new Driver License Facial Recognition Solution. Rights advocacy group Access Now has criticized the Australian Federal Government for its use of biometrics in public surveillance, while multiple state governments have challenged what they say is the expanding scope of the facial recognition systems set out in the Identity-matching Services Bill 2018 and related legislation. [The West Australian, The Courier-Mail, Biometric Update]

US – GAO to Examine Government and Commercial Use of Facial Recognition

Four US Senators and a House Judiciary Committee Member requested that the GAO investigate government use of commercial facial recognition tools. Facial recognition technologies raise serious concerns about individual privacy rights; a survey should examine which law enforcement agencies are using such technologies and how (how the technology is audited for accuracy, whether there is transparency regarding use, and whether there are redress procedures), what data commercial vendors use to “train” the matching algorithms, and whether data brokers buy from multiple vendors to create individualized profiles for marketing. [Letter to GAO Regarding Facial Recognition – Senator Ron Wyden et al. | Press release]

US – Facial Recognition Not Accurate Enough for Policing Decisions: Body Cam Company

Facial-recognition technology has been facing public scrutiny in recent weeks, especially since an American Civil Liberties Union experiment using Amazon’s facial recognition erroneously matched members of US Congress to a directory of mugshots of alleged criminals [ACLU blog post]. Axon, the country’s biggest supplier of body cameras, doesn’t want to face similar backlash. The company’s CEO Rick Smith told investors that Axon isn’t yet working on facial recognition to integrate into its products. The “accuracy thresholds,” said Smith, are not “where they need to be to be making operational decisions off the facial recognition.” Smith suggested that rolling out facial recognition at this point could scuttle the tech’s future in body cameras: “This is one where we think you don’t want to be premature and end up either where you have technical failures with disastrous outcomes or there’s some unintended use-case where it ends up being unacceptable publicly in terms of long-term use of the technology.” In addition, Smith noted there are accountability and privacy measures that haven’t yet been worked out with the tech’s application in body cameras. Commercialization would only come, Smith said, once all those issues had been resolved, and Axon had ensured “that it will be acceptable by the public in large.” [Quartz, GIZMODO. See also: Can US Law Enforcement Be Trusted With Facial Recognition Technology?]

Big Data / Analytics / Artificial Intelligence

EU – Starting Point for a Big Data Project: The Privacy Impact Assessment

The use of big data has brought much controversy, particularly when it involves sensitive information, concerns children, minorities or other vulnerable people, or where the decision-making has a significant impact on individuals. As both public interest and regulatory scrutiny of artificial intelligence, machine learning and big data continue to build, it is increasingly important for businesses to be aware of individuals’ rights over their data and to be prepared to demonstrate compliance with data protection laws. The data protection impact assessment (DPIA), also called a privacy impact assessment (PIA), is an important tool that organisations have at their disposal to ensure that their processing of personal data complies with data protection law and minimises the impact on privacy. The guide [“The Starting Point for a Big Data Project: The Privacy Impact Assessment”] is intended to explain why, when and how PIAs should be carried out in the context of a big data project. It also discusses some of the key issues that are likely to be identified in a PIA on a big data project and factors to consider when making risk-based decisions on the basis of a PIA. [Global Media and Communications Watch (Hogan/Lovells)]

WW – Weaponized AI and Facial Recognition Enter the Hacking World

The open-source intelligence-gathering tool Social Mapper — developed by Trustwave’s Jacob Wilkins — uses facial recognition to automatically search for targets across eight social media sites: Facebook, Twitter, LinkedIn, Instagram, Google+, the Russian social networking service VKontakte, and the Chinese social networking sites Weibo and Douban. Its purpose is to help pen testers and red teamers with social engineering attacks. Instead of manually searching social media sites for names and pictures, Social Mapper makes it possible to automate such scans “on a mass scale with hundreds or thousands of individuals.” After searching, it spits out a report such as a spreadsheet with links to targets’ profile pages or an HTML report that also includes photos. From there, your attacks are limited “only by your imagination.” If everyday malware is not considered evasive enough, then consider weaponized artificial intelligence (AI) and meet the new attack tool DeepLocker [created by the IBM team], a “highly evasive new breed of malware, which conceals its malicious intent until it reaches a specific victim.” DeepLocker “unleashes its malicious action as soon as the AI model identifies the target through indicators like facial recognition, geolocation and voice recognition.” To show off DeepLocker’s capabilities, the researchers camouflaged WannaCry ransomware in a video conferencing app. Going undetected by security tools, DeepLocker did not unlock and execute the ransomware until it recognized the face of the target. [CSO Online. For Social Mapper see The Verge & FindBiometrics. For DeepLocker see: ZDNet & eWeek — for both see Forbes]

Canada

CA – Federal Bill Regulates Collection at Border

Bill C-21, an Act to Amend the Customs Act, passed the House of Commons and is being reviewed by the Canadian Senate. If passed, the Canadian Border Services Agency may collect from any person leaving Canada personal information (including nationality, sex, and travel history), travel documents and itinerary information, which can be retained for 15 years beginning on the date on which the information is collected. [Bill C-21 – An Act to Amend the Customs Act – See also Blaney McMurtry: Does the CBSA Have Authority to Search Your Electronic Devices? ]

CA – Canada Legislation Permits Drug and Alcohol Testing by Police

Bill C-46, an Act to Amend the Criminal Code Offences (Relating to Conveyances), received royal assent on June 21, 2018. Police officers may require a person to provide a blood, urine or breath sample for testing if there is reasonable belief that an individual has operated a car, aircraft or railway equipment after consuming drugs or alcohol, or has consumed drugs or alcohol within 3 hours of committing an offence; disclosure of test results to third parties is generally prohibited unless an exception applies. [Bill C-46 – An Act to Amend the Criminal Code Offences Relating to Conveyances – Parliament of Canada]

CA – OPC and OIPC Guide Companies on Consent

The Office of the Privacy Commissioner of Canada and the Offices of the Information and Privacy Commissioner of Alberta and British Columbia have jointly issued guidelines for obtaining meaningful consent. To obtain express and informed consent, companies must involve users when designing the consent process and conduct regular audits of privacy communications to ensure they reflect management policies; mobile apps guidance recommends limiting data to that which is needed by the app to function and providing a dashboard for users to easily tighten privacy settings. [OIPC AB – Guidelines for Obtaining Meaningful Consent | Mobile Apps Guidance]

CA – Ontario Psychologist’s Telephone Recording Lawful: Review Board

The Ontario Health Professions Appeal and Review Board reviewed an Ontario psychologist’s recording of a contentious telephone conversation. Another party to a phone conversation complained to a regulator that she was not asked or told by the psychologist that the telephone call between them could be recorded; however, such recording is not contrary to professional standards and is permitted in Ontario pursuant to the Criminal Code (an individual can record a call in which he/she is participating). [H. S., Ph.D., C.Psych. v. the Catholic Diocese of London – 2018 CanLII 55890 HPARB – Health Professions Appeal and Review Board of Ontario]

CA – OIPC BC Requires Ministry to Sever Records

An OIPC BC order examined the Ministry of Attorney General’s decision to withhold records requested pursuant to BC’s Freedom of Information and Protection of Privacy Act. The OIPC rejected the Ministry’s argument that attachments to privileged briefing notes are therefore automatically privileged; the Ministry was ordered to sever information from third party correspondence that would not reveal the substance of legal advice. [OIPC BC – Order F18-18 – Ministry of Attorney General]

CA – BC Law Society Properly Withheld Billing Information

An OIPC BC order examined the Law Society of BC’s handling of a request for access to records pursuant to BC’s Freedom of Information and Protection of Privacy Act. The OIPC BC concluded that a statement of an account is billing information that is presumptively privileged; descriptions of professional services could reveal privileged communications, and disclosure of some information could allow inferences to be made about privileged communications (i.e. the hours spent providing services on each date and the total amount of fees, taxes and disbursements). [OIPC BC – Order F18-29 – Law Society of British Columbia]

CA – IPC ON Clarifies Applicability of GDPR in Ontario

The IPC Ontario published an overview of the GDPR as applicable to institutions and healthcare information custodians in Ontario. Whether Ontario public institutions and custodians must comply with the GDPR depends on their processing activities (offering goods or services to individuals in the EU or monitoring the activities of individuals in the EU); compliance requires express consent (specific, unambiguous and freely given), ensuring data subject rights (the rights to object, restrict, access and delete personal data, and the right to be forgotten), and breach notification (to the DPA within 72 hours of becoming aware). [IPC ON – Privacy Fact Sheet – July 2018 – General Data Protection Regulation]

CA – Investigation Records Not Protected by Solicitor-Client Privilege

The OIPC NL reviewed the City of Corner Brook’s decision to withhold requested records, pursuant to the Access to Information and Protection of Privacy Act. The OIPC found that communications to and from a public body’s solicitor contained facts, and do not entail the seeking or giving of legal advice; records were created for the dominant purpose of a workplace investigation, and the fact that some of those records may have been held in a file in the solicitor’s office does not result in them being privileged. [OIPC NL – Report A-2018-017 – City of Corner Brook]

CA – OIPC SK Finds Unlawful Disclosure of PHI

An OIPC SK report investigated two alleged privacy breaches under the Health Information Protection Act. A medical lab sent two patient reports to the wrong doctor because the lab’s information system is designed to automatically highlight the first name appearing in a list of doctors with the same surname; the system should be reconfigured to search for a doctor by their unique ID number, or to require both the first and last name of the doctor. [OIPC SK – Investigation Report 014-2018, 016-2018 – Saskatchewan Health Authority]
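The design flaw the report describes, and the recommended fix, can be sketched in a few lines: a surname-only search silently returns the first of several matching doctors, while a lookup keyed on a unique ID cannot be ambiguous. This is a hypothetical illustration; all names and IDs are invented.

```python
from dataclasses import dataclass

@dataclass
class Doctor:
    unique_id: str
    first_name: str
    surname: str

# Two doctors share a surname: the ambiguity at the root of the breach.
DOCTORS = [
    Doctor("D-1001", "Anne", "Smith"),
    Doctor("D-1002", "Brian", "Smith"),
]

def find_by_surname(surname: str) -> Doctor:
    # The flawed behaviour: silently auto-select the first match.
    return next(d for d in DOCTORS if d.surname == surname)

def find_by_id(unique_id: str) -> Doctor:
    # The recommended fix: look up by an unambiguous key.
    return next(d for d in DOCTORS if d.unique_id == unique_id)

# A report meant for Dr. Brian Smith goes to Dr. Anne Smith instead;
# the ID-based lookup always resolves to the intended recipient.
assert find_by_surname("Smith").unique_id == "D-1001"
assert find_by_id("D-1002").first_name == "Brian"
```

The general lesson is that routing decisions about personal information should key on unique identifiers, never on human names, which are not unique.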

CA – Alberta Public Body Did Not Destroy Records

The OIPC AB investigated whether Balancing Pool had proper records management and retention procedures, pursuant to the Freedom of Information and Protection of Privacy Act. The public body did not follow instructions in email records received from another public body to delete records to prevent release; however, the public body must create a records management program, train employees on retention and destruction schedules and provide related documents alongside each other to help FOI applicants understand responsive records. [OIPC AB – Investigation Report F2018-IR-02 – Balancing Pool]

CA – Calgary Homeless Shelter Plans to ID Clients With Facial Recognition

Agencies have struggled with how to identify clients who don’t have official ID, and the Calgary Drop-In Centre shelter thinks it might have a high-tech solution — facial recognition — but it’s a fix that comes with serious privacy risks for an already marginalized population. Currently, people who enter the building are fingerprinted. However, “Being fingerprinted is invasive and can cause stress for some clients,” said Helen Wetherley Knight, director of IT at the Calgary Drop-In Centre. So, instead, the Drop-In Centre is testing facial recognition technology as a non-invasive ID solution. Each client’s photos are captured with a secure webcam, encrypted, and then linked to a system where staff can access the client’s profile. Knight said she isn’t aware of other shelters in Canada using it. The program the Drop-In Centre is testing, which is being implemented by Vancouver-based IT company Sierra Systems, uses Microsoft’s Facial Recognition API. Client data would be stored securely in Microsoft’s cloud in data warehouses in Quebec. Eventually, they’d like to implement blockchain technology, to give clients control over which agencies access their personal information. The facial recognition technology has already gone through one round of testing — 41 clients, volunteers and staff “eagerly participated,” Knight said — but no further testing is planned while it undergoes a feasibility study. [CBC News]

CA – OIPC SK to Doctor Who Altered Records of Dead Patient: Adopt Better Record Keeping Practices

The Saskatchewan Information and Privacy Commissioner, Ron Kruzeniski, is recommending that a doctor who altered an electronic record of a dead patient’s visit eight times after the patient’s death do a better job of keeping medical records. Dr. Svitlana Cheshenchuk altered a record of a visit from Sandra Hendricks, who died hours after leaving a checkup with the doctor back in 2014. The alterations took place between October 2014 and June 2015. The privacy commissioner found [Investigation Report 024-2018] that the doctor did not comply with multiple sections of the Health Information Protection Act (HIPA), which relate to policies that should protect the integrity of information and ensure compliance with HIPA. “Integrity refers to the condition of information being whole or complete; not modified, deleted or corrupted,” the report reads. [CBC News, CTV News, Regina Leader-Post and CBC News]

CA – Ontario Company’s Zero Tolerance Approach Unreasonable

The United Steel Workers union grieved a workplace policy of Drivetest. The Ontario Labour Arbitration Board found the company’s termination of an employee for viewing her own information on a confidential system an unreasonable interpretation of its personal data handling policy; the employee did not view the data with any malicious intent or in furtherance of an illegal act, and the policy did not expressly stipulate a particular level of discipline for any specific offense. [SERCO DES (Drivetest) v United Steel Workers – 2018 CanLII 64969 – Ontario Labour Arbitration]

CA – Surrey Plans to Install Cameras to Catch Illegal Garbage Dumping

The City of Surrey wants to install 10 or more cameras to catch people illegally dumping garbage. Surrey says illegal dumping has been increasing at an ‘alarming rate’ in the municipality during the past decade. Ray Kerr, Surrey’s manager of engineering operations, said illegal dumping isn’t only a problem in Surrey, but throughout Metro Vancouver and across the country. He said Surrey spent $600,000 last year on removing illegally dumped garbage, and estimated that Surrey was on track to spend about $550,000 by the end of 2018. Metro Vancouver estimates that it cost regional municipalities $5 million to clean up illegal dumping, which often includes items such as mattresses, sofas, carpeting, tires and appliances. Kerr said there was no specific reason why the total of 10 or more cameras was picked other than he believes it was a “good place to start.” [Vancouver Sun, Peace Arch News]

CA – Privacy of Online Pot Sales Needs Watching: Experts

Buyers who have to provide personal information to purchase recreational pot online after legalization this fall should be able to rely on existing laws to protect their privacy, but the issue needs to be watched closely to ensure regulations are obeyed and mistakes are avoided, experts say. Ontario’s government recently announced [see related rules here & here] that consumers 19 years or older will have to go online to buy weed after federal legalization on Oct. 17 because private retail stores won’t be up and running until April. A government agency called the Ontario Cannabis Store will run the online sales, although private e-commerce provider Shopify will be involved. The matter is important given the stigma many people still attach to marijuana use, and the potential for Canadians to be barred from the United States if their otherwise legal indulgence becomes known to American border agents [see earlier reporting here & here]. A spokeswoman for the OPCC said the office had not looked specifically at online marijuana sales. At the same time, the commission said it recognized privacy concerns around buying or using marijuana given its longtime status as a controlled substance. At minimum, buyers will have to provide a name along with email and delivery addresses, and payment information. However, a spokesman for the Ministry of Finance said buyers will have to provide proof of age via government-issued ID, which a delivery person will verify but not copy. The cannabis store website will have data security and privacy controls “aligned with global e-commerce best practice,” he said. Personal data will remain in Canada and not be shared with third parties. [National Post]

CA – Many Organizations Still Ignore Basic Security: Survey

Experts are going hoarse telling organizations they have to build their cyber security strategies around doing the basics. But if a survey/report called “The State of Cyber Hygiene” sponsored by Tripwire is accurate, not many are following even the top six of the 20 recommended Critical Security Controls [download] set by the Center for Internet Security (CIS). The top six CIS Security Controls are: inventory and control hardware assets; inventory and control software assets; perform vulnerability management; secure hardware and software configuration; control administrative privileges; and monitor and analyze logs. The survey was completed by 306 participants in Canada and the U.S. last month, all of whom are responsible for IT security at companies with more than 100 employees. [IT World Canada]

CA – OIPC SK: Too Much Info Released to Parents About School Gun Threat

An employee of the Good Spirit School Division, which oversees 27 schools in southeast Saskatchewan, breached a student’s privacy earlier this year after an alleged threat involving guns was overheard by a substitute teacher and some students, the Saskatchewan privacy commissioner said in a report. A letter sent out to parents a few days after the incident provided too much information that shouldn’t have been disclosed, the commissioner found. The letter said that the student made a threat and included the wording of the threat, that the RCMP had been called and that the student had parents who were “very responsible gun owners and the subject individual could not access weapons.” The letters also said the student had been suspended and that there were concerns the student may have been bullied. Once the letters went out and the breach was noticed, the school division proactively reported itself to the office. The parents of the student also made a complaint to the privacy commissioner. [CBC News]

E-Mail

WW – Spam is Still an Effective Way to Infect Computers: Study

A study from F-Secure and MWR InfoSecurity says that spam is still the top choice of attackers for spreading malware. The study found that spam click rates have risen slightly from 13.4% last year to 14.2% this year. The report also says that spam remains an effective infection vector because the prevalence of other vectors, such as exploit kits that rely on Adobe Flash, is diminishing. [Threatpost ThreatList: Spam’s Revival is Tied to Adobe Flash’s Demise | information-age.com: Spam still the most common cyber crime technique, according to recent research]

EU Developments

EU – EU Publishes New ePrivacy Revisions

The EU released the latest revisions to the proposed ePrivacy Regulation for comment. Providers would be permitted to process metadata if necessary for network management/optimization (for a limited duration and if anonymised data cannot be used), for statistical counting/scientific research (pursuant to EU/Member State law and subject to encryption, pseudonymisation and compliance with related GDPR provisions), and for calculating/billing interconnection payments. [Council of the European Union – Presidency Delegations – Revisions to ePrivacy Regulation 10975/18 – July 2018]

EU – EPIC Reacts to EDPB Certification Guidance

The Electronic Privacy Information Center (EPIC) has commented on the EU Data Protection Board’s (EDPB) draft guidelines on certification criteria. EPIC recommends certification criteria include disclosure of algorithm logic, processing prohibitions when profiling risks are identified, and scrutiny of the categories and amount of personal data collected. Immediate re-certification should be mandatory for new technologies that collect large quantities of granular data, and certification should not be granted for unspecified or excessive processing. [EPIC – Comments on EDPB Consultation on Guidelines 1-2018 on Certification Criteria under the GDPR]

EU – German State DPA Publishes List of Mandatory DPIAs

The Hessian Data Protection Authority has issued a list of processing activities, pursuant to Article 35(4) of the GDPR, that require a data protection impact assessment. Such processing includes vehicle data using automatic readers, merging of data using non-transparent algorithms (e.g., fraud prevention), behavioural/performance evaluation assessments (e.g., ratings portals, collection services, geolocation of employees), online profiling (e.g., dating sites and social networks), Big Data, artificial intelligence, location tracking (e.g., in shopping malls), RFID (e.g., by apps/maps), and centralized storage of measurement data (e.g., fitness apps). [DPA Hesse – List of Processing Operations Pursuant to Article 35(4) of GDPR | General information]

EU – CIPL Maps Elements of GDPR Accountability

The Centre for Information Policy Leadership (CIPL) has mapped GDPR requirements to elements of accountability. CIPL outlines controls and measures to ensure GDPR accountability for leadership and oversight (privacy engineers, DPO oversight and reporting), risk assessment (DPIAs, for breach incidents, at program or service level), policies and procedures (crisis management, vendor management, legal basis and fair processing), transparency (breach notification, dashboards, information portals), and monitoring and verification (processing records, evidence of consent, notices). [The Case for Accountability – How it Enables Effective Data Protection and Trust in the Digital Society – CIPL]

EU – DPAs Should Incentivise Accountability: CIPL

The Centre for Information Policy Leadership called for incentivising organisational accountability in the EU. Accountability should be encouraged, incentivized and rewarded where it goes above minimal legal requirements, and should not be left solely to the threat of sanctions, or the self-interest of the organisation; impactful incentives include more flexibility in interpreting privacy principles and discretion when considering enforcement actions for organisations that demonstrate heightened accountability. [Incentivising Accountability – How Data Protection Authorities and Law Makers Can Encourage Accountability – CIPL]

EU – Nymity Issues Recommendations for Demonstrating GDPR Accountability

Nymity has issued recommendations for generating the reports required to demonstrate GDPR compliance and accountability: use a spreadsheet, document or scorecard to tie relevant GDPR provisions to implemented measures and to the owner of each processing activity; ensure DPIA records clearly show how risk was mitigated in the project (e.g., identifying privacy-by-design elements and how accountability for addressing risks was affirmed); and consider keeping records on legitimate-interests processing (individuals impacted, potential harms and risks mitigated). [Reporting on GDPR Compliance – An Accountability Approach to GDPR Regulator Ready Reporting – Nymity]

UK – ICO Appoints New Executive Director for Technology Policy and Innovation

The UK ICO has appointed Simon McDougall as executive director for Technology Policy and Innovation to lead new approaches to information rights practice and to promote the legally compliant processing of personal data as a core element of new technologies and business systems [see ICO PR here]. McDougall is currently managing director of Promontory [see here & wiki here] – a risk management and regulatory compliance consulting firm acquired in 2016 by IBM – where he founded and led a global privacy practice. Elizabeth Denham, Information Commissioner, said, “As a globally respected figure in the world of privacy and innovation, Simon is a great fit for this new role, which will strengthen our expertise and responsiveness to new challenges and opportunities.” The ICO is also planning a regulatory ‘sandbox’ to enable organisations to develop innovative products and services while benefitting from advice and support from the ICO. [Government Computing Network at: ComputerWeekly and Information Age]

EU – Publishers Adopting Consent Management Platforms for GDPR Compliance

More publishers are feeling under pressure to adopt a consent-management platform [also see here] to be compliant with the GDPR. CMPs store consent information and pass it on to the publisher’s programmatic partners. In the U.K., 31% of publishers had a CMP, an increase of 12% from July to August, according to tech vendor Adzerk. Among U.S. publishers, 27% had a CMP in August, up 13% from the month before. (Adzerk defines “publisher” as a site that shows programmatic ads). Several vendors in France have reported the same findings. Smart, an SSP, said just over 40% of its ad calls now come through with consent strings — which can only be generated once a publisher adopts a CMP (it didn’t have a comparison figure). Getting consumers’ consent to have their data collected for ad-targeting purposes is one way marketers can comply with the GDPR. Until now, many publishers have chosen other routes such as legitimate interest [see here & here], a route that’s seen as more likely to safeguard ad revenue. While many big publishers have built their own CMP or used free versions from vendors [see here], others worry that implementing them could cause a drop in personalized advertising. [Digiday and at: MarTech]
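The bookkeeping role a CMP plays, recording each visitor's consent decision and exposing it to downstream programmatic partners, can be sketched roughly as follows. This plain-dictionary stand-in is hypothetical; the real IAB consent string is a compact bit-encoded format, not shown here:

```python
# Illustrative sketch only: the core record-and-relay job of a
# consent-management platform (CMP), in the loosest possible terms.
class SimpleCMP:
    def __init__(self):
        self._consents = {}  # visitor_id -> {purpose: bool}

    def record(self, visitor_id, purposes):
        """Store a visitor's per-purpose consent decisions."""
        self._consents[visitor_id] = dict(purposes)

    def has_consent(self, visitor_id, purpose):
        """Default-deny: no stored decision means no consent to pass to partners."""
        return self._consents.get(visitor_id, {}).get(purpose, False)


cmp = SimpleCMP()
cmp.record("visitor-1", {"ad_targeting": True, "analytics": False})
assert cmp.has_consent("visitor-1", "ad_targeting")
assert not cmp.has_consent("visitor-1", "analytics")
assert not cmp.has_consent("visitor-2", "ad_targeting")  # never asked -> no consent
```

The default-deny lookup is the design point: a partner that receives no consent signal for a visitor must be treated as having none, which is why ad calls without consent strings matter to publishers' revenue.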

EU – Children’s Rights and the GDPR

The General Data Protection Regulation (GDPR) applies to children and adults alike and includes certain child-specific clauses that aim to protect the data of children. Children merit additional protections because they are less likely to be familiar with the risks, consequences and safeguards regarding their personal and public data. The GDPR has a non-standardized definition of a child, with the default age of consent set at 16 [see GDPR Art. 8 here]. Member States are permitted to lower that age, but to no younger than 13 years old, an option nearly half the Member States have exercised. Recently the UK ICO began developing an “Age Appropriate Design Code” to inform organizations seeking consent to use children’s data. The Information Commissioner’s current call for evidence [see PR here – consultation closes September 19 here] seeks input from bodies representing the interests of children or parents, child development experts, and online service providers. This evidence will be taken into consideration while developing the Code in order to provide clear guidelines and expectations of age-appropriate design standards to providers of online information society services. [ICO’s earlier guidance on children & the GDPR] This post reviews how best to comply with the GDPR’s regulation of children’s data. Understanding the relevant provisions of the legislation is key, in particular: 1) the GDPR’s provisions regarding informed consent; 2) the right to erasure; and 3) how automated decision making is relevant to organizations which may be processing a child’s data. [CyberLex Blog (McCarthy/Tetrault)]

EU – GDPR Could Hinder Blockchain Innovation, Warns EU Body

The EU Blockchain Observatory and Forum [see here & here] has warned that the GDPR law that went into effect a little over two months ago could hinder innovation in the blockchain space. According to the European blockchain body, this is because of the lack of legal clarity between blockchain technology and the GDPR law, whose aim is to protect individual data rights as well as facilitate the free movement of personal data in the single market. “As long as the legal framework around personal data and blockchain remains unclear, entrepreneurs and those designing and building blockchain-based platforms and applications in Europe face massive uncertainty. That can put a brake on innovation,” notes the report titled ‘Blockchain Innovation in Europe’. Under the GDPR the key to ensuring that individual data rights are protected is having a central body that can be held accountable when things go wrong. But in the case of blockchain, a centralized data controller does not exist. Additionally, it is stipulated in the GDPR law that data can only be transferred to third parties based outside the European Union on condition that the data will be held in a jurisdiction which offers data protection levels that are equivalent to those in the single market. With open permissionless blockchains, however, it is impossible to select where the data ends up since a full copy of the database is replicated on all the full nodes regardless of their geographical location. [Blockchain News, ETH News and Loyens & Loeff News]

Facts & Stats

AU – 242 Breach Notifications to Australian DPA from April to June

The Australian Privacy Commissioner has released a report on breach notifications received from April 1 – June 30, 2018. Malicious or criminal attacks caused 59% of the reported breaches (phishing, compromised credentials, brute force, paperwork or device theft), 36% resulted from human error (PI sent to the wrong recipient, loss of PI, failure to redact or use BCC), and system faults caused 5%. Types of PI compromised include contact information, financial details, identity information, health information, and tax file numbers. [OAIC – Notifiable Data Breaches Quarterly Statistics Report – 1 April – 30 June 2018]

Filtering

WW – G-Suite Security and Privacy Settings Every Admin Should Review

G Suite administrators may select from a wide range of settings that control the privacy of new G Suite files, sharing settings on Team Drives, and security requirements for account sign-ins. Many organizations prefer the default options for these three sets of settings, which result in: 1) new G Suite files that are private, viewable only by the creator of the file; 2) Team Drives that allow files to be shared externally; and 3) 2-step verification that is optional. But different organizations may choose dramatically different defaults. As organizations transition to G Suite, they will want their G Suite settings to reflect their current security and privacy preferences. This article looks at how each of these three sets of G Suite settings affects security and privacy. [TechRepublic]
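One way to think about the three settings is as a small policy audit: compare a tenant's current values against the organization's preferred defaults and flag deviations. The sketch below is illustrative only; the setting names and the flat export format are our own, not the Admin console's or Admin SDK's actual fields:

```python
# Illustrative sketch only: auditing the three G Suite settings discussed above
# against an organization's preferred (stricter-than-default) policy.
PREFERRED = {
    "new_file_visibility": "private",      # 1) new files viewable only by their creator
    "team_drive_external_sharing": False,  # 2) disallow external sharing on Team Drives
    "two_step_verification": "enforced",   # 3) require 2-step verification for sign-ins
}


def audit(current):
    """Return a human-readable finding for each setting that deviates from policy."""
    findings = []
    for key, wanted in PREFERRED.items():
        actual = current.get(key)
        if actual != wanted:
            findings.append(f"{key}: expected {wanted!r}, found {actual!r}")
    return findings


# Example: a tenant still on the out-of-the-box defaults described in the article.
defaults = {
    "new_file_visibility": "private",
    "team_drive_external_sharing": True,
    "two_step_verification": "optional",
}
for finding in audit(defaults):
    print(finding)
```

Run against the article's defaults, the audit flags external Team Drive sharing and optional 2-step verification, while the private-by-default file setting passes.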

Finance

US – Your Banking Data Was Once Off-Limits to Tech Companies. Now They’re Racing to Get It.

Facebook has joined a growing race among big technology companies seeking private financial information once regarded as off-limits: users’ checking-account balances, recent credit card transactions and other facts of their personal finances and everyday lives. Facebook said [see Fortune & TechCrunch] that it had proposed data-sharing partnerships with banks and credit card companies that would allow users to access their personal account information from within the social network’s messaging service. Facebook said the banking information wouldn’t be included in the vast stores of information the site uses to build people’s personality profiles. Many of the tech world’s major players have shown similar ambitions in tapping users’ financial data. Apple and Google provide mobile-payment services that allow users to access financial information and pay for products with their phones. Amazon.com offers users a credit card issued by JPMorgan Chase. And Google last year announced a deal that would let it review and analyze roughly 70% of all credit and debit card transactions in the United States. Facebook already has smaller agreements with financial institutions, including PayPal and American Express, that allow users to do things such as review transaction receipts on Facebook Messenger. In March, Facebook launched a service that would allow Citibank customers in Singapore to ask a Messenger chatbot for their account balance, their recent transactions and credit card rewards. [The Washington Post]

US – Facebook’s Plan to Partner With Banks Raises Privacy Concerns

Facebook has asked big banks to share their customers’ detailed financial records with it in an effort to offer new financial and commerce services through Facebook Messenger, the Wall Street Journal reports. The social media network wants access to card transactions and checking account balances along with information about where its users shop, the report said, citing people familiar with the matter. Gennie Gebhart [here], a Researcher at the Electronic Frontier Foundation told Fortune that this push to change user habits and increase their interactions with businesses through the Messenger app is dangerous for user privacy. Facebook said it would not use any information provided by banks for targeted ads, and would not share it with third parties. In a statement reported by CNBC, a Facebook spokesperson clarified that the company is not “actively asking financial services companies for financial transaction data.” Rather, banks could offer real-time customer service to users through Facebook Messenger, according to the statement. [Fortune, PC Magazine, TechCrunch and Global News]

FOI

WW – Siri is Listening to You, But She’s NOT Spying, Says Apple

Are our iPhones eavesdropping on us? How else would Siri hear us say “Hey, Siri” other than if she were constantly listening? That’s what Congress wondered, so last month the US House of Representatives Energy and Commerce Committee [here] sent a letter to Apple CEO Tim Cook [PR here & 5 pg PDF letter here] on the matter of Apple having recently cracked down on developers whose apps share location data in violation of its policies. The letter posed a slew of questions about how Apple has represented all this third-party access to consumer data, about its collection and use of audio recording data, and about location data that comes from iPhones. This week, Apple responded with a letter that translates into “We Are Not Google! We Are Not Facebook!” As in, Apple’s business model is different from those of other data-hoovering Silicon Valley companies that rely on selling consumer information to advertisers And no, Siri is not eavesdropping. The letter went into specifics about how iPhones can respond to voice commands without actually eavesdropping. It has to do with locally stored, short buffers that only wake up Siri if there’s a high probability that what it hears is the “Hey, Siri” cue. Once actual recording takes place after the “Hey, Siri” phrase is uttered, the recording that’s sent to Apple is attached to an anonymous identification number that isn’t tied to an individual’s Apple ID. Users can reset that identification number at any time. [Naked Security (Sophos) and at: Infosurhoy here & here, CNN Tech & Fortune]
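The buffer-and-threshold design described in Apple's letter, where audio stays in a short local buffer and recording begins only when the wake phrase is detected with high probability, can be sketched roughly as follows. The detector here is a toy stand-in, not Apple's actual "Hey Siri" model:

```python
# Illustrative sketch only: a short rolling audio buffer with a
# probability-threshold trigger, in the spirit of the design Apple describes.
from collections import deque


class WakeWordListener:
    def __init__(self, detector, buffer_frames=50, threshold=0.9):
        # Old frames fall off the end; nothing is stored or transmitted long-term.
        self.buffer = deque(maxlen=buffer_frames)
        self.detector = detector    # returns P(wake phrase) for the buffered audio
        self.threshold = threshold

    def feed(self, frame):
        """Add one audio frame; return True only when the wake phrase is likely present."""
        self.buffer.append(frame)
        return self.detector(list(self.buffer)) >= self.threshold


# Toy detector: "hears" the wake phrase when a marker token is in the recent frames.
toy_detector = lambda frames: 1.0 if "hey-siri" in frames else 0.0

listener = WakeWordListener(toy_detector, buffer_frames=4, threshold=0.9)
events = [listener.feed(f) for f in ["a", "b", "hey-siri", "c", "d", "e", "f"]]
print(events)  # the marker triggers while it remains in the 4-frame buffer, then ages out
```

The privacy property lives in the `maxlen` bound: audio older than the buffer simply ceases to exist on the device unless the threshold was crossed, at which point, per Apple's letter, the subsequent recording is tied only to a resettable anonymous identifier.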

WW – Is Apple Really Your Privacy Hero?

Apple Inc. has positioned itself as the champion of privacy. Even as Facebook Inc. and Google track our moves around the internet for advertisers’ benefit, Apple has trumpeted its noble decision to avoid that business model. When Facebook became embroiled in a scandal over data leaked by an app developer, Apple CEO Tim Cook said he wouldn’t ever be in such a situation. He framed Apple’s stance as a moral one. Privacy is a human right, he said. “We never move off of our values,” he told NPR in June. The campaign is working, as evidenced by media reports depicting Apple as hero to Facebook’s villain. But that marketing coup masks an underlying problem: The world’s most valuable company—its market value crossed the $1 trillion mark on Aug. 2—has some of the same security problems as the other tech giants when it comes to apps. It has, in effect, abdicated responsibility for possible misuse of data, leaving it in the hands of the independent developers who create the products available in its App Store. Bloomberg News recently reported that for years iPhone app developers have been allowed to store and sell data from users who allow access to their contact lists, which, in addition to phone numbers, may include other people’s photos and home addresses. According to some security experts, the Notes section—where people sometimes list Social Security numbers for their spouses or children or the entry codes for their apartment buildings—is particularly sensitive. In July, Apple added a rule to its contract with app makers banning the storage and sale of such data. It was done with little fanfare, probably because it won’t make much of a difference. For all of Facebook’s privacy problems, it was at least able to alert people who were potentially affected by the Cambridge Analytica leak. Apple has no such mechanism. If the company insists on not knowing what happens to our data in the name of privacy, it can at least help us ensure we don’t share more of it than necessary. [Bloomberg LP and at: Bloomberg, Macworld, FastCompany & 9to5Mac – Related coverage at: Naked Security (Sophos)]

Genetics

CA – Border Agents Using DNA Databases to ID Detainees, Track Relatives

According to immigration lawyers, border officials have collected DNA samples from at least three immigration detainees in the past year in their attempts to identify their ethnicity, track down relatives and establish nationality in order to remove these individuals from Canada. “CBSA uses DNA testing in order to determine identity of longer term detainees when other avenues of investigation have been exhausted,” said the agency’s spokesperson Jayden Robertson, who refused to release how many detainees have undergone the DNA searches to date. “DNA testing assists the CBSA in determining identity by providing indicators of nationality thereby enabling us to focus further lines of investigation on particular countries.” DNA is just one of the many tools that assist officials in their detective work, Robertson added. The border agency would not comment on whether it has any protocol or guideline on the use of DNA samples in investigations but said it requires consent from clients before submitting their information to DNA websites such as Familytree.com and Ancestry.com. Lawyer Jared Will, who represents the other two detainees, says detainees often have no choice but to give consent. “The consent cannot be truly voluntary. These individuals are being detained and they risk prolonged detention because if they don’t give consent, they are alleged to be non-cooperating.” [The Toronto Star]

Health / Medical

US – HHS Issues Authorizations for Health Research

HHS has issued guidance on uses and disclosures of protected health information for research. HIPAA authorizations must contain specific information about research purposes of requested PHI use or disclosure, specific identification of authorized persons, and expiration dates or events; covered entities do not have to remind individuals of their right to revoke authorization (they may choose to remind minors when they reach the age of majority), and revocation exceptions include maintenance of research integrity, quality assessments, and reporting adverse events. [Data Privacy Monitor]

EU – CNIL Approves Simplified Measures for Health Research Approvals

The French Data Protection Authority (CNIL) has issued a series of documents related to health sector research without obtaining consent under the GDPR. Five new reference methodologies lessen the need for prior DPA authorization subject to compliance with prescribed conditions (such as, depending on the type of research undertaken, a public interest in the research and a prohibition on data matching); the CNIL also approved simplified access to extractions from a health insurance database which only requires prior approval from a research institute. [CNIL – Approval of 5 Reference Methodologies for Health Research and Simplified Access to PMSI Press Release | Guidance | Reference Methodology MR-001 | Reference Methodology MR-003 | Reference Methodology MR-004 | Reference Methodology MR-005 | Reference Methodology MR-006 | Deliberation No. 2018-256 | CNIL overview (French)]

US – HHS Weighs Changes to Health Data Privacy Regulations

The Department of Health and Human Services is considering making changes to federal privacy regulations governing health data – including the HIPAA Privacy Rule and the 42 CFR Part 2 law [here, here & here], which pertains to substance abuse and mental health information. According to HHS Secretary Alex Azar in a July 26 speech, HHS will in the coming months be releasing requests for information, seeking comments regarding potential changes to HIPAA and also to 42 CFR Part 2, a federal privacy law that governs confidentiality for individuals seeking treatment for substance use disorders from federally assisted programs. Congress is also awaiting word from HHS about its work to address “Compassionate Communications on HIPAA” provisions that are authorized under the 21st Century Cures Act [PDF here], which was signed into law in 2016. In a July 26 letter, six members of Congress asked HHS for an update regarding the status of the department implementing the 21st Century Cures provision that calls for HHS to develop “model programs and training” for healthcare providers to clarify when patient information can be shared. Some regulatory experts argue that no changes to the HIPAA Privacy Rule are needed, while others say that changes could prove helpful. [GovInfo Security, HealthIT Security and Becker’s Hospital Review]

US – OCR Issuing Fewer HIPAA Penalties in 2018, Report

The HHS Office for Civil Rights is on track to impose significantly fewer HIPAA settlement fines in 2018 than the agency has in previous years, according to a report from the law firm Gibson Dunn [here & see 32 pg PDF report here]. The July 26 report is a mid-year review of healthcare enforcement actions, including decisions by HHS, CMS, OCR and the Justice Department. Since HIPAA privacy rules went into effect in 2003, OCR has reviewed and resolved more than 180,000 complaints related to the legislation. In 2017, the civil rights office issued 10 penalties totaling $19.4 million, and in 2016, the office issued 13 penalties totaling $23.5 million. As of July, OCR has reported only two HIPAA penalties in 2018, along with one decision from an HHS administrative law judge. The three decisions amount to an estimated $7.9 million in fines. Gibson Dunn noted it’s unclear whether the downtick in HIPAA enforcement actions during the first half of 2018 signals a shift in priorities, or whether the civil rights office intends to pursue more settlements in the second half of the year. However, if OCR continues at this pace throughout the remainder of 2018, the year will mark a “dramatic decline in HIPAA enforcement actions.” [Becker’s Hospital Review, Health IT Security & Bank Info Security]

US – Healthcare IT Security Worst of Any Sector with External Threats

Healthcare IT security is the worst of any sector when it comes to external security posture, according to a recent report by security advisory firm Coalfire. The Coalfire Penetration Risk Report [PR here] used customer penetration test data to analyze the security challenges within enterprises of various sizes and in different industries, including retail, healthcare, financial services, and technology industries, and compared the security posture between small, mid-sized, and large organizations. In terms of external security posture, healthcare organizations had the highest level of severe issues in their external security posture, followed by tech, retail, and financial services. In terms of internal security posture, retail had the highest level of severe security issues, followed closely by healthcare, tech, and financial services. Coalfire found that healthcare organizations, especially hospitals, have hundreds and sometimes thousands of high-risk connected devices that are unsupported, unpatched, and without basic security systems in place. The report also found that large enterprises are not the best prepared to protect against cybercrime, despite having bigger budgets and more resources. Across all sizes and sectors, however, people remain the biggest security weakness, whether through human error or creating opportunities for social engineering hacks. Phishing was a highly successful “foot in the doorway” for attackers who use it as an entry point to infiltrate the organization, then pivot to navigate internally to escalate for greater control. [Health IT Security]

US – Amazon’s Healthcare Expansion: Privacy Concerns

Amazon is greatly expanding its healthcare activities. For example, it’s nearing completion of its purchase of online pharmacy PillPack, and it has entered an employee health partnership with JP Morgan and Berkshire Hathaway. As a result, the online retail giant now will face a wide variety of important new privacy issues, say attorneys Jeffrey Short and Todd Nova. As Amazon collects, analyzes and exchanges more healthcare data, it will need to navigate privacy and breach regulations at the state and federal levels, including HIPAA, the attorneys note in a joint interview with Information Security Media Group [listen here]. In the interview, Short and Nova also discuss: a) Other privacy and security issues tied to Amazon’s pending $1 billion purchase of PillPack, which is slated for completion by the end of this year; b) Potential privacy and security concerns for Amazon’s partnership announced earlier this year with JP Morgan and Berkshire Hathaway to create a not-for-profit company aimed at lowering the healthcare costs of their employees; and c) Issues raised by the healthcare sector’s ongoing consolidation. [BankInfo Security]

US – NIST Issues Best Practices for Healthcare Use of Mobile Devices

The National Institute of Standards and Technology issued guidance on the security of health records on mobile devices. Primary risks to patient information on mobile devices include loss, theft, deliberate misuse (use of unsecure networks, virus or malware downloads), and inadequate privilege management; address risks by implementing least privilege access controls, firewalls, vulnerability scanning tools, continuous monitoring of server baselines, end-to-end encryption for communications (between doctors, patients, IT administrators, EHRs), and using encryption for archived files. [NIST Special Publication 1800-1 – Securing Electronic Health Records on Mobile Devices]
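The least-privilege recommendation can be illustrated with a minimal default-deny access check: each role is granted only the EHR permissions it needs, and anything not explicitly granted is refused. The role and permission names below are our own examples, not taken from NIST SP 1800-1:

```python
# Illustrative sketch only: a default-deny, least-privilege permission check
# in the spirit of the NIST guidance on protecting health records.
ROLE_PERMISSIONS = {
    "physician": {"read_record", "write_record"},
    "nurse":     {"read_record"},
    "it_admin":  {"manage_devices"},  # device management, no clinical-data access
}


def is_allowed(role, action):
    """Default-deny: permit only actions explicitly granted to the role."""
    return action in ROLE_PERMISSIONS.get(role, set())


assert is_allowed("physician", "write_record")
assert not is_allowed("nurse", "write_record")        # nurses cannot modify records
assert not is_allowed("it_admin", "read_record")      # admins manage devices, not patient data
assert not is_allowed("unknown_role", "read_record")  # unrecognized roles get nothing
```

The design choice worth noting is the final fallback: an unknown role yields the empty permission set rather than an error path an attacker might exploit, which is the "least privilege" default the guidance calls for.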

Horror Stories

CA – Nova Scotia Privacy Breach “An Epic Government Failure”

It is impossible to overstate the epic failure of Nova Scotia’s Health Department, uncovered and enumerated by Privacy Commissioner Catherine Tully last week in a pair of damning reports [see PR, IR 18-01 & IR 18-02]. The Health Department failed at virtually every turn to protect the privacy of 46 Nova Scotians whose medical records were pilfered for purely personal motives by a former Sobeys pharmacist. The department then compounded that failure with callous disregard for the victims’ rights to timely and complete notification on the extent of the intrusion into their personal medical histories. “It is virtually impossible to undo the harm and sense of violation individuals feel when the intimate details of their personal health information are breached. I find that the harm from these breaches is significant,” Tully concluded. The department will respond to the commissioner’s findings sometime this month. There was nothing remotely serious or even competent about the way the department handled this breach. It didn’t seriously pursue a tip that came in on its 1-800 Health Privacy tip line. It deferred to Sobeys in the initial investigation, and it even failed to identify all those whose records had been inappropriately accessed. The department identified 39 of the victims. The commissioner discovered seven more. [Cape Breton Post at: CBC News and City News Toronto]

Law Enforcement

UK – Privacy International Calls for Probe into Cops’ Use of Mobile Phone Extraction

Privacy International is calling for the UK’s Investigatory Powers Commissioner (IPCO) to probe whether cops have a legal right to extract data from mobile phones. In a letter [see PR here] sent to Lord Justice Sir Adrian Fulford, the Investigatory Powers Commissioner, the privacy advocacy group says it is concerned that the use of mobile phone extraction technology by coppers may in some – or all – circumstances constitute either an unlawful interception of communications or hacking. “If it does, then the conduct engaged in is subject to your oversight,” Privacy International says in its letter. The letter highlights findings from a recent report [see PR here & 41 pg PDF report here] from Privacy International which revealed that the use of extraction kits – already used by more than half of UK police forces, and being trialled by a further 17% – allows cops to download the entire content of someone’s phone without their knowledge. [The Inquirer, The Register]

Location

WW – Google Tracks Unsuspecting Users; Lawmakers Demanding Action

The Associated Press published a report about Google products that track users’ location even if they tried to stop it in their privacy settings. Google, according to the AP, claims that it is clear with its data tracking practices. Lawmakers are now saying they want to look into this privacy practice. Senator Mark Warner [here] of Virginia and Representative Frank Pallone [here] of New Jersey’s 6th district have both decried the practice, calling for tougher privacy legislation. The FTC is already investigating Facebook’s privacy, and this development could expand the scope of its inquiry. Politico reported that past FTC officials believe the company’s actions could warrant heightened scrutiny. [Fast Company, Bloomberg Law and DBR on DATA] See also: Google Tracks Your Movements, Like It or Not (AP) and also at: WIRED, The Verge, CNET and The Register] and also “The FBI Attempted Unprecedented Grab for Google Location Data” at: Forbes, Press Herald, AP via Washington Times and Bangor Daily News]

Privacy (US)

US – Treasury Report Urges National Breach Notification Standard

A U.S. Treasury report that focuses on nonbank financial institutions, financial technology, and innovation includes recommendations for improved fin-tech consumer protection, such as giving consumers greater control over their financial data, and establishing a national breach notification standard. [SC Magazine: U.S. Treasury calls for national data breach notification and increased data protections | Treasury.gov: Treasury Releases Report on Nonbank Financials, Fintech, and Innovation | Treasury.gov: A Financial System That Creates Economic Opportunities: Nonbank Financials, Fintech, and Innovation]

US – NIST Required to Develop Small Business Guidelines

S. 770, the National Institute of Standards and Technology (NIST) Small Business Cybersecurity Act, has been signed by the U.S. President. Resources must be publicly available on agency websites to help small businesses reduce their cyber risks and promote awareness of basic controls, a cybersecurity culture and mitigation of common risks; resources must be technology-neutral, and vary with the nature and size of the business and the sensitivity of the data collected or stored. [S. 770 – NIST Small Business Cybersecurity Act – 115th Congress]

US – FTC Strengthens Safeguards for Kids’ Data in Gaming Industry

The FTC has unanimously voted to approve EPIC’s recommendations to strengthen safeguards for children’s data in the gaming industry. In a 5-0 vote, the FTC adopted EPIC’s proposals to revise the Entertainment Software Rating Board’s industry rules to (1) extend children’s privacy protections in COPPA to all users worldwide; and (2) implement privacy safeguards for the collection of data “rendered anonymous.” The FTC wrote, “the Commission agrees with EPIC’s comment. As COPPA’s protections are not limited only to U.S. residents, the definition of ‘child’ in the ESRB program has been revised to remove the limitation.” The Commission also strengthened protections for de-identified children’s data: “companies must provide notice and obtain verifiable parental consent if personal information is collected, even if it is later anonymized.” EPIC has testified several times before Congress on protecting children’s data and supported the 2013 updates to COPPA. [Electronic Privacy Information Center and at: New York Magazine]

US – House Candidates Vulnerable to Hacks: Researchers

A team of four independent researchers led by former National Institute of Standards and Technology security expert Joshua Franklin concluded that the websites of nearly one-third of U.S. House candidates, Democrats and Republicans alike, are vulnerable to attacks. NIST is a U.S. Commerce Department laboratory that provides advice on technical issues, including cyber security. Using automated scans and test programs, the team identified multiple vulnerabilities, including problems with digital certificates used to verify secure connections with users, Franklin told Reuters ahead of the presentation. The report follows a string of warnings by Trump administration security officials that Russia is actively interfering in the November elections. FBI Director Christopher Wray recently warned that Russian government agents were working around the clock to sow discord ahead of the election. The researchers did not identify any cases where it appeared that politically motivated hackers had exploited those vulnerabilities. “We’re trying to figure out a way to contact all the candidates” so they can fix the problems, said Franklin, who joined the nonprofit Center for Internet Security [here] last month. [Reuters and at: Business Insider]

US – FTC Seeks Comments on Privacy Impacts and Enforcement

On August 6, 2018, the Federal Trade Commission published a notice seeking public comment on whether the FTC should expand its enforcement power over corporate privacy and data security practices. The notice, published in the Federal Register [see here], follows FTC Chairman Joseph Simons’ declaration [read prepared statement here] at a July 18 House [Subcommittee on Digital Commerce and Consumer Protection of the Committee on Energy and Commerce see here & wiki here] hearing [FTC PR here & watch here] that the FTC’s current authority to do so, under Section 5 of the FTC Act [15 USC §45 here, also see overview here], is inadequate to deal with the privacy and security issues in today’s market. The FTC asks for input by August 20, 2018. It also requests comment on growing or evolving its authority in several other areas, including the intersection between privacy, big data and competition. Beginning in September 2018, the FTC will conduct a series of public hearings to consider “whether broad-based changes in the economy, evolving business practices, new technologies, or international developments might require adjustments to competition and consumer protection law, enforcement priorities, and policy.” [Privacy & Information Security Law Blog (Hunton/Andrews/Kurth) and at: Blockchain Legal Resource (Hunton) – also related coverage at: PYMNTS and The Wall Street Journal | FTC – Notice of Hearings and Request for Comments – Hearings on Competition and Consumer Protection in the 21st Century | see Press Release & here & 16 pg PDF Federal Register notice here | see also: DBR on Data (Drinker Biddle)]

US – BART Is Planning a System-Wide Surveillance Network Using ‘Video Analytics’ to Automatically Pinpoint Crime and Alert Cops

In response to several recent high-profile crimes, including the horrific killing of Nia Wilson, Bay Area Rapid Transit (BART) officials have revealed preexisting plans to build out a massive surveillance system that would closely monitor all of the district’s stations, trains, and other property [BART statement]. The district’s general manager and police want to upgrade BART’s 1,500 existing analog video cameras to a digital format, which would then be linked to computers that analyze video feeds in real time to detect possible criminal activity. The computers would then automatically notify officers to respond to the scenes of crimes and other disturbances. The proposal is mentioned in a report that will be heard at a meeting of the BART board of directors [see report at pp 39-46 of the agenda]. But the proposal isn’t really new. BART officials said they’ve been testing various powerful surveillance technologies since long before Wilson’s death and other recent violent incidents. BART has long sought to use technologies to secure its trains and stations, but this hasn’t necessarily made the system safer, and many worry about the loss of privacy and civil liberties, or fear surveillance tools could be used in harmful ways. Brian Hofer, the chair of the city of Oakland’s privacy commission and a member of the group Oakland Privacy, said there are many ways to make BART safer that don’t necessarily involve mass surveillance. [East Bay Express]

Security

WW – Study Assesses Impact of Cloud Migration Strategies on Security and Governance

Three major approaches to cloud migration have very different technical and governance implications: a) the ‘lift and shift’ approach, where applications are moved from existing servers to equivalent servers in the cloud; the cloud service model consumed here is mainly IaaS [Infrastructure as a Service – wiki here]; b) at the other end of the spectrum, adopting SaaS solutions [Software as a Service – wiki here]; more often than not, these trickle in from the business side, not from IT, and can range from small meeting planners to full-blown sales support systems; and c) more recently, developers have started to embrace cloud-native architectures, where ultimately both the target environment and the development environment can be cloud based; the cloud service model consumed here is typically PaaS [Platform as a Service – wiki here]. There can be a business case for each of these, and the categories have some overlap. The key point is that there are profound differences in the issues each category faces and in the hard decisions that have to be made, most of which concern governance and risk management. [Information Security]
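The three categories above can be summarized as a simple lookup from migration approach to the service model it mainly consumes. This is an illustrative sketch only; the governance notes are assumptions added for clarity, not drawn from the cited study:

```python
# Illustrative mapping of the three cloud migration approaches described above.
# Governance notes are hypothetical examples, not from the source article.
MIGRATION_APPROACHES = {
    "lift-and-shift": {
        "service_model": "IaaS",
        "governance_focus": "OS patching and network controls remain with the customer",
    },
    "saas-adoption": {
        "service_model": "SaaS",
        "governance_focus": "vendor risk management and shadow-IT discovery",
    },
    "cloud-native": {
        "service_model": "PaaS",
        "governance_focus": "secure development pipeline and shared-responsibility boundaries",
    },
}


def service_model(approach: str) -> str:
    """Return the cloud service model mainly consumed by a migration approach."""
    return MIGRATION_APPROACHES[approach]["service_model"]
```

A governance review might iterate over this table to confirm each in-flight migration has an owner for its risk decisions.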

US – Accidents Were Most Frequent Cause of Healthcare Data Breaches

In the second quarter of 2018, the most frequent cause of healthcare data breaches was accidental disclosure, according to incidents reported to the Beazley Breach Response Services team [report]. Accidental disclosures made up 38% of the data breaches in the healthcare sector; hacking/malware accounted for 26% of breaches, followed by insiders at 14%, physical loss of a nonelectronic record at 7%, loss or theft of a portable device at 6%, social engineering at 4%, and unknown/other at 5%. The compromise of a single email account provides the hacker with a platform from which to spear phish within and outside the organization, the report noted. Hackers can also use compromised accounts to make fraudulent wire transfers, redirect an employee’s paycheck, and steal sensitive information from the inbox. The report cited a case study involving an undisclosed health system that was hit by a widespread phishing campaign. The phishing email had a link that took victims to a website that instructed them to enter their credentials. All told, the attack cost the health system $800,000 for legal fees, forensic costs, programmatic review, and manual review of documents, and another $150,000 in notification, call center, and credit monitoring fees. Beazley said that phishing attacks can be prevented using two-factor authentication and employee training. [Health IT Security]
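As a concrete illustration of the two-factor authentication the report recommends: the common second factor is a time-based one-time password (TOTP, RFC 6238), which can be computed in a few lines from a shared secret. This is a minimal sketch for illustration, not a production implementation:

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, interval=30, digits=6, at=None):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1 variant).

    secret_b32: the base32-encoded shared secret (as used by authenticator apps).
    at: a Unix timestamp; defaults to the current time.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of whole intervals since the Unix epoch.
    counter = int((time.time() if at is None else at) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: pick 4 bytes at an offset from the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the stolen password alone no longer grants access, a phished credential becomes far less useful to the attacker unless the one-time code is also captured within its short validity window.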

WW – Are IT Managers Keeping Up with Social-Engineering Attacks?

Using both high-tech tools and low-tech strategies, today’s social-engineering attacks are more convincing, more targeted, and more effective than before. They’re also highly prevalent. Almost seven in 10 companies say they’ve experienced phishing and social engineering. Today’s phishing emails often look like exact replicas of communications coming from the companies they’re imitating. They can even contain personal details of targeted victims, making them even more convincing. Given the prevalence and advanced nature of social-engineering threats, your privacy and security measures should cascade across three key areas: a) people, b) processes, and c) technology. You and your IT team must be vigilant about emerging threats so that as they evolve, your security and privacy measures evolve with them. [DARKReading, Security Boulevard here & here, and ITWorld]

WW – Firms Must Spread Responsibility for Security throughout Enterprise: Accenture

Organizations aren’t doing enough to spread responsibility for cyber security throughout the enterprise, says consulting firm Accenture after looking at the results of a global study [overview]. While 73% of the C-level executives polled agreed that cyber security staff and activities need to be dispersed and executed throughout all parts of the organization, only 25% of non-CISO executives said business unit leaders are accountable for cyber security today. The survey questioned 1,460 executives in 16 countries – including 66 from Canada – on whether their security plans address future business needs. Half of the respondents were Chief Information Security Officer or equivalent roles, while the remaining half were CEOs or other C-suite executives. Among the results:

  1. Only half of the respondents said all employees receive cyber security training upon joining the organization and have regular awareness training throughout employment;
  2. Only 40% of CISOs said establishing or expanding an insider threat program is a high priority; and
  3. Just 40% of CISOs said they always confer with business-unit leaders to understand the business before proposing a security approach. [IT World Canada]

WW – Amnesty International Spearphished with Government Spyware

Amnesty International has been spearphished by a WhatsApp message bearing links to what the organization believes to be malicious, powerful spyware: specifically, Pegasus, which has been called History’s Most Sophisticated Tracker Program. The human rights-focused NGO said in a post that a staffer received the link to the malware in June. Pegasus is a tool sold by NSO Group, an Israeli company that sells off-the-shelf spyware. It enables governments to send a personalized text message with an infected link to a blank page. Click on it, whether it be on an iOS or Android phone, and the software gains full control over the targeted device, monitoring all messaging, contacts and calendars, and possibly even turning on microphones and cameras for surveillance purposes. NSO Group’s response to incidents like this has been consistent on each occasion: the company points to the fact that Pegasus is supposed to be used solely by governments, to enable them to invisibly track criminals and terrorists. “If an allegation arises concerning a violation of our contract or inappropriate use of our technology, as Amnesty has offered, we investigate the issue and take appropriate action based on those findings. We welcome any specific information that can assist us in further investigating of the matter.” Once software blinks into existence, keeping it out of the hands of the wrong people can be very difficult. Pegasus is a case in point: last month, one of NSO Group’s own employees allegedly stole the valuable software and hid it under his bed. Then, he allegedly tried to sell it for the bargain basement price of USD $50 million. (According to the indictment, the tool is estimated to be worth “hundreds of millions of dollars.”) [Naked Security, The INQUIRER, V3 News and The Citizen Lab]

WW – The Internet of Things: Baby Monitor Hacked

A Texas family heard noises coming from their toddler’s bedroom through their video baby monitor. A man was yelling obscenities at their child, and when the parents entered the room, he yelled obscenities at them as well. The family had taken security precautions, including enabling a firewall and establishing passwords for their router and the baby monitor camera, which connects to their Wi-Fi network. [BBC, CNET, NBC News]

WW – Android Malware Spreading Through Mobile Ad Networks

Malware targeting Android devices has been found to be spreading through mobile advertisement networks. Many developers include advertising frameworks in their apps to help boost profits. Advertisements in mobile apps are served by code that is part of the app itself. An attack scheme in Asia involved a rogue ad network pushing code onto devices. When users download and install legitimate apps, the malware prompts users to approve its installation, appearing to be part of the process for the app they have just downloaded. [ComputerWorld]

US – HHS Recommendations for Secure PHI Disposal

The HHS Office for Civil Rights issued guidance on proper disposal of electronic devices and media for healthcare organisations. Ensure policies for final disposition of devices or media consider where data is stored, if all asset tags or corporate identifiers will be removed by the method chosen, and the logistics and security controls necessary to move equipment. The final decommissioning process should ensure total data destruction (or proper migration to another system), and inventories that accurately reflect the current status of devices. [HHS – Guidance on Disposing of Electronic Devices and Media See also: Signal Magazine: NSA Influences Commercial End-of-Life Data Security]

US – NIST Working on Final Public Draft of Risk Management Framework 2.0

The National Institute of Standards and Technology (NIST) is hard at work on the next version of its Risk Management Framework 2.0 (RMF 2.0). The final public draft of RMF 2.0 is expected to be available in September 2018, with final publication expected in November. RMF 2.0 will address supply chains, systems engineering, and privacy. [FCW.com: NIST pushes on next version of Risk Management Framework | csrc.nist.gov: Risk Management Framework for Information Systems and Organizations: A System Life Cycle Approach for Security and Privacy – Revision 2, May 2018]

WW – Smart City Sensor Vulnerabilities

IBM Security and Threatcare examined smart city sensor hubs made by three companies and found 17 unpatched vulnerabilities. The flaws could potentially be exploited to manipulate traffic signals and activate flood warnings. The researchers notified the companies of the problems, and all say they have made patches available. It is not known if cities that use the affected sensors have applied the patches. [Wired: The Sensors That Power Smart Cities are a Hacker’s Dream | ZDNet: Smart city systems are riddled with critical security vulnerabilities | CNET: Smart cities around the world were exposed to simple hacks]

WW – Cybersecurity: Average of 10 Cloud Security Incidents Annually

Kaspersky Lab surveyed 3,041 IT personnel from small-medium businesses in 29 countries regarding their IT infrastructure and use of cloud tools. Most small-medium businesses have adopted some form of cloud platform, but such businesses view data protection and business continuity as their top business challenges, and almost half say a primary IT challenge is the difficulty in securing a distributed IT security perimeter. [Growing Businesses Safely: Cloud Adoptions vs Security Concerns]

CA – Cybersecurity is Top IT Priority for Canadian Organizations: Survey

Cybersecurity is ranked as the top priority for Canadian organizations as more firms become interested in cloud storage and collaboration tools, according to a recent survey by CDW Canada [here & PR – also see related report]. Nearly six in 10 (59%) firms said email security is a main focus, followed by ransomware protection (52%) and intrusion prevention (48%). Daniel Reio, director, product and partner management and marketing, said this year also marks a continued focus on the cloud, with data at the core of many organizations’ IT plans. More than half of Canadian organizations (53%) say their cloud strategy for 2018 includes shifting workloads over time through hybrid solutions. 16% of organizations plan to take a “cloud-first” strategy moving forward while 13% want to move all workloads to the cloud. [Insurance Business]

WW – Information Security Spending to Surge to Over $124bn by 2019

Gartner estimates that worldwide spending on IT security solutions will reach at least $124 billion next year, an increase of 8.7% from 2018’s estimate of $114 billion [see PR here]. A recent report conducted by the Ponemon Institute and sponsored by IBM estimates that the average cost of a data breach to an enterprise company is $3.86 million, and the average cost for each lost or stolen record containing sensitive and confidential information has increased by 4.8% year over year to $148. To make matters worse, the average time it takes to identify a data breach is 197 days. In the meantime, enterprise players must tackle the security issues prompted by ongoing cyberattacks and new regulations by way of investment in new and better security solutions. Robust security is not only required to protect corporate networks but has also now become a competitive advantage. By 2019, at least 30% of organisations are expected to invest in GDPR-related consulting and implementation services in order to become compliant with the new regulations. In particular, enterprises are expected to turn to cloud-based systems, such as security information and event management (SIEM), in order to protect corporate networks. [Computer Business Review, Forbes & Information Age]
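The per-record figures above support a quick back-of-the-envelope estimate. The 25,000-record breach size below is a hypothetical example, not a number from the report:

```python
# Back-of-the-envelope arithmetic using the Ponemon/IBM figures cited above.
PER_RECORD_COST_2018 = 148.00  # USD, average cost per lost or stolen record
YOY_GROWTH = 0.048             # 4.8% year-over-year increase

# Implied prior-year per-record cost (roughly $141).
prior_year_cost = PER_RECORD_COST_2018 / (1 + YOY_GROWTH)

# Rough exposure estimate for a hypothetical 25,000-record breach.
breach_records = 25_000
estimated_breach_cost = breach_records * PER_RECORD_COST_2018  # $3.7 million
```

The $3.7 million result for a mid-sized breach lands close to the report's $3.86 million average, which is one way to sanity-check the per-record figure.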

US – Police Body Cameras Open to Attack

Josh Mitchell, a consultant at security firm Nuix, analysed cameras from five vendors who sell them to US law enforcement agencies. Presenting at the DEF CON conference last week, he highlighted vulnerabilities in several popular brands that could place an attacker in control of a body camera and tamper with its video. Many of them include Wi-Fi radios that broadcast unencrypted sensitive information about the device. This enables an attacker with a high-powered directional antenna to snoop on devices and gather information including their make, model, and unique ID. An attacker could use this information to track a police officer’s location and find out more about the device that they are using. They might even be able to tell when several police officers are coordinating a raid, he said. Mitchell’s research found that some devices also include their own Wi-Fi access points but don’t secure them properly. An intruder could connect to one of these devices, view its files and even download them, he warned. In many cases, the cameras relied on default login credentials that an attacker could easily bypass. Mitchell contacted the vendors about these vulnerabilities and has been working with them to fix the issues, he said. In the meantime, it should leave police forces thinking hard about security audits for their wearable devices. [Naked Security and WIRED & watch here, Engadget, NewsWeek and GIZMODO]

AU – Data Breaches Report Provides Insight into Data Security Vulnerabilities

Under the Notifiable Data Breaches Scheme [OAIC Guidance] which commenced in February 2018, organisations are required to notify the Office of the Australian Information Commissioner (OAIC) and affected individuals in relation to data breaches where there are reasonable grounds to believe that an eligible data breach has occurred (that is, a data breach that is likely to result in serious harm). We now have a better idea of how the Scheme is working, and the nature of the causes of data breaches, with the release by the OAIC on 31 July 2018 of the second quarterly report on data breach notifications the OAIC has received under the Notifiable Data Breaches Scheme [see PR here & report here — also see the 1st quarterly report here & PR here]. It covers the period between 1 April 2018 and 30 June 2018, the first full period for the Scheme and thus the first full period for which information about data breach notifications has been available. During that period, a total of 242 data breach notifications to the OAIC were made, which is an average of more than two notifications per day (reports of a breach that involved multiple entities were counted as a single notification). What will be of most interest to entities subject to the Scheme will be the kinds of information involved, the causes of the breaches, and what they suggest are the key areas of concern in improving their own data security. This post reviews these issues. [Clayton Utz Knowledge, Security & Privacy Bytes (Squire Patton Boggs)]

US – Justice Department Releases A-G’s First Cyber-Digital Task Force Report

The Department of Justice recently released its comprehensive assessment of cyber threats in the United States, titled “Report of the Attorney General’s Cyber-Digital Task Force“ [Press Release & Fact Sheet]. The Report is the result of the establishment of the Attorney General’s Cyber-Digital Task Force by the Department in February 2018. Attorney General Jeff Sessions directed the Task Force to answer two questions: 1) How is the Department responding to cyber threats? and 2) How can federal law enforcement more effectively accomplish its mission in this important and rapidly evolving area? The Report responds to the first question and is broken into six chapters, each analyzing cyber threats and how the Department counters them. This Report provides an overview of the Department’s detection of ever-changing cyber threats to the United States as well as the tools and methods the Department is utilizing to counter those threats. The Report indicates that in the future the Department will build on the initial findings and provide recommendations to the Attorney General for more means to protect Americans from cyber threats. [Data Privacy Monitor (BakerHostetler)]

US – Anthem $115 Million Data Breach Settlement Approved by Judge

A $115 million settlement with Anthem Inc. over a data breach that exposed consumer personal data won the approval of Judge Lucy Koh [wiki here] of the U.S. District Court for the Northern District of California… The approval [8/15/18 see 53 pg PDF here] finalizes one of the largest settlements in a consumer data breach case. The health insurance giant reached the settlement with about 19.1 million Anthem consumers June 23 without admitting any wrongdoing. [The case is In re Anthem, Inc. Data Breach Litig., N.D. Cal., No. 15-md-02617 here] The 2015 breach [see details here & wiki here] of Anthem’s system affected more than 78 million people, exposing consumers’ Social Security numbers, names, dates of birth, health care ID numbers and other data. The settlement includes a pool of $15 million for consumers in the class group to claim up to $10,000 each for their out-of-pocket expenses related to the breach. The class members can also get free credit monitoring services beyond what Anthem has already offered. In addition to the settlement fund, the health benefits company agreed to make changes to its data security procedures, including adopting encryption protocols for sensitive data. [Bloomberg Law, The Recorder (Law.com) and Law360]

Smart Cars and Cities

CA – Sidewalk Toronto Shies Away from Data Privacy, Focuses on Urban Planning at Third Public Roundtable

While the August 14th, 2018 Sidewalk Toronto public roundtable focused on Alphabet subsidiary Sidewalk Labs’ infrastructure plans for Toronto’s waterfront, there was a distinct lack of discussion surrounding the company’s privacy and data governance policies. However, Waterfront Toronto’s vice president of innovation, sustainability and prosperity Kristina Verner specified that conversations about data governance and data stewardship are still taking place at smaller committee meetings. “What we’re doing with the digital governance and privacy issues is really pulling that into a slightly different location for the conversation,” said Verner. These different locations include Waterfront Toronto’s public Digital Strategy Advisory Panels — including the one set for August 16th, 2018 — and at three public ‘CivicLabs’ on October 3rd, November 7th and December 5th, 2018. Verner added that the CivicLabs will focus on discussions about “privacy, data governance, data residency, intellectual property, shared benefits — all of that sphere that is much more…technical in nature, a little different from the urban planning piece to it.” “It’s just a different venue, but that conversation is still going to be happening very soon,” said Verner. [MobileSyrup, IT World Canada, Financial Post and Toronto Star]

Surveillance

US – University Putting 2,300 Echo Dots in Student Living Spaces

Saint Louis University will “deploy more than 2,300” Echo Dot smart devices throughout student living spaces, including in “every student residence hall room and student apartment on campus” [see SLU notice & FAQ]. Arizona State University put Echo Dots in student spaces last year [see here & here], but SLU’s new initiative seems to be the first time a university has put an Echo Dot in all student living quarters. The SLU Echo Dots will be modified to answer 100-plus SLU-related questions about things like library hours, basketball games, campus events, and university office locations. SLU is using Amazon’s Alexa for Business platform, so the Echoes will be attached to a central SLU system and not to individual accounts. SLU claims Alexa does not keep recordings of asked questions. The SLU system apparently does not keep personal information on users, and “all use currently is anonymous.” Amazon did not immediately respond to a request for comment on SLU’s use of the devices and privacy concerns that may come with having a voice-controlled device in student living spaces. [GIZMODO, The Verge, voicebot and Digital Trends | See also: Hackers Found a Way to Make the Amazon Echo a Spy Bug]

AU – Gov’t Plays Down Concerns About Use of Biometrics for Mass Surveillance

Officials from the Department of Home Affairs have sought to assuage concerns that a proposed national facial recognition service could lay the basis for mass surveillance. The government currently has two bills before parliament — the “Identity-matching Services Bill 2018“ [review here] and the “Australian Passports Amendment (Identity-matching Services) Bill 2018” [here] — which are part of creating the legal infrastructure for the new system. The Commonwealth, state and territory governments have endorsed the idea of a national, federated system for facial identification and verification, which could draw on the driver’s licence data held in different Australian jurisdictions as well as other sources of face images including passport and citizenship data. In October 2017, the Council of Australian Governments (COAG) signed the Intergovernmental Agreement on Identity Matching Services (IGA) that committed them to promoting “the sharing and matching of identity information to prevent identity crime, support law enforcement, uphold national security, promote road safety, enhance community safety and improve service delivery, while maintaining robust privacy and security safeguards”. The services are “not intended for mass surveillance,” acting first assistant secretary, Identity and Biometrics Division at the Department of Home Affairs, Andrew Rice, told a federal parliamentary inquiry into the two bills. Earlier this year the Victorian government, which signed the IGA, raised a number of concerns about the proposed federal legislation. The IGA envisages potential private-sector access to the Face Verification Service and Document Verification Service. The document states, however: “The private sector will not be given access to the other Face Matching Services or the Identity Data Sharing Service.” [Computerworld, Biometric Update and Security Document World]

WW – Google Keeps on Tracking

An investigation conducted by the Associated Press found that Google stores users’ location data even when those users have switched off Location History in their account settings. In fact, turning off Location History only stops Google from adding location information to a viewable timeline. The issue affects all iPhone and Android users who run Google Maps on their devices. There is a way to completely turn off location tracking by making changes to the Web and App Activity setting, but it is not easy to find. [CS Monitor: Turned off location history tracking? Google might still be following you | BBC: Google tracks users who turn off location history | Wired: Google Tracks You Even if Location History’s Off. Here’s How to Stop it]

WW – Apple Cannot Monitor Third Party App Data Use

Tim Cook, CEO of Apple Inc., responded to a letter from the Energy and Commerce Committee relating to information on the microphone functionality of iPhones. The iPhone’s capabilities enable collection of location data even when the phone does not have a SIM card and Wi-Fi services are disabled, but only if location services are enabled. Apple has put in place security measures, like evaluating developers’ apps to ensure compliance with its privacy guidelines; however, it does not guarantee or monitor compliance of third-party app developers with local laws or their own privacy policies. [Tim Cook’s Response Letter to the Committee on Energy and Commerce – the iPhone’s capabilities of collection and use of consumer data and microphone functionality of iPhones]

Telecom / TV

US – Senators Probe NSA’s Deletion of Phone Records

Senators Ron Wyden (D-OR) and Rand Paul (R-KY) have sent a letter [Press Release] to the NSA’s inspector general asking him to look into the agency’s torching of metadata for hundreds of millions of phone calls. “We write to request that you conduct an investigation into the circumstances surrounding, and any systemic problems that may have led to, the deletion by the National Security Agency (NSA) of certain call detail records (CDRs) collected from telecommunications service providers pursuant to Title V of the Foreign Intelligence Surveillance Act (FISA),” the letter begins. That deletion was announced back in June [see reports at: CATO at Liberty, Marea Informative Blog and Hit & Run Blog], one month after the spy agency revealed in a “statistical transparency report“ that it had collected 534 million call details in 2017, a tripling of the number from the previous year. The NSA blamed “technical irregularities” for the receipt and storing of an unspecified amount of phone call data, and said that, since it was not possible to discern between legitimately and illegally gathered details, it was going to “delete all CDRs acquired since 2015.” Wyden and Paul have proven to be two of the very few congressmen and women willing to challenge the powerful intelligence agencies, and in the letter ask eight questions of NSA’s data bonfire, focused on identifying contradictory elements. [The Register, The Washington Times and The Hill]

US Legislation

US – California Bill Takes Aim at Secrecy Surrounding Police Personnel Records

More than 40 years of police secrecy could begin to crumble if California lawmakers pass a new bill allowing the public release of personnel records for law enforcement officers involved in deadly force, on-duty sexual assaults and falsifying evidence. Senate Bill 1421 [see here and/or here], by state Sen. Nancy Skinner, D-Berkeley, is the latest effort to open police records in the name of transparency. Since 1976, California law enforcement officers have been protected by statutes and court rulings — the strictest in the nation — that make it illegal to release virtually all police personnel records, including those involving wrongdoing and disciplinary action. California’s protections were made virtually impenetrable in 2006, when the California Supreme Court ruled in Copley Press v. Superior Court of San Diego County that civilian police commissions could not publicly disclose their findings on police misconduct. As a result, some commissions could no longer gain access to personnel files. Lobbyists for the police said these protections were necessary for officer safety. Specifically, Skinner’s bill would allow for the disclosure of reports, investigations or findings for incidents involving the discharge of a firearm or electronic control weapons, strikes by weapons to the head or neck area or deadly force; incidents of sustained sexual assault by an officer; and findings of dishonesty by an officer. The proposal is scheduled to be heard Thursday by the Assembly Appropriations Committee. It already has been passed by the Senate [see here]. [Orange County Register and at: CATO]

US – Federal Bill Regulates Consent for Apps

H.R. 6547, the Application Privacy Protection and Security Act of 2018, was introduced in the US House of Representatives. If passed, the bill would require mobile app developers to obtain consent to their terms and conditions prior to collecting personal data, and to provide users with a means of notifying the developer that they intend to stop using the app and of requesting that the developer refrain from further processing or sharing of the data, and possibly delete data collected and stored by the app. [H.R. 6547 – Application Privacy Protection and Security Act of 2018 – US House of Representatives]
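The bill's core mechanics (consent before collection, a stop-processing signal, and optional deletion) can be illustrated with a minimal sketch. The class and method names here are hypothetical illustrations, not drawn from the bill's text:

```python
class AppDataStore:
    """Toy store illustrating consent-gated collection and opt-out deletion."""

    def __init__(self):
        self.consented = set()   # users who accepted the terms and conditions
        self.records = {}        # user -> list of collected data points

    def record_consent(self, user):
        self.consented.add(user)

    def collect(self, user, datum):
        # H.R. 6547-style rule: no collection before consent to the terms.
        if user not in self.consented:
            raise PermissionError("consent required before collecting data")
        self.records.setdefault(user, []).append(datum)

    def withdraw(self, user, delete=False):
        # User notifies the developer they are done with the app; further
        # processing stops, and stored data may be deleted on request.
        self.consented.discard(user)
        if delete:
            self.records.pop(user, None)


store = AppDataStore()
store.record_consent("alice")
store.collect("alice", {"lat": 45.4, "lon": -75.7})
store.withdraw("alice", delete=True)
print("alice" in store.records)  # False: data deleted on request
```

A real implementation would also need durable audit records of when consent was given and withdrawn; this sketch only shows the gating logic.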

 

+++

 

July 2018

Biometrics

CA – Canada Expands Its Biometrics Screening Program

As of July 31, 2018, all nationals from countries in Europe, Africa and the Middle East are required to provide biometrics (fingerprints and a photo) if they are applying for a Canadian visitor visa, a work or study permit, or permanent residence. Accurately establishing identity is an important part of immigration decisions and helps keep Canadians safe. For more than 20 years, biometrics (fingerprints and a photo) have played a role in supporting immigration screening and decision-making in Canada. Canada currently collects biometrics from in-Canada refugee claimants and overseas refugee resettlement applicants, individuals ordered removed from Canada, and individuals from 30 foreign nationalities applying for a temporary resident visa, work permit, or study permit. More than 70 countries are using biometrics in immigration and border management. Canada's Migration 5 partners – the United Kingdom, Australia, the United States, and New Zealand – have implemented biometric programs; so have the 26 Schengen states in Europe, and other countries around the world such as Japan, South Africa and India. [Immigration, Refugees and Citizenship Canada and iPolitics]

US – Amazon Face Recognition Matches 28 Members of Congress With Mugshots

Amazon’s face surveillance technology is the target of growing opposition nationwide, and today, there are 28 more causes for concern. In a test the ACLU recently conducted of the facial recognition tool, called “Rekognition“, the software incorrectly matched 28 members of Congress, identifying them as other people who have been arrested for a crime. The members of Congress who were falsely matched with the mugshot database we used in the test include Republicans and Democrats, men and women, and legislators of all ages, from all across the country. The false matches were disproportionately of people of color, including six members of the Congressional Black Caucus, among them civil rights legend Rep. John Lewis (D-Ga.). These results demonstrate why Congress should join the ACLU in calling for a moratorium on law enforcement use of face surveillance. To conduct our test, we used the exact same facial recognition system that Amazon offers to the public, which anyone could use to scan for matches between images of faces. And running the entire test cost us $12.33 — less than a large pizza. [Free Future Blog (ACLU) and at: Seattle PI, WIRED, NPR and The Washington Post]
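Face-search APIs of this kind return candidate matches with similarity scores, and the operator's choice of threshold largely determines the false-match rate (the ACLU reported running its test at the service's default setting). This pure-Python sketch uses invented scores, not the ACLU's data, to show how a permissive cutoff admits matches a stricter one would reject:

```python
# Invented candidates in the shape a face-search API might return:
# each pairs a mugshot ID with a similarity score (percent).
candidates = [
    {"face_id": "mugshot-17", "similarity": 81.2},
    {"face_id": "mugshot-42", "similarity": 85.9},
    {"face_id": "mugshot-03", "similarity": 99.4},
]

def matches_above(cands, threshold):
    """Keep only candidates at or above the similarity threshold."""
    return [c for c in cands if c["similarity"] >= threshold]

default_cutoff = matches_above(candidates, 80.0)  # permissive setting
strict_cutoff = matches_above(candidates, 99.0)   # stricter setting

print(len(default_cutoff), len(strict_cutoff))  # 3 1
```

The two borderline candidates survive the 80% cutoff but not the 99% one; a low threshold trades fewer missed matches for more false identifications.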

US – Schools Face Civil Liberties Battles in Effort to Adopt Facial Recognition

As schools around the country attempt to deploy new facial recognition functionality as part of their video surveillance systems, the ACLU is challenging those efforts in the name of protecting civil rights. And they're not alone in their concerns about the controversial student surveillance tactic. As EdTech Strategies recently reported, both Magnolia School District in Arkansas and Lockport City School District in New York recently approved purchases of camera systems that include the ability to identify people captured on camera and track them. In both scenarios, however, the ACLU has objected to the use of facial recognition for several reasons. They're "vulnerable to hacking and abuse," asserted the ACLU of Arkansas, and they compromise "students' privacy." The national organization stated that once somebody's facial image is captured by the technology and uploaded into the system planned for New York, the program has the ability to "go back and track that person's movements around the school" for the previous two months. [T.H.E. Journal; coverage at: CNET News, Narcity, Planet Biometrics and Biometric Update; see also: "Hey mom, did you see this? Camps are using facial recognition"]

US – Lawmakers to Investigate Use and Abuse of Face Recognition Tech

Less than a week after a damaging report [see ACLU blog post here and PRs here, here & here] exposed substantial flaws in facial recognition technology marketed to law enforcement by Amazon, five Democratic lawmakers are calling for an investigation into the commercial and government use, and potential abuse, of the technology. In a letter [PR here] to Gene Dodaro, head of the U.S. Government Accountability Office (GAO), the lawmakers raised concerns about the use of facial recognition and its impact on privacy rights, underscoring, in particular, the "disparate treatment of minority and immigrant communities within the United States," and asking the GAO to "investigate and evaluate the facial recognition industry and its government use." The letter was signed by Senators Ron Wyden, Chris Coons, Ed Markey, and Cory Booker, and by Jerrold Nadler, the ranking Democrat on the House Judiciary Committee. [GIZMODO and at: The Hill, Healthcare IT News and Techdirt]

UK – Government Has Created an Automated Facial Recognition “Policy Void”

A lack of clear government action has created a UK "policy void" when it comes to using automated facial recognition technology in CCTV analytics, according to a leading cyberlaw academic. Andrew Charlesworth, professor of law, innovation and society at the University of Bristol, called for an informed debate into the use of artificial intelligence (AI) in video surveillance. UK police are increasingly using automated facial recognition on CCTV footage to identify persons of interest. A recent report by Big Brother Watch [PDF & blog post] found that up to 98% of the matches flagged by automated facial recognition technology used by UK police were false positives. However, Charlesworth, in a white paper titled "CCTV, Data Analytics and Privacy: The Baby and the Bathwater", said that public debate over the issue had become "distorted". He warned that the two sides of the argument had become polarised, fuelled by the government's lack of stringent regulations. The UK Government's long-awaited biometrics strategy, released in June, was criticised for not being comprehensive enough. Charlesworth's report was commissioned by Cloudview, a cloud-based video surveillance system company. [Verdict and at Security Boulevard]

UK – Consultation on Police Handling of Biometric Data Launched

The Scottish Government wants to introduce additional safeguards to ensure the safe and proportionate use of fingerprints, DNA and facial recognition technology. A public consultation is now underway in response to recommendations made by an Independent Advisory Group on biometrics earlier this year. It asks for views on the creation of a code of practice on the use, storage and disposal of biometric data to be overseen by a new Scottish Biometrics Commissioner. The arrangements will cover data held by the likes of Police Scotland, the Scottish Police Authority and other bodies involved in law enforcement activity in Scotland. The Scotsman

US – Face Recognition ‘Tickets’ Are Coming to Baseball Games

MLB and Clear announced a partnership that will soon let baseball fans enter stadiums using fingerprints, and eventually, just their face, instead of tickets. Clear, which offers similar biometric fast-tracking for participating airports, says it will let baseball fans link their Clear and MLB.com accounts. By sharing fingerprint data, visitors can bypass long lines at stadiums. 13 stadiums use Clear already. GIZMODO

Big Data / Artificial Intelligence

US – NIST Identifies Challenges of Big Data

The National Institute of Standards and Technology issued an interoperability framework for big data. Challenges identified include the ability to infer identity from anonymized datasets by correlating them with apparently innocuous public databases, and shifts in protection requirements and governance as processing roles change and responsible organizations merge or disappear; where data is stored on, and moved between, multi-tiered storage media, systematic analysis of threat models and the development of novel techniques are required. NIST – Big Data Interoperability Framework: Volume 1 Definitions – NIST Special Publication 1500-1r1
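The re-identification risk NIST flags, joining an "anonymized" dataset with an innocuous public one on shared quasi-identifiers, takes only a few lines to demonstrate. The records below are invented toy data:

```python
# "Anonymized" dataset: names stripped, but quasi-identifiers retained.
anonymized = [
    {"zip": "02138", "birth": "1945-07-31", "sex": "F", "diagnosis": "asthma"},
    {"zip": "02139", "birth": "1962-02-14", "sex": "M", "diagnosis": "flu"},
]

# Apparently innocuous public dataset (e.g., a voter roll) that happens
# to carry the same quasi-identifiers alongside names.
public = [
    {"name": "A. Example", "zip": "02138", "birth": "1945-07-31", "sex": "F"},
    {"name": "B. Sample", "zip": "02140", "birth": "1980-01-01", "sex": "M"},
]

QUASI = ("zip", "birth", "sex")

def reidentify(anon_rows, public_rows):
    """Join the two datasets on quasi-identifiers to recover identities."""
    index = {tuple(r[k] for k in QUASI): r["name"] for r in public_rows}
    return [
        {"name": index[key], **row}
        for row in anon_rows
        if (key := tuple(row[k] for k in QUASI)) in index
    ]

matches = reidentify(anonymized, public)
print(matches[0]["name"], matches[0]["diagnosis"])  # A. Example asthma
```

The "anonymized" record carried no name, yet one join against a public table attaches a sensitive diagnosis to a named individual, which is exactly why NIST treats quasi-identifiers as a protection requirement in their own right.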

US – Strategy Experts Split Over Effect of Privacy Concerns on Big Data: Survey

In a new survey, a group of the world's top strategy experts found they could not agree on the effect privacy concerns will have on how businesses use data. Fifty-two percent disagreed with the statement "concern over consumer privacy will fundamentally limit businesses' ability to use big data," while 48% agreed or strongly agreed. The forum's findings come from the MIT SMR Strategy Forum, a new regular feature at MIT SMR where strategy scholars react to a provocative question on strategy development and execution. The forum is led by Joshua Gans of the Rotman School of Management, University of Toronto and Timothy Simcoe of Boston University's Questrom School of Business. MIT Sloan Management Review

WW – Google’s Approach to Big Data and Artificial Intelligence

Google has unveiled its strategy for the development of artificial intelligence applications. The company will incorporate privacy principles into AI development and use (e.g. notice and consent, privacy safeguards), design systems to be appropriately cautious, test technologies in constrained environments, avoid unfair biases based on sensitive attributes (e.g. race, income, gender, ethnicity), and evaluate likely uses (based on primary purpose, and whether the technology will have significant impact). [AI at Google – Our Principles]

WW – FPF Provides Risk Assessment Framework for Machine Learning

The Future of Privacy Forum assessed the three lines of defense when using machine learning models. The first line focuses on the development and testing of models, the second on model validation and legal and data review, and the third on periodic auditing over time. [FPF – Beyond Explainability: A Practical Guide to Managing Risk in Machine Learning Models]
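One concrete check a third-line periodic audit might run over a model's recent decisions is a disparate-impact ratio. The toy computation below is our illustration of such a check, not a method prescribed by the FPF guide:

```python
from collections import defaultdict

def selection_rates(decisions):
    """Rate of positive outcomes per group, from (group, approved) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        positives[group] += approved
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(decisions, protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's; a common audit flag is a ratio below 0.8 (the
    'four-fifths rule' used in US employment-discrimination analysis)."""
    rates = selection_rates(decisions)
    return rates[protected] / rates[reference]

# Invented log of model decisions: (group, 1 = approved).
log = [("a", 1), ("a", 1), ("a", 0), ("a", 1),   # group a: 3/4 approved
       ("b", 1), ("b", 0), ("b", 0), ("b", 0)]   # group b: 1/4 approved

ratio = disparate_impact(log, protected="b", reference="a")
print(round(ratio, 2))  # 0.33, well below the 0.8 threshold
```

Re-running this kind of check on a schedule, rather than only at launch, is what distinguishes third-line auditing from first-line testing in the framework's terms.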

WW – Key Findings from Value of Artificial Intelligence in Cybersecurity Study

A day seldom passes without exposure to the term artificial intelligence (AI). But when our survey team conceptualized this topic, we were stunned to learn that there wasn't much publicly available information documenting end users' perspectives on the impact of AI on organizations' cybersecurity efforts. So, we're pleased to share our comprehensive findings and help answer the critical question: What value does AI bring to cybersecurity? The Ponemon Institute 2018 Artificial Intelligence (AI) in Cyber-Security Study, sponsored by IBM Security, includes detailed and high-level cybersecurity discoveries, as well as a comprehensive look at the impact of AI technologies on application security testing. The full post lists the top 10 key findings from the study. Security Intelligence

CN – Ethics of Big Data: A Look at China’s Social Credit Scoring System

There is much good to be gained from data science, but the negative side includes concerns over data privacy, risk management and cybersecurity, not to mention valid ethical debates over the fairness of digital divides, open access and the democratic use of public information. Now there is a new system being pioneered in China that has the potential to encompass many of these concerns: the creation of social credit scores by the government for its citizens. Will China’s social credit scores represent a grand technological breakthrough for society or ultimately be an example of ethical quicksand? Beyond traditional concerns over data privacy and cybersecurity, this form of social ranking poses deeper ethical dilemmas. First, the dilemma of “conformity vs. coercion” is central to address. A second ethical dilemma is the issue of “transparency vs. trafficking.” The use of gamification with social status scores means that both your absolute score and position relative to others is important. There are other ethical issues that public social credit systems may present. If unaddressed, these issues can become an ethical quicksand that widens the divide across people through the use of a socially constructed algorithm of “trustworthiness.” Do these social credit systems represent an opportunity or are they ethical quicksand? I wonder what George Orwell would say. Forbes

US – Big Data Is Getting Bigger. So Are the Privacy and Ethical Questions

The next step in using “big data” for student success is upon us and also raises issues around ethics and privacy. Whenever you log on to a wireless network with your cellphone or computer, you leave a digital footprint. Move from one building to another while staying on the same network, and that network knows how long you stayed and where you went. That data is collected continuously and automatically from the network’s various nodes. Now, with the help of a company called Degree Analytics, a few colleges are beginning to use location data collected from students’ cellphones and laptops as they move around campus. Some colleges are using it to improve the kind of advice they might send to students, like a text-message reminder to go to class if they’ve been absent. Others see it as a tool for making decisions on how to use their facilities. Many colleges now collect such data to determine students’ engagement with their coursework and campus activities. Of course, the 24-7 reporting of the data is also what makes this approach seem kind of creepy. My concerns are broad: Just because colleges and companies can collect this information and associate it with all sorts of other academic and demographic data they have on students, should they? How far should colleges and companies go with data tracking? The Chronicle of Higher Education
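The inference described above, turning raw network association logs into dwell times per building, takes only a short aggregation. The log format and field names here are invented for illustration; vendors like Degree Analytics will have their own schemas:

```python
from collections import defaultdict

# Hypothetical network log: (student_id, building, timestamp in minutes),
# one row each time a device associates with an access point there.
events = [
    ("s1", "library", 0), ("s1", "library", 50), ("s1", "gym", 60),
    ("s1", "gym", 90), ("s1", "library", 200),
]

def dwell_times(rows):
    """Minutes spent per building per student, attributing each interval
    to the building where it began (a deliberately crude heuristic)."""
    per_building = defaultdict(lambda: defaultdict(int))
    rows = sorted(rows, key=lambda r: (r[0], r[2]))
    for (sid, bld, t0), (sid2, _, t1) in zip(rows, rows[1:]):
        if sid == sid2:
            per_building[sid][bld] += t1 - t0
    return {sid: dict(b) for sid, b in per_building.items()}

print(dwell_times(events)["s1"])  # {'library': 60, 'gym': 140}
```

Nothing here requires consent or even awareness from the student; the data is a by-product of normal WiFi operation, which is precisely the privacy concern the article raises.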

EU – Ethical Matters Raised by Algorithms and AI: CNIL Report

The Commission nationale de l’informatique et des libertés (CNIL) in France discusses ethical matters raised by algorithms and artificial intelligence. The CNIL proposes that the principles of fairness and vigilance could be used to form part of a new generation of principles and human rights in the digital age, and recommendations include education for all players in the algorithmic chain (designers and professionals), setting up organisational ethics committees, and designating a role to oversee social responsibility of the company. CNIL – How Can Humans Keep the Upper Hand – The Ethical Matters Raised by Algorithms and Artificial Intelligence

Canada

CA – CSE Annual Report Tabled in Parliament

The Annual Report of the Communications Security Establishment Commissioner, the Honourable Jean-Pierre Plouffe, CD, was tabled in Parliament. All of the CSE activities reviewed in 2017-2018 complied with the law. The Commissioner did, however, make four recommendations to promote compliance with the law and strengthen privacy protection. One recommendation related to CSE information sharing with international partners, to ensure an adequate assessment of authorities and privacy protection measures prior to undertaking new sharing activities. A second related to disclosure of Canadian identity information, requiring client departments to cite both lawful authority and a robust operational justification to acquire that information. Two other recommendations dealt with ministerial authorizations: one, that CSE should clarify language to reflect the legal protection afforded to solicitor-client communications; the other, that CSE should restore the inclusion of comprehensive information in its request to the Minister for one particular MA, to assist the Minister in making his decision. Office of the CSE Commissioner

CA – Federal Government Supports PIPEDA Changes

The Federal Minister of Innovation, Science and Economic Development responded to recommendations from the Standing Committee on Access to Information, Privacy and Ethics following its review of PIPEDA. The response indicates that specific rules are needed for the collection and use of minors' PI (given recent breaches involving PI obtained from social media), that some GDPR rights and protections could be incorporated into PIPEDA to enhance privacy protections (algorithmic transparency, privacy by design, data portability), and that active discussions are underway with the EU Commission to ensure adequacy standing is maintained. Letter to the Chair of the Standing Committee on Access to Information, Privacy and Ethics – Minister of Innovation, Science and Economic Development Committee Recommendations | Minister's Response

CA – NS Board Conditionally Permits Smart Meter Implementation

The Nova Scotia Utility and Review Board reviewed an application by Nova Scotia Power Inc. for approval of a $133.2 million smart meter project. The utility must permit customers to opt out of smart meters, subject to a cost (to be determined) for continuing with non-standard meter service, and must devise a detailed plan for informing customers of the opt-out process; the Board is satisfied that the utility's data collection will not involve PI (only an identifier for each customer account) and that security is sufficient (data will be protected by security certificates and end-to-end encryption). Nova Scotia Utility and Review Board – 2018 NSUARB 120 – Decision
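Replacing customer accounts with a stable per-account identifier, as the Board describes, is commonly implemented with a keyed hash, so readings can be linked over time without carrying the account number itself. This HMAC sketch is our illustration of the general technique, not Nova Scotia Power's actual scheme:

```python
import hmac
import hashlib

SECRET_KEY = b"rotate-me-and-keep-me-in-an-hsm"  # illustrative key only

def pseudonym(account_number: str) -> str:
    """Stable identifier that does not reveal the account number; only
    the key holder can recompute the mapping."""
    digest = hmac.new(SECRET_KEY, account_number.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

# A meter reading carries the pseudonym instead of the account number.
reading = {"meter_id": pseudonym("ACCT-001234"), "kwh": 12.7}

# Same account always maps to the same identifier, so usage can be
# linked over time without exposing the account number downstream.
print(pseudonym("ACCT-001234") == reading["meter_id"])  # True
```

Note the residual risk: whoever holds the key can still re-link pseudonyms to accounts, so such identifiers reduce exposure rather than making the data anonymous.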

CA – Waterfront Toronto, Sidewalk Labs Walk Back Plans In New Deal

After months of talks, Waterfront Toronto and Sidewalk Labs LLC have signed a deal [PR here] that reins in some of the Google-affiliate’s plans around its proposed “test bed” for new urban technologies on the city’s lakeshore. Waterfront Toronto released both a new “plan development agreement“ as well as the original “framework agreement“ it had signed last fall with the New York-based Sidewalk, the full text of which had until now been kept secret [W.T. also posted an FAQ ]. The new deal walks back or clarifies a number of provisions contained in that original deal, signed after Sidewalk Labs, a unit of Google parent Alphabet Inc., was chosen as the “funding and innovation partner” to develop a five-hectare (12-acre) parcel of land on the waterfront known as Quayside that sits at the end of Parliament Street. Waterfront Toronto and Sidewalks Labs praised the deal as an important milestone as they continue to develop the project. It was approved unanimously by Waterfront Toronto’s board, but only after Toronto developer Julie Di Lorenzo, who has previously publicly questioned the plan, resigned from her seat on the board. The agreement comes after the sudden departure in early July of Waterfront Toronto’s CEO, Will Fleissig, who had been a driving force behind the project. The deal is only a step toward a final “master innovation and development plan,” which the waterfront agency said won’t be signed until next year. Toronto Mayor John Tory said the agreement will allow “the City to consider an innovative new approach to development, housing, public space and mobility in the Quayside District,” and that he was confident Waterfront Toronto and all three levels of government would ensure it proceeds “in the best interests of Toronto residents.” [The Globe and Mail and at: MobileSyrup ]

CA – BCCLA Launches Handbook to Protect Privacy at the Border

The BC Civil Liberties Association (BCCLA) and the Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic (CIPPIC) released the online guide “Electronic Devices Privacy Handbook – a Guide to your Rights at the Border“ [overview, short PDF version] The Handbook helps travellers understand what is known about their data privacy rights at these border areas, best practices for securing digital devices and interacting with border officers, and what to do if they’ve been searched. The handbook is for every person who crosses the Canadian border and the U.S. border through preclearance areas, but has particularly important implications for marginalized populations and professionals carrying sensitive documents. All people with personal information on their devices have vested interests in protecting their data from being seized at the border and shared with Canada’s vast network of coordinating departments and national security partners. [BC Civil Liberties Association and at: The Vancouver Province & The Toronto Star]

CA – OIPC NS Recommends Amendments to PHIA

The OIPC NS released the findings of its review of Nova Scotia's Personal Health Information Act. PHIA should be amended to permit an executor to determine the collection, use and disclosure of a decedent's PHI, and to require that security breaches posing a risk of harm, which must be reported to affected individuals, also be reported to the OIPC; a working group should consider the issues of PHI data matching/linking for research and planning purposes, the disposition/outsourcing of storage of health records (including outside NS), and whether there are sufficient safeguards for genetic data and EHRs. [OIPC NS – Personal Health Information Act – Three Year Review Findings] See also: Health Records: DPA Cyprus Sets Retention at 15 Years

CA – Yukon Privacy Commissioner Worried About City Drone Proposal

Diane McLeod-McKay, the Yukon's Information and Privacy Commissioner, is concerned about a proposal that could see the City of Whitehorse using drones to enforce certain bylaws. At the July 16 council meeting, one of the recommendations made was to consider purchasing a drone for use patrolling trails. Council heard that other municipalities have used drones in search and rescue, but that they could also be effective in preventing illegal dumping. Council has accepted a new bylaw review document, but that doesn't mean all of its recommendations will be implemented. McLeod-McKay noted that the city isn't subject to the Yukon's ATIPP Act or PIPEDA. "Even if ATIPP or PIPEDA did apply, the lack of transparency around what a drone is recording, at any given time, hinders accountability. It's difficult to make a complaint when you don't know what personal information is being collected." Yukon News

CA – OIPC ‘Following Up’ With Calgary Mall Using Facial Recognition Software

The Office of the Privacy Commissioner of Canada [here] said it is “following up” with Cadillac Fairview – the company that owns the Chinook Centre – after the company disclosed it is testing facial recognition technology in mall directories. News of the software came to light after a shopper saw a window on a directory at the Chinook Centre that showed what appeared to be facial-recognition data, including codes like “gender/inception” and “age/inception.” “Given we are not storing images, we do not require consent,” a statement from the company said.[see here] The agency has reached out to Alberta’s privacy commissioner [here] to discuss the matter as well. To date, the agency has not received any complaints involving the Chinook Centre directories. [Global News and at: CBC News and 660 News]

CA – Court Affirms Expectation of Privacy in Devices Under Repair

A Canadian appeals court has decided in favor of greater privacy protections for Canadians. The case involves the discovery of child porn by a computer technician who was repairing the appellant’s computer. This info was handed over to the police who obtained a “general warrant” to image the hard drive to scour it for incriminating evidence [see R v Villaroman, 2018 ABCA 220]. “General warrants” are still a thing in the Crown provinces. These days, it has more in common with All Writs orders than the general warrants of the pre-Revolution days. “General warrants” are something the government uses when the law doesn’t specifically grant permission for what it would like to do. The appellant’s challenge of the general warrant (rather than a more particular search warrant) almost went nowhere, but this decision grants him (and others like him) the standing to challenge the warrant in the first place. As the court notes, handing a computer over to a technician doesn’t deprive the device’s owner of an expectation of privacy. Standing helps, but ultimately didn’t help the appellant here. The court decides the failure to obtain the proper warrant is indeed a violation, but one not severe enough to trigger suppression of the evidence. The court goes on to note the failure to follow proper procedures when obtaining the warrant (ultimately the wrong sort of warrant) was negligent. It was anything but a “trivial” breach of protocol. Even if the officer’s inexperience resulted in erroneous actions, the violation is severe enough for the court to take note of. But this negligence isn’t enough to overcome the inevitable outcome of the search, in the court’s opinion. TechDirt

CA – OIPC NL Directs Healthcare Custodians

The OIPC NL published a newsletter addressing issues pertaining to:

  • personal representative of a deceased individual;
  • privacy training expectations; and
  • the importance of auditing access

Custodians should conduct ongoing training programs for employees handling PHI (training new employees, continuing education throughout the employment, and avoid reliance on general external training), and monitor and assess access to PHI (addressing who should conduct audits and when they should be conducted, what information is being assessed, and what areas will need to be audited). [OIPC NL – Safeguard – A Quarterly Newsletter – Volume 2 – Issue 2]

CA – A Cross-Border Perspective on Privacy Class Actions in Canada

This post explores trends in Canadian privacy class actions and points out similarities and differences in the approaches taken in the United States and Canada in these types of lawsuits. Canadian privacy class actions have been on the rise for the last decade. In both Canada and the U.S., privacy class actions largely fall into three categories: 1) claims that challenge a corporation's business practices (e.g., cookies, targeted advertising); 2) claims that arise from accidental breaches (e.g., lost storage devices); and 3) claims relating to intentional, targeted misconduct (e.g., hacking, employee snooping). In all categories, the size of the classes and the quantum of damages claimed tend to be large. Importantly however, most cases settle for a fraction of the compensation sought. Generally, plaintiffs must establish some evidence of actual harm and may not simply seek damages for mere fear of identity theft, although no decisions have yet tested the line between harm and mere fear in a trial on the merits. Although moral damages for humiliation or anxiety arising from privacy violations are sometimes awarded, they are nominal, in the range of $2,000–$20,000 per claim. Compared to Canada, many more privacy class actions are commenced in the U.S. Canadian class actions are growing in number, but Canada is still developing its statutory causes of action related to misuses of technology, while data breach privacy class actions in the U.S. are largely founded on statutes such as the Electronic Communications Privacy Act [see here & wiki here] and the Computer Fraud and Abuse Act [see here & wiki here]. Unlike the U.S., Canada has an expansive federal regulatory regime, the Personal Information Protection and Electronic Documents Act [see here & wiki here], which provides a simple administrative procedure for complaints and remedies, arguably making class actions less preferable.
The European Union (EU) General Data Protection Regulation (GDPR), which purports to extend to organizations based outside the EU that offer goods or services to individuals in the EU or to those who engage in practices that monitor online behaviour of individuals in the EU, may impact privacy litigation and force businesses to modify their practices in the U.S. and Canada. Mondaq

CA – Ontario Children’s Lawyer Records Exempt from FIPPA

The Court of Appeal for Ontario reviewed a decision of the OIPC ON ordering the Children's Lawyer for Ontario to disclose records pursuant to FIPPA. The court quashed the IPC order requiring the Office of the Children's Lawyer to issue an access decision to a father requesting his children's records; the entity is not a government agent (it does not receive direction from, or report to, the Attorney General), and it has a fiduciary duty to child clients to keep information provided confidential (which is separate from solicitor-client privilege). Children's Lawyer for Ontario v. IPC ON, AG ON and John Doe – 2018 ONCA 559 CanLII – Court of Appeal of Ontario

CA – Canada Amends AML/ATF Regulations

The Regulations Amending Certain Regulations Made Under the Proceeds of Crime (Money Laundering) and Terrorist Financing Act, 2018 were published in the Canada Gazette on June 9, 2018. The Regulations update customer due diligence requirements to permit confirmation of identity from a reliable source (e.g., a prescribed financial entity), and beneficial ownership reporting requirements to include information about the beneficiary's occupation, and user name if receiving payment online. Regulations Amending Certain Regulations Made Under the Proceeds of Crime (Money Laundering) and Terrorist Financing Act, 2018 – Government of Canada

CA – Canada Spending $500 Million on Cybersecurity Over 5 Years

The Canadian federal government announces its renewed national cybersecurity strategy following its public consultation. National objectives are security and resilience (combatting increased cybercrime and the growing impact of IoT), cyber innovation (investing to address the cyber skills gap), and leadership and collaboration (establishing a national plan and clear focal point for cyber incidents and enhancing public awareness). National Cyber Security Strategy – Public Safety Canada | Press Release

CA – Ontario Survey Not Covered under Research Exemption

An Ontario court reviewed an order of the IPC requiring Carleton University to disclose requested records pursuant to the Freedom of Information and Protection of Privacy Act. The court upheld the IPC ON decision that a university survey of Jewish students and faculty was not conducted for purely academic purposes; it was market research based on an administrative request to identify areas for improvement for minority students, and there would be no serious adverse consequences if the records were disclosed (survey data was coded to prevent identification of respondents, and any identifiable information would be exempt from disclosure). Carleton University v. IPC ON and John Doe – Ontario Superior Court of Justice – 2018 ONSC 3696 CanLII

Consumer

US – Walmart Patents Audio Surveillance Technology to Record Customers and Employees

America’s largest retailer has patented surveillance technology that could essentially spy on cashiers and customers by collecting audio data in stores. The proposal raises questions about how recordings of conversations would be used and whether the practice would even be legal in some Walmart stores. “This is a very bad idea,” Sam Lester, consumer privacy counsel of the Electronic Privacy Information Center in Washington, D.C., told CBS News. “If they do decide to implement this technology, the first thing we would want and expect is to know which privacy expectations are in place.” [Daily Mail]

CA – Canada Tackles Malicious Online Advertising

On July 11, 2018, the Canadian Radio-television and Telecommunications Commission imposed sanctions against the installation of malicious software through online advertising for the first time in its history. This decision was taken under the provisions of the Canadian Anti-Spam Legislation, which came into effect on July 1, 2014. The federal agency issued Notices of Violation [see CRTC PR here & Investigation Summary here] to Datablocks and Sunlight Media for allegedly facilitating the installation of malware through online advertising. The companies are subject to penalties of $100,000 and $150,000, respectively. Among other findings, the two companies did not verify their new customers and allowed payment by cryptocurrency. Although both companies were warned of these weaknesses in their practices in a 2015 report by cybersecurity researchers, and again in 2016 by the CRTC, neither implemented basic safety measures. While this CRTC fine is a first of its kind in Canada, this type of threat is nothing new in the industry. We Live Security Blog

E-Government

UK – Voter Analytics and Data Protection: Early Findings from the ICO

The role of big data analytics in modern elections is the question the ICO has tackled in its report on voter analytics released this month [see July 10 PR here, the 60 pg PDF report “Democracy Disrupted? Personal information and political influence” here & related progress report here]. For the first time, a DPA has tried to draw back the curtain on the very complicated world of voter analytics, to paint a picture of the range of organizations involved in contemporary elections and of the practices they engage in. There has been a lot of hype about the importance of the “data-driven” election, as well as recent scholarly work that sheds a skeptical light on the extent to which data analytics do indeed influence election outcomes. [Democracy Disrupted] does not go there, although there is an accompanying research report from Demos which reviews current and future trends in campaigning technologies. Democracy Disrupted provides a detailed and empirically based description of the various sources of personal data that are used to profile the electorate, and of how micro-targeting works across a variety of media. Around 40 organizations were the focus of this ongoing inquiry; many other individuals assisted. For privacy professionals, the report raises some intriguing questions about the application of the General Data Protection Regulation to political parties and election campaigns going forward. [IAPP.org and at: ByLine, Information Law Blog (Inksters) and Financial Times]

US – Top Voting Machine Vendor Admits It Installed Remote-Access Software on Systems Sold to States

The nation’s top voting machine maker has admitted in a letter to a federal lawmaker that the company installed remote-access software on election-management systems it sold over a period of six years, raising questions about the security of those systems and the integrity of elections that were conducted with them. In a letter sent to Sen. Ron Wyden (D-OR) in April, Election Systems and Software acknowledged that it had “provided pcAnywhere remote connection software to a small number of customers between 2000 and 2006,” which was installed on the election-management systems ES&S sold them. The statement contradicts what the company told the reporter and fact checkers for a February story in the New York Times. At that time, a spokesperson said ES&S had never installed pcAnywhere on any election system it sold. “None of the employees, including long-tenured employees, has any knowledge that our voting systems have ever been sold with remote-access software,” the spokesperson said. The company told Wyden it stopped installing pcAnywhere on systems in December 2007, after the Election Assistance Commission [here], which oversees the federal testing and certification of election systems used in the US, released new voting system standards. Motherboard

US – For Sale: Survey Data on Millions of High School Students

At the end of June, three thousand high school students from across the United States trekked to the University of Massachusetts Lowell sports arena to attend an event with an impressive-sounding name: the Congress of Future Science and Technology Leaders. Many students were selected for the event because they had once filled out surveys that they believed would help them learn about colleges and college scholarships. Many had taken a college-planning questionnaire called MyCollegeOptions, or surveys that came with the SAT or the PSAT, tests administered by the College Board. In filling out those surveys, the teenagers ended up signing away personal details that were later sold and shared with the future scientists event. Consumers’ personal details are collected in countless ways these days, from Instagram clicks, dating profiles and fitness apps. The recruiting methods for some student recognition programs give a peek into the widespread and opaque world of data mining for millions of minors, and how students’ profiles may be used to target them for educational and noneducational offers. These marketing programs are generally legal, but the handling of student surveys is receiving heightened scrutiny. In May, the Department of Education issued “significant guidance” [11 pg PDF] recommending that public schools make clearer to students and their parents that surveys accompanying the SAT and the ACT, a separate college admissions exam, are optional. Over the last few years, several states have passed laws that might also limit the spread of some student profiles. The laws generally prohibit vendors of online educational services to schools from selling students’ personal data or using it for targeted advertising. The New York Times

E-Mail

US – FBI Provides Guidance for Email Scams

The Federal Bureau of Investigation (FBI) released guidance on an increasing threat involving requests for money transfers from compromised email accounts. The business email compromise and email account compromise scams target businesses and individuals, particularly in the real estate sector, that perform wire transfer payments. Organisations must verify any change in vendor payment type or location, and include a two-step verification process for wire transfer payments (for example, code phrases known only to the legitimate parties). FBI Public Service Announcement – Business Email Compromise: The 12 Billion Dollar Scam
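The code-phrase check the FBI recommends can be sketched in a few lines. This is a hypothetical illustration, not code from the FBI guidance: a requested change to wire-transfer details is approved only after the requester confirms a pre-shared phrase over a separate, previously verified channel (such as a phone call to a known number, never a reply to the requesting email). All names below are illustrative.

```python
import hmac

# Code phrases agreed with each vendor in advance, out of band
# (in person or over a verified phone line, never by email).
PRE_SHARED_PHRASES = {
    "acme-vendor": "blue heron at dawn",
}

def approve_payment_change(vendor_id: str, phrase_from_callback: str) -> bool:
    """Approve a change to payment type or destination only if the code
    phrase given during the out-of-band callback matches the one on file."""
    expected = PRE_SHARED_PHRASES.get(vendor_id)
    if expected is None:
        return False
    # Constant-time comparison avoids leaking the phrase via timing.
    return hmac.compare_digest(expected, phrase_from_callback)
```

The point of the design is that an emailed request alone never changes payment details; the attacker who controls the mailbox still cannot supply the phrase that was exchanged outside email.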

Electronic Records

AU – Privacy Commissioner Report: Health Sector Tops Breaches

The healthcare sector has topped the list for data breaches once again, with the Office of the Australian Information Commissioner releasing its delayed quarterly report into the Notifiable Data Breaches scheme [see PR here & report here], with most breaches caused by malicious conduct and human error. According to the report, 49 notifications of data breaches in healthcare were made from 1 April to 30 June 2018, surpassing the finance sector’s 36 notifications. A total of 242 notifications were received during the quarter. The report shows 59% of data breaches were caused by malicious or criminal attacks (142 notifications), with the majority of those linked to the compromise of credentials such as usernames and passwords. 36% of breaches were the result of human error, such as sending emails containing personal information to the wrong recipients. The OAIC said the data breaches do not relate to the My Health Record system [see here & here]. But the stats are another setback for the national health information database as it continues to be buffeted by data privacy concerns. Up to 900,000 health professionals will have access to My Health Record via numerous software systems, creating a substantial “attack surface”, according to former Privacy Commissioner Malcolm Crompton. [Healthcare IT News Au, ABC News, CNET News, The Register and OAIC]

US – OCR Issues Guidance on Identifying and Mitigating Software Vulnerabilities

In its most recent cybersecurity newsletter, the U.S. Department of Health and Human Services’ Office for Civil Rights (OCR) provided guidance on identifying vulnerabilities in, and mitigating the associated risks of, software used to process electronic protected health information. The guidance, along with additional resources identified by OCR, is outlined in this post. Privacy & Information Security Law Blog (Hunton Andrews Kurth)

EU Developments

EU – Council of Europe Modernises Convention 108

The Council of Europe has approved amendments to modernise Convention 108. The amendments require impact assessments to ensure processing is designed to minimise risks to data subjects and is carried out on the basis of informed, express consent or some other legal basis; data subjects are afforded rights similar to those under the GDPR, and breaches must be notified where there is a serious risk to data subjects. Modernised Convention 108 – Council of Europe | Comparative table | See Analysis by Graham Greenleaf

EU – Supreme Court of Ireland to Review Facebook Privacy Case

On July 31, 2018, the Supreme Court of Ireland granted Facebook, Inc. leave [see ruling here] to appeal a lower court’s ruling sending a privacy case to the Court of Justice of the European Union (the “CJEU”). Austrian privacy activist Max Schrems challenged Facebook’s data transfer practices, arguing that Facebook’s use of standard contractual clauses failed to adequately protect EU citizens’ data. Schrems, supported by Irish Data Protection Commissioner Helen Dixon, argued that the case belonged in the CJEU, the EU’s highest judicial body. The High Court agreed. Facebook’s request to appeal followed. In granting Facebook leave to appeal, the Supreme Court noted that “it is in the interest of justice” that the Court hear its arguments. The hearing will take place within the next five months. [Privacy & Information Security Law Blog (Hunton) | Additional coverage at: TechCrunch]

EU – Parliament Calls for Suspension of Privacy Shield

The EU Parliament passed a resolution on the adequacy of the EU-US Privacy Shield, calling on the European Commission to ensure the Shield complies with the GDPR and to suspend it if US authorities have not addressed identified deficiencies by September 1, 2018, including unclear rules for automated decision making and the processing of HR data, failure to follow the EU model of consent, and a lack of effective judicial redress for EU citizens. EU Parliament – Motion for a Resolution on Adequacy of Protection Afforded by EU-US Privacy Shield

EU – Privacy Shield Under Pressure as Lawyers Back MEPs’ Call for Suspension

The Council of Bars and Law Societies of Europe (CCBE) [comments] – which represents 32 member countries and 13 associate and observer countries – has repeated its concerns over the deal’s suitability and called for an immediate suspension. The intervention comes as a group of MEPs on the EU’s civil liberties and justice committee (LIBE), who called for a ban on the deal if the issues aren’t addressed by September, begins a four-day trip to Washington to discuss Privacy Shield, along with other data protection issues, with the US government. [The Register | Related coverage at: CIO, DBR on Data and Legaltech News]

EU – Proposed EU Cybersecurity Act Released

The Council of the European Union released a proposal for the future of cybersecurity regulation in Europe. At a time of increased cybersecurity risks, the EU Cybersecurity Act would strengthen the powers of the European Union Agency for Network and Information Security by making it a permanent agency of the European Union. The EU Cybersecurity Act would also create a European cybersecurity certification framework for information and communications technology goods. The goal of the EU Cybersecurity Act is to build cyber resilience and response capabilities within the EU. Harmonizing standards to promote efficiency is also a central theme of the EU’s Digital Single Market strategy. The EU Cybersecurity Act is an output of a broader Cybersecurity Package which was first introduced in 2017 before going through several impact assessments and a comment period. To become law, the proposal will have to be approved by the European Parliament. CyberLex (McCarthyTetrault)

UK – ICO Releases Annual Report

The Information Commissioner’s Office has released its Annual Report for 2018 [PDF here]. Commissioner Elizabeth Denham highlights the following in her foreword to the Report: The ICO…

  • has been involved in producing significant GDPR guidance in the last 12 months and has also run an internal change management process to ensure it is up to the demands placed upon it by GDPR (think: extra staff, new breach reporting functions and helplines);
  • has seen its pay levels fall out of step with the rest of the public sector; the UK Government has granted the ICO three-year pay flexibility and some salaries have increased;
  • has taken decisive action on nuisance calls and misuse of personal data;
  • began investigating over 30 organisations in relation to the use of personal data and analytics for political campaigns; and
  • launched a “Why Your Data Matters” campaign – designed to work as a series of adaptable messages that organisations can tailor to inform their own customers of their data rights.

Privacy and Cybersecurity (Dentons)

EU – European Court of Justice Clarifies Who Is a ‘Data Controller’ Under GDPR

The European Court of Justice (ECJ) in Luxembourg rendered a judgment on July 12 [see CJEU Press Release & Judgment of the CJEU] that explains, among other things, what a (joint) data controller is. The judgment applies the “old” EU Data Protection Directive 95/46/EC, but the relevant provisions of the General Data Protection Regulation (GDPR), Arts. 4 and 26, are very similar. The case concerns the Jehovah’s Witnesses Community and whether notes taken in the course of its members’ door-to-door preaching fall under data protection law. The ECJ held that (a) the activities do not fall under the exemptions for religious communities, and (b) the community is a data controller jointly with those members who engage in the preaching activity. Tech & Sourcing at Morgan Lewis

EU – ECHR Ruling Confirms Freedom of Expression Trumps Right of Erasure

The European Court of Human Rights (“ECHR”) decided on 28 June 2018 that the right to request the erasure of personal data on prior convictions may be trumped by the right to freedom of expression and information. The court confirmed prior case law holding that the public’s legitimate right of access to electronic press archives is protected by the fundamental right to freedom of expression and information, and that limitations on this right must be justified by particularly compelling reasons. Inside Privacy

EU – The eData Guide to GDPR: What is Sensitive Personal Data?

Information on health, race/ethnic origin, sexual orientation, and religious and political beliefs are among a special category of data that have been classified as sensitive personal data under the EU’s GDPR and are given a higher degree of protection. This installment of The eData Guide to GDPR discusses how sensitive personal data is defined, under what conditions it can be processed, and what steps businesses can take to ensure compliance with the GDPR’s special protections of sensitive personal data. Morgan Lewis Insight

EU – Heirs Can Access Facebook Account of Deceased Relatives: German Court

Heirs in Germany have the right to access the Facebook accounts of their deceased relatives, a court said in a landmark privacy ruling on Thursday, saying a social media account can be inherited in the same way as letters. [Reuters | Additional coverage at: Technology Law Dispatch, Deutsche Welle, Quartz, AFP, Naked Security and GIZMODO]

EU – DPA Brandenburg Advises Caution for Photography

The Brandenburg Data Protection Authority issued guidance on the processing of photos under the GDPR. The taking and publication of photos is permitted under the GDPR (pursuant to data subject consent, a controller’s legitimate interests, or journalistic activity); however, photographers should be careful about photos of large groups of people (notice may be impossible to provide), employees (consent may not be truly voluntary), and existing photo stock (which should comply with prior legal requirements). DPA Brandenburg – Processing of Photos – Legal Requirements Under GDPR

EU – EDPS Comments on Monitoring for Copyright Infringement

The European Data Protection Supervisor commented on a draft resolution for a proposal regarding copyright in the Digital Single Market. According to the EDPS, the draft EU resolution appropriately addresses the obligation for online sharing service providers to monitor their platforms for copyright infringement by not targeting end users who might download or stream uploaded content, and requiring observance of the data minimisation principle; it will be impossible, however, for providers to avoid processing personal data while complying with monitoring and reporting obligations. EDPS – Formal Comments on a Proposal for a Directive of the European Parliament and Council on Copyright in the Digital Single Market

UK – ICO Seeks Views on Age Appropriate Design

The UK ICO is calling for evidence and views on the Age Appropriate Design Code under the Data Protection Act 2018. It seeks input from information service providers and child development experts to shape the Code, with a focus on the different developmental stages of children and the websites or applications that children access or are likely to access; specific areas of interest include profiling, geolocation, and strategies used to encourage extended user engagement. The Code will be submitted to the Secretary of State for Parliamentary approval within 18 months of May 25, 2018; the call for evidence closes on September 19, 2018. ICO UK: Blog – Children’s Privacy – Call for evidence | Consultation

WW – Big Tech Companies’ Privacy Policies Not Totally GDPR Compliant: Report

A new report from a consumer protection group indicates that even though privacy policies were revamped right before the GDPR came into effect in late May, “there is still room for significant improvements.” The survey used artificial intelligence to analyze 14 privacy policies at major tech companies, including Google, Facebook, Amazon and Apple. The Recorder (Law.com)

EU – Cloud Security and Due Diligence Checklists

A UK law firm highlights industry best practices from regulators and associations, including risk profiling, monitoring of security controls, and defining access controls for service interfaces and administration systems. To demonstrate compliance, cloud buyers can verify provider compliance through contractual commitments, third-party certification and/or independent testing. [Kemp Law]

Facts & Stats

US – Major Breaches in the First Six Months of 2018

The most serious breaches of the first half of 2018 include the US government’s acknowledgment that Russian hackers gained access to a power utility’s control systems; hackers using phishing attacks to gain access to university systems, private companies, and government agencies around the world and stealing many terabytes of intellectual property; and many instances of organizations misconfiguring data storage, exposing the stored information. Wired: The Worst Cybersecurity Breaches of 2018 So Far.

WW – Survey Finds Breach Discovery Takes an Average 197 Days

A global study based on 500 interviews conducted by The Ponemon Institute on behalf of IBM [see PR here, infographic here] finds that the average amount of time required to identify a data breach is 197 days, and the average amount of time needed to contain a data breach once it is identified is 69 days. When it comes to cost containment, the study makes it clear time is of the essence. Companies that were able to contain a breach in less than 30 days saved more than $1 million compared to those that took more than 30 days ($3.09 million versus $4.25 million average total). 2018 Cost of a Data Breach Study [PDF] | Security Boulevard, Security Intelligence | Audio interview (26 min)

EU – The GDPR and Blockchain

Blockchain technology has the potential to revolutionise many industries; it has been said that “blockchain will do to the financial system what the internet did to media”. Its transformative capability also extends far beyond the financial sector, including in smart contracts and the storage of health records to name just a few. Notwithstanding its tremendous capabilities, in order for the technology to unfold its full potential there needs to be careful consideration as to how the technology can comply with new European privacy legislation, namely the GDPR. This article explores some of the possible or “perceived” challenges blockchain technology faces when it comes to compliance with the GDPR. The European Commission has recently launched the EU Blockchain Observatory and Forum which is focused on promoting blockchain throughout Europe. The Forum recently ran a series of workshops on the impact of the GDPR on blockchain technology. Inside Privacy (Covington)

Finance

CA – Canada 2020 Issues Open Banking Report

On July 5, 2018, Canada 2020, a Canadian think-tank, published its report on open banking [see 10 pg PDF here] following a Policy Lab which brought together various stakeholders to discuss open banking in Canada. “Open banking” refers to an emerging financial services business model that focuses on the portability and open availability of customer data, including transactional information. The core aim of open banking is to enable consumers to share their financial data between their financial institution and third party providers (and between financial institutions), typically through the use of application programming interfaces (APIs). While still a relatively new concept in Canada, open banking has the potential to transform the financial services sector. The federal government is currently reviewing open banking to assess whether it could have a positive impact on consumers, while considering the risks to consumer privacy, data security, and financial stability. The Canada 2020 Policy Lab was intended to encourage stakeholders to share information and discuss the future of open banking in Canada; it identified nine broad areas of consensus. CyberLex Blog (McCarthy Tétrault)

FOI

CA – NL Government Breaking Its Own Laws on Access to Info Requests: OIPC

Newfoundland and Labrador is breaking its own laws by exceeding the legal deadlines for responding to access to information (ATIPP) requests, the information and privacy commissioner says. In a report [ruling], Commissioner Molloy said the government flouts the law based on the volume of work it takes to complete requests, something that would never be tolerated from average citizens; the result is long delays. Molloy’s report, which looked into a case in which a response to an access to information request took 86 business days when the law requires a response within 20 business days, concerned the Department of Transportation and Works. It found that over the last fiscal year the department was late on approximately 15 per cent of deadlines and received extensions on another 15 per cent. CBC News

Genetics

WW – Privacy Concerns After 23andme Shares Genetic Data With Major Drugmaker

Drug giant GlaxoSmithKline is investing US$300 million in the DNA-testing company 23andMe in a deal they say could spark the creation of important new medicines, but one that is also raising privacy concerns. Under the deal, GSK will have exclusive rights for four years to use 23andMe’s DNA database to develop new medicines using human genetics. Both the funding and proceeds will be split equally, with the option of extending the partnership for a fifth year. For more than a decade, 23andMe has been selling saliva-based DNA kits to consumers. The company has more than 5 million users – 80% of whom have checked boxes to consent to participating in medical research as well. Genetics is playing an increasingly important role in the world of drug discovery. Researchers use genetic data to help them understand how diseases begin and which proteins and pathways diseases use to progress. Peter Pitts, the president of the U.S.-based non-profit Center for Medicine in the Public Interest told Time he’s worried that whenever one organization shares personal data with another organization, there is a risk the information could be misused. Pitts also wonders whether 23andMe customers are entitled to be compensated if the genetic information they paid for is then used to lead to profitable drugs. “Are they going to offer rebates to people who opt in, so their customers aren’t paying for the privilege of 23andMe working with a for-profit company in a for-profit research project?” Pitts wondered to NBC. 23andMe insisted in its announcement Wednesday that its customers are still in control of their own data. [CTV News, BioNews, Forbes and Scientific American]

US – DTC Genetic-Testing Giants Throw Their Weight Behind Privacy

For years, consumer and privacy advocates have railed against the potential for direct-to-consumer (DTC) genetic testing to go horribly wrong. In what ways? Privacy violations, for one, along with the idea that companies could get rich off patient data, all while freely sharing our most personal information with law enforcement. But news this week suggests solutions for these problems could be on the way. On July 31, the Future of Privacy Forum, along with testing companies 23andMe, Ancestry, Helix, MyHeritage, and Habit, released a set of best practices for the DTC genetic-testing industry, outlining eight key areas and a war chest of possible fixes [see FPF blog post here]. The best practices cover transparency, consent, accountability, security, privacy by design and consumer education, along with data access, integrity, retention and deletion. Recommendations range from providing clear privacy notices of a company’s practices and asking separately for consent to share with third-party organizations, to enabling consumers to delete their data, including biological samples. In no way, however, does the document serve as a call to disarm the growing genetic-testing industry. [Healthcare Analytics News, Chicago Tribune, Engadget and GIZMODO]

Health / Medical

US – HHS Releases Interim Guidance on Authorizations for Research

The Department of Health and Human Services (HHS) recently released interim guidance on the sufficiency of authorizations for future uses or disclosures of protected health information (PHI) for research purposes. The HIPAA rule permits covered entities and business associates to use or disclose PHI only as permitted by the Privacy Rule or as authorized in writing by the individual or the individual’s personal representative. The 21st Century Cures Act, enacted in 2016, sought, in part, to improve accessibility to medical information for research purposes. It mandated that HHS issue guidance on how to allow for this improved access while still protecting patients’ rights under HIPAA. HHS recognizes that additional input from the public on this complex question would better help it provide meaningful guidance. Therefore, HHS is inviting comments from the public before issuing final rules. Data Privacy Monitor (BakerHostetler)

US – FDA: Make Sure EHRs Used for Clinical Studies are Secure

The Food and Drug Administration has issued new guidance spelling out its policy for organizations using electronic health record data in FDA-regulated clinical investigations, such as studies of the long-term safety and effectiveness of various drugs. Among other criteria, the EHRs need to contain certain privacy and security controls. EHRs used for clinical investigations should be certified under the Department of Health and Human Services’ Office of the National Coordinator for Health IT’s EHR certification program, which requires products to meet a variety of privacy and security protection requirements for patient data. But if data from EHRs that are not ONC-certified is collected from “foreign” sources – such as from clinical studies conducted outside the U.S. – sponsors need to consider whether such systems also have “certain privacy and security controls in place to ensure that the confidentiality, integrity and security of data are preserved,” the agency says. GovInfo Security

US – Health Data Breach Tally: Lots of Hacks, Fewer Victims

As of July, some 199 breaches affecting 3.9 million individuals had been added to the Department of Health and Human Services’ HIPAA Breach Reporting Tool website, commonly called the “wall of shame.” The website lists health data breaches affecting 500 or more individuals. By comparison, the 2015 cyberattack on Anthem Inc. affected nearly 79 million individuals. Plus, 2015 attacks against Premera Blue Cross, Excellus BlueCross BlueShield, and UCLA Health affected many millions more. Of the breaches added to the wall of shame so far this year, 74 are listed as hacking/IT incidents. Those incidents affected nearly 2.65 million individuals. But other types of breaches have also been added to the tally in the last seven months. Those include 84 “unauthorized access/disclosure” breaches impacting a total of more than 562,000 individuals, with some of the largest of these incidents involving email. Another 37 breaches involved loss or theft; those affected a total of about 672,000 individuals. Of the loss/theft breaches, 28 involved unencrypted devices. Those incidents impacted a total of about 80,000 individuals. The largest breach tied to loss or theft so far this year involved paper/film records. That incident – which, with 582,000 affected, is also the largest breach added to the tally so far this year – was reported in April by the California Department of Developmental Services. GovInfoSecurity

US – Cyberattacks on Health-Care Providers Are Up in Recent Months

Health-care providers and government agencies across the U.S. have seen an increase in cybersecurity breaches in recent months, exposing sensitive data from hundreds of thousands of people as the sector scrambles to find adequate defense mechanisms. The breaches include malware attacks, computer thefts, unauthorized network access and other security breaches, according to a government database that tracks attacks in the health-care sector. Last year’s global WannaCry ransomware attack crippled parts of the U.K.’s National Health Service for days. In a 2015 hack, U.S. health insurance giant Anthem Inc. had about 79 million customers’ personal information exposed. Bloomberg

US – California Bill Requires Security for Health Sensors

AB-2167, an Act Relating to Digital Health Feedback Systems was introduced in the Legislative Assembly of California and has been engrossed to the Senate. If passed, a manufacturer or operator that sells a device or software application that may be used with a digital health feedback system (ingestible sensor that collects or sends health information) must implement reasonable security features appropriate to the nature of the device/software application and the information it may collect, contain or transmit. AB-2167 – An Act Relating to Digital Health Feedback Systems – Legislative Assembly of California

US – Relaxing Patient Privacy Protections Will Harm People With Addiction

The nation is in the midst of a staggering opioid epidemic. Over 115 people die from an overdose each day – and all signs indicate that the problem is getting worse. Unfortunately, of the more than 20 million Americans who need treatment for addiction, it’s estimated that only about 7 percent of them will actually receive specialty care. We would expect policymakers and medical providers to do everything possible to increase the number of people entering treatment, not take actions that will discourage individuals from seeking treatment. But unfortunately, that’s exactly what the Overdose Prevention and Patient Safety Act would do. Despite its benevolent title, this legislation, which has already passed the House of Representatives, would jeopardize the confidentiality of substance use treatment and discourage patients from seeking the care they need. [The Hill and coverage at: STAT News, Scientific American and The Journal of Law, Medicine & Ethics]

WW – Mobile Apps Expose Sensitive and Regulated Data

Appthority’s Enterprise Mobile Threat Report uncovers a new variant of the HospitalGown data privacy vulnerability. The report showcases mobile apps that fail to require authentication to Google Firebase cloud databases, leaving backend data exposed; developers should implement user authentication on all database tables and rows to protect against exploitation. Other mitigation steps to reduce risk include prohibiting employees from downloading unsecured apps and performing security reviews of private and public apps. Enterprise Mobile Threat Report – Unsecured Firebase Databases – Exposing Sensitive Data via Thousands of Mobile Apps – Q2 2018 – Appthority
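The Firebase exposures described above typically stem from a rules misconfiguration that leaves a database world-readable. As an illustrative sketch (not taken from the report), a Firebase Realtime Database rules file that requires an authenticated user for any read or write looks like this:

```json
{
  "rules": {
    ".read": "auth != null",
    ".write": "auth != null"
  }
}
```

Per-table and per-row rules can go further and restrict each record to the authenticated user who owns it, which is the kind of granular authentication the report recommends.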

WW – Insider Health Data Security Threats Bigger Concern than External

Many healthcare professionals are more concerned about insider threats to health data security than external breaches, according to a survey by HIMSS on behalf of SailPoint. There is an acute level of concern about the threats posed by insiders: on a scale of 1 to 10, the mean score for respondents’ level of concern was 8.2. Among respondents who implemented or managed cybersecurity solutions for their organization, 43% said that insider threats were of greater concern than external threats. Another 35% were equally concerned about insider and external threats to data security, according to the survey of 101 healthcare professionals. [HealthIT Security, Healthcare Informatics, CISION]

Horror Stories

US – Patient Data Exposed for Months After Phishing Attack On Sunspire

Several employees of Sunspire Health, a nationwide network of addiction treatment facilities, fell victim to a phishing email campaign, which may have exposed personal patient information for about two months. [see notice here] Hackers were able to access some employee email accounts between Mar. 1 and May 4, but officials did not become aware of the cyberattack until sometime between April 10 and May 17. Officials did not explain why the discovery took more than a month. The impacted email accounts contained names, dates of birth, Social Security numbers, medical data like diagnoses and treatments, and health insurance information. All patients are being notified and offered a year of free credit monitoring. While officials have notified the U.S. Department of Health and Human Services, the number of patients impacted by the breach hasn’t been posted to the breach reporting tool [here]. [Healthcare IT News and coverage at: Health Data Management]

US – Phishing Attacks Breach Alive Hospice for One to Four Months

Two employees of Tennessee-based Alive Hospice fell for phishing attacks, which potentially breached patient data for one to four months. During a review of their email system on May 15, officials discovered unauthorized access to two separate employee email accounts, beginning in December 2017 for one account and around April 5 for the other. While the breached data varied by patient, it included a vast store of highly sensitive information, including: Social Security information, passport numbers, driver’s licenses or state identification cards, copies of marriage and/or birth certificates, financial data, medical histories, IRS PIN numbers, digital signatures — and even security questions and answers. Notification letters were sent to impacted patients on July 13. Healthcare IT News. See also: Phishing attack compromised the data of 1.4 million UnityPoint Health patients and at: SecurityInfoWatch, ISBuzz News and Latest Hacking News

NZ – Allegations 800,000 NZers at Risk of Medical Privacy Breach

Four New Zealand and Australasian healthcare IT companies, Healthlink, Medtech Global, My Practice, and Best Practice Software New Zealand, have jointly contacted the Privacy Commissioner with a claim that the privacy of up to 800,000 Auckland patients has been put at risk. They said primary health organisation (PHO) ProCare Health was putting private information of up to 800,000 Auckland patients into a large database, including patient name, age, address, and all financial, demographic, and clinical information. ProCare Health runs a network of community-based healthcare services, including GPs, throughout Auckland. It strongly denies patient privacy is being compromised. The IT companies said they didn’t know how widespread the data collection was in New Zealand, but it wasn’t acceptable to hold so much identifiable information in one place. In a joint letter to the Privacy Commissioner, the companies said most patients seemed unaware of the ProCare database, as were potentially some GPs. [The New Zealand Herald and coverage at: Tripwire and New Zealand Doctor Online]

Identity Issues

CA – Feds Studying Mobile Passports Despite Privacy Fears

New public opinion research [PDF] published by Immigration, Refugees and Citizenship Canada suggests officials there are considering whether Canadians should be able to renew their passport via a mobile application, as well as what Canadians’ attitudes are towards the idea of using virtual or mobile passports. Through 15 focus groups held across the country earlier this year, participants were asked for their perspectives on what sort of “passport of the future” they would be most interested in using. As with most new technologies, there was general enthusiasm but also a marked wariness about the potential for misuse. Millennials and those over the age of 58 also said they would not be likely to use a mobile passport option. While participants suggested they would be all right with using a passport renewal app or a passport stored on their phone, they were less convinced the ease of use would be worth the security concerns. Convenience seemed to be the biggest motivator overall to consider any move away from the current passport. Mobile passport apps are not yet widespread, but south of the border, U.S. Customs and Border Protection has officially endorsed an app called Mobile Passport, which is being used in 25 American airports so far. Personal data on the app is encrypted and stored by Customs and Border Protection. It’s not clear whether Immigration, Refugees and Citizenship Canada is looking to develop its own app for mobile passports or use the existing one. Global News

CA – Canadian Bankers Push for Federated Approach to Digital ID

The Canadian Bankers Association discusses Canada’s need for a digital identity system. Digital ID can be standardized for use between entities (unlike physical documents), and ensures only one version of an individual’s identity exists, reducing the potential for misinformation, identity theft, or the use of outdated data; Canada should learn from the successes of Estonia and India, ensuring digital ID meets legislative and regulatory requirements for customer identification, and using government as a catalyst to bring digital ID to market. White Paper – Canada’s Digital ID Future – A Federated Approach – Canadian Bankers Association

CA – Health Records: Anonymised PHI Not Compellable

The Supreme Court of Canada reviewed an appeal by the Province of British Columbia regarding disclosure of personal health information to Philip Morris International, Inc. The Supreme Court of Canada found that the anonymization of health data in government databases did not change the nature of the information as data derived from a particular individual’s clinical record, and the relevance of the records to a claim brought on an aggregate basis does not alter that nature. [British Columbia v. Philip Morris International Inc. – 2018 SCC 36 CanLII – Supreme Court of Canada]

Law Enforcement

UK – Police Chief Explains ‘Justice by Algorithm’ Tool

A police chief pioneering new ways of dealing with offenders vigorously defended his force’s pilot of a controversial algorithm-based system for picking suitable candidates. Michael Barton, chief constable of Durham Constabulary, was appearing at the first public evidence-gathering hearing of the Law Society’s Technology and Law Policy Commission on algorithms in the justice system [see here & here]. Durham Constabulary has come under fire after revealing last year that it was testing whether an algorithmic ‘Harm Assessment Risk Tool’ (HART) [see “risk assessment“] could help custody officers identify offenders eligible for a deferred prosecution scheme called Checkpoint, designed to encourage offenders away from criminality. The tool employs advanced machine learning to predict the likelihood that an individual will reoffend in the next two years. Barton said that HART was intended as a decision support tool and would never take the kind of nuanced decisions made by custody officers. The main reason for its use is to ensure that people released under the Checkpoint scheme do not go on to commit serious crimes, he said. ‘We are halfway through the pilot of finding out whether custody officers do better than the algorithm,’ he said, promising that results will be peer-reviewed and published. [The Law Society Gazette and at: WIRED and BBC News]

Location

WW – Polar Flow Fitness App Reveals Location of Users in Military and Intelligence Agencies

The Polar Flow fitness app exposes sensitive information about its users, who include US intelligence employees and military personnel. The Polar Flow Explore function could be used to obtain not only a user’s geolocation data, but also their name and home address. Polar has temporarily suspended the Explore API. Polar is not the first fitness app to expose user data; several months ago, the Strava app was found to be exposing soldiers’ locations and routes. Threat Post: Polar Fitness App Exposes Location of ‘Spies’ and Military Personnel | Bleeping Computer: Polar App Disables Feature That Allowed Journalists to Identify Intelligence Personnel | Fifth Domain: Polar fitness app broadcasted sensitive details of intelligence and service members | The Register: Fitness app Polar even better at revealing secrets than Strava.

Online Privacy

WW – Low Accuracy in Device Fingerprinting Techniques

Researchers study the accuracy of fingerprinting smartphone motion sensors. Existing browser fingerprinting techniques (used to track users without cookies) are less effective on mobile platforms; additional features and external auxiliary information can improve accuracy (but are unlikely to uniquely identify devices), and combining multiple classifiers provides better accuracy than current techniques. Every Move You Make – Exploring Practical Issues in Smartphone Motion Sensor Fingerprinting and Countermeasures – Anupam Das et al. – Carnegie Mellon University
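The classifier-combination idea reported above can be illustrated with a minimal sketch. The classifier names and data structures here are illustrative assumptions, not from the paper: several per-sensor classifiers each predict a device identity, and a simple majority vote combines their outputs into one prediction.

```python
from collections import Counter

def combine_classifiers(predictions):
    """Combine per-classifier device-ID predictions by simple majority vote.

    `predictions` maps a classifier name (e.g., one per sensor) to the
    device ID that classifier predicts. Ties are broken arbitrarily by
    insertion order, which is Counter.most_common() behavior.
    """
    votes = Counter(predictions.values())
    device, _count = votes.most_common(1)[0]
    return device
```

Real ensembles would typically weight each classifier by its measured accuracy or combine confidence scores rather than raw votes, but even this naive vote shows why multiple classifiers can outperform any single one.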

WW – Google Admits Third-Party Developers Can Read Your Emails

According to the WSJ, developers of third-party apps can read your Gmail messages. The thing is, you gave the application permission to do that. You just don’t remember. Or weren’t paying attention. After long-running complaints from users, Google stopped scanning the contents of Gmail messages to create targeted ads last year. But the company still allows third-party applications to do so. What skeeves so many people out is discovering that this process isn’t all done by computer. Some companies give human developers access to emails. This enables the developer to check if the code they’ve written to scan the text is finding what it’s supposed to scan. Or to know what to scan for in the first place. [Cult of Mac, Google Blog, CNET, CBC News, VentureBeat, The Verge, Digital Trends, Naked Security and The Sydney Morning Herald]

CA – New Zealand Company Violated Rights of Canadians, Says Privacy Commissioner

How far can companies go using personal information of people copied from a publicly-available website? Not far at all if it involves Canadians who don’t give their consent, according to a decision released by Canada’s privacy commissioner [see Announcement here & report here]. New Zealand’s Profile Technology Ltd. violated the privacy rights of potentially some 4.5 million Canadians by copying the profiles of Facebook users around the globe and posting them on its own website, the office of the federal privacy commissioner has ruled. The company said it merely indexed information publicly accessible on Facebook. It also argued Canadian law didn’t apply. However, the commissioner’s office said that under Canadian law these people had to give their consent, because Profile Technology used the information not just for indexing but also to start its own social networking website called the Profile Engine. The OPC has sent its findings to the Office of the Privacy Commissioner of New Zealand, which is considering what options may be available under that country’s laws. IT World Canada

US – 3 of 16 Providers Have Sufficient Takedown Processes: EFF

The Electronic Frontier Foundation, an advocacy organization, released its annual report on transparency of online service providers. The Apple App Store, Google Play store and YouTube earned full marks for transparency in reporting government takedown requests based on both legal requests and requests alleging platform policy violations, providing meaningful notice to users of every content takedown and account suspension, providing users with an appeals process to dispute takedowns and suspensions, and limiting the geographic scope of takedowns when possible. Who Has Your Back? Censorship Edition 2018 – Electronic Frontier Foundation | Chart only

US – EFF Files Amicus Brief Supporting Warrant for Border Searches of Electronic Devices

EFF, joined by the ACLU, filed an amicus brief in the U.S. Court of Appeals for the Seventh Circuit arguing that border agents need a probable cause warrant before searching personal electronic devices like cell phones and laptops. The brief was filed in a criminal case involving Donald Wanjiku. In 2015, border agents at Chicago’s O’Hare International Airport searched Wanjiku’s cell phone manually and forensically; they also forensically searched his laptop and external hard drive. Wanjiku asked the district court in U.S. v. Wanjiku to suppress evidence obtained from the warrantless border searches of his electronic devices, but the judge denied his motion. He then appealed to the Seventh Circuit. In the amicus brief, EFF argued that the Supreme Court’s decision in Riley v. California (2014) supports the conclusion that border agents need a warrant before searching electronic devices because of the unprecedented and significant privacy interests travelers have in their digital data. EFF also cited the Supreme Court’s recent decision in U.S. v. Carpenter (2018), which held that the government needs a warrant to obtain historical cell phone location information, explaining that such location information can be obtained from a border search of a cell phone. DeepLinks Blog (Electronic Frontier Foundation)

Privacy (US)

US – FTC Wants Expanded Authority in Data Security, Privacy

While HHS is the primary federal agency that enforces HIPAA Security and Privacy Rules, the FTC has expanded its enforcement activities in data security and privacy, including taking on now-defunct medical testing firm LabMD over poor data security that led to PHI breaches. The FTC was recently rebuffed by a federal appeals court in its effort to compel LabMD to overhaul its data security program. Despite this setback, the FTC is looking for additional authority from Congress in the privacy and data security area, FTC Chairman Joseph Simons told the House Energy and Commerce Committee’s digital commerce and consumer protection subcommittee on Wednesday. Specifically, the FTC wants the ability to impose civil penalties in privacy and data security cases, authority over nonprofits and common carriers, and authority to issue implementing rules under the Administrative Procedure Act (APA). Currently, the FTC issues rules under the Magnuson-Moss Warranty Act, which is more burdensome than the APA process, Simons noted. [HealthIT Security and at: Imperial Valley News]

US – Judge Rebukes FBI Agent over Improper Stingray Use

A federal judge chastised an FBI agent for improper use of a stingray, also known as a cell-site simulator or IMSI catcher, and an improper search of a cellphone. In April 2016, an FBI agent sought and obtained warrants from a county superior court judge in California to search a suspect’s cellphone and to use a stingray to locate a second suspect. California law does not permit state judges to sign off on warrants for federal agents. Court documents also show that the FBI agent misled the judge about what a stingray does. [Ars Technica: Judge slams FBI for improper cellphone search, stingray use | SC Magazine: Federal Judge scolds FBI agent for improper stingray use]

WW – CIPL Issues Discussion Papers on the Central Role of Accountability

On July 23, 2018, the Centre for Information Policy Leadership at Hunton Andrews Kurth LLP issued two new discussion papers on the Central Role of Organizational Accountability in Data Protection [7 pg PDF notice & overview here]. The goal of these discussion papers is to show that organizational accountability is pivotal to effective data protection and essential for the digital transformation of the economy and society, and to emphasize how its many benefits should be actively encouraged and incentivized by data protection authorities, and law and policy makers around the globe. The first discussion paper [PDF] explains how accountability provides the necessary framework and tools for scalable compliance, fosters corporate digital responsibility beyond pure legal compliance, and empowers and protects individuals. The second discussion paper [PDF] explains why and how accountability should be specifically incentivized, particularly by DPAs and law makers. It argues that given the many benefits of accountability for all stakeholders, DPAs and law makers should encourage and incentivize organizations to implement accountability. Privacy & Information Security Law Blog | see also: CIPL Hosts Special Executive Retreat with APPA Privacy Commissioners on Accountable AI

US – Florida Man Jailed for Failing to Unlock His Phone

What started as a routine traffic stop has quickly escalated into a civil rights case in a Florida courtroom after a man was put behind bars for failing to unlock his phone. William Montanez was given 180 days in jail by a judge after he was asked to unlock two separate phones seized from him by police. Montanez told the court that he couldn’t remember the passwords, so the judge found him in civil contempt and threw him in jail. According to an emergency writ filed by Montanez’s lawyer, he was pulled over by police on June 21 for not properly yielding while pulling out of a driveway. The officers making the stop asked to search his car, which he refused, so the police brought in a drug-sniffing dog. The police got a search warrant for the devices, claiming that they contain evidence of “Possession of Cannabis Less Than 20 grams” and “Possession of Drug Paraphernalia”—both of which Montanez already admitted to, which makes it unclear why the cops still want to search the phone to prove the charges. [Gizmodo coverage at: Fox 13 News, Miami Herald, WPLG Local 10 and Phone Arena]

US – $2 Million FTC Fine for Nonconsensual Posting of PI

A US Court granted the FTC and State of Nevada a permanent injunction against Emp Media Inc. et al for alleged violations of the FTC Act. Website operators are permanently banned from publicly disseminating individuals’ intimate images, name, employer and social media account information, and charging a fee for removal; verifiable express consent must be obtained (after provision of a separate, conspicuous notice), individuals must have the right to revoke consent at any time, and any third party hosting the company’s websites must ensure they are no longer accessible. FTC and State of Nevada v. Emp Media Inc. et al – Order Granting Default Judgment, Permanent Injunction and other Relief – US District Court for Nevada | Press Release

Privacy Enhancing Technologies (PETs)

WW – Privacy Pros Gaining Control of Technology Decision-Making Over IT

New TrustArc research examines how privacy technology is bought and deployed to address privacy and data protection challenges. Surveying more than 300 privacy professionals in the U.S., EU, UK and Canada, the study found that privacy management technology usage is on the rise across all regions and that privacy teams have significant influence on purchasing decisions for eight of the ten technology categories surveyed. Key findings include: A) privacy tech adoption is approaching the tipping point; B) data mapping, assessment management, and data discovery are among the fastest growing solutions; and C) privacy has a strong influence on purchase decisions across most product categories. Help Net Security

RFID / IoT

WW – Advocates Push for More User Control Over IoT Devices

The IoT Privacy Forum, a think tank, discusses governance and strategies regarding the Internet of Things. The Forum advocates for data minimization, built-in “do not collect” switches (e.g., mute buttons and software toggles), wake words and manual activation for data collection, and mechanisms to make it easy for users to delete their data or revoke consent; only the user should decide if IoT data should be published on social media or indexed by search engines. Clearly Opaque – Privacy Risks of the Internet of Things – IoT Privacy Forum

WW – Digital Security Threats from New and Unexpected Sources

Symantec issued volume 23 of its Internet Security Threat Report, providing information on 2017 trends in targeted attacks, email spam, ransomware and mobile threats. The report identifies threats including attacks against IoT devices (using commonly used login names such as admin, guest and supervisor), attacks on mobile devices (using malware in apps related to photography and music), and attacks on supply chain software (by hijacking network traffic or compromising the software supplier directly). Internet Security Threat Report Volume 23.

US – FTC Asked To Investigate Smart TVs

US Senators Blumenthal and Markey have asked the FTC to investigate privacy policies and practices of smart TV manufacturers. The smart TV manufacturers allegedly collect sensitive information and use it for tailoring advertisements on the basis of viewed and accessed content (e.g., applications, video games and cable shows), without obtaining express consent or notifying the user about such collection or tracking activities. Letter to FTC regarding smart TVs collecting personal data of viewers – Senator Markey and Blumenthal, U.S. Senate | Press Release 

Security

US – Final Report on U.S. Government Policies and Public-Private Frameworks to Address Botnets, Security and Resiliency Challenges Released

The U.S. Department of Commerce and the Department of Homeland Security, through the National Telecommunications and Information Administration (NTIA), have released the final report on enhancing the resilience of the Internet and communications ecosystem against botnets and automated distributed threats [see “Enhancing the Resilience of the Internet and Communications Ecosystem Against Botnets and other Automated, Distributed Threats“]. This report continues the work initiated under Presidential Executive Order 13800, titled “Strengthening the Cyber Security of Federal Networks and Critical Infrastructure“. The report aims to build upon consensus on various governmental and private initiatives and new approaches for the government either to adopt or to encourage the development of a more resilient ecosystem that can more effectively defend against threats and attacks by botnets. These attacks are expected to gain in both scale and complexity over time as vectors for attack (both end user devices and Internet of Things endpoints) proliferate. The final report does not differentiate between threats from nation states, cybercriminals or other actors; it observes that developing better cooperation and countermeasures within the ecosystem will generally be effective against all threats regardless of the threat origin. The final report was delayed from its originally scheduled May 11 deadline; it was released in late May 2018, along with a number of other reports relating to cybersecurity and linked to the Presidential Executive Order. A full list and links to the released reports are available [DBR on Data].

WW – Malware Attacks Have Doubled In First Half of 2018

The “malware boom” of 2017 has shown no signs of stopping through the first half of 2018, according to a new report from security company SonicWall. The company’s Capture Labs threat researchers recorded 5.99 billion malware attacks during the first two quarters of the year; at the same point in 2017, SonicWall logged 2.97 billion malware attacks [“2018 SonicWall Cyber Threat Report“]. On a month-to-month basis in 2018, malware volume remained consistent in the first quarter before dropping to less than 1 billion per month across April, May, and June. These totals were still more than double those of 2017, the report said. The study shows ransomware attacks surging in the first six months of 2018, with 181.5 million ransomware attacks identified for the period. That marks a 229 percent increase over the same timeframe in 2017. [Information Management, SonicWall Blog, Tarsus Today]

US – US CERT Issues Best Practice to Reduce Phishing Risks

Verify unsolicited calls, visits or emails from individuals asking about employees or internal company information (but do not use contact information provided by the individual), check website URLs for spelling variations or domain changes, and do not provide personal, financial or company information in emails (unless assured of the person’s authority to have the information). Security Tip ST04-014 – Avoiding Social Engineering and Phishing Attacks – US-CERT
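The advice to check URLs for spelling variations can be partially automated. The sketch below is illustrative, not part of the US-CERT tip: the allow-list and similarity threshold are assumptions, and it flags any hostname that closely resembles, but does not exactly match, a trusted domain.

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Hypothetical allow-list; a real deployment would use the organization's own domains.
TRUSTED_DOMAINS = ["example.com", "paypal.com"]

def looks_suspicious(url, trusted=TRUSTED_DOMAINS, threshold=0.75):
    """Flag hostnames that closely resemble, but do not exactly match, a trusted domain."""
    host = (urlparse(url).hostname or "").lower()
    if host.startswith("www."):
        host = host[4:]
    if host in trusted:
        return False  # exact match with a trusted domain: not a lookalike
    # A high similarity ratio to a trusted domain without an exact match
    # suggests a spelling-variation (typosquatting) attempt.
    return any(
        SequenceMatcher(None, host, t).ratio() >= threshold
        for t in trusted
    )
```

For example, a lookalike such as `paypa1.com` scores well above the threshold against `paypal.com` while failing the exact-match test, so it is flagged; an unrelated domain scores low and passes.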

US – NIST Releases Security Assessment Requirements

NIST issues a publication for assessing security requirements for controlled unclassified information. Recommended controls include those under security categories such as access control, awareness/training, audit/accountability, configuration management, identification/authentication, incident response, maintenance (of connections/systems), media protection, personnel security, physical protection (escort/monitor visitors), risk and security assessments, system/communications protection and system/information integrity. NIST – Assessing Security Requirements for Controlled Unclassified Information – NIST Special Publication 800-171A | Press Release

EU – EU Commission Amends Draft ICT Certification Framework

The EU Commission amended its proposal for a regulation concerning cybersecurity and ENISA, the European Union Agency for Network and Information Security. Certificates issued under the framework will be valid in all EU countries, making it easier for companies to carry out their business across borders, certification will be voluntary (unless otherwise specified by EU or Member State law), and companies seeking certification will be evaluated against three assurance levels (basic, substantial, high). EC – Proposed Regulation on ENISA and ICT Cybersecurity Certification | Press Release

WW – Companies Overwhelmed by Data Collection: Survey

Gemalto’s fifth annual Data Security Confidence Index surveyed IT decision makers in organizations worldwide about data security mechanisms in place for compliance with the GDPR. The study presents the status of organizations in protecting data collected from users: only 35% effectively analyze collected data and 65% are unable to analyze or categorize stored user data, while the collection of user data from sources such as apps and connected devices is only expected to increase in the future. Gemalto – Data Security Confidence Index

WW – 45% of US Companies Fell Victim to Phishing in 2017

Wombat Security Technologies, a security technology company, issued its 2018 report on phishing. The results are based on reported attacks from information security professionals; and analysis of simulated phishing attacks in more than 16 industries. The company reports that corporate phishing templates are the most frequently used by attackers (44%), with the most successful being corporate email improvements (89%); to combat phishing attacks, organizations train end users on how to identify and respond to suspicious email, and use email/spam filters, advanced malware analysis, and URL wrapping. State of the Phish 2018 – Wombat Security Technologies

Surveillance

CA – Controversial Gunshot Detector Technology Approved by Toronto Police

In an effort to curb gun violence, the Toronto Police Services Board (TPSB) has requested the city fund a motion to double the number of public CCTV cameras and introduce a controversial audio recording technology called “ShotSpotter“ [wiki] that provides police with real-time shooting locations. The system is already in use by more than 90 cities in the U.S., including Louisville, Cincinnati and Chicago. The system uses microphones to detect and locate gunfire, and automatically informs police. According to its privacy policy, ShotSpotter said its devices only record and provide police with audio beginning two seconds before a gunshot has been fired and ending four seconds after. The effectiveness of the technology, however, is up for debate. The idea of using the ShotSpotter technology and increasing surveillance cameras raises questions about privacy. As for ShotSpotter, its privacy policy says it “does not have the ability to listen to indoor conversations” and does not have the ability to “overhear normal speech or conversations on public streets.” The company said there have been “three extremely rare ‘edge cases’” (out of 3 million incidents detected in the past 10 years) in which a human voice was overheard. City council will meet Monday and make the decision as to whether to approve the new measures. [Global News, The Toronto Star]

UK – GCHQ Spy Agency Given Illegal Access to Citizens’ Data

The British government broke the law by allowing spy agencies to amass data on UK citizens without proper oversight from the Foreign Office, the Investigatory Powers Tribunal (IPT) has ruled [see Judgment here]. GCHQ, the UK’s electronic surveillance agency, was given vastly increased powers to obtain and analyse citizens’ data after the 9/11 terrorist attacks in 2001, on the condition that it agreed to strict oversight from the foreign secretary. The Foreign Office on several occasions gave GCHQ an effective “carte blanche” to demand data from telecoms and internet companies, which could include visited websites, location information and email contacts. Monday’s ruling is the second from the IPT in a case brought by Privacy International [see PI’s July 23/18 PR here], the campaign group, on the harvesting and sharing of citizens’ data by British spy agencies. The UK government is currently seeking to convince the EU that it should be considered an “adequate” country for data transfer purposes after it leaves the bloc next March. On Monday, the tribunal updated its initial ruling [October 2016 – see here & PI’s PR here] to say that laws protecting UK citizens’ data had not been followed in full until October 2016, not November 2015 as it had previously concluded. A government spokesperson, speaking on behalf of the Foreign Office and GCHQ, said the unlawful requests for citizens’ data referred to in the tribunal’s judgment on Monday had since been replaced and were no longer in force. [Financial Times, The Register, Silicon UK, BBC News, Bloomberg, Computing and The Times]

EU – Statewatch Launches New Observatory of Centralised Big Brother Database

This Observatory covers the so-called “interoperability” of EU JHA databases, which in reality will create a centralised EU state database covering all existing and future JHA databases by combining biometrics and personal data in a single search. The intention is to bring together in one place the biometrics of millions of people (non-EU citizens now and EU citizens later), directly linked to personal details in the Common Identity Repository. The European Data Protection Supervisor says that the measure would mark a “point of no return,” with all the inherent dangers that over time function creep will build up a highly detailed personal file attached to biometrics. For example, when the EU-PNR (Passenger Name Record) scheme comes into effect, it will contain details of all travellers into, out of and within the EU. [Statewatch (London)]

WW – Does Your Phone Secretly Listen to You? Two-Year Study Says No

It’s the smartphone conspiracy theory that just won’t go away: Many, many people are convinced that their phones are listening to their conversations to target them with ads. Vice recently fueled the paranoia with an article that declared “Your phone is listening and it’s not paranoia,“ a conclusion the author reached based on a 5-day experiment where he talked about “going back to uni” and “needing cheap shirts” in front of his phone and then saw ads for shirts and university classes on Facebook. Some computer science academics at Northeastern University had heard enough people talking about this technological myth that they decided to do a rigorous study to tackle it. They ran an experiment involving more than 17,000 of the most popular apps on Android to find out whether any of them were secretly using the phone’s mic to capture audio. The apps included those belonging to Facebook, as well as over 8,000 apps that send information to Facebook. They found no evidence of an app unexpectedly activating the microphone or sending audio out when not prompted to do so. They did, however, observe a strange practice: screenshots and video recordings of what people were doing in apps were being sent to third-party domains. In other words, until smartphone makers notify you when your screen is being recorded or give you the power to turn that ability off, you have a new thing to be paranoid about. The researchers will present their work at the Privacy Enhancing Technologies Symposium in Barcelona next month. [Gizmodo, Business Insider and BGR]

US Government Programs

CA – Canadian Pot Investors Are Being Banned From Entering the U.S.

Sam Znaimer is a Vancouver, Canada-based venture capitalist who has been investing in everything from tech to telecommunications for more than 30 years. Recently, he put more than $100,000 into legal American cannabis companies. In May, when he attempted to drive across the border, he was flagged for a secondary inspection and questioned for four hours. “To my shock and horror, I was told that I was deemed to be inadmissible to the United States because I was assisting and abetting in the illicit trafficking of drugs,” Znaimer said. “They never asked whether I had consumed marijuana, the only thing that they’re interested in is that I’ve been an investor in U.S.-based cannabis companies.” Marijuana in some form is legal in 30 states and Washington D.C., but it’s still outlawed by the U.S. federal government. American immigration attorney Len Saunders said he’s seen at least a dozen cases like Znaimer’s at the Blaine land crossing as well as airports in Vancouver and Edmonton over the past few months. In the prior 15 years that he’s practiced law on the border, he’d never seen one. CBS News See also: How the tech behind bitcoin could safeguard marijuana sales data

CA – OPC Warns Canadians to Keep Data Secure When Crossing the Border

The OPC is warning citizens to be aware that their digital devices can be searched — and civil liberties advocates say every precaution must be taken. The commissioner’s updated guidelines on privacy at airports and borders advise that officers on both sides of the border can search your devices and ask for passwords. The guidelines include new advice on searches conducted at “preclearance” sites, where U.S. border officials can do searches on Canadian ground, part of an act passed in late 2017. They come following the release of a new U.S. Customs and Border Protection directive [see PR here] on searches of electronic devices, which clarifies previous search rules. It also includes updates on electronic searches for people going back through Canadian customs. Meghan McDermott, staff counsel at the BC Civil Liberties Association, said that due to the new powers of customs officers at preclearance sites and more detailed abilities for U.S. border patrols, she recommends taking every precaution to ensure your data is secure and protected should a search take place: “The best guarantee is to not even bring your device at all, but if you do bring a device, you can use a burner phone [see here] or substitute. One of the other things people can do is to delete all the apps and documents and texts as well.” Toronto Star

US Legislation

US – California Enacts Comprehensive Privacy Rules

Effective January 1, 2020, organizations must comply with individual requests to provide categories of personal information collected and shared, stop selling personal information (services cannot be refused and prices cannot be increased as a result), delete personal information, and provide their information in a portable format; the Attorney General can impose civil penalties for violations and there is a private right of action for breaches resulting from reckless behavior. [AB 375 – The California Consumer Privacy Act of 2018 – State of California]

US – Tech Companies Cool Toward California Consumer Privacy Act

On the heels of the EU’s General Data Protection Regulation, California lawmakers passed a tough new privacy law, the California Consumer Privacy Act, which is designed to give consumers more control over their personal information. Under the act, which goes into effect Jan. 1, 2020, consumers will be able to request details on how their personally identifiable information (PII) is used and how it is collected. The question now for California—and those state governments watching—is whether companies will embrace the California Consumer Privacy Act or find loopholes to skirt the law. California’s tech companies, usually out on the front line of innovation and new ideas, are soundly against the state’s new privacy law and are expected to fight for changes before the law goes into effect. The bill was pushed through too quickly, they say, and it is too vague. Yet, supporters of the bill point out, these same companies already have groundwork in place because of GDPR. Many large companies still have a long way to go in finishing the technical aspects of GDPR, and now California companies need to be ready for CCPA a year and a half later. Security Boulevard See also: California Consumer Privacy Act: What you need to know now | Key Takeaways from the California Consumer Privacy Act of 2018 | Out of the pot and into the fire? What the heck happened in California?! | California’s privacy law a commendable step toward national standard | New California Consumer Privacy Act increases the risk of additional data breach class actions

Workplace Privacy

US – Judge: No ‘Risk of Harm’ From Fingerprint Scan Time Clocks

A federal judge [Manish S. Shah, U.S District Court for the Northern District of Illinois] has kicked back to Cook County court [Illinois] a class action lawsuit accusing manufacturer Rexnord of violating an Illinois state privacy law by requiring employees to scan their fingerprints when using employee punch clocks to track work hours. The underlying complaint was brought by former Rexnord Industries employee Salvador Aguilar, who said the company violated the Illinois Biometric Information Privacy Act [see here] through its use of a fingerprint-based time clock system [see Rexnord policy here]. According to Aguilar, he never signed a written release allowing the company to collect or store his fingerprint. Further, he said the company never fully explained why it was keeping his fingerprint data and how long it would retain the information. Although Aguilar and his attorneys originally filed his complaint in Cook County, Rexnord removed the suit to federal court. The company then moved to have it dismissed for failure to state a claim. However, in an opinion issued July 3, U.S. District Judge Manish Shah remanded [see here] the matter because he said the federal court lacked jurisdiction in the case. Cook County Record

 

+++

10-30 June 2018

Biometrics

US – Police Use of Facial Recognition With License Databases Spur Privacy Concerns

31 U.S. states now allow law-enforcement officials to access license photos to help identify potential suspects. Roughly one in every two American adults (117 million people) is in the facial-recognition networks used by law enforcement. Police in Maryland used a cutting-edge facial recognition program last week to track down a robbery suspect, marking one of the first such instances of the tactic to be made public. In the process of identifying a possible suspect, investigators said they fed an Instagram photo into the state’s vast facial recognition system, which quickly spit out the driver’s license photo of an individual who was then arrested. This digital-age crime-solving technique is at the center of a debate between privacy advocates and law-enforcement officials: Should police be able to use facial recognition software to search troves of driver’s license photos, many of which are images of people who have never been convicted of a crime? Wall Street Journal

US – 150,000 People Tell Amazon: Stop Selling Facial Recognition Tech to Police

On Monday afternoon, civil rights, religious, and community organizations [took] their demand that Amazon stop providing face surveillance technology to governments, including police departments, to the company’s headquarters in Seattle. The groups delivered over 150,000 petition signatures, a coalition letter signed by nearly 70 organizations representing communities nationwide, and a letter from Amazon shareholders. Monday’s action is a part of a nationwide campaign to stop the spread of face surveillance technology in government before it is unleashed in towns, cities, and states across the country. Documents obtained by the ACLU reveal Amazon is aggressively marketing its Rekognition face surveillance tool to law enforcement in the United States, and even helping agencies deploy it. Among other capabilities, the technology provides governments the ability to rewind backwards in time to see where we’ve been, who we’ve been with, and what we’ve been doing. [ACLU and at: Mashable, CNN Tech, Planet Biometrics and GeekWire]

US – School Facial Recognition System Sparks Privacy Concerns

New York’s Lockport City School District has committed to purchase the facial and object recognition software from Ontario-based firm SN Technologies, as part of a $3.8m security update using a grant provided by the 2014 Smart School Bond Act (SSBA). The district wants to be a model of security, but it has privacy and civil rights advocates up in arms. In a letter to the New York State Education Department (NYSED), the New York Civil Liberties Union protested the purchase, disputing the accuracy of facial recognition systems and voicing privacy concerns [NYCLU Blog post here]. Student images are part of students’ biometric records and classified as personally identifiable information under New York state law, said the NYCLU. It added that because student images would be stored for 60 days in the SN Technologies system, schools could use the images to analyse students’ movements and interactions. Lockport won’t be the first school district in the US to use facial recognition technology. Arkansas’ Magnolia School District is also spending $287,000 on similar systems, according to reports. [NakedSecurity and at: Security Info Watch and Lockport Union Sun & Journal]

WW – Biometric Driver ID Market Expected to Grow to US$ 25 Billion by 2022

Biometric driver identification systems are being used to prevent unauthorized access to vehicles, and the automobile industry is increasingly adopting them to secure cars. Manufacturers are offering various biometric authentication technologies, such as facial and fingerprint recognition, voice analysis, iris-based in-car biometrics and hand geometry. Biometric identification systems are also being developed with advanced features, such as behaviour-based algorithms, to ensure performance and safety. This Research Report Insights report discusses key prospects for growth of the global biometric driver identification system market during the forecast period, 2017-2022, offering pragmatic insights to lead market players towards devising and implementing informed strategies. True Industry News

US – FaceFirst Launches Biometric Shoplifter Alert System for Retailers

L.A.-based FaceFirst has launched a new facial recognition solution for security surveillance aimed at the retail market. Dubbed “Sentinel-IQ”, the platform is designed to identify known shoplifters and criminals, and to send an alert to administrators the moment such individuals are detected by the surveillance system. And it’s available in multiple deployment configurations including a SaaS-based setup that allows it to run on almost any HD camera with a compatible processor. Sentinel-IQ’s ability to identify criminals can only be as effective as the databases upon which it relies, and FaceFirst offers Watchlist as a Service solutions for this purpose. And the company has a track record, with its facial recognition surveillance technology having previously seen some heavy duty deployments including an airport security implementation in Panama and a CCTV deployment for police in the Indian city of Bengaluru. Now, with facial recognition becoming ever more mainstream, FaceFirst could find more interest than ever in this technology from the retail sector at which Sentinel-IQ is aimed. [Find Biometrics]

Canada

CA – Federal Bill Expands OPC Enforcement Powers

Bill C-413, amending PIPEDA in relation to the Office of the Privacy Commissioner of Canada’s enforcement abilities, had its first reading in the House of Commons. If passed, the OPC can order organizations that contravened PIPEDA to take any reasonable action to ensure compliance, and can decide not to conduct investigations where not necessary or reasonably practicable; fines up to $30 million can be imposed for knowing, reckless violations considering the nature and gravity of the violation, organization’s resources and size, number of affected individuals, and mitigation measures taken. [Bill C-413 – An Act to Amend PIPEDA (Compliance with Obligations) – Parliament of Canada Bill Status | Bill Text]

CA – Federal Government Launches Consultations on National Data Strategy

The Trudeau government will take fresh steps towards equipping the country for the rapidly advancing era of big data. The Minister of Innovation, Science, and Economic Development Navdeep Bains announced that the federal government would be launching a series of consultations regarding a national data strategy [see PR here]. According to the Ministry of Innovation, Science, and Economic Development, the consultations will take the form of several roundtable discussions [see here] that will be held over the summer in cities across Canada, with businesses, educational institutions, and private citizens invited to participate. Whether the target is businesses or government, however, not every privacy expert believes Canada’s current data standards are an issue. Halifax-based internet, technology, and privacy lawyer David Fraser called the data gathering policies employed by tech giants such as Google and Facebook nothing more than “simple reality. The reason Facebook has information on 28 million Canadians is because 28 million Canadians choose to use Facebook.” [ITWorld Canada | see also: MobileSyrup and iPolitics | The Globe and Mail | National Post]

CA – Apply Privacy Laws to Canadian Political Parties, Committee Recommends

The House of Commons’ ethics committee unanimously recommended sweeping changes to Canada’s privacy regime, including bringing in strict data protection rules similar to those recently adopted by the European Union. The committee’s recommendations [see report notice here & 56 pg PDF report here] can be grouped into three broad categories. First, they suggest applying Canada’s privacy laws to federal political parties, as well as increasing transparency around how political actors use big data to target voters or advertising. Second, the committee restated earlier recommendations to increase the power of the federal privacy commissioner, giving the office enforcement powers such as levying fines and seizing companies’ documents in the course of an investigation. Finally, and perhaps most consequentially, the committee recommended the Liberals urgently move to mirror the strict privacy framework recently adopted by the European Union, the General Data Protection Regulation (GDPR). Taken together, the measures would represent a significant shift in Canada’s privacy regime. [Toronto Star and at: CBC News, iPolitics, The Canadian Press (via NP) and The Globe and Mail]

CA – Poll: 72% Majority Want Stronger Privacy Rules for Political Parties

According to an Innovative Research Group poll, people in Canada overwhelmingly support greater privacy standards for political parties, which are currently not subject to any federal privacy legislation. Only 3% of those polled support the status quo policy of fewer privacy requirements for political parties. The law that governs the privacy practices of businesses in Canada (PIPEDA) [see here, OPC info here & wiki here], does not currently apply to political parties. Bill C-76 [the Elections Modernization Act — see PR here & Text here], the government’s current proposal to amend our elections laws, only proposes one change to this; requiring that parties publish a privacy policy. C-76 does not put any limitations or requirements for how individuals’ data is handled once collected. Key findings from the polling include: 1) A large majority – 72% – supported changing the law so that political parties follow the same privacy rules as private companies; 2) Only 3% of those polled supported the status quo policy of fewer restrictions for political parties; 3) Support for extending PIPEDA to political parties has broad support across partisans from all parties; and 4) 65% of respondents are concerned about the possibility of private companies collecting personal information about Canadians and using it in an attempt to influence the next election – Of those that followed the issue closely, 80% were concerned. [Open Media and also Elections Canada ‘blind’ to how political parties could use – or abuse – personal information and HuffPost Canada and The Globe and Mail]

CA – OPC Issues New PIPEDA Guidance on Inappropriate Data Practices

The OPC released a critical interpretation document [PR here] intended to guide how companies subject to PIPEDA will be allowed to collect, use and disclose personal information, as viewed from the perspective of the reasonable person. The guidance on inappropriate data practices is intended to offer interpretation on s. 5(3) of PIPEDA, which requires that organizations may collect, use or disclose personal information only for purposes that a “reasonable person would consider appropriate in the circumstances.” The OPC will begin to apply the guideline on July 1, 2018. Recognizing that any evaluation of an organization’s information practices under this subsection will necessarily require both contextual analysis and a review of the particular facts, the OPC has nonetheless established six “no-go zones” of behaviour that are completely offside PIPEDA and are essentially prohibited. The current no-go zones described in the guideline are as follows: 1) Collection, use or disclosure that is otherwise unlawful; 2) Profiling or categorization that leads to unfair, unethical or discriminatory treatment contrary to human rights law; 3) Collection, use or disclosure for purposes that are known or likely to cause significant harm to the individual; 4) Publishing personal information with the intended purpose of charging individuals for its removal; 5) Requiring passwords to social media accounts for the purposes of employee screening; and 6) Surveillance by an organization through audio or video functionality of the individual’s own device. [Canadian Lawyer Magazine]

CA – Canada’s Rape-Shield Law Can’t Be Used to Prevent an Accused from Mounting Defence, Ont. Court Rules

Canada’s so-called rape-shield law, which aims to protect sexual-assault complainants from unfair and irrelevant scrutiny of their sex lives, cannot be used to prevent an accused from mounting a reasonable defence, Ontario’s top court ruled [see R. v. R.V. here]. The court acknowledged the critical importance of protecting complainants from questioning about their sexual activity when that activity does not form the subject matter of the charge. “Notwithstanding these powerful considerations, there are times when such questioning must be permitted,” the Appeal Court said. “This is one of those cases where a proper balancing requires that such questioning be permitted.” In October 2016, Judge Robert Gee convicted R.V. after upholding the earlier ruling as binding on him. Both those decisions were in error, the Appeal Court said. The higher court said the pre-trial judge was wrong in finding that R.V.’s attempt to question the teen amounted to a “fishing expedition” despite knowing exactly what the cross-examination would have entailed. [CBC]

CA – OIPC ON Annual Report Celebrates 30 Years

2017 was a milestone year for the OIPC Ontario, which proudly celebrated 30 years of service on behalf of all Ontarians. The OIPC released its 2017 Annual Report, Thirty Years of Access and Privacy Service [see PR here], in which the OIPC calls for a number of legislative changes to enhance both access to information and protection of privacy in Ontario. Among the recommendations is a call to expand the IPC’s oversight to include Ontario’s political parties. Political parties collect and use personal information to target individuals in specific and unique ways. These increasingly sophisticated big data practices raise new privacy and ethical concerns and the need for greater transparency is evident. Subjecting Ontario’s political parties to privacy regulation and oversight will help to address the privacy, ethical and security risks associated with how political parties collect and use personal information. The OIPC also tabled the following recommendations in this year’s report: 1) Enact legislation that provides a strong, government-wide big data framework; 2) Ensure smart city initiatives are privacy protective; 3) Implement MOU for police services who adopt the use of the Philadelphia Model; and 4) Amend Ontario’s access laws to affirm IPC’s power to compel the production of records [IPC and at: The Canadian Press (via CTV)]

CA – OIPC SK Annual Report Emphasizes Privacy Breach Risk Reduction

“Reducing the Risk” is the title of OIPC SK Commissioner’s 2017-2018 annual report [see PR here]. In the report, Ron Kruzeniski [IPC] reflects on the progress and accomplishments of his team during the past year, hopes for the upcoming year and provides recommendations to reduce the risk of future privacy breaches. Recommendations for organizations to reduce risk were broken down into four sections [Prevention (p.14), Specific Controls (p.15), Policies (p.16) & Monitoring and Taking Action (p.18)] and include things like mandatory annual privacy training for all staff, and for staff to sign confidentiality agreements at least once a year. The report urges people to use complex passwords, not to let co-workers use their computers if it means they will have access to information they shouldn’t, and to use email encryption. The office has experienced an increase in the number of reviews, investigations and consultations, resulting in more files being opened [from 182 in 2014-2015 to 345 in 2017-2018]. Kruzeniski also repeated the office’s recommendations from last year’s report [see here] to make amendments to The Health Information Protection Act, which the Ministry of Health has yet to implement. [Leader Post and at: CBC News]

CA – CSIS Risks Privacy of Innocent People Despite Scathing Court Ruling

In a report made public, the Security Intelligence Review Committee (SIRC), a federal spy watchdog, said the Canadian Security Intelligence Service (CSIS) has failed to ensure it doesn’t illegally hold on to sensitive information about innocent people. It also expresses concern that CSIS lacks the ability to make the necessary changes, two years after a scathing court ruling about its practices [2017-2018 SIRC Annual Report – see PR here]. An October 2016 Federal Court decision [see redacted Ruling here & Summary here] said CSIS broke the law by keeping and analyzing electronic data about people who were not actually under investigation. The report noted that CSIS has since destroyed most of the metadata in question. But it found the spy service was “still dealing with the implications” of the court decision when it comes to handling information about third parties. In a statement [see here], Public Safety Minister Ralph Goodale said he takes the matter “very seriously,” and a full review of such cases is underway. [Penticton Herald and at: The Globe and Mail and CBC News]

CA – OPC Funding Research on Public Wi-Fi ‘Privacy Leakage’, Smart Cities

The office of Canada’s privacy commissioner has announced it will fund research into privacy risks related to public Wi-Fi hotspots through its 2018 to 2019 contributions program. The project will assess privacy policies, measure personal information leakage to hotspot operators, and identify issues such as potential attack opportunities for malicious users. Research and analysis from the report will culminate in a public hotspot privacy report card and presentation of recommendations. Eight other projects will receive funding, as well. Among them is a project that examines the potential privacy impact for children when parents share their personal information on social networks. There are also studies on the privacy implications of smart cities in Canada, as well as children’s smart toys. Funding for the projects ranges from $21,155 to $74,110 CAD. betakit

CA – OIPC AB Issues Guidelines in Light of Post Election Paper-Shredding

AB OIPC wants to see more in-depth training for government workers who deal with freedom of information and privacy requests. The office also wants the government to close a loophole that allows some public bodies to avoid being subject to the Alberta government’s records management program. The recommendations are contained in two new reports, released Tuesday. The first report [20 pg PDF], written by senior information and privacy manager Chris Stinner, examined the government’s FOIP request tracking system. The office’s second investigation [18 pg PDF – by senior information and privacy manager Elaine LeBuke] centred on two access to information requests made to the Balancing Pool in 2016 and 2017. [Edmonton Journal and at: Alberta OIPC]

CA – Liberal Backbencher Tables Bill to Give Privacy Commissioner More Power

On June 20, Liberal backbencher Nathaniel Erskine-Smith introduced a bill [Bill C-413 – see here & Text here] to give “new powers” to Canada’s privacy commissioner allowing the office to hold social media companies and others to account for breaking the law. [The bill aims to] allow the commissioner to make orders, impose fines, conduct audits and undergo investigations into suspected breaches of the Personal Information Protection and Electronic Documents Act. Under his proposed legislation, when companies are found in violation of the law and aren’t taking steps to comply with it thereafter, hefty financial penalties would ensue. Fines could range from $15 million to $30 million, depending on the offence. Unlike the EU’s GDPR, which encompasses acts of negligence, Erskine-Smith’s bill only captures “intentional conduct”: groups that have acted recklessly towards the law. Some provincial privacy commissioners technically have “more power” than their federal counterpart. For instance, B.C.’s representative Michael McEvoy has the authority to make orders and issue fines of up to $100,000. To that end, a company operating in B.C. would be subject to stronger privacy regulations than a company operating in Ontario. Erskine-Smith’s bill was adopted following question period and will be addressed when the House returns in the fall. [iPolitics]

CA – Complainants in Intimate Images Cases Don’t Get Automatic Publication Ban

In a recent Nova Scotia Supreme Court advisory, there is a stipulation stating adults will not be able to count on a publication ban when they come forward in cases of cyberbullying and the non-consensual sharing of intimate images. On June 22, the Supreme Court issued advice for lawyers on how they should handle the relatively new Intimate Images and Cyber-Protection Act. Adults will be able to request a publication ban on their name, but will have to go through an application process. In 2017, the Intimate Images and Cyber-Protection Act replaced the Cyber-Safety Act, which was deemed unconstitutional. Though it was released last year, the new law is not in effect. In the meantime, the Supreme Court has released the advisory to instruct lawyers as to how to implement the law. The stipulation about adults having their names used as the default position, while minors remain unnamed, is bringing up concerns. [CBC News]

Consumer

CA – Common Sense Finds Social Media Privacy Matters To Teens

New research from nonprofit org Common Sense Media shows that nine out of 10 teens think it’s important that sites clearly label what data they collect and how it will be used. The research follows recent blunders by big social media companies that have unnerved young users and their parents, including the scandal surrounding political consulting firm Cambridge Analytica harvesting raw data from up to 87 million Facebook profiles unbeknownst to the users. The majority, 69% of teens and 77% of parents, responded that it is “extremely important” for sites to ask permission before selling or sharing their personal information. The vast majority, 97% of parents and 93% of teens, also agree that it is at the very least, moderately important. Very few people surveyed think that sites do a good job of explaining what they do with user’s information. Only 36% of teenagers and 25% of parents agree that social networking sites and apps actually do a good job of explaining what they do with users’ data. On top of that, most parents and teens are concerned about ad targeting by social media sites with 82% of parents and 68% of teens saying they are at least “moderately” worried that those sites already use their data to allow advertisers to target them with ads. Many of those surveyed have already taken action with 79% of teens saying they have changed their privacy settings on a social networking site to limit what they share with others. Parents are also concerned, with 86% changing their own privacy settings. Despite these concerns, 30% of parents and 57% of teens reported never reading the terms of service, with 66% of parents and 65% of teens saying it’s because they are not interested in what those privacy terms have to say. Parents of teens are far more concerned about bots on social media, with 85% saying that they are moderately to extremely concerned about the fake accounts’ influence online. Teens are less concerned, with 72% reporting they are moderately to extremely concerned. This new data also comes on the heels of GDPR rolling out in Europe on May 25, only a few days after the survey was completed. One of the changes with the new EU data privacy and security legislation is that countries can choose at what age someone is considered a child online. In Italy, Germany and Ireland, for example, the cut-off ranges from ages 13 to 16. A number of social apps have already responded to the changes, including WhatsApp which changed its required age of use to 16 all across Europe. [kidscreen]

CA – Canadian Businesses Not Guarding Private Information Carefully: Survey

The results of a government-commissioned survey reveal that a staggering 94% of Canadian companies now collect basic contact information like names, phone numbers and email addresses from their customers. Opinions, evaluations and comments are collected by 29% of businesses, financial information like credit card numbers by 25%, and identity documents (even social insurance numbers) by 21%; 15% track “purchasing habits.” Once they have it in hand, 73% of businesses store this information on-site in electronic form, which the survey notes is “a shift from previous years” when storing information on paper was the most popular method. The research was conducted late last fall by Phoenix Strategic Perspectives, and involved 1,014 Canadian businesses, the vast majority of which were small or medium-sized. The survey was commissioned by the Office of the Privacy Commissioner of Canada. There was a mixture of good and bad news when it came to the security of customers’ personal data. [Global News]

Encryption

CA – Government of Canada Mandates HTTPS, HSTS

Effective June 27, 2018, all Canadian government websites must implement HTTPS for web connections. The government of Canada has issued an Information Technology Policy Implementation Notice (ITPIN) directing all “departments” to implement Transport Layer Security and migrate to HTTPS. Departments, agencies and organizations in the Canadian government that are not subject to the Policy on Management of Information Technology are advised to abide by the ITPIN. Canadian departments are to implement safeguards that ensure their services are only offered via a secure connection. [Hashed Out]
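The HTTPS migration described above is typically paired with HTTP Strict Transport Security (HSTS), delivered via the `Strict-Transport-Security` response header. As a minimal sketch (the thresholds and function names are illustrative assumptions, not requirements quoted from the ITPIN), here is a Python parser that checks a header value against commonly recommended settings:

```python
# Sketch: parse a Strict-Transport-Security header value and check it
# against commonly recommended settings (illustrative thresholds only).

def parse_hsts(header: str) -> dict:
    """Parse an HSTS header value into a policy dict."""
    policy = {"max_age": None, "include_subdomains": False, "preload": False}
    for directive in header.split(";"):
        directive = directive.strip().lower()
        if directive.startswith("max-age="):
            policy["max_age"] = int(directive.split("=", 1)[1])
        elif directive == "includesubdomains":
            policy["include_subdomains"] = True
        elif directive == "preload":
            policy["preload"] = True
    return policy

def hsts_ok(header: str, min_age: int = 31536000) -> bool:
    """True if max-age is at least one year and subdomains are covered."""
    p = parse_hsts(header)
    return p["max_age"] is not None and p["max_age"] >= min_age and p["include_subdomains"]

print(hsts_ok("max-age=31536000; includeSubDomains; preload"))  # True
print(hsts_ok("max-age=300"))  # False: short max-age, no subdomain coverage
```

A real compliance check would also verify that plain-HTTP requests redirect to HTTPS before the header is evaluated, since browsers ignore HSTS headers served over insecure connections.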

EU Developments

EU – LIBE Wants Privacy Shield Axed by September If US Doesn’t Act

Yet more pressure on the precariously placed EU-US Privacy Shield [see here, here & wiki here]: The European Union parliament’s civil liberties committee [LIBE – here] has called for the data transfer arrangement to be suspended by September 1 unless the US comes into full compliance. Though the committee has no power to suspend the arrangement itself, it has amped up the political pressure on the EU’s executive body, the European Commission. In a vote late yesterday the LIBE committee agreed [see PR here] that the mechanism as it is currently being applied does not provide adequate protection for EU citizens’ personal information. The LIBE committee says it wants US authorities to act upon privacy scandals such as the Facebook Cambridge Analytica debacle without delay and, if needed, remove companies that have misused personal data from the Privacy Shield list. MEPs also want EU authorities to investigate such cases and suspend or ban data transfers under the Privacy Shield where appropriate. The EU parliament as a whole is also due to vote on the committee’s text on Privacy Shield next month, which, if members back the LIBE position, would place further pressure on the EC to act, though only a legal decision invalidating the arrangement can compel action. [TechCrunch and at: Out-Law (Pinsent Masons), ITPro, EURACTIV and The Register]

EU – Parliament Advocates Blockchain Ledger Technology

The EU Parliament issued an opinion on blockchain technology. Blockchain technology shifts control over daily interactions with technology to users, provides transparency through its immutability, and permits decoupling of user identities from tracking the movement of goods. Issues to consider include that, with enough effort, it can still be possible to connect transactions to particular parties, and that the ledger’s immutability may compromise a user’s right to be forgotten. [European Parliament – How Blockchain Technology Could Change Our Lives: Report | Press Release]

UK – ICO Guidance on Data Protection by Design and Default

The UK’s Information Commissioner’s Office issued guidance on data protection by design and default under the GDPR. Data protection by design and default should begin at the time of the determination of the means of processing, the time of processing, and the initial phase of any system, service, product or process; organisations should make data protection an essential part of the core functionality of processing systems and services, practice data minimisation, and provide individuals with tools to determine how their data is used and whether the organisation properly enforces its policies. [UK ICO – Data Protection by Design and Default]

UK – ICO Seeks Views on How Kid-Friendly Websites Should Be Designed

The UK Information Commissioner’s Office is crowdsourcing ideas for the code that will govern how websites and apps aimed at under-16s are designed [see Commissioner’s Blog post here]. The ICO, which must publish a statutory code on age-appropriate design as part of the Data Protection Act – has today acknowledged this fine balancing act as it called for opinions on the code [see here]. The ICO is seeking views [consultation closes September 19] on how websites and apps should be designed to take into account children’s rights and needs, from industry, online service providers, academics and children’s advocacy services. Separately, the ICO said it plans to run a direct consultation with children, parents and guardians – an effort to emphasise the importance it is putting on the opinions of those who are going to be affected by the code. [The Register]

UK – ICO Penalizes Failure to Protect Against Ransomware

The UK Information Commissioner’s Office issued a monetary penalty notice against the British and Foreign Bible Society for violations of the Data Protection Act. The Society failed to take preventative measures to ensure the security of the personal data of its supporters and protect its network from ransomware attacks, including by changing default credentials, restricting access rights, and using network segmentation; the unauthorized access to sensitive information could be used for fraudulent activities and identity theft. [ICO UK – Monetary Penalty Notice – The Bible Society]

Facts & Stats

CA – Canada Revenue Agency Logs 2,338 Privacy Breaches in 2 Years

The personal, confidential information of over 80,000 individual Canadians held by the Canada Revenue Agency may have been accessed without authorization over the last 21 months, according to government documents made public. But while the number of potential privacy breaches may be eye-popping, the CRA is downplaying the seriousness of most of them. Government documents tabled in the House of Commons outline privacy breaches across all government departments and agencies since mid-September 2016. The CRA has experienced the most privacy breaches, recording a total of 2,338 in the 21-month time span. There have been dozens of cases involving unauthorized access over the last 21 months, and 24 of them were considered serious enough to notify the Office of the Privacy Commissioner. [Global News and at: Narcity]

WW – Data Breaches Decline in 2018

According to Risk Based Security’s Q1 2018 Data Breach QuickView Report [see PR here, see 30 pg PDF here or download here], following year-over-year increases in the number of publicly reported data breaches, the first three months of 2018 saw a respectable decline. But while the numbers look good, they may reflect a change in criminal targeting and goals rather than an indication that cyber-criminals are waving white flags. According to the report, the number of breaches disclosed in the first three months of this year declined to 686, compared to 1,444 breaches reported in the same year-ago period. Still, the number of records exposed was high: more than 1.4 billion. It seems, for the period, a shift from targeting files for theft to mining cryptocurrencies could explain the turn of events. [Security Boulevard]

Finance

US – Free Credit Freezes Are Coming

Thanks to a new federal law [Economic Growth, Regulatory Relief, and Consumer Protection Act – signed by POTUS May 24 – see S.2155 here & wiki here], soon you can get free credit freezes and year-long fraud alerts. When the law takes effect in September, Equifax, Experian and TransUnion must each set up a webpage for requesting fraud alerts and credit freezes. The FTC will also post links to those webpages on IdentityTheft.gov. And if you’re in the military, there’s more. Within a year, credit reporting agencies must offer free electronic credit monitoring to all active duty military. Here’s what to look forward to when the law takes effect on September 21st. [FTC and at: Cuna.org and All Things Finreg]

CA – Class-Action Lawsuits Filed Against Bank of Montreal, CIBC’s Simplii

Law firms Siskinds LLP and JSS Barristers say [see PR here] they have filed in the Ontario Superior Court of Justice proposed class-action lawsuits against Bank of Montreal and CIBC’s direct banking division Simplii Financial over recently disclosed cybersecurity breaches impacting up to 90,000 customers. They are alleging the institutions failed to establish robust security measures to protect clients’ sensitive information. Simplii and BMO warned in May that “fraudsters” may have accessed certain personal and financial information of some of its customers, up to 40,000 clients and 50,000 clients, respectively. [CTV News]

FOI

CA – Best Practices: Calculating FOI Request Fees in Ontario

The Ontario OIPC issued guidance on calculating fees for access requests under Ontario’s access to information legislation.

The IPC outlined when entities can charge fees for responding to access requests, including manual record searches, preparing records for disclosure, shipping costs, costs for locating and copying records, photocopies, and CD-ROM records; fees cannot be charged for associated legal costs, third party processing costs, registered mail, employee overtime in responding to requests, or restoring records to their original state. [IPC ON – Fees, Fee Estimates and Fee Waivers – June 2018]

Genetics

WW – Investigative Strategy of Police Prompts Debate on DNA Privacy Rights

A new investigative technique [genetic genealogy] that American police have been using to comb through the genetic family trees of potential suspects in unsolved crimes has prompted debate in Canada about privacy rights. Josh Paterson, executive director for the B.C. Civil Liberties Association, warned that positive results don’t necessarily justify the process. “The fact of one story or a handful of stories seemingly going in a positive way doesn’t take away our concern for the potential of misuse for these kinds of tools,” he said. Even in cases where a website warns users that their genetic information may be shared with police, Paterson said, it means someone’s third cousin may be consenting on their behalf. In Canada, there are strict rules for good reason around the use of genetic information in the National DNA Data Bank, which limits samples to individuals convicted of certain crimes and regulates their use by police. In contrast, he said American detectives appear to be fishing for suspects through genealogy sites that store genetic information. “They’re basically throwing a net in the sea and asking these companies what they might come back with,” he said. On the other hand, Eike-Henner Kluge, a professor of philosophy at the University of Victoria with an interest in biomedical and information ethics, said there are cases where privacy rights can be breached if there’s a threat of harm to others, and unsolved murders may be one of them. “Any right is subject to the equal and competing rights of others,” Kluge said in an email. “This is also recognized in the classic legal statement, ‘Your right to swing your arms ends just where the other man’s nose begins.’” It’s unclear if Canadian law enforcement are using the same techniques. [The Star and at: Infosurhoy, GenomeWeb and Connecticut Law Tribune. Additional coverage at: Science (Vol. 360, Issue 6393, pp. 1078-1079), Science News, Here & Now (Audio – WBUR) and MediaPost Communications]

Health / Medical

CA – Ontario to Let Companies Access Database of Patient Health Records

The government of Ontario announced Project Spark, an initiative to make healthcare data more accessible to healthcare professionals, researchers, companies, and the people of Ontario themselves. So there’s reason to be excited, and a bit nervous. The government of Ontario has accumulated a vast, central database of its citizens’ electronic health records that in other healthcare systems might be fragmented among various doctor’s offices, health maintenance organizations, and medical labs. While the people of Ontario won’t have to contribute additional data to Project Spark — the government isn’t going to come knocking with cheek swabs for genetic tests — it does turn them and their medical histories into commodities. Commodities that could bring about medical breakthroughs but could also share more personal details than they may want to give. If Project Spark, or any other holder of big data repositories, is about to open for business, it needs to take extra care in advance. Ontario only gets one shot to do this right. Project Spark will have to invest in the right kind of digital infrastructure before kicking into high gear. [Futurism and at: QUARTZ and Canadian Reviewer]

CA – Health Information Breach Notification Obligations under Alberta’s Health Information Act

Commencing August 31, 2018, Alberta’s Health Information Act will require custodians of personal health information to give notice of any health information security breach that presents a risk of harm to an individual. The security breach obligations under the HIA join an increasing number of Canadian statutory regimes that impose information security breach reporting and notification obligations. Custodians subject to the HIA should assess their readiness to comply with the security breach obligations, and make appropriate changes to prepare for compliance. [Borden Ladner Gervais, Lexology]

US – Walmart Wins Patent for Medical Records Stored on Biometric Blockchain

Walmart has been awarded a patent for a system that would store a person’s medical information in a blockchain database and allow first responders to retrieve it in the event of an emergency. The patent, issued by the U.S. Patent and Trademark Office, describes three key parts to the system: a wearable device in which the blockchain is stored; a biometric scanner for an individual’s biometric signature; and an RFID scanner to scan the wearable device, ideally a bracelet or wrist band. According to the patent, first responders would scan the device to access an encrypted private key. They would decrypt that using the biometric identifier and, with a second public key, retrieve the victim’s records. Walmart has been revving up its focus on healthcare. The retail giant has touted the idea of “optimized networks” to improve consumer price and cost transparency while steering patients to providers with better performance ratings. [Planet Biometrics]

CA – OHIP Billings Should Not Be Public Because ‘Doctors Are Different’

The names of high-billing doctors should not be made public, lawyers for the Ontario Medical Association and two other doctor groups have told the Ontario Court of Appeal. “Doctors are different. Why are they different? Because they do not have a contract with government,” lawyer Linda Galessiere, acting for a group of physicians described as “affected third-party doctors,” argued. Others paid from the public purse — including lawyers, consultants and contractors — have actual contracts with government, she said, but with doctors, it is simply legislation that mandates their OHIP payments come from the public treasury, not contracts, Galessiere argued. The contract between the government and the Ontario Medical Association (OMA) is only about the value of specific fees doctors can charge OHIP, she said. Galessiere said physician-identified billings are public in British Columbia, Manitoba and New Brunswick because governments in those provinces passed legislation forcing disclosure. She said that if the Ontario government wants disclosure, then it can also introduce legislation. The doctors and the OMA are appealing a ruling made a year ago by the Ontario Divisional Court that upheld an order by the Information and Privacy Commissioner of Ontario (IPC) to release physician-identified billings of the 100 highest-paid doctors. [The Star and see IPC Blog here and Order here]

US – OCR to Distribute Enforcement Funds to Victims of HIPAA Violations

OCR will seek comments on establishing a way to distribute funds collected from Health Insurance Portability and Accountability Act (HIPAA) enforcement actions to individuals harmed by the underlying incident [see here]. This would fulfill a long-awaited and overdue requirement included in the Health Information Technology for Economic and Clinical Health (HITECH) Act, which required OCR to issue regulations about this methodology within three years of HITECH’s 2009 enactment date. This advanced notice of proposed rulemaking will be released sometime in November 2018. [Data Privacy Monitor]

Horror Stories

US – Equifax Agrees to Cybersecurity Requirements; Former Employee Charged with Insider Trading

Equifax has agreed to comply with security requirements put in place by financial regulators from eight US states. The requirements are a response to the massive data breach that compromised information belonging to more than 147 million individuals. In a related story, a former Equifax employee has been charged with insider trading. Sudhakar Reddy Bonthu, who was one of the Equifax employees orchestrating the company’s public response to the breach, allegedly profited from making trades prior to the breach’s disclosure. [NY Times: 8 States Impose New Rules on Equifax After Data Breach | SC Magazine.com: Equifax agrees to cybersecurity regulations set forth by 8 U.S. States | Reuters: U.S. charges former Equifax manager with insider trading | CNet: Former Equifax exec charged with insider trading following data breach | Justice.gov: Charges filed against second defendant for insider trading related to the Equifax data breach]

EU – Irish DPA Finds Against Yahoo in Massive Email Breach

The Irish Data Protection Commissioner has found against Yahoo for a 2014 data breach that affected 500m people and 39m EU citizens. However, the watchdog’s offices said that it will issue no fine or other punitive measure, largely because the events took place before the introduction of the GDPR, which came into force last month. Instead, the DPC has ordered Yahoo to update its data processing systems. Yahoo’s European headquarters are in Dublin. The breach was reported to the DPC in September 2016. It involved the unauthorised copying and taking, “by one or more third parties”, of material contained in approximately 500 million user accounts from Yahoo in 2014. It is the largest breach which has ever been notified to and investigated by the DPC. [Independent and at: Bloomberg, Reuters and SiliconRepublic]

CA – Data Breach Defendant Must Hand Over Computer Forensics Reports: Court

Casino Rama, located near Lake Simcoe, had its computer system hacked in 2016, when a significant amount of information on vendors, employees and customers was stolen. Now facing a class-action lawsuit over the breach, it has lost its bid to prevent plaintiffs from getting their hands on part of a computer forensics investigation report. The casino claimed the report was protected by litigation privilege or solicitor-client privilege. Justice Benjamin Glustein of the Ontario Superior Court of Justice ruled [June 6, 2018 – see 10 pg PDF here] that if the computer forensics reports were subject to solicitor-client privilege or litigation privilege, “then the defendants waived privilege to the extent that the Mandiant Reports address the size and scope of the prospective class. A party cannot disclose and rely on certain information obtained from a privileged source and then seek to prevent disclosure of the privileged information relevant to that issue.” [Canadian Underwriter]

Identity Issues

AU – Australians to Soon Get myGovID Single Government Identity

The first of several pilot programs using a beta version of myGovID will begin in October, the Australian government confirmed. In a statement, Minister for Human Services and Minister Assisting the Prime Minister for Digital Transformation Michael Keenan said having 30 different log-ins for government services is “not good enough”, and it is anticipated the single log-in will allow Australians to access almost all government services by 2025. “Think of it as a 100-point digital ID check that will unlock access to almost any government agency through a single portal such as a myGov account,” he said. “The old ways of doing things, like forcing our customers to do business with us over the counter, must be re-imagined and refined.” Citizens will need to establish a digital identity before being able to use it across services, the minister explained. The first pilot comes after the Digital Transformation Agency (DTA) revealed last month it had pencilled in the date for delivery of its first Govpass pilot. [ZDNet]

CA – Mogo Survey: 86% Believe Risk of Identity Fraud Is Growing

A recent survey conducted by Maru/Blu on behalf of Mogo Finance Technology Inc. [here] revealed that 86% of Canadians believe they are increasingly at risk of identity theft and identity fraud – yet only 24% of respondents currently have identity fraud protection. The survey, which included more than 1,500 participants, revealed the following: 1) 86% of Canadians believe that in today’s digital world, they are increasingly at risk of identity theft and identity fraud; 2) while Canadians know the risk, only 24% have some sort of identity fraud protection solution; 3) 85% of Canadians believe that if they are a victim of identity theft or fraud, it will have an impact on their financial life; and 4) 35% of Canadians know someone who has been a victim of identity fraud. [PR Newswire]

EU – Plans to Include Fingerprints in Identity Cards Unjustified and Unnecessary

The European Commission has published a proposal calling for the mandatory inclusion of biometrics (two fingerprints and a facial image) in all EU Member States’ identity cards. The demands to include fingerprints are an unnecessary and unjustified infringement on the right to privacy of almost 85% of EU citizens, as explained in an analysis published by civil liberties organisation Statewatch. The foreseen rules would not oblige Member States to introduce any kind of national identity card and do not require the establishment of any kind of database, either at EU or national level. However, national governments may well take the opportunity provided by the introduction of biometrics into ID cards to establish national databases. An appetite may then develop for linking them up under the EU’s ongoing “interoperability” initiative, which foresees bringing together all existing and future EU databases and the establishment of a giant, EU-level ‘Central Identity Repository’ which, in its first phase, will hold the biometric and biographical data of almost all “third-country nationals” who enter the EU. Proposals currently under discussion foresee this being extended in the future to include national databases holding information on EU citizens [see 12 pg PDF here]. [Statewatch]

WW – ID Management Study Finds Unfettered Access to Sensitive Information

Varonis published a data risk report on 130 organizations that were assessed to help them understand where sensitive and classified data reside in their IT environments, and how much is exposed and vulnerable. Assessments were performed in more than 50 countries and across 30+ industries, including insurance, financial services, healthcare, pharma and biotech, manufacturing, retail, utilities and energy, construction, IT and computer software, education, and local, state and regional governments. The study provides recommendations to mitigate key data exposure issues, namely stale user accounts (spot inactive users and govern active user accounts), toxic permissions (remove global access and restrict user access to relevant data), and password issues (set expiration dates for passwords and use multifactor authentication). [Data Under Attack – 2018 Global Data Risk Report – Varonis]
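The report's first recommendation, finding and governing stale user accounts, amounts to flagging accounts whose last login falls outside a retention window. A minimal Python sketch, where the 90-day threshold and the record layout are illustrative assumptions rather than details taken from the Varonis report:

```python
from datetime import date, timedelta

# Sketch: flag accounts with no login inside the retention window.
# The 90-day threshold and the record shape are illustrative assumptions.

def stale_accounts(accounts, today, max_idle_days=90):
    """Return usernames whose last login is older than max_idle_days."""
    cutoff = today - timedelta(days=max_idle_days)
    return [a["user"] for a in accounts
            if a["last_login"] is None or a["last_login"] < cutoff]

accounts = [
    {"user": "alice", "last_login": date(2018, 6, 20)},
    {"user": "bob",   "last_login": date(2017, 11, 2)},   # idle well past 90 days
    {"user": "carol", "last_login": None},                # never logged in
]
print(stale_accounts(accounts, today=date(2018, 7, 1)))  # ['bob', 'carol']
```

In practice the account list would come from a directory service rather than an in-memory list, and flagged accounts would be disabled or escalated for review rather than merely printed.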

Law Enforcement

CA – Report Calls for Changes to Edmonton Police’s Use of Street Checks

A report examining the Edmonton Police Service’s use of street checks has recommended the force increase its diversity, monitor for inappropriate stops and initiate a public dialogue around the practice sometimes referred to as carding. The 300-plus page report was released by the Edmonton Police Commission, which oversees the Edmonton police force and is comprised of city councillors and members of the community. The commission announced the review in July, shortly after Black Lives Matter Edmonton obtained street-check data from the police force through a Freedom of Information request. The group released a report that found people who were black or Indigenous were more likely to be subjected to street checks than individuals who were white. [The Globe and Mail]

CA – Ontario Cops Push Access to Private Surveillance Footage

The St. Thomas police service is among a growing number of Ontario police forces that want to tap into home and business video surveillance systems to help fight crime. Police are encouraging home and business owners in St. Thomas to voluntarily identify their video surveillance locations in the community, so they can be mapped and stored on an internal database. Homeowners and businesses can register [see here] their information on the St. Thomas police website. And if there’s a crime in their community, police may come and ask if they can view their video. While police think it could help solve and deter crimes in the community, the trend disturbs former Ontario privacy commissioner Ann Cavoukian. She is worried about homeowners handing over videos that could include images of their neighbours and others who have no idea the information is being shared. She is also concerned about how easily police could obtain the information. [CBC News]

Online Privacy

CA – NEB Plan to Monitor Social Media En Masse “Alarming”

The National Energy Board’s plan to hire a security firm to monitor “vast amounts” of social media chatter may seem like the simple aggregation of publicly available data but actually raises a host of privacy concerns, says a prominent digital security and human rights researcher. Ron Deibert, director of the Citizen Lab at the University of Toronto’s Munk School of Global Affairs, has written an open letter asking the Calgary-based NEB to clarify exactly why it wants to accrue all this data and how it plans to use and share the information. In a recently posted request for information, the NEB — which is responsible for regulating pipelines and other energy infrastructure in Canada — says it is only looking to monitor publicly available data in accordance with existing privacy laws in order to identify potential risks or threats. But Deibert says many Canadians don’t realize just how much of their information could be considered public and the extent to which their online activity can be tracked. “Many of these companies have technologies and tools that enable them to gather up a lot of information that they would consider to be public information but is much deeper and far more revealing than what is posted publicly on a Facebook page,” he said. Social media platforms are constantly changing, he added, and it’s not always clear what defines public versus private data. The NEB has received Deibert’s letter and “will provide a response in due course.” [CBC News]

WW – ICANN Appeals Court Decision to Minimize WHOIS Data Collection

ICANN has appealed [see PR here & 37 pg PDF Text here] a decision made by a German court last month over the information that should be collected on domain registrants. The German court’s decision [see 6 pg PDF here] was the latest development in a situation that has left many registrars unclear on what approach to take on WHOIS data in order to comply with the EU’s General Data Protection Regulation. The court ruled that while EPAG [located in Bonn, here], which is a subsidiary of the world’s second largest domain registrar, Tucows, has a contractual obligation to collect data to prevent misuse, it is not required to collect the additional data ICANN wants it to collect, e.g. administrative and technical contact data. ICANN argues that while the court ruled that EPAG was only required to collect data on the domain holder, it did not rule on whether collecting technical and administrative contact data contravened the GDPR. It is asking the court to order EPAG to collect the additional data requested or face a penalty of 250,000 EUR. [Indivigital and at: The Register, World Trademark Review and Domain Name Wire]

EU – German Authorities: Tracking and Profiling Cookies Require Opt-In Consent

The Conference of German Data Protection Authorities released a position paper on the applicability of the German Telemedia Act (TMA) after 25 May 2018. The Position Paper clearly states that tracking and profiling cookies now require informed prior opt-in consent. The Position Paper has received a great deal of criticism. [Technology Law Dispatch]

WW – Facebook Quiz App Leaked Data on ~120M Users For Years

Facebook’s historical app audit [see Zuckerberg’s announcement here] conducted in the wake of the Cambridge Analytica data misuse scandal has already suspended around 200 apps. But you do have to question how much the audit exercise is, first and foremost, intended to function as PR damage limitation for Facebook’s brand — given the company’s relaxed response to a data abuse report concerning a quiz app [NameTests.com] with ~120M monthly users, which it received right in the midst of the Cambridge Analytica scandal. Because despite Facebook being alerted about the risk posed by the leaky quiz apps in late April — via its own data abuse bug bounty program — they were still live on its platform a month later. Self-styled “hacker” Inti De Ceukelaire went hunting for data abusers on Facebook’s platform after the company announced a data abuse bounty on April 10 [read De Ceukelaire’s account here] and quickly realized the company was exposing Facebook users’ data to “any third-party that requested it”. NameTests was displaying the quiz taker’s personal data (such as full name, location, age, birthday) in a JavaScript file — thereby potentially exposing the identity and other data of logged-in Facebook users to any external website they happened to visit. He also found it was providing an access token that allowed it to grant even more expansive data access permissions to third-party websites — such as to users’ Facebook posts, photos and friends. He reckons people’s data had been publicly exposed since at least the end of 2016. De Ceukelaire found that NameTests would still reveal Facebook users’ identity even after its app was deleted. Here are the details. [TechCrunch and at: Medium, The Register, CNET, The Verge and GIZMODO]
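The leak pattern De Ceukelaire describes, personal data served inside a JavaScript file, is a classic cross-site script inclusion (XSSI) flaw: the same-origin policy stops a third-party page from reading a JSON response, but any page can include a script and read the variables it defines. A minimal Python sketch of the difference, with the data and the endpoint described in comments invented for illustration:

```python
import json

# Sketch of the XSSI leak pattern: the same user record served two ways.
# The record contents and the endpoint in the comments are invented.

user = {"name": "Jane Doe", "location": "Columbus", "birthday": "1990-01-01"}

def safe_json_response(record):
    """Plain JSON (Content-Type: application/json): the same-origin
    policy prevents other sites from reading this response."""
    return json.dumps(record)

def leaky_script_response(record):
    """Data embedded in executable JS (Content-Type: text/javascript):
    any third-party page that embeds the script via a <script src=...>
    tag gets the variable, and with it the visitor's identity, because
    script inclusion is exempt from the same-origin policy."""
    return "var userData = %s;" % json.dumps(record)

print(leaky_script_response(user))
```

The standard mitigations are to serve personal data only as JSON (never as executable script) and to require an unguessable, per-user token on the request so a third-party page cannot simply embed the URL.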

WW – Facebook Patents System That Can Use Your Phone’s Mic to Monitor You

Facebook has patented a system that can remotely activate the microphone on someone’s phone using inaudible signals broadcast via a television. The patent application describes a system where an audio fingerprint embedded in TV shows or ads, inaudible to human ears, would trigger the phone, tablet or long-rumoured smart speaker to turn on the microphone and start recording “ambient audio of the content item”. The recording could then be matched to a database of content to allow Facebook to identify what the individual was watching – like Shazam for TV, but without the individual choosing to activate the system. The patent positions the technology as a way for broadcasters to know exactly who is watching their TV shows or ads and for how long. Privacy experts are concerned about the intrusion into people’s homes, particularly as the ambient audio recording would likely catch snippets of people’s private conversations without their knowledge. Such a system could also give Facebook a better understanding of people’s social connections as it would show the social network which people were meeting up in real life. Facebook was quick to downplay [see here] the patent filing. [The Guardian and at: Mashable, Ars Technica, Fortune, Naked Security, New York Times and Engadget and also The Verge: No, Facebook did not patent secretly turning your phone mics on when it hears your TV and at: GIZMODO Australia]

US – Groups ask FTC to Probe Facebook’s Nudging Users for Max Data

Consumers Union, the advocacy division of Consumer Reports, which helmed a study of Facebook in the wake of the Cambridge Analytica third-party sharing fiasco that led to congressional hearings and increased scrutiny, said it is calling for an FTC investigation [see CU PR here, CR report here & 8 pg PDF letter here] …The Consumer Reports study is being released at the same time as a Norwegian Consumer Council report, “Deceived by Design” [see PR here & 44 pg PDF Report here], looking at the pop-up privacy boxes announcing companies’ new privacy policies in Europe in the wake of the enhanced privacy framework — General Data Protection Regulation or GDPR — adopted by the EU in May. Consumer Watchdog and [seven] other groups are also calling on the FTC to investigate Google based on the NCC findings [see PR here & 3 pg PDF letter here] Jeff Chester, executive director of the Center for Digital Democracy [here], said that almost two dozen organizations in Europe are part of a letter-writing campaign to seven different regulatory jurisdictions. [Multichannel News and at: Consumer Reports, The Hill and Compliance Week]

US – Facebook Gives Lawmakers the Names of Firms It Gave Deep Data Access

In a major data dump, Facebook handed Congress a ~750-page document with responses to the 2,000 or so questions it received from US lawmakers sitting on two committees in the Senate and House back in April. Facebook repeats itself a distressing number of times. TextMechanic‘s tool spotted 3,434 lines of duplicate text in its answers — including Facebook’s current favorite line to throw at politicians, where it boldly states: “Facebook is generally not opposed to regulation but wants to ensure it is the right regulation”, followed by the company offering to work with regulators like Congress “to craft the right regulations”. Below is the full list of 52 companies Facebook has now provided to US lawmakers — though it admits the list might not actually be comprehensive, writing: “It is possible we have not been able to identify some integrations, particularly those made during the early days of our company when our records were not centralized. It is also possible that early records may have been deleted from our system”. Last month the New York Times revealed that Facebook had given device makers deep access to data on Facebook users and their friends, via device-integrated APIs. [TechCrunch and at: BankInfo Security]

US – Facebook Releases Privacy Safeguards After Pressure from Advertisers

Facebook is installing new controls it says will better inform its members about the way companies are targeting them with advertising, the latest step to quell a public outcry over the company’s mishandling of user data. Starting July 2, Facebook for the first time will require advertisers to tell its users if a so-called data broker supplied information that led to them being served with an ad. Data brokers are firms that collect personal information about consumers and sell it to marketers and other businesses. Facebook has also set up new procedures for the handling of names of potential customers supplied by data brokers. Advertisers seeking to upload lists of these prospects onto Facebook’s platform will first have to promise that the data vendor obtained any legally required consent from those consumers. Facebook says the new policies will create more transparency for its users and require more accountability from advertisers. The new policies are the second big push by Facebook this year to shore up its policy regarding data brokers. On March 28, Facebook moved to banish data brokers from its platform as part of efforts to burnish its image. But the company quickly softened its stance after big marketers threatened to pull their ad dollars from Facebook, according to three people familiar with the decision. Advertisers said the restrictions on data brokers would hurt their ability to aim their ads at customers most likely to buy their products. Details of advertisers’ pushback, and Facebook’s retreat, have not been previously reported. Reuters

WW – Apple Cracks Down on Apps Sharing Info on Users’ Friends

Apple Inc. changed its App Store rules last week to limit how developers harvest, use and share information about iPhone owners’ friends and other contacts. The move cracks down on a practice that’s been employed for years. Developers ask users for access to their phone contacts, then use it for marketing and sometimes share or sell the information — without permission from the other people listed on those digital address books. On both Apple’s iOS and Google’s Android, the world’s largest smartphone operating systems, the tactic is sometimes used to juice growth and make money. Sharing of friends’ data without their consent is what got Facebook Inc. into so much trouble when one of its outside developers gave information on millions of people to Cambridge Analytica, the political consultancy. Apple has criticized the social network for that lapse and other missteps, while announcing new privacy updates to boost its reputation for safeguarding user data. The iPhone maker hasn’t drawn as much attention to the recent change to its App Store rules, though. Bloomberg News, adage.com

US – Google to Fix Location Data Leak in Google Home and Chromecast

Google plans to fix a privacy issue that affects its Google Home and Chromecast devices. An authentication vulnerability allows attackers to obtain location data for the devices by tricking users into opening a link while connected to the same Wi-Fi network as a vulnerable device. Google is scheduled to release the fix next month. [krebsonsecurity.com: Google to Fix Location Data Leak in Google Home, Chromecast | www.tripwire.com: Google’s Newest Feature: Find My Home]
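The reported attack chained two weaknesses: the device answered local HTTP requests with no authentication, returning the identifiers (BSSIDs) of nearby Wi-Fi access points, and a Wi-Fi geolocation service can resolve those identifiers to a street address. The sketch below is a simplified illustration of that second step; the scan-response shape and field names are assumptions for demonstration, not the device’s documented API:

```python
# Hedged sketch of the reported attack pattern: turning an unauthenticated
# local Wi-Fi scan response into the payload a Wi-Fi geolocation API
# expects (access-point MAC address plus signal strength).

def build_geolocation_query(scan_results):
    """Map locally leaked scan entries to a geolocation lookup payload."""
    return {
        "wifiAccessPoints": [
            {"macAddress": ap["bssid"], "signalStrength": ap["rssi"]}
            for ap in scan_results
        ]
    }

# What a leaky local endpoint might have returned to any same-network page:
scan = [{"bssid": "aa:bb:cc:dd:ee:01", "rssi": -41},
        {"bssid": "aa:bb:cc:dd:ee:02", "rssi": -67}]
payload = build_geolocation_query(scan)
```

With two or more nearby BSSIDs, commercial geolocation services can often fix a position to within a few metres, which is what made this leak far more precise than IP-based geolocation.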

Other Jurisdictions

AU – Experts Call for Kids’ Data Protection in Australia

Australia will inevitably need to follow other countries in legislating against the collection of data about children from the internet, a data privacy protection expert warns. Dylan Collins, the chairman of the kids’ digital media company TotallyAwesome, believes the internet was designed for adults and many services are struggling to adapt to the extraordinary number of youngsters logging on every day. “Pretty much everything is based around capturing personal data and monetising it in some form,” the Irish entrepreneur said. “That’s just not safe or appropriate for six-, seven- or eight-year-olds.” In recent years, the US, Europe and China have created so-called “zero-data environments” which prohibit companies from collecting data on people under a set age – ranging between 13 and 16. “It’s probably inevitable that something similar will come to Australia in the not too distant future,” Mr Collins said. He predicted that over the next five to seven years there will be a universal right for children to have access to the internet without being tracked. Australian Associated Press

Privacy (US)

US – Supreme Court: Warrant Needed to Access Cell Site Location Data

The US Supreme Court has ruled that law enforcement must obtain a warrant to collect a suspect’s cell site location information (CSLI). In a 5-4 decision, Chief Justice John Roberts wrote in the majority opinion that “when the Government tracks the location of a cell phone it achieves near perfect surveillance, as if it had attached an ankle monitor to the phone’s user.” The ruling does not overturn the “third-party doctrine,” a legal precedent that found that people have no “reasonable expectation of privacy” regarding information collected by a third party, nor does it cover real-time tracking. [supremecourt.gov: Carpenter v. United States: Certiorari to the United States Court of Appeals for the Sixth Circuit (PDF) | Wired.com: The Supreme Court Just Greatly Strengthened Digital Privacy | SCmagazine.com: Supreme Court rules government generally needs warrant for long-term surveillance using location data | ZDnet.com: Supreme Court says police need a warrant for historical cell location records | Ars Technica: Supreme Court rules: Yes, gov’t needs warrant to get cellphone location data]

US – Analysis: SCOTUS “Carpenter v. United States” a Big Win for Privacy

Over 40 years ago, the Supreme Court outlined what has come to be known as the “third-party doctrine“– the idea that the Fourth Amendment does not protect records or information that someone voluntarily shares with someone or something else. On June 22 in “Carpenter v. United States” [see here & 119 pg PDF text here] an opinion [written] by Chief Justice John Roberts [and] joined by Justices Ruth Bader Ginsburg, Stephen Breyer, Sonia Sotomayor and Elena Kagan, the Supreme Court ruled that, despite this doctrine, police will generally need to get a warrant to obtain cell-site location information, a record of the cell towers (or other sites) with which a cellphone connected. …Roberts characterized the case as involving two, potentially conflicting lines of the Supreme Court’s precedent. The first involves whether someone like Carpenter can expect to have his whereabouts kept private [the so-called reasonable expectation of privacy test – wiki here]. The second line of precedent is the third-party doctrine [see wiki here]. Roberts emphasized that today’s ruling “is a narrow one” that applies only to historical cell-site location records. He took pains to point out that the ruling did not “express a view on” other privacy issues, such as obtaining cell-site location records in real time, or getting information about all of the phones that connected to a particular tower at a particular time. He acknowledged that law-enforcement officials might sometimes still be able to obtain cell-site location records without a warrant – for example, to deal with emergencies such as “bomb threats, active shootings, and child abductions.” And in a footnote, he also left open the possibility that law-enforcement officials might not need a warrant to obtain cell-site location records for a shorter period of time than the seven days at issue in Carpenter’s case – which might allow them to get information about where someone was on the day of a crime, for example. 
But what law-enforcement officials do not have, he wrote in closing, is “unrestricted access to a wireless carrier’s database of” cell-site location information. Justice Anthony Kennedy dissented from today’s ruling, in an opinion that was joined by Justices Samuel Alito and Clarence Thomas [starting at pg 28 here]. Alito filed a lengthy dissent, joined by Thomas, in which he stressed that, as originally understood, the Fourth Amendment would not have applied at all to the methods that law-enforcement officials use to obtain documents [starting at pg 72 here]. Thomas also wrote alone to suggest that the court should reconsider its use of the “reasonable expectation of privacy” test, complaining that it “has no basis in the text or history of the Fourth Amendment” [starting at pg 51 here]. …the most interesting separate dissent of the day came from Justice Neil Gorsuch [starting at pg 99 here], who specifically agreed with what he described as the majority’s “implicit but unmistakable conclusion that the rationale” for the third-party doctrine is wrong. Gorsuch would scrap both the third-party doctrine and the “reasonable expectation of privacy” test and focus instead on whether someone has a property interest (even if not a complete one) in the records at issue. But here, he pointed out, the court does not have any information on this question, because Carpenter didn’t make this argument in the lower courts. [SCOTUSblog and at: Lawfare Blog, DeepLinks Blog (EFF), Inside Privacy (Covington), The Volokh Conspiracy, Ars Technica, The New York Times, CNET and WIRED | Neil Gorsuch Joins Sonia Sotomayor in Questioning the Third-Party Doctrine and at: Cato at Liberty Blog, Hot Air, Slate, Washington Examiner and The Originalism Blog]

US – Eleventh Circuit LabMD Decision Potentially Limits FTC’s Remedial Powers

The Eleventh Circuit has issued its decision in LabMD v. FTC, a closely watched case in which LabMD challenged the FTC’s authority to regulate the data security practices of private companies. The Court of Appeals declined to decide that issue, instead finding that the FTC’s order requiring LabMD to implement certain data security reforms was unenforceable because it lacked specificity. The court’s decision may nevertheless impact many of the FTC’s consent orders. It is not yet clear how the FTC will respond to this decision. The Commission might seek rehearing en banc or appeal the decision to the Supreme Court in order to address some of the questions left unanswered by the Eleventh Circuit’s opinion. If the decision stands, however, it could affect the viability of some of the Commission’s remedial powers. Many of the consent orders that the FTC has required companies to adopt—particularly those involving data security but also some related to other issues—have included broad prophylactic remedies that are similarly premised on a reasonableness standard. [Inside Privacy and at: Ward PLLC Blog, Data Security Law Blog (Patterson Belknap), BNA on Data, Data Privacy Monitor (Baker Hostetler), Mayer Brown, Health IT Security and Law360 | FTC Rebuked in LabMD Case: What’s Next for Data Security?]

US – Federal Appeals Court Throws Out FTC’s LabMD Ruling

A US federal appeals court has thrown out the Federal Trade Commission’s (FTC’s) ruling requiring LabMD to revamp its security policies and practices, saying that the FTC’s order is unenforceable. The FTC filed the complaint against the medical testing company in 2013 following a series of breaches that compromised patient data. LabMD challenged the FTC’s ruling in court on the grounds that the agency lacked the authority to regulate how the company handled consumer data, filing a petition for review in 2016, and a federal appeals court granted a stay of the FTC’s order. [files.consumerfinance.gov: Dwolla Consent Order (PDF) | healthitsecurity.com: Court Dismisses FTC Order on LabMD’s Data Security Lapses | media.ca11.uscourts.gov: Petition for Review of a Decision of the FTC]

US – FTC Hitting the Road for Ideas on Privacy & Regulating Tech

The FTC announced plans to embark on a cross-country listening tour to gauge how academics and average Web users believe the U.S. government should address digital-age challenges that include the rise of artificial intelligence and the data-collection mishaps [see PR here]. The tour includes 15 or more public sessions in a series of cities that have yet to be announced. The hearings are expected to touch on topics like the agency’s “remedial authority” to address privacy and security abuses, the potential risks posed by big data, and the commission’s tools to enforce antitrust laws as media, tech and telecom companies gobble each other up or seek to enter new lines of business [see comments topics here]. The public outreach will begin in September and continue into January 2019, the agency said. It could presage tougher scrutiny of Silicon Valley in response to complaints that the FTC has been too soft on tech giants and the ways they collect, swap and manipulate personal information about billions of people. [The Washington Post and at: The Hour, The Hill, Multichannel News, USA Today and The National Law Journal]

US – Court Rules No Privacy for Cellphone With 1-2-3-4 Passcode

A man serving 18 years in prison in South Carolina for burglary was rightfully convicted in part because he left his cellphone at the crime scene and a detective guessed his passcode as 1-2-3-4 instead of getting a warrant, the state Supreme Court ruled. Lawyers for Lamar Brown argued detectives in Charleston violated Brown’s right to privacy by searching his phone without a warrant. After storing the cellphone in an evidence locker for six days in December 2011, the detective guessed right on Brown’s easy passcode, found a contact named “grandma” and was able to work his way back to Brown. The justices ruled in a 4-1 decision that Brown abandoned his phone at the Charleston home and made no effort to find it. The law allows police to look at abandoned property without a court-issued warrant allowing a search. The Associated Press

US – Amazon, Microsoft, Uber Oppose California Consumer Privacy Act

Amazon, Microsoft, and Uber have made large contributions to a group attempting to prevent a privacy act from becoming law in California. As per state disclosure records, the three tech giants join a number of other well-known companies, including Facebook, Google, AT&T, and Verizon, which are all working against the proposed California Consumer Privacy Act by donating to the Committee to Protect California Jobs (CPCJ). Amazon and Microsoft recently donated $195,000 each to the Committee, while Uber has offered up $50,000. Facebook, Google, AT&T, and Verizon, on the other hand, have all contributed $200,000, though after Mark Zuckerberg faced tough questions from Congress about Facebook’s privacy practices, Facebook has pledged to withdraw support from the group. According to CPCJ spokesperson Steven Maviglio, tech giants are not the only ones opposed to the legislation: “Credit unions, grocers, and car manufacturers are among the many recent additions to the coalition and are the tip of the iceberg.” [Digital Trends and at: engadget, Techwire, The Verge, PYMNTS, Morgan Lewis Law Flash, Bloomberg BNA and Media Post]

RFID / IoT

US – Build Privacy Controls Into IoT Devices Now: Report

Limiting the cybersecurity risks of Internet of Things devices has long been a plea by experts. But a new report from the University of California’s Center for Long-Term Cybersecurity and the IoT Privacy Forum says lawmakers, regulators and manufacturers need to pay equal attention to sealing off the privacy risks of sharing data through so-called smart devices. Policymakers should take steps to regulate the privacy effects of the IoT before mass sensor data collection becomes ubiquitous, rather than after, the authors say. Omnibus privacy legislation can help regulate how data is handled in the grey areas between sectors and contexts. At the same time, makers of IoT products and services should employ a variety of standard measures to provide greater user management and control, as well as more effective notification about how personal data is captured, stored, analyzed, and shared. “The IoT has the potential to diminish the sanctity of spaces that have long been considered private, and could have a ‘chilling effect’ as people grow aware of the risk of surveillance,” the report says. “Yet the same methods of privacy preservation that work in the online world are not always practical or appropriate for the personal types of data collection that the IoT enables.” [Clearly Opaque: Privacy Risks of the Internet of Things | IT World Canada]

US – Cybersecurity: Advocates Push For Internet of Things Standards

EPIC responded to the Consumer Product Safety Commission’s request for comments on potential safety issues and hazards associated with internet-connected consumer products. The Consumer Product Safety Commission should develop mandatory privacy and security standards (e.g. certification before devices can be sold, vulnerability disclosure policies, system outage resiliency, mechanisms for consumers to delete their data), require IoT manufacturers to conduct PIAs (to examine data flows and flag potential hazards), and remove products from the marketplace where baseline requirements are not implemented. [Comments of EPIC to the Consumer Product Safety Commission on IoT and Consumer Product Hazards]

US – MIT Frequency Hopping Transmitter Could Help Secure IoT

Researchers at MIT have developed technology that could be used to help secure Internet of Things (IoT) devices. A frequency-hopping transmitter scatters data packets onto different, random radio frequency channels. [Eurekalert.org: Novel transmitter protects wireless devices from hackers | SC Magazine.com: MIT researchers develop frequency-hopping transmitter that fends off attackers | v3.co.uk: MIT researchers develop transmitter to prevent hackers from attacking IoT devices]
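The core idea behind frequency hopping is that sender and receiver derive the same pseudo-random channel sequence from a shared secret, so each packet travels on a different channel and a jammer or eavesdropper cannot predict where the next one will land. The toy sketch below illustrates that shared-sequence concept only; it is not MIT’s transmitter design, which reportedly hops at a much finer (per-bit, microsecond) granularity in hardware:

```python
# Conceptual sketch of frequency hopping: both radios seed the same PRNG
# with a shared secret and draw an identical sequence of channel numbers,
# one per packet, out of a pool of available channels.
import random

def hop_sequence(shared_seed, n_packets, n_channels=80):
    """Derive the channel used for each packet from a shared seed."""
    rng = random.Random(shared_seed)
    return [rng.randrange(n_channels) for _ in range(n_packets)]

tx = hop_sequence(shared_seed=0xC0FFEE, n_packets=5)
rx = hop_sequence(shared_seed=0xC0FFEE, n_packets=5)
assert tx == rx  # both sides agree on the channel for every packet
```

An attacker who cannot reproduce the seed sees only apparently random channel usage, so jamming any single channel disrupts at most a fraction of the traffic.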

Security

CA – Businesses Unprepared for Mobile Workplace Data Breaches: Study

While Canadian businesses are continuing to embrace workplace mobility, they aren’t implementing proper data protection policies and training, according to new findings from the Shred-it Security Tracker [see PR here & report here]. The study, conducted by Ipsos, found that nearly 90 percent of C-Suite Executives (C-Suites) and half of Small Business Owners (SBOs) reported their employees are able to work off-site in some capacity. Further, more than two-thirds of businesses said they believe that the trend towards working remotely will only increase over the next five years. That said, 82 percent of C-Suites and 63 percent of SBOs said they feel that they are more susceptible to data breaches when employees work off-site. …Additionally, Shred-it found that out of all age groups, millennials (18-34) are less effective at implementing safe data protection practices than generation X (35-55) and baby boomers (55+). [MobileSyrup]

WW – 86% of CXOs Say Remote Workers Increase Chances of Breach

The majority of C-Suite executives and small business owners (SBOs) agree cyber security risks increase with remote workers, according to Shred-it’s State of the Industry Report, released Wednesday [see here]. Shred-it’s report unveils information security risks currently threatening businesses and features survey results conducted by Ipsos. When studying the cause of cybersecurity breaches, 47% of CXOs and 42% of SBOs cited accidental loss or employee negligence as the top reason, according to the report. “The study’s findings clearly show that seemingly small habits can pose great security risk and add up to large financial, reputational and legal risks,” said Shred-it vice president Monu Kalsi in the press release [see here]. The report found 86% of business executives agreed data breaches are more likely to occur when employees are working out of office. While CXOs do have security plans in place for these occurrences, only 35% of SBOs currently have a policy for storing or deleting confidential data remotely, and 54% of SBOs have no policy whatsoever, said the report. [TechRepublic and at: CNBC, Infosecurity Magazine and Insurance Business]

Surveillance

EU – 60 NGOs Join Call to Halt Mandatory Communications Data Collection

UK-based Privacy International, Liberty, and Open Rights Group have joined more than 60 non-governmental organisations, community groups and academics across Europe in calling for a halt to the collection of communications data [see 4 pg PDF letter here]. The groups have filed complaints to the European Commission calling for EU governments to stop requiring companies to store all communications data. Despite the two major rulings by the CJEU in 2014 and 2016, which made blanket and indiscriminate retention of personal data unlawful, the groups said the majority of EU member states have yet to stop this form of surveillance. The groups say it is clear that current data retention regimes in Europe violate the right to privacy and other fundamental human rights. Complaints have been filed in 11 EU member states: Belgium, the Czech Republic, France, Germany, Ireland, Italy, Poland, Portugal, Spain, Sweden and the UK. [Computer Weekly and at: Infosecurity Magazine, Forbes, The Register]

CN – China to Mandate Car-Tracking Chips from 2019: Report

Tracking devices will soon be fitted to cars registered in China [ostensibly] in an effort to tackle the country’s notorious congestion and pollution problem. Starting in July the country will begin fitting cars with radio-frequency identification (RFID) tags at registration time. Although the scheme won’t be compulsory at first, it looks likely it will become mandatory for new cars starting from 2019. The program will be run by the Traffic Management Research Institute, which is part of the country’s Ministry of Public Security. This has raised fears it could be another plank in the country’s growing surveillance apparatus, which includes the recently-introduced social credit scheme and more widespread use of facial recognition technology. [CarAdvice and at: The Wall Street Journal, The Verge, BusinessInsider, Futurism and SiliconANGLE News]

Telecom / TV

US – Verizon, AT&T to End Location Data Sales to Brokers

Verizon and AT&T have pledged to stop providing information on phone owners’ locations to data brokers, stepping back from a business practice that has drawn criticism for endangering privacy. The data has apparently allowed outside companies to pinpoint the location of wireless devices without their owners’ knowledge or consent. Verizon said that about 75 companies have been obtaining its customer data from two little-known California-based brokers that Verizon supplies directly — LocationSmart and Zumigo. Verizon became the first major carrier to declare it would end sales of such data to brokers that then provide it to others. It did so in a June 15 letter to Sen. Ron Wyden, an Oregon Democrat who has been probing the phone location-tracking market. AT&T followed suit Tuesday after The Associated Press reported the Verizon move. Neither company said they are getting out of the business of selling location data. Verizon and AT&T are the two largest U.S. mobile carriers in terms of subscribers. [KSFY and at: CNET, Ars Technica and TechCrunch and also CBC – US Phone Companies Limit Sharing Of Location Data, While Canadian Carriers Insist They Already Do]

US Government Programs

US – NSA Deletes Hundreds of Millions of Call Records Over Privacy Violations

The NSA unfortunately has a long history of violating privacy rules, although this time the agency might not be entirely to blame. The NSA is deleting hundreds of millions of call and text message data records (collected since 2015) after learning of “technical irregularities” that led to receiving records it wasn’t supposed to obtain under the USA Freedom Act. General counsel Glenn Gerstell said in an interview that “one or more” unnamed telecoms had responded to data requests for targets by sending logs that included not just the relevant data, but records for people who hadn’t been in contact with the targets. As it was “infeasible” to comb through all the data and find just the authorized data, the NSA decided to wipe everything. The deletions began on May 23rd. It’s not certain when the purge ends, but this is all metadata, not the content of the calls and messages themselves. A spokesperson also told the NYT that it didn’t include location data, as the Freedom Act doesn’t allow gathering that information under this collection system. The companies involved have “addressed” the cause of the problem for data going forward, the NSA said. While the step shows that the NSA is willing to err on the side of caution, it continues a streak of privacy violations at the agency since its bulk phone data collection fell under the Foreign Intelligence Surveillance Act in 2004. It also illustrates the problem with keeping such large-scale monitoring in check. The system depends on both the NSA and telecoms strictly honoring the law, and all it takes is a mistake to create a serious privacy breach. [Engadget | The NSA and the USA Freedom Act and at: CSO Online, The Verge, The New York Times, The Associated Press, Tech Republic, and GIZMODO]

US Legislation

US – Legislation: California Enacts Comprehensive Privacy Rules

AB 375, the California Consumer Privacy Act of 2018, has been approved by the Legislature and signed by the Governor. Effective January 1, 2020, organizations must comply with individual requests to provide categories of personal information collected and shared, stop selling personal information (services cannot be refused and prices cannot be increased as a result), delete personal information, and provide their information in a portable format; the Attorney General can impose civil penalties for violations and there is a private right of action for breaches resulting from reckless behavior. [AB 375 – The California Consumer Privacy Act of 2018 – State of California]

US – California Data Privacy Bill Becomes Law

California Governor Jerry Brown has signed the California Consumer Privacy Act of 2018. Taking effect on January 1, 2020, the law will give California residents the right to know what data companies collect about them and how that information is shared. Consumers will also have the authority to prohibit companies from selling their data. The bill bears similarities to the EU’s GDPR, which went into effect in late May. The bill’s passage has prompted the withdrawal of a state ballot initiative that would have accomplished many of the same things. One of the differences is that the ballot initiative would have prohibited companies from denying services to consumers who choose not to have their data stored and tracked; the bill allows companies to charge consumers varying rates for service depending on the level of data sharing they have chosen. [Wired: California Unanimously Passes Historic Privacy Bill | money.com: California passes strictest online privacy law in the country | Fortune: California Passes Groundbreaking Consumer Data Privacy Law With Fines for Violations | Mercury News: California data privacy bill signed to head off ballot initiative]

+++

 

20 May–09 June 2018

Biometrics

CA – Canada Will Make Foreign Visitors Pay for Biometrics Collection

Details have emerged about the expansion of a program for collecting fingerprints and facial images from foreign nationals visiting Canada. The program previously applied only to refugee claimants, asylum seekers, and visa applicants from countries considered to present a heightened risk of ID document fraud. The previously announced expansion from 30 to roughly 150 countries will strengthen border security and immigration systems, Immigration Minister Ahmed Hussen said. Applicants will have to pay a CAD$85 fee to cover the cost of the program. It will apply to visitors from Europe, the Middle East, and Africa as of July 31, and to those from Asia, the Asia-Pacific region, and the Americas as of December 31. It only applies to those between 14 and 79 years old, and there are several exemptions, such as for U.S. citizens on work or student visas. [Biometrics Update and at: Digital Journal, CBC News, Business in Vancouver and One World Identity and also U of T researchers developing tool to jam facial recognition software and at Naked Security, The Toronto Star and Digital Journal]

US – Facial Recognition Product Should Not Be Sold to Government

A coalition of consumer and privacy advocacy, labor and legal groups wrote to Amazon.com about its Rekognition product. Amazon is providing product and consultation support to government customers for Rekognition, which can identify people in real time by instantaneously searching databases containing tens of millions of faces; privacy advocates are concerned that Amazon does not restrict government use of the product, which could be used to identify certain vulnerable groups and minorities. [Letter to Amazon.com Regarding Rekognition – American Civil Liberties Union et al.]

US – JetBlue Will Test Facial Recognition for Boarding

JetBlue will test facial-recognition check-ins for flights from Boston to Aruba, the latest attempt by the industry to streamline boarding. Passengers will step up to a camera, and the kiosk will compare the facial scan to passport photos in the U.S. customs database to confirm the match. (You still have to bring your passport.) A screen above the camera will let passengers know when they’re cleared to board. JetBlue is collaborating on the technology with SITA, a tech company that specializes in air travel, including products like robotic check-in kiosks that autonomously rove around airports, sensing where they are needed. JetBlue says it will be the first airline to use facial recognition for boarding. The airline says it won’t have access to the photos — only SITA will. SITA said it will not store the photos. Delta Air Lines plans to test face-scanning technology with four kiosks at Minneapolis-St. Paul this summer for passengers to check their own luggage. [CNN Tech]

Canada

CA – Privacy Commissioners Offer Best Practices on Meaningful Consent

The Office of the Privacy Commissioner of Canada and the Alberta and British Columbia Privacy Commissioners have issued final guidance on obtaining meaningful consent based on: PIPEDA; the Alberta Personal Information Protection Act; and the British Columbia Personal Information Protection Act. The OPC will begin applying these guidelines on January 1, 2019. Consent should be a dynamic, ongoing process (through regularly updated FAQs, smart technologies, and chatbots), innovative consent processes should be used (just-in-time notices, interactive walkthroughs, videos, infographics), and individuals should be periodically reminded about their privacy options; consent is not a free pass to engage in indiscriminate collection and use, and does not waive other privacy obligations (i.e. accountability, safeguards). [OPC Canada – Guidelines for Obtaining Meaningful Consent]

CA – OPC Issues Guidelines for Consent and Inappropriate Data Practices

On May 24, 2018, the OPC published two important PIPEDA guidance documents: guidelines for obtaining meaningful consent (the “Consent Guidelines”) and guidance on inappropriate data practices (the “Data Practices Guidance”).

The publication of the above guidance documents comes on the heels of the Commissioner’s consultation on consent and the recent updating of guidance on “Recording of Customer Telephone Calls“.

The Consent Guidelines provide that organizations should follow seven key principles in seeking to obtain meaningful consent under PIPEDA:

  1. Emphasize key notice elements – this contributes to meaningful consent, especially:
    1. What personal information is being collected, used and disclosed;
    2. The purpose for which the information is being collected, used or disclosed;
    3. Information-sharing with third parties;
    4. Whether there is a risk of harm arising from the collection, use or disclosure of information;
  2. Use layered approach to notices – allow individuals to control the level of detail
  3. Provide individuals with clear options to say ‘yes’ or ‘no’
  4. Experiment and adapt to contextual needs
  5. Consider the individual’s perspective – consult and test
  6. Make consent a dynamic and ongoing process
  7. Be accountable: stand ready to demonstrate compliance

The Guidelines also remind organizations to consider what type of consent is appropriate given the circumstances. While in some situations implied consent may be adequate, others will require express consent, including: (a) when the information being collected, used or disclosed is sensitive in nature; (b) when an individual would not reasonably expect certain information to be collected, used or disclosed given the circumstances; and (c) when there is a more than minimal risk of significant harm.

Another contextual factor is whether the target individuals include children. The OPC requires that, for children 13 and under, a parent or guardian give consent on the child’s behalf.

At the conclusion of the Consent Guidelines, the Commissioner provides a checklist of “Should do” and “Must do” action items for organizations seeking to obtain meaningful consent under PIPEDA.

Concurrently with publishing the Guidelines, the Commissioner published the Data Practices Guidance, which sets out various considerations that organizations should keep in mind when assessing whether purposes are reasonable and appropriate. Like meaningful consent, whether or not a purpose is inappropriate requires a contextual approach. The following factors have been applied by the Commissioner and the courts:

  • Whether the organization’s purpose represents a legitimate need / bona fide business interest;
  • Whether the collection, use and disclosure would be effective in meeting the organization’s need;
  • Whether there are less invasive means of achieving the same ends at comparable cost and with comparable benefits; and
  • Whether the loss of privacy is proportional to the benefits (which includes consideration of the degree of sensitivity of the personal information at issue).

The Commissioner has also established a list of prohibited purposes under PIPEDA, which it has deemed “No-Go Zones.” The Commissioner considers that a reasonable person would not consider the collection, use or disclosure of information to be appropriate in these circumstances. Currently, the list of “No-Go Zones” is:

  • Collection, use or disclosure that is otherwise unlawful (e.g. violation of another law);
  • Collection, use or disclosure that leads to profiling or categorization that is unfair, unethical or discriminatory in a way which is contrary to human rights law;
  • Collection, use or disclosure for purposes that are known or likely (on a balance of probabilities) to cause significant harm to the individual (e.g. bodily harm, humiliation, damage to reputation or relationships, loss of employment, business or professional opportunities, financial loss, identity theft, negative effects on credit record or damage to or loss of property);
  • Publishing personal information with the intended purpose of charging individuals for its removal (i.e. “blackmail”);
  • Requiring passwords to social media accounts for the purpose of employee screening; and
  • Surveillance by an organization through the use of electronic means (e.g. keylogging) or audio or video functionality of the individual’s own device.

While these “No-Go Zones” are important to note, organizations should also remember that the list is not binding, determinative or exhaustive, and that subsection 5(3) requires a contextual analysis. What a reasonable person would consider appropriate is a flexible and evolving concept which will be revisited by the Commissioner from time to time. [Fasken Martineau DuMoulin LLP] See also: The OPC Publishes its Report on Consent | Canadian firms must improve personal data collection practices: Privacy czar | Canada’s Privacy Commissioner Pursues a Stronger Consent Framework and More Proactive Enforcement | Commissioner: Digital revolution and Canadians’ privacy fears demand real solutions | Do customers really consent to how you use their data? Federal privacy commissioner wants to know]

CA – CSIS Kept ‘All’ Metadata on Third Parties for a Decade: Top Secret Memo

When CSIS intercepted the communications of innocent people between 2006 and 2016 “all” the metadata related to those communications was retained in a controversial database, a top secret memo obtained by the Star suggests. The document relates to CSIS’s Operational Data Analysis Centre (ODAC) and a now-discontinued program that stored data intercepted from the service’s targets — and people who were in contact with them at the time. The Federal Court ruled in 2016 it was illegal for the service to indefinitely keep data on people who posed no threat to Canada’s national security — such as the family, friends or coworkers of CSIS targets — for future analysis. While the basics of the program were revealed in heavily censored court documents, the scale of the program is not widely understood. CSIS told parliamentarians earlier this year that it didn’t know how many Canadians were caught up in the ODAC. But in an October 2016 memo to Public Safety Minister Ralph Goodale, outgoing CSIS director Michel Coulombe suggested the court’s ruling would have a significant impact… [Toronto Star]

CA – Journalist Shield Law Could Soon Become Reality in Canada

The federal Liberal government is prepared to support proposed legislation to protect the identity of journalists’ confidential sources. The government will back a Conservative senator’s privately sponsored bill that would, for the first time in Canada, provide statutory protection for the identity of journalists’ sources. The bill, The Journalistic Sources Protection Act, S-231, would make it harder for police and other law enforcement or security agencies to spy on journalists’ communications or to seize documents that could reveal their sources. It would also make it harder for the cops to use whatever information is seized or captured by warranted surveillance. The Liberals will propose a handful of technical amendments to address “legal and policy concerns” with the bill as drafted. The amendments are intended to ensure that journalistic source protections would not interfere with the ability of law enforcement or security agencies to act in urgent or emergency situations “particularly in a national security context.” The amendments would also ensure that the protection extends to the sources, not to a reporter as an individual if he or she was the object of a criminal investigation. “Without that amendment there would be a risk that the search warrant against journalists who themselves commit crimes would be improperly invalidated.” It’s unusual for governments to back private member’s bills, let alone a senator’s bill and an opposition one at that. [The Star]

CA – Ontario Bill Requires Written Tenant Notice

Bill 45, An Act to Amend the Residential Tenancies Act, 2016 with respect to Tenant Privacy, had its first reading in the Legislative Assembly of Ontario. If passed, tenants must receive 48-hour written notice from landlords, brokers and salespersons to take photos or visual records of rental units; the notice must set out which area will be photographed or recorded, the purpose, the date and time this will take place, and how long the photo or record will be used and retained. [Bill 45 – Residential Tenancies Amendment Act (Tenant Privacy) 2018 – Legislative Assembly of Ontario]

CA – OIPC ON Finds Problematic School Photo Processing

The IPC ON decision addresses a privacy complaint about the Toronto District School Board pursuant to the Municipal Freedom of Information and Protection of Privacy Act. A school’s collection and use of student photos and disclosure to a vendor is lawful for administrative/security purposes, and authorized by provincial education law; however, the notice of photo day to parents was not sufficiently transparent (it lacked the school’s authority for collection, purpose of the photos and a school/Board contact), and the Board’s service agreement with the vendor requires amending to address retention and security for students’ PI. [IPC ON – Privacy Complaint Report MC16-4 – Toronto District School Board]

CA – OIPC SK Recommends Employee Termination for Insider Threats

The OIPC SK investigates a complaint against the Saskatchewan Health Authority involving personal health information pursuant to the Health Information Protection Act. The employee intentionally accessed the PHI of 880 individuals without a business purpose (including co-workers, clients, and relatives); the health entity conducted an extensive audit of the employee’s actions in its electronic database, interviewed the employee and their current and previous manager, and plans to continue staff education regarding privacy and confidentiality of PHI. [OIPC SK – Investigation Report 284-2017 – Saskatchewan Health Authority]

CA – OIPC SK Finds University Appropriately Handled Breach

The OIPC SK investigates a breach by the University of Regina pursuant to the Local Authority Freedom of Information and Protection of Privacy Act. After a hacker accessed the student grading system to alter the grades of 31 students, the University reset account credentials and passwords, and conducted a thorough analysis of suspicious system activity; recommendations include enforcing changes to default passwords, conducting random system audits and notifying affected individuals of all compromised PI. [OIPC SK – Investigation Report 260-2017 – University of Regina]

CA – OIPC SK Formalizes Proactive Breach Reporting

The OIPC SK has issued guidance on voluntary breach reporting. Public bodies are encouraged to proactively report breaches to the OIPC, which can provide expert guidance on what to consider, what questions to ask and what parts of legislation may be applicable; the OIPC will issue a formal report, which may include recommendations, if a breach is egregious, affects a large number of individuals or prompts a formal complaint from an individual. [OIPC SK – Proactively Reporting Breaches to the IPC]

CA – Canadians ‘Reluctant’ to Accept New Police Powers, Prefer Privacy Online, Government Finds

Last fall, the government asked Canadians to weigh in on the future of the country’s national security legislation. It was, in part, a response to outcry over elements of the controversial anti-terrorism Bill C-51, parts of which the Liberal government has promised to repeal. A report summarizing the results of the consultation was released, with one topic in particular drawing considerable attention: what sort of powers should law enforcement and intelligence agencies have when investigating crimes in the digital world? Police have called for warrantless access to basic subscriber information, arguing that it is too difficult to obtain from telecom companies in a timely manner, and said that encrypted communications have made their investigations more difficult. There have also long been calls for so-called lawful access legislation — a legal requirement that all telecommunications providers install interception equipment on their networks — and a requirement that phone and internet companies retain certain types of data to assist police in criminal investigations.

But it seems that Canadians — at least, those that participated in the government’s consultation — generally disagree. “Most participants in these Consultations have opted to err on the side of protecting individual rights and freedoms rather than granting additional powers to national security agencies and law enforcement, even with enhanced transparency and independent oversight,” the report reads. “The thrust of the report suggests that there’s significant appetite for reform,” said Craig Forcese, a law professor at the University of Ottawa who has written extensively on Bill C-51 — in particular, “a significant appetite for limiting state power in terms of the sorts of powers that security services have.” Some numbers:

  • 70% consider basic subscriber information — that is, metadata such as name, home address, phone number, and email address — to be as private as the content of their communications (law enforcement disagree).
  • 48% said basic subscriber information “should only be provided in ‘limited circumstances’ and with judicial approval” — similar to what is currently required.
  • 68% believed that “law enforcement should operate the same in both the physical and the digital worlds” with regards to privacy rights, due process, and how warrants are granted and scrutinized.
  • More than 80% of respondents believed that “the expectation of privacy in the digital world is the same as or higher than in the physical world.”
  • 78% opposed a law mandating telecom companies maintain interception capabilities.
  • Most of the online respondents and organizations consulted opposed implementing backdoors in encryption, while law enforcement believed they should have “the tools they need to access the communications of those who use secure communications technologies for criminal purposes”.
  • 68% opposed a legal requirement for telecom companies to retain user data.
  • 44% were against giving law enforcement and intelligence agencies updated tools, while 41% supported the idea given proper justification and oversight. [CBC News]

Consumer

CA – Privacy a Make-or-Break Issue for Cannabis Users and Retailers: Report

As cannabis legalization approaches, Canadian consumers say cybersecurity tops the list of “must-haves” in a legal market, according to a new report from Deloitte [see PR here & report here]. The report, which looks at a wide range of consumer behaviours and preferences related to the cannabis market, shows one-third of cannabis consumers would prefer to purchase pot online. Assurances of online privacy are cited as their No. 1 concern. The Deloitte report suggests this landscape of uncertainty regarding how cannabis use will affect employment is at least partly behind consumers’ concerns for privacy protection. The report also notes that even in-store pot buyers “will be sharing personal information with retailers, such as allowing their ID to be scanned at point-of-sale terminals and their image captured on security cameras.” Cybersecurity will therefore be as much of a concern for brick and mortar retailers as for online operations. [The Star and at: Global NewsCBC News and Sault Online]

US – How Americans Have Viewed Government Surveillance and Privacy Since Snowden Leaks

Five years ago, news organizations broke stories about federal government surveillance of phone calls and electronic communications of U.S. and foreign citizens, based on classified documents leaked by then-National Security Agency contractor Edward Snowden. The initial stories and subsequent coverage sparked a global debate about surveillance practices, data privacy and leaks. Here are some key findings about Americans’ views of government information-gathering and surveillance, drawn from Pew Research Center surveys since the NSA revelations:

1)   Americans were divided about the impact of the leaks immediately following Snowden’s disclosures, but a majority said the government should prosecute the leaker;

2)   Americans became somewhat more disapproving of the government surveillance program itself in the ensuing months, even after then-President Barack Obama outlined changes to NSA data collection;

3)   Disclosures about government surveillance prompted some Americans to change the way they use technology;

4)   Americans broadly found it acceptable for the government to monitor certain people, but not U.S. citizens, according to the 2014-15 survey;

5)   About half of Americans (52%) expressed worry about surveillance programs in 2014 and 2015, but they had more muted concerns about surveillance of their own data;

6)   The vast majority of Americans (93%) said that being in control of who can get information about them is important, according to a 2015 report;

7)   Some 49% said in 2016 that they were not confident in the federal government’s ability to protect their data; and

8)   Roughly half of Americans (49%) said their personal data were less secure compared with five years prior, according to the 2016 survey.

[PEW Research and at The Guardian here & here, South China Morning Post, The Associated Press, Lawfare Blog and The Australian Financial Review]

E-Government

CA – Conservative Party Takes Disciplinary Action After Membership List Shared

The Conservative party is demanding that the National Firearms Association destroy a party membership list that it appears to have illicitly obtained from one of the camps in the recent leadership contest. “We are aware that our members are being contacted by an outside organization,” the party said in a Facebook post. “We will be issuing a cease-and-desist letter to the organization in question, demanding that they destroy the list.” The party did not identify the outside organization but the post came after numerous Conservatives complained through social media that they’d received a letter this week from the National Firearms Association, seeking a donation. They suspected that the association had obtained their names and addresses from the party membership list, distributed to each of the 14 candidates during the leadership race, which concluded last weekend with the election of Andrew Scheer. [The Canadian Press]

US – FireEye Report: State Election Systems at Risk

A report from FireEye titled “Attacking the Ballot Box” notes that “state and local election infrastructure is increasingly at risk for targeting by a range of threat actors, in particular state-sponsored cyber espionage actors.” The report examines threats to electronic voter registration, state elections websites, voting machines, and election management systems. [www.scmagazine.com: State elections systems still hackable, report | www.bloomberg.com: State Election Systems Increasingly at Risk for Cyberattacks, FireEye Says | media.scmagazine.com: Attacking the Ballot Box: Threats to Election Systems]

E-Mail

CA – Government Suspends CASL Private Right of Action

The Government of Canada published an Order in Council suspending the implementation of the private right of action under Canada’s Anti-Spam Legislation (CASL). The rest of CASL remains in force: July 1, 2017 still marks the end of the special transition rule for implied consent to receive CEMs, and CASL contraventions continue to be subject to regulatory enforcement and potentially severe administrative monetary penalties.

  • CASL’s private right of action, which was scheduled to come into force on July 1, 2017, allows any individual or organization affected by a CASL contravention to sue the persons who committed the contravention or are otherwise liable for the contravention and seek: (1) compensation for actual loss, damage and expense suffered or incurred by the applicant; and (2) statutory (non-compensatory) damages of up to $200 for each contravention and $1,000,000 for each day on which the contravention occurred. It was generally expected that CASL’s private right of action would be invoked to support class actions seeking large statutory damages awards on behalf of large groups of individuals affected by unlawful CEM campaigns.
  • Order in Council P.C. 2017-0580, dated June 2, 2017, indefinitely suspends the effective date of the private right of action. The Precis for the Order in Council explains that the purpose of the Order is to delay the coming into force date of the private right of action “in order to promote legal certainty for numerous stakeholders claiming to experience difficulties in interpreting several provisions of the Act while being exposed to litigation risk”.
  • The Government’s News Release explains that the private right of action is being suspended “in response to broad-based concerns raised by businesses, charities and the not-for-profit sector”, who should “not have to bear the burden of unnecessary red tape and costs to comply with the legislation”. The News Release includes the following statement by the Minister of Innovation, Science and Economic Development. [Mondaq]
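To illustrate why class actions under the private right of action were so feared, the statutory damages caps described above can be sketched in a few lines of code. This is a rough, hypothetical calculation based only on the figures quoted in the summary (up to $200 per contravention, capped at $1,000,000 per day); the campaign numbers are invented for illustration, not drawn from any actual case.

```python
# Illustrative upper bound on CASL statutory (non-compensatory) damages,
# using the caps quoted above: up to $200 per contravention, not
# exceeding $1,000,000 for each day on which contraventions occurred.
# All campaign figures below are hypothetical.

PER_CONTRAVENTION_CAP = 200      # dollars per contravening CEM
PER_DAY_CAP = 1_000_000          # dollars per day of contravention

def max_statutory_damages(contraventions_per_day: int, days: int) -> int:
    """Upper bound on statutory damages for a sustained CEM campaign."""
    per_day = min(contraventions_per_day * PER_CONTRAVENTION_CAP, PER_DAY_CAP)
    return per_day * days

# Hypothetical campaign: 10,000 non-compliant emails a day for 7 days.
# 10,000 x $200 = $2,000,000/day, capped at $1,000,000/day.
print(max_statutory_damages(10_000, 7))  # 7000000
```

Even a modest week-long campaign quickly reaches the daily cap, which is why the provision was expected to anchor large class actions.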

EU Developments

EU – EDPB Adopts Art. 29 Working Papers

With the GDPR now in force, the EDPB [The European Data Protection Board — made up of representatives of national data protection authorities across the EU and the European Data Protection Supervisor] replaces the Art. 29 Data Protection Working Party. At its first constituent meeting on 25 May 2018, the EDPB confirmed many of the previous positions of the Art. 29 Group [see overview of the position papers adopted here]. Many of the interpretations of the GDPR published first by the Art. 29 Group and now endorsed by the EDPB are regarded as excessively strict by many data protection practitioners. The positions of the EDPB are recommendations for the practical application of the GDPR. They have no binding effect on courts. However, it is to be expected that courts may very well take the EDPB’s requirements into account when applying and interpreting the GDPR. [HLDA Data Protection]

EU – EDPB Issues Criteria for Certification Mechanisms

The European Data Protection Board identified overarching criteria relevant to certification mechanisms under the GDPR. Certification criteria should clearly describe the scope of processing operations, allow for practical application, allow for application to different types and sizes of organisations, and should take GDPR principles into account (lawfulness, data subjects’ rights, DPIAs, breach notification obligations); existing technical standards can be leveraged (however, consider that they are not typically aimed at data protection), and use cases should be provided to allow for compliance assessments. [EDPB – Guidelines on Certification and Identifying Certification Criteria in Accordance with Articles 42 and 43 of the GDPR]

EU – European Commission Proposes Draft Whistleblowing Directive

On 23 April 2018, the European Commission published a proposal for a Directive on the protection of whistleblowers reporting on breaches of EU law, accompanied by an explanatory memorandum [see relevant docs here also see 44 pg PDF text here & 10 pg PDF Annex here]. The intention behind the proposal is to harmonise the minimum level of protection available to whistleblowers across the EU. The draft Directive applies to reports of breaches across a wide range of EU areas of law, including the protection of privacy and personal data, and security of network and information. The proposal is open for feedback via the Commission’s Have Your Say website until 20 June 2018, although this deadline will be extended to allow further opportunities for public consultation. The draft Directive is pending adoption by the European Parliament and Council, and it is anticipated to become applicable in 2021. [Tech Law Dispatch and at: Kramer Levin Perspectives]

UK – ICO Issues GDPR Guidance Documents:

——-   Right to Data Portability

The right only applies when the lawful basis for processing data is consent or the performance of a contract, and the processing is carried out by automated means; pseudonymous data that can be clearly linked to the individual should be included in the response, and controllers are only responsible for the secure and accurate transmission of the data (not for subsequent processing after transmission). [ICO UK – Right to Data Portability]

——-   Codes of Conduct

An organisation can sign up to a code of conduct relevant to its data processing activities or sector (an extension or an amendment to a current code or a brand new code); compliance with such a code can assist the organisation to mitigate against enforcement action (adherence to a code of conduct serves as a mitigating factor). [ICO UK – Codes of Conduct]

——-   High Risk Processing – Data Protection Impact Assessments

Final guidance on data protection impact assessments. Personal data processing requiring DPIAs – intelligent transport systems, dating websites, market research involving neuro-measurement, contract pre-check processes, social media networks, list brokering, wealth profiling, re-use of publicly available data, and eye tracking; consider risks to individual rights and freedoms (inability to access services or exercise rights, identity theft or fraud), and identify mitigating measures (reducing retention periods or processing scope, anonymisation, human review of automated decisions). [ICO UK – GDPR DPIAs]

——-   Transparency

Final guidance on the right to be informed. Conduct data audits and mapping to determine what personal data is held, any data sharing, sources of the data, and retention periods (to ensure drafted policies capture all processing activities), conduct user testing to get feedback on the transparency of policies, provide individuals with the organisation’s privacy policy where data is bought, and be upfront about AI use, including the processing purpose, any new uses of personal data, and automated decisions with legal or significant effect. [ICO UK – Right to be Informed]

——-   Children’s Data Processing

Draft guidance on processing of children’s personal information. A consultation document was previously issued in January 2018. Children should not be subjected to automated decisions with legal or significant effect (unless for performance of a contract, authorised by law, or based on explicit consent), consent can be used as a legal basis only where the child understands what they are consenting to, and borderline data subjects’ requests should be assessed based on the child’s level of maturity, the nature of the personal data, and any detriment to the child if parents or guardians can or cannot access their data. [ICO – Children and the GDPR Guidance]

UK – ICO to Fund Research into Big Data, Blockchain, Emerging Tech

The UK’s data watchdog is offering up to £100,000 for projects looking at how emergent tech affects information rights, saying that practical research “needs a stronger voice”. As survey after survey shows declining public trust in the use of their data and the government plans to slurp even more, the UK ICO has decided to fund a research programme. Launched last month, the ICO will award a yet-to-be-decided number of projects between £20,000 and £100,000 (out of a total annual budget of £250,000) to assess the privacy implications of new technologies, and come up with ways to address them. Commissioner Elizabeth Denham said that it was “designed to give practical research and policy a stronger voice in this evolution” of information rights. It is linked to the watchdog’s April 2017 Information Rights Strategic Plan, which sets out five priorities for the organisation, including increasing its leadership and influence as well as working to increase public trust and create a “culture of accountability”. Research projects would last for up to a year, the ICO said, adding that it was particularly interested in work on emerging technologies, such as big data, artificial intelligence, machine learning, social scoring and blockchain. The ICO is also working to build up its technology capacity – it recently hired a CTO and its April Information Rights Strategic Plan lists “staying relevant” and “keeping abreast of evolving technology” as one of its priorities. [The Register]

UK – Study Indicates Widespread Use and Disclosure of Student Data

A UK advocacy group released findings from its review of children’s privacy and data protection in education pursuant to the GDPR. Prior to age 10, most students’ personal data is sent to over 10 commercial companies without parents’ knowledge (data is then often forwarded to app and platform partner affiliates who use it for profiling or marketing purposes), submitted over 25 times in national school censuses and tests (which is fed into a national database for perpetual re-use), and given away to data analytics researchers. [The State of Data 2018 – defenddigitalme.com]

FOI

CA – Federal Information Commissioner Issues 2016−2017 Annual Report

Suzanne Legault tabled her 2016–2017 Annual Report in Parliament. The year began on a positive note for access to information and transparency with many constructive advancements and a promise by the government to reform the Access to Information Act. As the year drew to a close, Commissioner Legault says there is “a shadow of disinterest on behalf of the government.” Several investigations illustrate longstanding deficiencies with the Act, which include the deletion of emails subject to a request, difficulties accessing documents in a minister’s office, failure to document decisions, and lengthy delays to obtain information. Commissioner Legault says “our investigations highlight that the Act continues to be used as a shield against transparency and is failing to meet its policy objective to foster accountability and trust in our government. The Act urgently needs to be updated to ensure that Canadians’ access rights are respected. A lot of work needs to be done before this government delivers on its transparency promises.” [Newswire]

US – Facebook Complied with Over 70% of Government Requests

Facebook reported on government requests for user data between July and December 2017. A total of 82,341 requests were received from government agencies for disclosure of account or user information (through legal process or for emergency purposes); the US had the highest number of total requests, followed by India and the UK, while Canada had the highest percentage of emergency requests (50.7%). [Government Requests for User Data – July to December 2017 – Facebook]

Health / Medical

US – State Strengthens Health Records Privacy in Discrimination Lawsuits

A Washington state law (SB 6027) set to take effect June 7 limits the use of medical and mental health records in discrimination lawsuits, strengthening patient privacy rights. Employment discrimination attorney Beth Touschner said that the new law prohibits private therapy sessions of plaintiffs from being used in court. Touschner said that defense attorneys had previously been able to use discovery to obtain medical and mental health records that had nothing to do with the alleged discrimination. Jeff James, a lawyer who defends private-sector employers, argued that there are legitimate reasons for introducing medical records into discrimination lawsuits, such as challenging the cause or magnitude of alleged damages. Under the new law, defense attorneys can only request medical and mental health records going back two years in three specific circumstances: 1) Plaintiff alleges a specific and diagnosable physical or psychiatric injury; 2) Plaintiff relies on the records or testimony of a health care provider or expert witness; and 3) Plaintiff alleges failure to accommodate a disability or alleges discrimination on the basis of a disability. The law reverses a 2013 state Court of Appeals Division I decision in Lodis v. Corbis Holdings Inc. [see here] that ruled plaintiffs must produce mental-health records when claiming emotional harm or distress in a discrimination suit. [Health IT Security and at: Seattle Times]

CA – Employer-Mandated Physician Visit Is Not A Privacy Violation

Employers are entitled to require employees to visit in-house occupational health department physicians to obtain reasonably necessary medical information if that right is provided for in their collective agreement. This was confirmed in Rio Tinto Alcan Inc (RTA) v UNIFOR, Local 2301 (Medical Information Grievance), when the arbitrator found that the employer [RTA, which operates a safety-sensitive aluminium smelter in Kitimat, British Columbia] had not violated employee privacy rights when it required employees to visit in-house occupational health department physicians to confirm eligibility for wage loss protection benefits. The arbitrator also recognized an employer’s right to seek reasonably necessary medical information to ensure that employees are absent from work for legitimate reasons only, and to facilitate their return to work. In-house occupational health departments can be a valuable tool for employers to learn important medical information about their employees without infringing on privacy rights. [Fasken]

WW – Most Dementia Apps Lack A Privacy Policy: Study

Researchers with Harvard Medical School reviewed 125 iPhone apps built for dementia patients and found that 72 collected user data. Of those apps that collected data, just 33 had an available privacy policy. Many of those mobile apps that had an accessible privacy policy lacked clarity, often failing to address the specific functions of the app, describe safeguards or differentiate between individual protections versus aggregate data protection. The authors said the findings of the study highlighted a significant concern for patients with cognitive impairment and their caregivers, eroding trust among users. [fiercehealthcare.com]

US – HHS Issues Best Practices for Physical Security Measures

U.S. Health and Human Services issued guidance on workstation security. Physical security measures for workstations include port locks for USBs, device locks for CD/DVD drives, maintenance of a current inventory of all electronic devices, relocation of devices from public or vulnerable areas, and awareness training for employees on physical security policies. [HHS – May 2018 OCR Cyber Security Newsletter – Workstation Security – Don’t Forget About Physical Security]

Internet / WWW

WW – Facebook Gave Device Makers Deep Access to Data on Users and Friends

Facebook reached data-sharing partnerships with at least 60 device makers — including Apple, Amazon, BlackBerry, Microsoft and Samsung — over the last decade, starting before Facebook apps were widely available on smartphones, company officials said. The deals allowed Facebook to expand its reach and let device makers offer customers popular features of the social network, such as messaging, “like” buttons and address books. Facebook allowed the device companies access to the data of users’ friends without their explicit consent. Some device makers could retrieve personal information even from users’ friends who believed they had barred any sharing. Most of the partnerships remain in effect, though Facebook began winding them down in April. This contradicts statements by Facebook’s leaders, who said that the kind of access exploited by Cambridge Analytica in 2014 was cut off by the next year, when Facebook prohibited developers from collecting information from users’ friends. But the company officials did not disclose that Facebook had exempted the makers of cellphones, tablets and other hardware from such restrictions. In interviews, Facebook officials defended the data sharing as consistent with its privacy policies, the F.T.C. consent decree agreement [see FTC posts here & here & 10 pg PDF decree document here & 9 pg PDF order here] and pledges to users. [NYTimes and at: Mother Jones, Business Standard, CNET and The Hill]

WW – Chrome Outlines Plans to Alert Users to Unsecure Websites

In a Chromium blog post, Google has described some of the steps it will take to alert Chrome browser users that they are visiting unsecure websites. In September (Chrome 69), Chrome will stop identifying HTTPS sites as secure in the address bar. In October (Chrome 70), Chrome will begin displaying a red “not secure” warning when users enter data on HTTP sites. Google is in essence turning security indicators on their head; instead of labeling sites as secure, Chrome security team project manager Emily Schechter wrote in a blog post that “Users should expect that the web is safe by default.” [blog.chromium.org: Evolving Chrome’s security indicators | www.computerworld.com: Google details how it will overturn encryption signals in Chrome]
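The scheme-based labeling described in the post can be approximated in a few lines. The following is a simplified sketch: the function name and return labels are illustrative, and real Chrome logic weighs many more signals (certificate validity, mixed content, and so on).

```python
from urllib.parse import urlparse

def chrome70_label(url: str, user_entering_data: bool = False) -> str:
    """Approximate the address-bar treatment described in the blog post.

    Illustrative only -- actual Chrome behavior is far more involved.
    """
    scheme = urlparse(url).scheme
    if scheme == "https":
        # Chrome 69: HTTPS no longer earns a positive "Secure" badge;
        # a secure web is treated as the default.
        return "neutral"
    if scheme == "http":
        # Chrome 70: the "not secure" label turns red when the user
        # enters data on an HTTP page.
        return "not secure (red)" if user_entering_data else "not secure"
    return "unknown"
```

For example, `chrome70_label("http://example.com", user_entering_data=True)` returns the red warning label, while an HTTPS URL returns the neutral (unlabeled) state.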

WW – ICANN’s Proposed Legitimate Processing

The Internet Corporation for Assigned Names and Numbers issued a proposed temporary specification for generic top-level domain registration data pursuant to the GDPR. Personal data included in Registration Data may only be processed for prescribed legitimate purposes, including enabling a reliable mechanism for identifying and contacting the Registered Name Holder, supporting a framework to address issues involving domain name registrations (e.g. investigation of cybercrime and DNS abuse), and providing mechanisms for safeguarding Registration Data in the event of a business or technical failure. [Proposed Temporary Specification for gTLD Registration Data: Working Draft – ICANN]

Law Enforcement

CA – TPS to Embark on Six-Month Pilot Project for Using Body-Scanners

Toronto Police are moving ahead with a six-month pilot project as they prepare to use body-scanners like those used at airports. The scanners are intended to locate evidence or contraband without “level 3” searches, also known as strip searches, police said in a press release [TPS FAQ here]. The use of the technology will not eliminate strip searches entirely. Rob De Luca, director of the public safety program with the Canadian Civil Liberties Association (CCLA), noted the importance of maintaining personal privacy with the scanner technology. “We don’t have a concern with the technology, as long as there are sufficient safeguards in place. If they’re using the search solely in cases where a strip search is justified under the law, then I think it could be a helpful addition,” said De Luca. Toronto police conduct some 55 strip searches a day, or 20,000 a year. The project team handling the scanners’ implementation has consulted with Ontario’s Information and Privacy Commissioner, and has received a legal opinion from the Ministry of the Attorney General. The data would be stored for 30 days following the scan provided nothing is found, police say. [Toronto Star and also at: CTV News, Narcity, Global News and Blue Line]

Online Privacy

US – Ninth Circuit Stays $30B Facebook Privacy Suit

The Ninth Circuit granted Facebook’s emergency petition to stay a $30 billion privacy suit pending appeal just hours after a lower court refused to delay an upcoming trial. Two Ninth Circuit judges issued their ruling just after 3 p.m. on Tuesday, mere hours after U.S. District Judge James Donato denied [see 4 pg PDF here] Facebook’s motion to stay the case. Facebook argued it was being forced to spend money and risk reputational harm notifying users about an upcoming trial, which could become moot if the Ninth Circuit overturns a prior ruling in the case. Last week, Donato ordered Facebook [see here] to use emails, newsfeed posts and jewel notices, or Facebook alerts, to notify millions of Illinois Facebook users about the lawsuit by May 31. Facebook is accused of harvesting users’ facial data for its “Photo Tag Suggest” function without consent and in violation of a 2008 Illinois privacy law. A jury trial was set for July 9. [Courthouse News and at: Biometric Update, FindLaw Blogs, The Recorder (Law.com), FindBiometrics, Media Post and The Register]

Privacy (US)

US – 11th Circuit Hands LabMD a Major Victory & Rebukes FTC in Process

On June 6, the U.S. Court of Appeals for the Eleventh Circuit decided the long-awaited LabMD case [see 31 pg PDF here]. As Wiley Rein attorneys recently explained in a webinar on agency priorities, this case is an important milestone and inflection point for the new Federal Trade Commission (FTC) leadership. The FTC’s authority and role in data security has been key to ongoing debates over federal privacy and security policy domestically and globally. This case raised issues going to FTC power and practice, but ultimately turned on the remedy imposed by the agency, which was found to be so vague as to be unenforceable. The court did not address the key substantive questions:

1) First, in a data breach case, what type of consumer injury gives rise to “unfairness” under Section 5 of the FTC Act, an issue sometimes identified as the “informational injury” question? and

2) Second, what type of notice is the FTC required to provide regarding reasonable data security measures? Despite its failure to answer these questions, the decision has implications for those issues and the agency’s overall approach to data security. In particular, the Eleventh Circuit’s decision was a rebuke to the agency’s remedial efforts, which lean heavily on consent decrees to prod action the agency could not otherwise mandate. The Court found that the FTC’s cease and desist order “mandates a complete overhaul of LabMD’s data-security program and says precious little about how this is to be accomplished.” According to three appeals court judges, “[t]his is a scheme that Congress could not have envisioned.” The FTC will now face the decision of whether to appeal the 11th Circuit’s decision. In light of the narrow scope of the 11th Circuit’s holding, such further appeal may be unattractive to the FTC. [Wiley Rein News & Insights and at: BankInfoSecurity here & here, Multichannel News, Center for Democracy & Technology (CDT), Reuters and Law360]

US – FTC Posts Blog on Data Deletion Rule under COPPA

On May 31, 2018, the FTC published on its Business Blog a post addressing the easily missed data deletion requirement under the Children’s Online Privacy Protection Act. The post cautions that companies must review their data policies in order to comply with the data retention and deletion rule. Under Section 312.10 of COPPA [see here], an online service operator may retain personal information of a child “for only as long as is reasonably necessary to fulfill the purposes for which the information was collected.” After that, the operator must delete it, taking reasonable measures to ensure secure deletion. [Hunton Privacy Blog and at: Privacy & Security Law Blog (Davis Wright Tremaine)]

Security

US – NIST Issues Draft Risk Management Framework

The National Institute of Standards and Technology has issued a draft risk management framework for information systems and organizations. The draft guidelines are intended to ensure security and privacy requirements and controls are effectively integrated into the enterprise architecture, and support consistent, informed and ongoing authorization decisions; the framework breaks down the tasks at the different stages of the assessment process (preparation, selection, implementation, assessment and authorization), and identifies the primary responsible roles for achieving the task. [NIST – Risk Management Framework for Information Systems and Organizations – SP 800-37 Rev.2 Draft]

US – DHS Issues Cyber Risk Strategy

A new cybersecurity strategy from the US Department of Homeland Security describes five pillars of cyber risk management: risk identification, vulnerability reduction, threat reduction, consequence mitigation, and enabling cybersecurity outcomes. [www.executivegov.com: DHS Sets Approach to National Cyber Risk Management Through New Strategy | www.dhs.gov]

US – Federal Vehicle Telematics Cybersecurity

A March 2015 Executive Order requires that all US federal government vehicle fleet managers gather operational data, including fuel consumption, maintenance, and vehicle location. Because the data are collected and transmitted using telematics, the process raises cybersecurity concerns. The Department of Homeland Security (DHS) and Department of Transportation (DoT) have together developed a Telematics Cybersecurity Primer for Agencies. The guidelines cover protecting communications to and from the devices; protecting device firmware; protecting actions on the device through the “least privilege” principle; and protecting device integrity. [www.dhs.gov: DHS, DOT Partner on Government Vehicle Telematics Cybersecurity Primer | www.scmagazine.com: DHS, DoT team up to secure federal vehicle fleets]

Telecom / TV

US – Pentagon Tightens Rules for Personal Mobile Devices

A US Defense Department policy memo released on May 22, 2018, says that all Pentagon personnel, contractors, and visitors are no longer permitted to have personal mobile devices in areas involved in “processing, handling, or discussion of classified information.” People who violate the policy could face loss or delay of security clearances, fines, and administrative discipline. The policy must be implemented within 180 days. [media.defense.gov: Memorandum for Chief Management Officer of the Department of Defense | fcw.com: Pentagon cracks down on personal mobile devices]

US Government Programs

US – Federal CyberSecurity Report Finds 2/3 of Agencies at Risk

The Office of Management and Budget issued a federal cybersecurity risk report on the performance of 96 agencies in accordance with Executive Order 13800 and OMB Memorandum M-17-25, both concerning federal networks and critical infrastructure. One quarter of federal agencies are sufficiently managing their cybersecurity risk, but almost two-thirds are at risk (some processes in place but gaps remain) and a minority are at high risk (key processes are not deployed); challenges include limited situational awareness, lack of standardized IT capabilities, limited network visibility, and lack of accountability for managing risks. [Executive Office of the President of the United States – Federal Cybersecurity Risk Determination Report and Action Plan]

US – Individualized Suspicion Required for Border Phone Search

The US Court of Appeals for the Fourth Circuit considered a request by Hamza Kolsuz to suppress evidence seized from his mobile phone, which was searched at the US border. The court confirmed that the forensic search of an individual’s phone by border officers was lawful; Kolsuz did not have a license for firearm parts found in his luggage, had previously attempted to illegally export firearm parts, and there was reason to believe the phone search would reveal information related to other ongoing illegal export attempts. [USA v. Hamza Kolsuz – Decision – US Court of Appeals for the Fourth Circuit]

US – Feds Need to Do Better Job With EHR Data Security, Privacy: GAO

The U.S. federal government needs to do a better job at EHR data security and privacy, concluded a federal IT systems audit by the Government Accountability Office released last month [Highlights | Report]. The federal government also must ensure privacy is guaranteed when facial recognition systems are used and better protect the privacy of users’ data on state-based health insurance marketplaces, GAO concluded. To accomplish these goals and improve lax federal cybersecurity in general, agencies should implement the information security program mandated by the Federal Information Security Management Act (FISMA), GAO recommended. GAO said that it has made around 2,700 recommendations to federal agencies to improve their IT security since 2010, including measures required by FISMA. But as of May 2018, around 800 of its recommendations had not been implemented. [Health IT Security]

US Legislation

US – Federal Bill Introduces Deletion Rights for Minors

Senate Bill 2965, the Clean Slate for Kids Online Act of 2018 amending the Children’s Online Privacy Protection Act, was introduced in the US Senate and referred to the Committee on Commerce, Science and Transportation. If passed, COPPA would be amended to require website and online service operators to, upon request, delete all PI collected from children under 13 years and provide written confirmation; PI is exempt from deletion if necessary to respond to judicial process, or provide information for law enforcement investigations (however, the information cannot be used, shared or maintained for any other purpose). [S. 2965 – Clean Slate for Kids Online Act of 2018 – 115th Congress]

US – Federal Bill Amends COPPA

Senate Bill 2932, the Do Not Track Kids Act of 2018, was introduced in the United States Congress. The Act was previously introduced under Senate Bill 2187. If passed, the Do Not Track Kids Act of 2018 would make it illegal for an online operator to collect PI from a minor (unless it has adopted a Digital Marketing Bill of Rights for Minors), or use, disclose to third parties or compile PI for marketing purposes without verifiable parental consent, or the consent of the minor. [S.2932 – Do Not Track Kids Act of 2018 – United States Congress]

+++

6-19 May 2018

Biometrics

UK – Report Confirms Deep Flaws of Automated Facial Recognition Software

Big Brother Watch [here] has produced a report bringing together everything we know about UK police forces’ use of automated facial recognition software, and its deep flaws. The report supplements that information with analyses of the legal and human rights framework for such systems, and points out that facial recognition algorithms often disproportionately misidentify minority ethnic groups and women. Alongside its report, Big Brother Watch has launched the “Face Off” campaign calling on UK public authorities to stop using automated facial recognition software with surveillance cameras, and to remove the thousands of images of unconvicted individuals from the UK’s Police National Database. [TechDirt and at: Edgy Labs and Android Headlines]

UK – Cops’ Facial Recog Tech Slammed: Zero Arrests, 2 Matches, No Criminals

London cops’ facial recognition kit has only correctly identified two people to date – neither of whom were criminals – and the UK capital’s police force has made no arrests using it, figures published today revealed. According to information released under Freedom of Information laws, the Metropolitan Police’s automated facial recognition (AFR) technology has a 98% false positive rate. That figure is the highest of those given by UK police forces surveyed by the campaign group Big Brother Watch as part of a report [see PR here and 56 pg PDF report here] that urges the police to stop using the tech immediately. And, despite cops’ insistence that it works, the report showed an average false positive rate – where the system “identifies” someone not on the list – of 91% across the country. The Met has the highest, at 98%, with 35 false positives recorded in one day alone, at the Notting Hill Carnival 2017. However, the Met Police claimed that this figure is misleading because there is human intervention after the system flags up a match. [The Register and coverage at: Siliconrepublic, BBC News, Metro.co.uk, Software Testing News, Nextgov, The Independent, The Washington Times and HuffPost UK]
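The percentages cited follow from a simple calculation: false positives divided by total alerts. A minimal sketch of that arithmetic follows; the function name and the alert counts are illustrative only, since the article does not give the raw numbers behind the Met's 98% figure.

```python
def false_positive_rate(total_alerts: int, true_matches: int) -> float:
    """Share of system alerts that flagged the wrong person."""
    if total_alerts <= 0:
        raise ValueError("no alerts to evaluate")
    if true_matches > total_alerts:
        raise ValueError("more true matches than alerts")
    return (total_alerts - true_matches) / total_alerts

# Illustrative only: 100 alerts containing just 2 genuine matches
# would yield the kind of 98% rate reported for the Met.
rate = false_positive_rate(100, 2)
```

Note that the rate is measured against alerts the system raised, not against every face scanned, which is why it can be so high even when alerts themselves are rare.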

UK – Sky News Will Use AI to ID Guests at Royal wedding

When Prince Harry and Meghan Markle said “I do” at their royal wedding, online viewers tuning into the Sky News stream did not have to guess the names of international celebrities and British nobility in attendance. Instead, the U.K. broadcaster used artificial intelligence to identify famous guests as they made their grand entrances at St. George’s Chapel at Windsor Castle — displaying the invitees’ names and details about how they are connected to the royal couple. Dubbed “Who’s Who Live,” Sky News announced the live-stream service in partnership with Amazon.com and several data and engineering firms. As the 600 guests entered the chapel, Sky News highlighted notable attendees using Amazon Rekognition, a cloud-based technology that can recognize and compare faces in images and video using artificial intelligence. Along with identifying the wedding guests, the live-stream service also showed facts about them, using captions and on-screen graphics through the company’s app. The data was displayed alongside the video of the procession into the chapel. The celebrity recognition feature’s debut could pave the way for its use at other high-profile events that often invite the audience to interact on social media. [Washington Post]

CA – $30B Facebook Privacy Suit Headed for Jury Trial

A $30 billion class action claiming Facebook harvested the facial data of up to 6 million Illinois residents without consent must be decided by a jury, a federal judge ruled. Facebook argued that its technology doesn’t scan users’ facial geometry in a way that violates a 2008 Illinois privacy law. U.S. District Judge James Donato found only a jury can answer that question. Lead plaintiff Nimesh Patel sued Facebook in 2015 in one of three consolidated class actions, claiming the social network harvested users’ facial data for its “Photo Tag Suggest” function, starting in 2011, without express permission from users. Under the Illinois Biometric Information Privacy Act of 2008, companies must obtain consent before collecting or disclosing biometric data, such as retina scans, fingerprints, voiceprints, hand scans or facial geometry. Facebook also argued that it should not be liable for any damages because it reasonably understood the Illinois privacy law as not applying to data harvested from photographs. Donato rejected that argument too, concluding that “ignorance of the law” has never been accepted as a valid excuse for breaking the law. The judge also scolded Facebook for continuing to cling to legal arguments that were rejected in prior rulings, when he denied Facebook’s motion to dismiss in February [see here] and certified a class of up to 6 million Illinois Facebook users in April. The judge described Facebook’s refusal to accept his prior decisions as “troubling.” In an emailed statement, Facebook said: “We are reviewing the ruling. We continue to believe the case has no merit and will defend ourselves vigorously.” [Courthouse News and at: The Register, Business Insurance and Biometric Update]

Big Data / Artificial Intelligence / Data Analytics

EU – EU Commission Issues Artificial Intelligence Strategy

The EU Commission issued recommendations to take advantage of opportunities offered by artificial intelligence. Investments in AI should be increased to develop applications in key sectors (e.g. healthcare), facilitate data access for small and medium-sized companies, and ensure an appropriate framework is applied that promotes innovation, respects EU values, the GDPR, and ethical principles. [EU Commission – Artificial Intelligence for Europe]

WW – Google’s AI Sounds Like A Human on the Phone

It came as a total surprise: the most impressive demonstration at Google’s I/O conference was a phone call to book a haircut. Of course, this was a phone call with a difference. It wasn’t made by a human, but by the Google Assistant, which did an uncannily good job of asking the right questions, pausing in the right places, and even throwing in the odd “mmhmm” for realism. The crowd was shocked, but the most impressive thing was that the person on the receiving end of the call didn’t seem to suspect they were talking to an AI. It’s a huge technological achievement for Google, but it also opens up a Pandora’s box of ethical and social challenges. For example, does Google have an obligation to tell people they’re talking to a machine? Does technology that mimics humans erode our trust in what we see and hear? And is this another example of tech privilege, where those in the know can offload boring conversations they don’t want to have to a machine, while those receiving the calls (most likely low-paid service workers) have to deal with some idiot robot? As Google’s researchers explain, the feature, called Duplex, can only converse in “closed domains” — exchanges that are functional, with strict limits on what is going to be said. “You want a table? For how many? On what day? And what time? Okay, thanks, bye.” Easy! Duplex works in just three scenarios at the moment: making reservations at a restaurant; scheduling haircuts; and asking businesses for their holiday hours. It will also only be available to a limited (and unknown) number of users sometime this summer. [The Verge]

WW – RightsCon 2018 Conference Debates Resolution on Discrimination in Machine Learning

This week marked the opening in Toronto of the seventh RightsCon conference. Attendees will have a choice of 450 sessions on a wide range of rights topics related to the online world: how to leverage blockchain as a force for good, the digital divide in Indigenous communities in North America, content regulation, free speech and censorship, false news, online surveillance and Internet governance. One of the highlights will be the preparation of the “Toronto Declaration on Discrimination in Machine Learning” [see here & 11 pg PDF here], a step toward developing detailed guidelines for the promotion of equality and protection of the right to non-discrimination in machine learning. The Declaration will address necessary protections for companies and governments exploring and implementing the future of machine learning. The goal of the declaration is to encourage data scientists to think early, when creating machine learning algorithms, about the implications of assumptions in their work. [IT World] See also: [The 7 Craziest IoT Device Hacks]

Canada

CA – Canada Has ‘Fallen Behind’ in Privacy Powers: Denham

The powers made available to the Canadian privacy watchdog to investigate companies like Facebook and Cambridge Analytica have not kept pace with those granted to its counterparts around the world. That was the message brought by Elizabeth Denham, the United Kingdom’s information commissioner, to a House of Commons committee studying the breach of personal information harvested from 87 million Facebook accounts by British political profiling firm Cambridge Analytica. “The Canadian privacy commissioner’s powers have fallen behind the rest of the world,” Denham told the committee members. Her observation comes as Canadian politicians struggle to catch up to other jurisdictions such as the European Union that have pursued stringent new privacy rules in recent years in light of concerns that tech giants like Facebook and Google are not doing enough to protect personal information. [Global News]

CA – Ontario Law Prohibits Inquiries into Compensation History

Ontario’s Bill 3, the Pay Transparency Act, 2018, related to disclosure of compensation for applicants and employees, received Royal Assent and comes into force January 1, 2019. Exemptions include a job applicant’s voluntary and unprompted disclosure of their compensation history, compensation ranges or aggregate compensation for comparable positions, and publicly available compensation history. Employers must submit and post pay transparency reports, and a government compliance officer may enter a workplace without a warrant to assess the employer’s compliance with the law. [Bill 3 – Pay Transparency Act, 2018 – 41st Legislature, Ontario | Status]

CA – Canadian Government Reassures on Border Searches

The Minister of Public Safety and Emergency Preparedness reported to the Parliamentary Standing Committee on Access to Information, Privacy and Ethics on border privacy. The government believes that it is unnecessary to provide further preconditions in the Customs Act for searches of electronic devices at the border (which could hinder its ability to respond to threats and contraventions); the recently signed Preclearance Act (which gives U.S. officers an ability to search in certain areas) requires U.S. border officials to comply with Canadian law. [Report: Protecting Canadians’ Privacy at the U.S. Border – Minister of Public Safety and Emergency Preparedness]

CA – Balsillie urges MPs to Regulate ‘Surveillance Capitalism’ of Facebook and Google

A group representing Canada’s tech CEOs told MPs that Facebook and Google represent a new form of “surveillance capitalism” and called for European-style regulation over the U.S.-based web giants. Jim Balsillie, chair of the Council of Canadian Innovators [here], told MPs that immediate government action is required to protect Canada’s commercial interests and the privacy of individuals. “Facebook and Google are companies built exclusively on the principle of mass surveillance,” he said. “Their revenues come from collecting and selling all sorts of personal data, in some instances without a moral conscience.” Mr. Balsillie, the former chair and co-CEO of Research in Motion (now BlackBerry Ltd.), made the comments while sharing a panel [see ETHI Parliamentary Committee meeting May 10, 2018 here] with Colin McKay, Google Canada’s head of public policy and government relations. Mr. McKay challenged Mr. Balsillie’s characterization of Google and told MPs that Google’s products “prioritize user privacy” and the company promotes a service called MyAccount that lets users manage their privacy and security. Earlier in the day, the committee heard from Elizabeth Denham, the United Kingdom’s Information Commissioner, who is investigating the Cambridge Analytica issue, as well as Michael McEvoy, British Columbia’s Information and Privacy Commissioner, who is also conducting a related investigation. [G&M and at: Global News, CBC News, The Canadian Press (via Ottawa Citizen) and National Observer]

CA – Former Elections Watchdog Says Liberals’ Bill C-76 Falls Short on Privacy

Marc Mayrand [wiki here], the man who ran Elections Canada from 2007 to 2016, says the federal government’s new election bill [the Elections Modernization Act – Bill C-76 – see PR here & Text here] falls seriously short of expectations when it comes to safeguarding Canadians’ private information. In what Mayrand judged “a very small step,” the new bill will require political parties to post a policy on the treatment of people’s personal information: how they use, collect and protect it. Parties will be required to state how they train employees on safeguarding private data, and to provide contact information for a person to whom concerns can be addressed. The bill also states that parties must publish the circumstances under which personal information may be sold, although federal officials said they were unaware of any cases in which this had happened. The Elections Modernization Act, however, contains no independent verification measures and no penalties for violations. There are also, he noted, no assurances that Canadians will find out about breaches, nor avenues for them to request to see the information parties hold about them. The legislation is silent on whether parties can trade the data to anybody or whether they must obtain people’s consent to collect the data, he said. Teresa Scassa [here], Canada research chair in information law at the University of Ottawa, has also blasted the government [see here] for what she calls “an almost contemptuous and entirely cosmetic quick fix designed to deflect attention from the very serious privacy issues raised by the use of personal information by political parties.” [HuffPost]

CA – Alberta Privacy Commissioner Powerless to Investigate Political Parties’ Use of Voter Data

Alberta’s privacy commissioner is powerless to investigate how political parties are collecting and using voters’ personal information, and there’s little incentive for parties to change the status quo, observers say. Alberta’s Personal Information Protection Act (PIPA) [text here & overviews here] governs how companies are able to collect and use personal data, but exempts political parties — limiting the commissioner’s ability to investigate complaints of personal data misuse. The law can only be applied to political parties under exceptional circumstances related to commercial activity, such as the selling, bartering or leasing of donor, fundraising or membership lists. University of Victoria political scientist Colin Bennett has spent years researching privacy protection policies in Canada and abroad and has studied how political parties accumulate voter data from social media sites, such as Facebook. “Essentially, individuals have no rights over their personal information that political parties capture.” Bennett said provincial political parties don’t want to fall under privacy laws because it can limit campaigning abilities. “In a competitive electoral environment — and lord knows Alberta’s competitive — they’re not going to want to constrain their ability to campaign,” he said. “I would hope (privacy commissioner) Jill Clayton would be very forceful in advocating that Alberta political parties be covered under the Alberta legislation,” he said. “There’s no reason why Alberta should be any different from B.C.” British Columbia is unique among provinces in that its privacy commissioner has the authority to investigate and audit private companies and political parties suspected of skirting the law. [The Star]

CA – Trend of Police Secrecy Over Names in Homicides Raises Alarm

The names of the dead have not been released in a police-involved shooting in Nanaimo nor in a Victoria homicide. That’s because the RCMP, B.C. Coroners Service and the Independent Investigations Office, which probes fatal interactions with police, have all declined to identify the deceased. It’s an increasingly common, if inconsistent, practice across B.C. and other jurisdictions in Canada, in which agencies tasked with investigating violent deaths have, in some cases, stopped releasing victims’ names. Legal experts say it’s a trend that prevents Canadians from scrutinizing the criminal-justice system and the people who operate within it. Law-enforcement agencies and others argue that they’re simply obeying privacy laws and respecting grieving families. The Edmonton Police Service has taken a similar approach. Steven Penney [here], law professor at the University of Alberta, said it’s a “troubling” practice that departs from Canada’s long-standing tradition of an open, transparent and accountable criminal-justice system, adding “Our entire criminal-justice system is premised on the idea that when a serious crime occurs, it’s a crime against the entire society. And the entire society deserves to be informed about the implications of that crime and potentially become involved in scrutinizing the behaviour of all of those who are responsible for dealing with it.” [Victoria Times Colonist]

CA – Canada’s Privacy Commissioner Shares View on Autonomous Vehicles

Canada’s privacy commissioner Daniel Therrien presented his views on the privacy implications of autonomous and connected vehicles [remarks here] at a House of Commons transportation committee meeting on May 9th [see here]. Therrien appeared before the Standing Committee on Transport, Infrastructure and Communities (TRAN) in response to a study that was released in January of this year [see 78 pg PDF report here & Infographic here], which pointed to five key areas to help the government better prepare for a self-driving car-filled future, including that the “government should put forward legislation to empower the Office of the Privacy Commissioner to proactively investigate and enforce industry compliance with privacy legislation.” He expressed concern with the fact that data flows in connected vehicles are very complex and, as a result, are not transparent. He touched on how his office has been looking to improve consent around users’ data by trying to find ways to give “individuals the ability to make decisions about their data.” Ideally, Therrien would like to see an amendment to the law that would allow the privacy office to “independently confirm that the principles in our privacy laws are being respected – without necessarily suspecting a violation of the law.” [MobileSyrup]

CA – Potential Privacy Class Action Against Ontario Auto Insurer

A class-action lawsuit was filed April 10 in Federal Court against The Personal over the alleged use of credit scores in adjusting accident benefits claims. It’s not clear yet how many claimants there will be if the lawsuit is approved. Law firm Waddell Phillips Professional Corporation is asking Federal Court to certify the lawsuit as a class action on behalf of a specific class of Canadian auto insurance claimants [see PR here]. If approved by the court, that class would include people who made auto claims with The Personal Insurance Company after Jan. 18, 2012 “and who had their credit score information accessed by The Personal or its agents.” If the class action prevails in court, the insurer might have to pay up to $10,000 per claimant. In the lawsuit against The Personal, the plaintiffs are asking for an injunction prohibiting The Personal from “further using or accessing” personal credit scores for the purpose of adjusting auto accident benefits claims. They are asking Federal Court to award damages of $50 million, as well as aggravated, punitive or exemplary damages of $10 million. [Canadian Underwriter and at: Canadian Underwriter, The Insurance and Investment Journal and LowestRates.ca]

Consumer

CA – 3 out of 4 Facebook Users Still Active Despite Privacy Scandal: Poll

Three-quarters of Facebook users have remained as active, or even more active, on the platform since the company’s recent privacy scandal, a joint Reuters/Ipsos poll revealed. According to the survey, Facebook’s reputation has suffered little among users. The poll comprised over 2,000 American Facebook users over the age of 18, and found that half of those surveyed had not changed the way they used the site, and another quarter said they were using it more. Analyst Michael Pachter of Wedbush Securities told Reuters that Facebook is lucky the scandal revolves around data being used for political ads and not for “nefarious” purposes. “I have yet to read an article that says a single person has been harmed by the breach,” he said. “Nobody’s outraged on a visceral level.” In its first quarter financial results, Facebook said the number of monthly users in the United States and Canada rose to 241 million on March 31 from 239 million on Dec. 31, growth that was roughly in line with recent years. While many seem unaffected by the privacy concerns, a segment of Facebook users is taking action to protect their information. According to the poll, the remaining quarter of Facebook users said their activity had either decreased or ceased entirely. Although user activity seems to be returning to normal a few months after the initial story broke, an Angus Reid Institute/Global News poll released in the middle of March told a very different story about users’ trust in Facebook’s platform. The poll revealed that almost three-quarters of Canadians would change the way they use Facebook as the massive data scandal plaguing the company continues to unfold. [Global News]

WW – ISO Incorporates PbD Guidelines for Consumer Goods and Services

A new ISO project committee, ISO/PC 317, “Consumer protection: privacy by design for consumer goods and services”, will develop guidelines that will not only support compliance with regulations, but also generate greater consumer trust at a time when it is needed most [see PbD wiki here]. Dr. Ann Cavoukian pioneered the concept of “privacy by design”, a framework that seeks to proactively embed privacy into the design specifications of information technologies, networked infrastructure and business practices. In her video address at the ISO workshop “Consumer protection in the digital economy”, which took place in Bali, Indonesia, the week of May 6, she said: “Regulatory compliance alone is unsustainable as the sole model for ensuring the future of privacy. Prevention is needed.” “Privacy by design” is now recognized as a core part of the EU General Data Protection Regulation (GDPR) [see Article 25 of GDPR here] and forms the basis of the ISO standardization work now underway. Implementing the standard will help companies comply with regulations and avoid potentially devastating data breaches that erode consumers’ confidence in online services. [ISO News and at: ACROFAN, SC Magazine]

E-Mail

CA – CRTC Fines Retailers $100,000 for Lack of Consent

The Canadian Radio-television and Telecommunications Commission fined two numbered companies, 9118-9076 Québec Inc. and 9310-6359 Québec Inc., for violations of Canada’s Anti-Spam Legislation (CASL). Marketing text messages offered recipients an opportunity to receive future commercial offers, but did not include the prescribed information to enable recipients to easily identify and contact the sender. The joint retailers have agreed to put in place a compliance program that includes employee training, adequate disciplinary measures for non-compliance with internal procedures, and corporate policies to ensure compliance with CASL. [CRTC – Undertaking 9118-9076 and 9310-6359 – Quebec Inc.]

Encryption

CA – Citizen Lab Publishes Canadian Field Guide to Encryption

Shining a Light on the Encryption Debate: A Canadian Field Guide [see 107 pg PDF here] — co-authored by the Citizen Lab and the Canadian Internet Policy and Public Interest Clinic (CIPPIC) [here] — examines the parameters of the encryption debate, paying particular attention to the Canadian context. It provides critical insight and analysis for policymakers, legal professionals, academics, journalists, and advocates who are trying to navigate the complex implications of these technologies. The guide includes five sections: Section One provides a brief primer on key technical principles and concepts associated with encryption in the service of improving policy outcomes and enhancing technical literacy; Section Two explains how access to strong, uncompromised encryption technology serves critical public interest objectives; Section Three explores the history of encryption policy across four somewhat distinct eras, with a focus on Canada to the extent the Canadian government played an active role in addressing encryption; Section Four reviews the broad spectrum of legal and policy responses to government agencies’ perceived encryption “problem,” including historical examples, international case studies, and present-day proposals; and Section Five examines the necessity of proposed responses to the encryption “problem.” A holistic and contextual analysis of the encryption debate makes clear that the investigative and intelligence costs imposed by unrestricted public access to strong encryption technology are often overstated. [CitizenLab and at: BoingBoing]

EU Developments

EU – Eight Countries to Miss EU Data Protection Deadline

Eight EU states (Belgium, Bulgaria, Cyprus, the Czech Republic, Greece, Hungary, Lithuania and Slovenia) will not be GDPR-ready until far beyond the 25 May deadline. Vera Jourova, the European commissioner for justice, told reporters on Thursday (17 May) that she would not hesitate to take the EU capitals to court in serious cases, noting that member states have had more than enough time to get their acts together. She blamed negligence and domestic debates for the delays. Some data authorities say they will still be able to impose sanctions and fines regardless of the missing national legislation. Only Austria, Germany, France, Croatia, the Netherlands, Sweden and Slovakia are ready, with everyone else set to have their national acts passed by 25 May. Others, like Spain, Italy, Portugal, Romania and Latvia, are expected to be ready either at the end of May or the beginning of June. [EU Observer]

EU – Article 29 Working Party Issues Final Guidelines on Consent

On 10 April 2018, the Article 29 Working Party (WP29) published revised guidelines [download 31 pg PDF] on consent under the General Data Protection Regulation (GDPR). Consent is one of the six GDPR bases for the lawful processing of personal data. WP29’s draft guidelines on consent were issued earlier this year. This article examines the differences between the draft and final guidelines [along the following lines]: 1) Conditions for valid consent – freely given; 2) Unambiguous indication of wishes; 3) Explicit consent; 4) Children; 5) Interaction between consent and other lawful grounds for processing; and 6) Re-consenting. [Technology Law Dispatch]

EU – Article 29 WP Adopts Finalized Guidelines on Transparency

The Article 29 Working Party (WP29) adopted, on 11 April 2018, finalized guidelines on transparency (the Guidelines) under the General Data Protection Regulation (Regulation (EU) 2016/679) (GDPR) [download 40 pg PDF here], following its public consultation. Draft guidance on transparency was issued earlier this year, so this blog focuses on the key issues and what is new in the final guidelines [along the following lines]: 1) Information being “intelligible”; 2) Informing data subjects about changes to transparency-related information; 3) Providing information to children; 4) Clear and plain language; 5) Changes to Article 13 and 14 information; and 6) Layered privacy statements and notices. [Technology Law Dispatch]

EU – WP29 Issues Position Paper on GDPR Record-Keeping Obligation

The Article 29 Working Party (WP29) has published a position paper on the scope of the derogation from the obligation to maintain records of processing activities. Article 30.5 [see here] provides that the record-keeping obligation does not apply to organisations with fewer than 250 employees in certain circumstances. The WP29 has stated that the position paper was published as a result of a high number of requests from companies received by national Supervisory Authorities. Despite the existence of the derogation, the WP29 encourages SMEs to maintain records of their processing activities, as this is a useful means of assessing the risk of processing activities on individuals’ rights, and of identifying and implementing appropriate security measures to safeguard personal data. In light of the new accountability principle in the GDPR, requiring organisations to be able to demonstrate how they comply with their GDPR obligations, it would certainly be prudent for all organisations, regardless of size, to maintain such records. [Ireland IP]

UK – Court Orders Government to Rewrite Investigatory Powers Act

A UK court considered a request for judicial review of the retention provisions of the UK’s Investigatory Powers Act 2016, and ruled that the retention provisions of the Act are incompatible with fundamental rights in EU law (access to retained data is not limited to the purpose of combating “serious crime”, and access to retained data is not subject to prior review by a court or an independent administrative body); the government must amend the Act by November 1, 2018. [The National Council for Civil Liberties (Liberty) v. Secretary of State for the Home Department & Secretary of State for Foreign and Commonwealth Affairs – [2018] EWHC 975 (Admin) – England and Wales High Court (Administrative Court)]

Facts & Stats

UK – ICO Reports Data Incidents Spike 17%, Human Error Dominates

The number of data security incidents reported to the UK’s Information Commissioner’s Office (ICO) jumped 17% between the final three months of 2017 and the first quarter of 2018, according to new figures. In its last update [see here] before the EU GDPR takes effect, the privacy watchdog revealed a rise in incident reports from 815 to 957. Although cybersecurity-related incidents increased by 31% from the previous quarter, the first quarter-on-quarter increase since Q4 2016-17, human error dominated. In fact, over the 2017-18 financial year, 3,325 reports were filed with the ICO, with the number one breach type being “data emailed to incorrect recipient” (13%), followed closely by “data faxed to wrong recipient” (13%) and “loss or theft of paperwork” (13%). The healthcare sector accounted for by far the largest volume of reports (37%), although this figure is likely to be a result of mandatory reporting rules. After health came “general business” (11%), education (11%) and local government (10%). [Infosecurity Magazine and at: ISBuzz News]

US – Equifax Reveals How Much Information Was Really Exposed in Data Breach

How bad was Equifax’s data breach? Bad. In a new filing with the Securities and Exchange Commission, the credit reporting agency broke down in detail exactly what types of sensitive personal information were exposed to hackers in the breach, and how much. A statement for the record from Equifax included in the SEC filing breaks down what types of personal information and data were exposed in the September 2017 data breach. The disclosure comes at the urging of several Congressional committees. According to Equifax, the company recently sent a letter to several Congressional committees providing additional detail on the data that was exposed in the breach. In its letter, Equifax said that the names and dates of birth for approximately 146.6 million people were exposed, as well as 145.5 million Social Security numbers, the address information for 99 million people, the gender data for 27.3 million people, 20.3 million consumers’ phone numbers, 17.6 million driver’s license numbers, 1.8 million email addresses, 209,000 payment card numbers and expiration dates, 97,500 tax ID numbers, and the state information for 27,000 driver’s licenses. Additionally, Equifax noted that the hackers also gained access to images uploaded to the company’s online dispute portal by approximately 182,000 consumers, including: 38,000 driver’s licenses; 12,000 Social Security and tax ID cards; 3,200 passports and passport cards; and 3,000 other documents, including military and state IDs and resident alien cards. According to Equifax, it is releasing this information as part of its “commitment to transparency.” [Housingwire]

Filtering

CA – RTBF: PIPEDA Should Not Regulate Online Speech

The Stanford Center for Internet and Society comments on the Office of the Privacy Commissioner of Canada’s draft position on online reputation. Academics note that data protection laws lack well-developed standards that balance and protect expression rights and introduce unintended consequences (e.g. online platforms and search engines would be required to seek consent before processing user-generated information), and that platforms would likely comply with abusive or mistaken notices to avoid litigation risks. [Response to the OPC Consultation and Call for Comments on Draft Online Reputation Position Paper – Stanford Center for Internet and Society]

Finance

CA – OPC Concerned Bank Act Changes Could Open Door to More Data Abuses

Privacy Commissioner Daniel Therrien is expressing concern with new banking powers over customer data that are contained in the government’s latest budget bill, telling the Senate banking committee [here] that his office was never consulted on the Bank Act changes [see PR here & Commissioner’s remarks here]. Senators have heard conflicting testimony as to what the Bank Act changes [see Division 16 of Part 6 of Bill C-74, the Budget Implementation Act, 2018, No.1 here & Bill status here] would allow in practice. The Finance Department and the banking sector say they are simply about modernizing language to reflect the growth of financial technology firms, or fintechs. However critics warn that the changes would give banks new powers to sell customer data to fintechs, which are in many cases not subject to federal financial regulation. Mr. Therrien said more co-operation between banks and fintechs may be a good thing, but consumers should be able to approve how their data are used through a clear and understandable consent form. He said there is nothing in the bill that would ensure that is the case. [Globe & Mail]

FOI

CA – Government Won’t Appeal Decision in Star’s Challenge to Secrecy in Tribunals

The Ontario government will not be appealing a Toronto Star legal victory which should lead to more openness in the province’s tribunal system. Last month, the court ruled in favour of a constitutional challenge launched by the Star that sought greater access to records from such quasi-judicial bodies as the Human Rights Tribunal and the Landlord and Tenant Board. Justice Edward Morgan found [see Toronto Star v. AG Ontario, 2018 ONSC 2586 – here] that denying access to tribunal records was an “infringement” of the Charter of Rights and Freedoms that the provincial government had failed to justify; he ruled this violates section 2(b) of the Charter [here]. The judge gave the province one year to make the tribunal system more accessible to journalists and the public. Morgan declared as “invalid” provisions of Ontario’s Freedom of Information and Protection of Privacy Act (FIPPA) that delay or block public access to tribunal records. [Toronto Star] [Ontario Court Finds FIPPA Blocks Public Access to Tribunal Records | Toronto Star Newspapers Ltd. v. AGO – 2018 ONSC 2586 CanLII – Ontario Superior Court of Justice]

CA – Transparency Study Shows Inadequate Access Processes

The Citizen Lab, a university research group, compared responses to data access requests made under PIPEDA to 23 companies: telecommunications companies, fitness trackers, and online dating services. Responses to access requests varied widely in the types of data provided, the specificity of questions answered, and clarity about disclosures to third parties; there were also barriers to access, including identity verification procedures, secure data transfer requirements, fees charged, and companies stating they were not bound by Canadian privacy laws. [Approaching Access – A Comparative Analysis of Company Responses to Data Access Requests in Canada – The Citizen Lab | Coverage]

CA – CSIS Permitted to Refuse Access Request

The Federal Court of Canada reviewed the Canadian Security Intelligence Service’s response to an access request, pursuant to the Access to Information Act. The Federal Court of Canada upheld CSIS’ refusal to confirm or deny the existence of records identifying an individual; investigative records consist predominantly of sensitive national security information, and if such records did exist, they would likely be exempt from disclosure on the basis of protecting a CSIS investigation. [VB v. Canada Attorney General – 2018 FC 394 CanLII – Federal Court of Canada]

Health / Medical

CA – Ontario Health Minister: People Have a Right to ‘As Much Transparency as Possible’ When It Comes to Doctors’ Pasts

Ontario Health Minister Helena Jaczek says the province’s medical watchdog should provide patients with “as full a picture” as possible of physicians’ disciplinary and criminal histories, after a Toronto Star investigation found the public is being deprived of information about sanctions imposed in other jurisdictions. “Obviously, I’m in favour of as much transparency as possible,” Jaczek said in an interview at Queen’s Park. “I think that people have a right to know.” The Star’s 18-month investigation identified 159 disciplined doctors who have held licences on both sides of the Canada-U.S. border, and used public records to piece together their disciplinary histories across provincial, state and country lines. Ninety per cent of these doctors’ public profiles in Canada failed to fully report sanctions taken against them for a range of offences, including incompetence, improper prescribing, sexual misconduct and fraud, the investigation found. The College of Physicians and Surgeons of Ontario (CPSO), the self-regulating body that oversees the province’s doctors, recently amended its bylaws to allow it to post some information about discipline imposed in other jurisdictions on its physician profiles. However, the college only posts sanctions imposed on Ontario doctors outside the province after Sept. 1, 2015. Jaczek said the disciplinary information on the college’s website should be “retrospective.” NDP health critic France Gélinas said greater transparency by the CPSO should have been “mandated long ago.” “The health minister must demand the physicians’ college posts all disciplinary measures that have happened to any of their members, no matter what jurisdiction it’s from,” she said. “We know full well that physicians move. The CPSO is there to protect the public. People expect that. Let’s meet people’s expectations.” Earlier this week, Alberta’s health minister pledged to work with her province’s medical college to post information about sanctions imposed on its doctors by regulators in other jurisdictions. Sarah Hoffman also said she would review the college’s current practice of scrubbing all disciplinary details from doctors’ online profiles after five years. Unlike in the U.S., Canada has no national agency that collects and disseminates licensing and disciplinary information on doctors. The Star’s investigation found some Canadian physicians’ colleges keep secret basic information readily disclosed by other regulators. Quebec’s college, for example, told the Star that a physician’s credentials — when and where they graduated from medical school — are confidential information. The secrecy of Canadian colleges is in sharp contrast to their counterparts in the U.S., where consumer legislation governs many medical boards and mandates openness. [Toronto Star]

CA – SK OIPC Calls on Health Authority to Fire Employee Who Breached 880 Patient Files

Saskatchewan’s Information and Privacy Commissioner is recommending the provincial health authority fire an employee of the former Sun Country Regional Health Authority who accessed the information of 880 home care clients without a “need to know.” Ronald Kruzeniski, in a report issued on April 30, also recommended the Saskatchewan Health Authority send its investigation file to the Ministry of Justice’s public prosecutions division to determine whether an offence occurred and whether charges should be laid under the Health Information Protection Act. The employee’s name was not disclosed in Kruzeniski’s report. [Saskatoon StarPhoenix]

US – OCR To Share HIPAA Data Breach Settlements With Victims

OCR is proposing to share a percentage of HIPAA data breach settlements with victims, as required by the HITECH Act. In the HHS semiannual regulatory agenda [see RIN: 0945-AA04 here & full agenda here], OCR said it is soliciting the public’s view on establishing a methodology for those harmed by a data breach or other HIPAA violation to receive a percentage of any penalty or settlement resulting from the breach. The office plans to issue an advance notice of proposed rulemaking with the proposal in November. While this is an intriguing proposal, its implementation might be a huge challenge for OCR. “The devil is in the details. There are potential issues with this approach,” said Marcus Christian, a cybersecurity and data privacy attorney with the law firm Mayer Brown. [Health IT Security and at: Bloomberg Law here & here]

US – HHS Distinguishes Between Risk and Gap Analyses

The HHS Office for Civil Rights issued guidance on safeguarding electronic protected health information (ePHI) by conducting risk and gap analyses. Entities subject to the HIPAA Privacy and Security Rules must conduct analyses of all potential risks to ePHI, including identifying all potential threats and vulnerabilities, assessing the effectiveness of controls in place, and assigning risk levels. Gap analyses do not satisfy risk analysis obligations because they provide only a high-level overview of controls and do not thoroughly assess all ePHI risks. [HHS – Risk Analyses vs. Gap Analyses – What is the Difference]

Horror Stories

CA – Insider Threats: OIPC SK Finds Health Entity Had Insufficient Safeguards

This OIPC SK report investigates a complaint against the Saskatchewan Health Authority involving personal health information, pursuant to the Health Information Protection Act. An employee admitted to inappropriately accessing PHI in a healthcare system despite being permitted access only on a need-to-know basis; the employee had signed a confidentiality agreement (though four years prior) and had never received any privacy training (the employee had not heard of the Health Information Protection Act). [OIPC SK – Investigations Report 066-2018 – Saskatchewan Health Authority]

Identity Issues

CA – Ontario Issues First Non-Binary ‘X’ Birth Certificate

A Vancouver filmmaker and writer has received Ontario’s first non-binary birth certificate. Ontario-born Joshua M. Ferguson identifies as non-binary trans and uses the pronouns “they” and “them.” The birth certificate is marked with an “X” designation, indicating a non-binary person. Ferguson applied to Service Ontario for the document in 2017, and filed a human rights claim when it was not initially granted. The province issued new guidelines on gender designations last year. Ontario says it is the first jurisdiction in the world to implement a two-fold policy, allowing the selection of male, female or non-binary, and allowing the option of not displaying such identification on a birth certificate. Ferguson has also fought for an “X” designation on B.C. health cards. They were also among the first to have their application for an “X” designation approved under new rules for passports and other documents issued by Immigration, Refugees and Citizenship Canada. Ferguson called the Ontario birth certificate a “victory,” both personally and for all trans Canadians. “This policy makes it clear that non-binary people exist,” they said. “We are Ontarians and Canadian citizens.” [CTVNews]

Law Enforcement

UK – Amnesty Int’l Report Hits Met Police’s Gang Mapping Database

A secret police database aimed at tackling rising violence in London could lead to black families being evicted from their homes, as well as young people being denied access to education or employment, according to a new report. An investigation by Amnesty International [called “Trapped in the Matrix” – see PR here and 54 pg PDF report here] into the Metropolitan Police’s gang mapping database, called the Gang Violence Matrix, highlighted criticisms about the disproportionate number of young black males that feature on it. As well as the seemingly discriminatory nature of how information is collated, the report also raised serious concerns about how police officers share this data with housing associations, schools and job centres. [Voice Online and at: The Conversation UK, UKAuthority, CNBC and Apolitical]

Location

WW – Apple Reportedly Hits Back at Apps That Are Snooping on You

Apple is reportedly kicking third-party apps out of the App Store that share users’ locations, as privacy remains in the spotlight in the wake of the Facebook/Cambridge Analytica scandal and pending regulation across the Atlantic. 9to5Mac reports that Apple has recently been removing apps that share location data with third parties and sending the app developers a notice that the app violates two different parts of the App Store Review Guidelines. The two sections in question are Legal 5.1.1 and Legal 5.1.2, with the notices stating: “The app transmits user location data to third parties without explicit consent from the user and for unapproved purposes.” 9to5Mac noted that Apple also wants developers to explain what the location data is used for and how it is shared, and to ask users for permission. [Fox News]

CA – OPC Looking into Reports Bell, Telus, Rogers Shared Location Data

Privacy officials in Canada plan to look into reports that Canadian telecom companies share location data on subscribers with third parties, a practice that, in at least one case, appears to have allowed similar data on Americans to be accessed by police without a warrant. Bell, Rogers and Telus were named in an article on ZDNet.com as among the North American telecom companies selling real-time location data on subscribers to a company called LocationSmart. A spokesperson for the Office of the Privacy Commissioner said there were few details to share right now, but that the office would be looking into the matter. Telus did not respond to a request for comment, but spokespersons for Bell and Rogers said the location data in question is not directly shared by them. Instead, it is shared by Enstream, a joint venture owned by all three telecom companies; one of its partners is LocationSmart. Enstream is described on its website as providing identity verification services for third-party applications. It operates as a sort of hub for information held by the Canadian telecom companies, and others can buy access to the data to do things like verify mobile subscriber identity, allow a roadside assistance company to locate a caller, or verify credit card information used in mobile payment systems. Enstream has launched a security review of its relationship with LocationSmart in light of the reports. [Global News and at: Krebs on Security, CNNTech, Motherboard, WIRED and Reuters | Globe & Mail | The Star]

Online Privacy

WW – Period Tracking Apps Monetizing Your Menstrual Cycle

Women who use menstruapps are sharing information about their health, sex life and social behaviours that may be sold to advertisers. Whether it’s Clue, FitBit or Eve, there has been a surge in popularity for period tracker apps in recent years. According to researchers at Columbia University, ‘menstruapps’ are now the fourth most popular app category among adults and the second most popular among adolescent women in the ‘health apps’ category. From inputting information about moods, pain, cervical fluid and forms of contraception, the apps can be used to better inform women about their sexual health and indicate potential health issues. However, many of us won’t be considering that the apps also track and store vast amounts of personal information, which has the potential to make companies some serious money. Chupadados, a Brazil-based cyber security guide powered by women-led think tank Coding Rights, recently delved into what menstruapp users sign up for when they agree to an app’s terms and conditions. In studying the companies’ privacy policies, Chupadados found that ‘all of the apps rely on the production and analysis of data for financial sustainability’. In other words, the apps make money by sharing users’ personal information and activity on the apps with other businesses, and by targeting users for advertisements and product sales. In addition, users’ digital footprints help inform marketing strategies and business models. ‘Every piece of information that we put online becomes something valuable for companies, making our online activities a key component of their economic survival strategies,’ the website explains.
‘Feeding on our data, these tools serve as laboratories for observing physiological and behavioural patterns from period frequency and associated symptoms to users’ buying and Internet navigation habits.’ ‘Monitoring your cycle using a menstruapp means telling the app regularly if you went out, drank, smoked, took medication, got horny, had sex, had an orgasm and in what position, what your poop looked like, if you slept well, if your skin is clear, how you feel, and if your vaginal discharge is green, has a strong odour or looks like cottage cheese,’ Chupadados notes on its website. [Elle] See also: How Worried Should Parents Be About Apps and Websites Collecting Children’s Data?

WW – Data on 3 Million Facebook Users Exposed, Report Says

Researchers at the University of Cambridge uploaded user data from 3 million Facebook users onto a shared portal. They locked the data with a username and password. But students later posted the login credentials online. That exposed the data to anyone who did a quick web search to find the username and password, according to a report from New Scientist. In the new data exposure incident revealed by New Scientist, a different set of researchers collected user information with consent through a personality app, called myPersonality, and then made it available through a web portal. About four years ago, students with access to the data set posted the username and password online on the data sharing website GitHub. While the data was anonymized, privacy experts told the publication that it would be easy to associate data in the collection with the person who originally posted it on Facebook. The myPersonality app has been suspended since April 7. Facebook is aware that the login credential was published on GitHub; the issue was flagged in the company’s program for fielding information about potential misuse or abuse of Facebook user data. [CNET]

UK – “Safari Workaround” Class Action Could Cost Google $4.3 Billion

Google appeared in a UK court to argue against a privacy case brought by the group Google You Owe Us, representing 4.4 million iPhone users, that could see the search giant pay $4.3 billion if it loses. Each class member could receive about $1,000. The lawsuit, filed in July, alleged the tech company violated their privacy from 2011 to 2012 through the “Safari Workaround” [see here]. While Apple’s iOS devices have default privacy settings in the Safari browser, Google was able to bypass them and collect browser data without people’s consent, according to the allegations. The workaround was first discovered in 2012 by a Stanford University researcher. Google agreed to pay $17 million to 37 states and Washington, DC, in a 2013 settlement. The company also agreed to pay a $22.5 million fine to the Federal Trade Commission over the data-tracking practice. [CNET and at: Bloomberg, The Guardian, The Independent and AppleInsider]

Privacy (US)

US – Suspicionless Border Searches of Electronic Devices Unconstitutional

The U.S. Court of Appeals for the Fourth Circuit’s May 9 ruling in U.S. v. Kolsuz is the first federal appellate case after the Supreme Court’s seminal decision in Riley v. California (2014) to hold that certain border device searches require individualized suspicion that the traveler is involved in criminal wrongdoing. Two other federal appellate opinions this year—from the Fifth Circuit and Eleventh Circuit—included strong analyses by judges who similarly questioned suspicionless border device searches. EFF filed an amicus brief in Kolsuz arguing that the Supreme Court’s decision in Riley supports the conclusion that border agents need a probable cause warrant before searching electronic devices. EFF has long argued that border agents need a warrant from a judge, based on probable cause of criminality, to conduct electronic device searches of any kind. The Supreme Court’s pre-Riley case law, however, permits warrantless and suspicionless “routine” searches of items like luggage that travelers carry across the border, a rule known as the border search exception to the Fourth Amendment’s warrant requirement. Based on these pre-Riley cases, the government claims it has the power to search and confiscate travelers’ cell phones, tablets, and laptops at airports and border crossings for any reason or no reason at all, and without judicial oversight. While we would have liked to see the Fourth Circuit go further by expressly requiring a warrant for all border device searches, we’re optimistic that we can win such a ruling in our civil case with the ACLU against the U.S. Department of Homeland Security, Alasaad v. Nielsen, challenging warrantless border searches of electronic devices. [EFF and at: ACLU Blog, Reuters and Reason]

US – Justices Rule Unanimously for Driver in Rental-Car Case

The Fourth Amendment protects us from (among other things) a warrantless search of a place – such as our homes – that we can reasonably expect to remain private. Today the Supreme Court ruled that a driver who has permission to use a rental car is generally entitled to the same protections under the Fourth Amendment as the driver who rented the car. The court’s decision came in the case of Terrence Byrd [see all court docs for Byrd v. United States here], a New Jersey man who was driving a car rented by Latasha Reed, his fiancée (or former girlfriend, depending on whose account you are reading), when he was pulled over by a state trooper in Pennsylvania. The trooper gave him a warning for driving in the left lane and then searched the car, believing that he didn’t need Byrd’s consent because Byrd was not listed as an authorized driver on the rental agreement. The troopers found body armor and 49 bricks of heroin in the trunk, leading to federal charges against Byrd. In a unanimous decision by Justice Anthony Kennedy [see Byrd v. United States – 21 pg PDF decision here], the justices rejected the federal government’s argument that a driver who is not listed on the rental agreement can never have a reasonable expectation of privacy in the car, because the rental company has not given him permission to use it. That rule, the justices concluded, “rests on too restrictive a view of the Fourth Amendment’s protections.” Under the Supreme Court’s cases, the justices explained, whether someone has an expectation of privacy in a car shouldn’t hinge on whether the person who gave them permission to drive it owns the car or rented it. [SCOTUS Blog and at: Reason, ABA Journal, JURIST, The Associated Press (via WP) and Bloomberg]

US – Children and Minors: Updated Meaning of PI Benefits COPPA Safe Harbor

The Electronic Privacy Information Center responded to the FTC’s request for public comment on the Entertainment Software Rating Board’s COPPA safe harbor program. In its comments, EPIC urges the adoption of an enhanced definition of personal information to address changes in technology and prevent online operators from claiming an exemption from the scope of COPPA; argues that geographic limitations should be removed so that it is clear COPPA applies to all web operators regardless of a child’s residency or nationality; and says risk assessments and self-assessments are critical for necessity and proportionality. [EPIC – Comments to the FTC on COPPAs Entertainment Software Rating Boards Safe Harbor Program Application to Modify Program Requirements]

Security

US – Banks Adopt Military-Style Tactics to Fight Cybercrime

Cybercrime is one of the world’s fastest-growing and most lucrative industries. At least $445 billion was lost last year, up around 30% from just three years earlier, a global economic study found, and the Treasury Department recently designated cyberattacks as one of the greatest risks to the American financial sector. For banks and payment companies, the fight feels like a war — and they’re responding with an increasingly militarized approach. Former government cyberspies, soldiers and counterintelligence officials now dominate the top ranks of banks’ security teams. They’ve brought to their new jobs the tools and techniques used for national defense: combat exercises, intelligence hubs modeled on those used in counterterrorism work and threat analysts who monitor the internet’s shadowy corners. At least a dozen banks have opened fusion centers (a concept originally developed by the US DHS – see here) in recent years, and more are in the works. Having their own intelligence hubs, the banks hope, will help them better detect patterns in all the data they amass. Cybersecurity has, for many financial company chiefs, become their biggest fear, eclipsing issues like regulation and the economy. [NY Times]

UK – 41% of Cyber-Security Apps Contain High-Risk Open Source Vulnerabilities

According to the 2018 Open Source Security and Risk Analysis (OSSRA) report from Black Duck by Synopsys, published today, open source adoption in the enterprise is growing fast. Unfortunately, the statistics regarding vulnerabilities in open source codebases are equally high. Analysing anonymised data from more than 1,100 commercial codebases, the researchers found that 96% of the applications audited across 2017 contained open source components. Representing industries from automotive to healthcare, financial services to manufacturing, and even cyber-security, the report reckons this reflects a 75% growth in open source adoption over the previous year. Indeed, the research suggests that most applications now contain more open source code than proprietary code. All of which is good news for fans of open source. The less good news is that 78% of the audited codebases contained at least one open source vulnerability. More worrying, 54% of these vulnerabilities were considered high-risk, and 17% were very well publicised ones such as Freak, Heartbleed and Poodle. While the most vulnerable open source components were found within the Internet and Software Infrastructure vertical, with 67% of applications containing high-risk vulnerabilities, the cyber-security industry also fared badly, with 41% of apps containing them as well. [SC Magazine UK | Synopsys]

CA – Apps and Websites Collecting Children’s Data

In recent months, the Cambridge Analytica scandal has raised discussion over the privacy risks associated with online data collection. These risks apply to everyone, including young children, says Florian Martin-Bariteau [see here], assistant professor and director of the Centre for Law, Technology and Society at the University of Ottawa. He says websites and apps that are aimed at children may obtain more personal information about them than they, or their parents, realize. And there are concerns about how all this data may be used – whether it’s for personalized advertising, potentially accessed by hackers, or used by organizations aiming to influence users’ attitudes. Last month, a study in the journal Proceedings on Privacy Enhancing Technologies found thousands of popular children’s apps potentially violate U.S. privacy rules. Researchers found 73% of the 5,855 apps they analyzed transmitted confidential data over the internet, and nearly 20% of them collected identifiers or other personally identifiable information using software development kits (SDKs) that are not intended to be used for apps aimed at children. These findings echo the results of a 2015 global privacy sweep that found many websites and apps that were popular among children collected, and sometimes shared, personal data, including full names, genders and hometowns. That sweep, conducted by the Global Privacy Enforcement Network [see here], which included the Office of the Privacy Commissioner of Canada (OPC), found 62% of websites and apps popular in Canada said they may share users’ personal information with third parties, while only 29% sought parental consent before collecting children’s data. (Since then, the OPC has reported that some apps and websites had responded, including five targeted sites that said they had made changes, such as asking for a parent or guardian’s full name and contact details instead of the child’s.) [Globe & Mail]

Smart Cars

WW – The Vehicles Record Everything Around Them—And Can Be Used To Profile Pedestrians

According to officials at Waymo, the company developing Google’s self-driving cars, its autonomous vehicles are months away from reaching everyday people. Since January 2017, the organization has sent test cars to motor around cities including Atlanta, Austin, Detroit, and Phoenix. Driving more than 2.7 million miles without human input, the vehicles have only been involved in one accident—a fact that’s prompted Waymo’s chief executive John Krafcik to announce that its fleet could ferry ordinary Phoenix residents as soon as next year. In short—self-driving cars have arrived. There are, however, huge risks. Hacking, software failures, and letting computers make life-and-death decisions inspire unease among individuals. But for one industry expert, the biggest issues will be around data collection and privacy. “The technology works through a number of sensors. The principal one is lidar—a radar that uses infrared light to give a very accurate 3D picture. Then there’s radar for longer-distance detection, and ultrasonic sensors for things that are close—similar to the back-up warnings in a regular vehicle. There are also cameras with machine vision that check for traffic lights, road signs, and other obstacles. It sees 360 degrees around itself, 10 to 20 times a second. That’s a lot of data.” “These vehicles aren’t just going to have data on your journey,” he says. “They see everything from the road. Even if you don’t sign up to their services, if you’re out on the streets, it’s possible to see you regularly, profile you—even without facial recognition—and learn things about your habits.” “These cars are basically mobile sensors that gather data,” he continues. “We can use them wherever we want data to be collected. So overnight, between the hours of midnight and seven in the morning, most people aren’t looking for a ride. If a company owns a fleet of vehicles, they can offer those cars to businesses or factories that want external security. The vehicle can be parked outside the premises, and companies would pay to have it monitor the surroundings. There are whole new business models that could be based on the sensors on these vehicles.” [Straight]

CA – Smart Cars: Meaningful Consent Plays Vital Role

The OPC Canada appeared before the Standing Committee on Transport, Infrastructure and Communities regarding their study of automated and connected vehicles in Canada. The OPC Canada believes drivers do not necessarily need to control how information is used for road safety purposes and proper functioning of the vehicle, but many other situations should be subject to individual choice (e.g., collection and use of biometric or health data); in complex situations, consent should be supported by industry codes of practice, organizational accountability and privacy by design. [OPC Canada – Appearance Before the Standing Committee on Transport Infrastructure and Communities in Relation to its Study of Automated and Connected Vehicles in Canada.]

Surveillance

US – Spy Agency NSA Triples Collection of U.S. Phone Records: Official Report

The U.S. National Security Agency collected 534 million records of phone calls and text messages of Americans last year, more than triple the number gathered in 2016, a U.S. intelligence agency report [see PR here & 41 pg PDF here] released on Friday said. This occurred during the second full year of a new surveillance system established at the spy agency after U.S. lawmakers passed a law in 2015 that sought to limit its ability to collect such records in bulk. The 2017 call records tally remained far less than the estimated billions of records collected per day under the NSA’s old bulk surveillance system, exposed by Edward Snowden in 2013. The records collected by the NSA include the numbers and time of a call or text message, but not their content. The report also showed a rise in the number of foreigners living outside the United States who were targeted under a warrantless internet surveillance program, known as Section 702 of the Foreign Intelligence Surveillance Act, which Congress renewed earlier this year. [Reuters and at: Forbes, ZDNet, CSO Online, Common Dreams and GIZMODO]

Telecom / TV

CA – Ontario Bill Prohibits Unsolicited Phone Calls

Bill 27, the Stop the Calls Act, 2018, has been introduced for first reading in the Ontario Legislature. The Act would come into force two months after receiving Royal Assent. If passed, prior consent (orally, in writing, or by other affirmative action) must be obtained for calls selling or advertising a product or service; contracts entered into on the basis of an unsolicited call will be void (consumers are entitled to repayment for the product or service, and reasonable costs incurred for uninstalling and returning the product); and violations can result in fines ranging from $5,000 to $25,000. [Bill 27 – An Act to Prohibit Unsolicited Phone Calls for the Purpose of Selling, Leasing, Renting or Advertising Prescribed Products or Services – Legislative Assembly of Ontario | Bill Status | Bill Text]

+++

 

24 April – 05 May 2018

Biometrics

US – Legal Ambiguity Surrounds Biometric Authentication

A recent report involving a police attempt to use a dead man’s fingerprint to unlock his phone is a reminder of the problems with biometric security and the legal protections for users of the technology. Additional reporting shows the practice is “relatively common” among law enforcement, highlighting the legal ambiguity associated with biometric authentication. Although there may be vulnerabilities with passwords, the article states that current legislation does not extend the same protections to biometric authentication as it does to traditional passwords. [TechRepublic]

US – Facial Recognition May Be Coming to a Police Body Camera Near You

Axon, the maker of Taser electroshock weapons and the wearable body cameras now used by most major American city police departments, has voiced interest in pursuing face recognition for its body-worn cameras. It convened a corporate board devoted to the ethics and expansion of artificial intelligence, a major new step toward offering controversial facial-recognition technology to police forces nationwide. The technology could allow officers to scan and recognize the faces of potentially everyone they see while on patrol. A growing number of surveillance firms and tech start-ups are racing to integrate face recognition and other AI capabilities into real-time video. A group of 42 civil rights, technology and privacy groups, including the ACLU and the NAACP, sent board members a letter voicing “serious concerns with the current direction of Axon’s product development.” The letter urged an outright ban on face recognition, which it called “categorically unethical to deploy” because of the technology’s privacy implications, technical imperfections and potentially life-threatening biases. [WashPost and at: NBC News, PCMag, Fortune, Engadget and The Verge]

Canada

CA – OPC Canada Calls for Commitment to Privacy in Smart Cities

The OPC Canada and a number of provincial and territorial counterparts sent an open letter to the federal government regarding personal information handling in smart cities; the government recently launched a competition for submissions of proposals for smart city designs. Data that smart technologies collect and use can come from many sources, often without knowledge, consent or an opportunity to opt out; privacy impact and threat risk assessments must be conducted, data governance and privacy management programs put in place (appointing a privacy lead, breach response, and monitoring compliance), and full transparency of information practices provided. [OPC Canada – Joint Letter to the Minister of Infrastructure and Communities on Smart Cities Challenge | Press Release]

CA – Court Finds Tribunal Secrecy Unconstitutional

Ontario Superior Court declared “invalid” provisions of Ontario’s Freedom of Information and Protection of Privacy Act that delay or block public access to tribunal records [the judge ruled they violate section 2(b) of the Charter]. The province has one year to consider how to make its tribunal system more open and accessible to journalists and the public. Ontario’s network of provincial tribunals rules on matters as important as human rights, workplace safety and police conduct, and has been operating well outside the spirit and practice of an open court system for far too long. Tribunals were born of the court system and designed to hive off specialized matters and relieve overburdened courts. They were not created to drop a veil of secrecy over important matters of public interest. But that, unfortunately, is what has been happening far too often in Ontario. [Toronto Star v. AG Ontario, 2018 ONSC 2586 | Ontario Court says FOI statute fails in providing access to administrative tribunal records | Toronto Star | Ontario’s tribunals ‘fundamentally different’ from courts, province argues | Ontario says tribunals should not be as open as courts]

CA – Ontario Law Firm Files Class Action Suit Against Facebook

A London, Ontario-based law firm [Siskinds LLP] has launched a class action lawsuit against Facebook and its Canadian subsidiary for the social network’s role in the Cambridge Analytica data privacy scandal. The filing was submitted to the Ontario Superior Court of Justice on May 2nd, 2018 — the same day that Cambridge Analytica announced that it would be ceasing its operations in the U.K. The class action seeks $62,216,100 CAD in damages as well as an additional $1,000 for all Canadian Facebook users affected by the breach. Facebook reported that 622,160 Canadians were affected by the Cambridge Analytica privacy scandal. While the class action mentions Cambridge Analytica, it’s important to note that the lawsuit doesn’t seek damages from the U.K.-based data analytics company. Instead, the suit specifically outlines Facebook and Facebook Canada as the sole defendants. Siskinds lawyer Sajjad Nematollahi said that the “class action concerns the fundamental privacy rights of hundreds of thousands of Canadians, and engages the interests of Canadians at large in protecting the privacy of their affairs.” [Mobilesyrup and at: The Toronto Star]

CA – OIPC SK Permits Disclosure of Termination Letter

This OIPC SK report investigated the Northern Lights School Division No. 113’s disclosure of personal information pursuant to: The Local Authority Freedom of Information and Protection of Privacy Act; and The Local Authority Freedom of Information and Protection of Privacy Regulations. A school board whose former employee was assigned by a new employer to work in the board had the authority to disclose the letter to the new employer; the letter was a concise expression of the reasons why the board did not want the individual working in their schools or with their students, and the board respected the data minimization principle by redacting an irrelevant paragraph containing the individual’s PI. [OIPC SK – Investigation Report 296-2017 – Northern Lights School Division No. 113]

CA – IPC ON Upholds Utility’s Refusal to Confirm or Deny FOI Records

Ontario IPC order reviewed the response by Toronto Hydro Corporation to a request for records pursuant to the Municipal Freedom of Information and Protection of Privacy Act. Confirmation or denial of records concerning possible privatization would constitute an unlawful act by the utility under another governing provincial statute, and the utility has not made any public statement on the matter (the mayor may have, but not the utility itself). [IPC ON – Order MO-3575 – Appeals MA16-132 and MA16-133 – Toronto Hydro Corporation]

Consumer

US – Citizens Do Not Trust Tech Companies to Protect Their Data: Study

A survey conducted by HarrisX found U.S. citizens do not trust tech companies to protect their information. Of the respondents polled within 24 hours of Facebook CEO Mark Zuckerberg’s testimony on Capitol Hill, 83% said tougher regulations and penalties are needed for privacy breaches, while 67% said they support privacy legislation such as the EU General Data Protection Regulation. However, 38% believe the federal government is not capable of regulating large tech companies. When asked about specific tech companies, 44% said they do not believe Facebook cares about privacy, with Twitter having the next highest number at 33%. [Axios]

E-Government

CA – OIPC Ontario Says Smart Cities Privacy ‘Must Be Front and Centre’

Ontario’s information and privacy commissioner Brian Beamish believes that privacy and security need to be part of the discussions surrounding smart city projects in the province. In an April 26th, 2018 media release [see PR here; also see 5 pg PDF letter to Minister], Beamish wrote that “privacy and security of citizens must be front and centre in smart city projects.” Beamish’s statement comes amidst a series of smart city ventures taking place across Ontario — most notably, the Sidewalk Toronto venture between Waterfront Toronto and Alphabet’s Sidewalk Labs urban development firm. [Mobilesyrup | Critics seek more details from Sidewalk Labs about proposed Toronto neighbourhood, The Economist | Smart Cities Are The Next Frontier In The Data Protection Debate | Data solutions present ‘smart’ way for cities to grow, says Surrey | Sidewalk Labs proposal stirs fears, raises hopes]

CA – Ottawa Sees Internet Data Cloud as Alternative to Computer Systems

The federal government is willing to accept the privacy and security risks of storing data in the internet cloud as an alternative to its own aging computers that are “at risk of breaking down,” says an internal policy paper. The federal paper on “data sovereignty,” obtained through the Access to Information Act, fleshes out the government’s plan to embrace the cloud as a solution to its file management woes. Privately run cloud companies provide customers, such as federal departments, with virtual computer services — from email systems to vast storage capacity — using software, servers and other hardware hosted on the company’s premises. The government sees the cloud as a way to meet the needs of Canadians in an era of increasing demand for online services. However, the paper says, “a number of concerns” related to data control, protection and privacy have been raised within the government. [CBC and at: 6 Ways Cloud Computing Technology Is Changing | How cloud technology can help seal cyber loopholes | Box CEO Aaron Levie talks Canada, AI and the future of cloud computing]

CA – Electoral Reform Bill Lacks Voter Privacy Detail: Professor

According to professor Colin Bennett of the University of Victoria, the guidance laid out in the Liberals’ omnibus bill on electoral reform [the Elections Modernization Act – Bill C-76 – see PR here] about the use of public information during elections is “pretty minimal,” and not much more than what the parties already say in their privacy policies, which he has analyzed and believes have a lot of shortcomings. Bennett’s research on privacy rights, surveillance, social networks and their impact on democratic values led him to the House of Commons ethics committee last week. There, he urged the committee to acknowledge the urgent need to “bring our political parties within Canada’s regime of privacy protection law.” He noted that there is a severe lack of knowledge among the Canadian public, and within government, about how much data is gathered by political parties, with little to no accountability. How social media fits into the conversation, and whether data gathered on those platforms is included in the proposed privacy provisions, is also not addressed in Bill C-76’s current form. Canada’s current privacy protection laws as applied to political parties are scattered across institutions and laid out in a variety of regulations: PIPEDA’s mandate covers part of this issue, while the Privacy Act, the CRTC and the Canada Elections Act cover other aspects. [iPolitics, CBC News, CTV News, The Globe and Mail, IT World Canada and Liberal elections bill looks to make voting easier, tighten rules on privacy, spending]

US – Waze Announces Data-Sharing Agreement with Traffic Analytics Startup

Waze announced a data-sharing agreement with artificial intelligence–based traffic management startup Waycare. As part of the company’s Connected Citizens Program, Waycare will collaborate with Waze to combine anonymized navigation data crowdsourced from drivers who use Waze with Waycare’s traffic analytics, including proprietary deep learning algorithms, to figure out how to improve traffic and road conditions. While the partnership is currently active in Nevada, Florida and California, there are plans to expand coverage over the next year. [TechCrunch]

E-Mail

CA – Canadian Privacy Commissioner Investigates Rogers, Yahoo Over Email Terms of Service Issue

The OPC has confirmed that it is investigating both Rogers and Oath over recent changes to Oath’s terms of service agreement. Additionally, the OPC confirmed that the company [Oath] responsible for providing email services to Rogers email customers has removed the clause related to “personal data of friends and contacts” from its terms of service, as it was deemed unnecessary. The Rogers email service is powered by Yahoo, which was acquired by U.S.-based telecommunications service provider Verizon in 2017. In turn, Verizon — which also owns AOL — merged both Yahoo and AOL into a new company called Oath in 2017. [Mobile Syrup]