Biometrics
US – Amazon Facial Recognition Once Again Identifies Lawmakers as Criminals
Jeff Bezos must really be getting tired of these headlines coming up all the time. It seems that Amazon’s facial recognition software (known as Rekognition) has been subjected to yet another test and come up a little short. Or a lot short, particularly if you happen to be one of the more than two dozen state lawmakers who showed up as hits when matched against a database of known criminals. The “test” in question was performed by the American Civil Liberties Union (ACLU) [read blog post here]. Of all the facial recognition software out there that we’ve looked at, Amazon’s seems to be the one that winds up producing the most spectacular (and frequently hilarious) epic fails when put to independent testing. In that light, perhaps the ACLU wasn’t off the mark. Of course, the ACLU isn’t looking to improve the technology. This test was run so they can continue their campaign to prevent law enforcement from using the software. Democratic Assembly member Phil Ting of San Francisco (who was tagged as a felon) is quoted as saying, “While we can laugh about it as legislators, it’s no laughing matter if you are an individual who is trying to get a job, for an individual trying to get a home. If you get falsely accused of an arrest, what happens? It could impact your ability to get employment.” These types of scare tactics are all too common and should be derided. I’ve asked multiple times now and am still waiting for an answer to one simple question: does anyone have evidence of even a single instance where someone was misidentified by facial recognition and went on to be prosecuted (or persecuted, as Ting suggests) because the mistake wasn’t discovered? I’ve yet to hear of a case. Did the police show up and arrest Ting after he was misidentified? I somehow doubt it. Look, the technology is still in its infancy and it’s got a few bugs in it. They’re working them out as they go. Eventually, they’ll get it up to speed and the error rates should drop down to acceptable levels.
And if this software can help catch a suspect in a violent crime in a matter of minutes or hours rather than days or weeks after they were spotted by a security camera, that’s a tool that the police need to have. [Hot Air]
US – Industry Groups Criticize Facial Recognition Hysteria
Some in the tech industry are pushing back on claims made recently by advocates of bans on face recognition technology. The Security Industry Association [SIA] has published a report to combat misconceptions and provide perspective on facial biometrics [see: Face Facts: Dispelling Common Myths Associated With Facial Recognition Technology here], while the Information Technology & Innovation Foundation [ITIF] has specifically addressed recent claims by the American Civil Liberties Union [read ACLU Blog post here] that one in five California legislators were misidentified with the default settings of Amazon’s Rekognition system [here], issuing a statement panning the organization’s methodology. “The ACLU is once again trying to make facial recognition appear dangerous and inaccurate. But independent testing from the federal government has consistently shown that facial recognition technology is highly accurate. It now exceeds the accuracy of humans at identifying faces,” comments ITIF Vice President Daniel Castro [here]. “This is the second time the ACLU has released misleading findings. Last year, it used dubious methods to claim that facial recognition had high levels of inaccuracy, but it generated false matches by setting an artificially low confidence threshold of 80 percent instead of 99 percent. The ACLU claimed at the time that companies like Amazon were not clear about what the threshold should be. That wasn’t true then, and it isn’t true now. In the past year, Amazon has repeatedly stated that any sensitive application of facial recognition, such as for law enforcement purposes, should only be using high confidence thresholds. So, for the ACLU to repeat this kind of test a year later, while apparently not changing its methods—and still refusing to share its data—is disingenuous and misleading. Claims that are not observable, testable, repeatable, and falsifiable are not science.
It’s agenda-driven public relations, and policymakers should ignore it.” The pushback also comes as Big Brother Watch [here & wiki here] has published a report calling the use of facial recognition in UK shopping centers, museums, and conference venues an “epidemic” [read PR here]. [Biometric Update]
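The threshold dispute at the heart of the ITIF–ACLU exchange comes down to a simple filtering step: a face-matching system returns candidate matches with similarity scores, and the caller keeps only those above a chosen confidence threshold. The sketch below (names and scores are invented for illustration; this is not Amazon’s actual API) shows why dropping the threshold from 99 to 80 inflates the number of reported “matches”:

```python
# Hypothetical candidate matches from a face-matching system:
# (candidate_id, similarity_score out of 100). All values invented.
candidates = [
    ("mugshot_0042", 99.3),   # plausibly a true match
    ("mugshot_1187", 91.5),
    ("mugshot_0310", 86.2),
    ("mugshot_0777", 81.9),   # weak resemblances
    ("mugshot_0021", 64.0),
]

def matches_above(candidates, threshold):
    """Keep only candidates whose similarity meets the confidence threshold."""
    return [name for name, score in candidates if score >= threshold]

# At the 99 threshold recommended for sensitive uses,
# only the strongest candidate survives.
print(matches_above(candidates, 99))   # ['mugshot_0042']

# At an 80 threshold, three weak resemblances are reported
# as "matches" alongside the real one.
print(matches_above(candidates, 80))
```

With invented scores like these, the same gallery yields one hit at 99 and four at 80, which is the methodological point both sides are arguing over.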
US – Biometrics, Facial Recognition, Privacy, Security and The Law
A recent article in the L.A. Times indicated that facial recognition software proposed for use with police bodycams falsely indicated that about 20% of California legislators were criminals (insert political joke here), just as a previous study of members of Congress showed 28 legislators “matched” a database of criminals. The use of facial recognition software on massive databases like those of bodycams or dashcams has been challenged on the basis that such software is inaccurate and might lead to the wrongful arrest or even shooting of individuals based on incorrect identification. Indeed, while many states are banning the use of such bodycam facial recognition, some states, such as Illinois, generally prohibit the collection and use of biometric information without a written policy and informed consent. [Security Boulevard]
US – Presidential Candidate Sanders Vows to Ban Facial-Recognition Technology
Sanders’ presidential campaign website [here], in detailing his criminal justice reform plans, proposes “banning the use of facial-recognition software for policing” in order to ensure law enforcement accountability and robust oversight of policing. The criminal justice reform plan emphasises the need to “place a moratorium on the use of the algorithmic risk assessment tools in the criminal justice system until an audit is completed”. The plan also states: “We must ensure these tools do not have any implicit biases that lead to unjust or excessive sentences”. It also foresees a ban on federal programs that provide military equipment to local police forces. [Engineering and Technology]
WW – Is Webcam Facial Recognition Secure Enough?
The corporate conference room is a place for confidences, a place where the leaders of an organization should feel free to throw around ideas about a company’s future, its response to a crisis, and its plans for innovation. It should be a venue where thoughts are expressed safely in the knowledge that their contents won’t leave the room. Yet, as digital communications technologies take a more central position in these high-level discussions and remote colleagues are beamed into meetings via video conference, we are encountering new questions around privacy. Advanced features such as webcam facial recognition represent the next generation of biosecurity online, but recent video conferencing-related data breaches have raised doubts over the security of facial recognition in video conferencing technology. [VC Daily]
US – Facial Recognition: Will Passenger Scepticism Jeopardise Its Future?
Widespread distrust of the technology has begun to grip major cities in the US – mainly due to a lack of clarity about how officials use it. This led to San Francisco becoming the first city in the country to ban the use of facial recognition by city authorities and police in May 2019. Although airports run by the US Transportation Security Administration (TSA) – a federal agency – have been exempted from this ban, public scepticism of the technology is rapidly expanding to the aviation sector, leaving the industry to wonder: is facial recognition’s future in jeopardy? [Airport Technology | Facial Recognition Technology: Here Are The Important Pros And Cons]
EU – European Commission Crafting Facial-Recognition Regulations
The European Commission is exploring potential regulations focused on giving EU citizens rights regarding facial-recognition data. A commission official said “the indiscriminate use of facial recognition technology” by companies or in public would be curbed with any regulations, and people would know when any data is being used. The decision to draw up regulations comes after the U.K. Information Commissioner’s Office opened an investigation into the use of facial-recognition software at the King’s Cross development site. The regulations would also follow the EU’s commitment to create ethically based laws to govern artificial intelligence. [The Financial Times]
US – Grocery Company Claims Ill. BIPA Is Unconstitutional
The legality of the Illinois Biometric Information Privacy Act is being challenged in a Cook County Circuit Court by grocery company Albertsons. Albertsons has filed a motion claiming the law is unconstitutional, arguing that it sets up many private employers for huge judgments, while the government, its contractors and financial institutions are exempt from such issues. “If the BIPA was truly enacted to protect Illinoisans’ biometric data, to leave some of the biggest employers in the state unregulated, and thus their employees unprotected, and to allow those entities the benefit of not having to comply with the BIPA is nothing short of arbitrary,” Albertsons wrote in its motion. [Cook County Record]
Canada
CA – OPC Launches Investigation into CBSA’s Use of License-Plate-Reader System
The Office of the Privacy Commissioner of Canada has launched an investigation into the Canada Border Services Agency’s use of a compromised license-plate-reader system. The license plate system used by the CBSA was targeted in a cyberattack that impacted U.S. Customs and Border Protection. “Our office has continued to engage with CBSA and has initiated an investigation into the breach with respect to CBSA records,” OPC Spokesperson Vito Pilieci wrote in a statement. The CBSA confirmed it is in the midst of its own review to determine whether Canadian citizens were affected by the incident. [CBC.ca | CBSA launches investigation after licence plate reader linked to U.S. hack | Border agency still using licence plate reader linked to U.S. hack | US Customs and Border Protection says traveler images were taken in cyberattack | CBP says photos of U.S. travelers, license plate images were stolen in data breach | CBP says traveler photos and license plate images stolen in data breach]
CA – Police Licence-Plate Readers in Charlottetown Under Scrutiny
A Charlottetown man expressed concerns over privacy after he was hand-delivered a ticket for not having renewed his vehicle registration [read coverage]. The ticket came because a photo was taken by a licence-plate reader. These concerns are echoed by David Fraser [here & blog here], a privacy lawyer at McInnes Cooper in Halifax. He’s telling Charlottetown residents they should be asking questions about how police use their personal information obtained from licence-plate readers, saying people need to ask questions whenever police deploy a new technology that collects information, especially by automated means. He says, “In other jurisdictions using licence-plate recognition technology has kind of come under attack from privacy regulators, for example for retaining the information for longer than is necessary,” and other jurisdictions have used the information for secondary purposes like tracking individuals. Any networked camera system that stores its scans in a database can reveal a car’s movements, Fraser said. Because of that, if the database is maintained it could be used to find out where people live and work, he said. “The police tend not to be all that transparent about everything they do, and they’ll say it is for law enforcement purposes and try to end the conversation there,” Fraser said. He said similar technology has been used by private companies to do things like repossess cars, and sometimes that information is sold to third parties. The readers automatically scan plates and compare them to information in a database to determine if there are any violations connected to the vehicle. Charlottetown police said all scanned plates are stored on a server for two weeks and any found in violation are stored for three months. Charlottetown police have been using automatic licence-plate readers on two of their vehicles for the last year. Charlottetown Coun. Bob Doiron, who also chairs the protective services committee, said he doesn’t think Charlottetown police using automatic licence-plate readers is a privacy issue, but is open to having a discussion about the devices. [CBC News | Police licence-plate readers don’t concern Charlottetown councillor after privacy complaint | Police licence-plate readers raise privacy concerns for Charlottetown driver]
CA – Political Parties Yet to Take Privacy Measures Beyond Bill C-76
How have political parties responded since Bill C-76 went into effect? The bill requires political parties to post privacy policies on their websites that state what types of information they collect and how it is collected and protected. All the parties have adhered to Bill C-76; however, they have not implemented any of Privacy Commissioner of Canada Daniel Therrien’s voluntary privacy measures, such as giving citizens access to their data and being more transparent about how personal data is used. Privacy and Access Council of Canada President Sharon Polsky said political parties are only paying lip service to the notion of protecting privacy by only following the requirements of Bill C-76. [The Canadian Press | Federal parties subject to B.C. privacy laws: watchdog | Canada’s Political Parties Won’t Say What They Know about You | Federal parties’ privacy policies meet bare minimum required by new law | Canada’s political parties don’t meet voters’ privacy expectations: OpenMedia report | What’s in your file? Federal political parties don’t have to tell you]
CA – Federal Parties Subject to B.C. Privacy Laws: BC OIPC
Federal political parties may soon face scrutiny over how they collect and use personal information about Canadian voters. In a recent ruling, B.C. privacy commissioner Michael McEvoy [read Order P19-02, 30 pg PDF here] declared he has jurisdiction to investigate how two B.C. residents’ private emails ended up on the federal NDP’s mailing list. There are no rules or oversight governing how federal parties collect, store and analyze Canadians’ personal information. While federal parties have repeatedly downplayed the level of detail they collect on individual citizens, Canadians can only take them at their word; there is no independent oversight. The major exception is B.C., where the province’s privacy commissioner can investigate provincial parties’ use of personal information. According to the ruling, McEvoy’s office received a complaint from two residents in the B.C. riding of Courtenay-Alberni in April 2018. The complainants were concerned after they received an email invitation to a meet-and-greet with Jagmeet Singh in their riding. They asked the party how it had obtained their email addresses. Eight months later, the NDP responded. McEvoy’s ruling only addressed the question of whether he had jurisdiction to investigate the federal NDP’s data operations, not the substance of the complaint itself. The ruling can also be appealed. McEvoy’s office said they could not comment on the ruling, as their investigation is continuing. It’s unlikely that any final ruling from McEvoy’s office will come before the upcoming federal election campaign. While basic information such as email addresses and phone numbers may seem innocuous, political campaigns are increasingly focused on developing data and digital operations to refine their political outreach and advertising.
[The Star | Decision paves the way for federal riding associations in BC to be subject to BC’s data protection laws | Federal Political Parties Must Follow BC’s Privacy Law, Commissioner Rules | NDP leader invite spurred privacy complaint | Federal politicians could soon face B.C. privacy watchdog over party databases]
CA – OPC Publishes New Privacy Activity Sheets for Kids
The Office of the Privacy Commissioner of Canada, in collaboration with its provincial and territorial counterparts, has produced a new series of activity sheets to help young Canadians understand various privacy issues by presenting them in a visually appealing, easy-to-understand format [download here]. It is important that youth become savvy digital citizens who are able to enjoy the benefits of being online. Young people need to be equipped with the knowledge necessary to navigate the online world and participate in the digital domain while protecting their privacy. Because children go online earlier than ever before, parents and guardians should start talking to them about the digital world and online privacy much sooner than they used to [Here is a sample of four of the nine activity sheets]: 1) Privacy Snakes and Ladders [see 2 pg PDF here] is a twist on the classic children’s game that helps players learn how to make smart privacy choices by climbing up a ladder when they make a good decision or sliding down a snake because they have shared a password with a friend, for example; 2) Connect the Dots [see 2 pg PDF here] has kids complete the picture of a family with a checklist of rules they can use at home to practice good online privacy; 3) Learning About Passwords and Colour the Tablet [read 2 pg PDF here] challenges kids to create their own strong, eight-character password by filling in the blanks. It also asks them to draw a lock on a tablet, representing how a password protects an electronic device; and 4) Word Search [read 2 pg PDF here] introduces children to privacy vocabulary by having them comb through a puzzle to find words such as “post,” “click” and “footprint.” To download the activity sheets or for more activities and information, visit www.youthprivacy.ca. [News and announcements (Office of the Privacy Commissioner of Canada) | ‘Kids these days lead parallel online lives’: Alberta unveils online privacy lessons for kids]
CA – NS OIPC: Energy Dept Violated ‘Almost Every Provision’ of Access Law
Nova Scotia Information and Privacy Commissioner Catherine Tully said the province’s Energy Department violated “almost every provision” of the Freedom of Information and Protection of Privacy Act after releasing her office’s findings on an FOI request. Tully criticized the department for the length of time it took to complete the request. A citizen first made the inquiry into records on a pair of companies back in 2014. The commissioner also pointed to the department withholding 832 pages of documents when it finally fulfilled the probe. “This may have been a failure to conduct an adequate search or it may have been a failure to respond openly, accurately and completely,” Tully wrote in the report. “In either case, it was not in compliance with the law.” [CBC News]
Consumer
WW – Sharing Pet Photos Can Reveal Personal Information
While people are becoming more vigilant about sharing personal information about themselves, particularly on social media, they routinely forget to block their contact information when sharing pet photos. Sharing a pet photo is innocuous unless the pet owner’s phone number and address on the pet tag are visible. That phone number can be used to reset online passwords, and it is a key identifier in public databases containing relevant information about the pet owner, including name, address and even the names of family members. [Gizmodo]
E-Government
CA – B.C. Auditor General Rates Provincial Government’s Cyber Security
If your client has a formal process to disable computer network access for employees and contractors who no longer work there, that client’s cybersecurity is better in at least one respect than that of some British Columbia government departments. The province’s Office of the Auditor General [Carol Bellringer: see here] recently audited five government departments on how well they follow controls set by the Office of the Chief Information Officer (OCIO) to restrict unauthorized access to computer data, the Auditor General’s office said in the report released Aug. 13 [read PR here, watch 4:48 min Video Statement here & 27 pg PDF report here]. With its Internal Directory and Authentication Service (commonly known as IDIR), the B.C. government gives user accounts to employees and contractors so they can log on to workstations and access online services. The audit found that for 538 IDIR accounts still in use, the corresponding user’s employment status was “non-active.” The audit did not go so far as to look for inappropriate use of accounts or actual security breaches that could result from improper accounts. The audit asked whether the ministries were formally reviewing employees’ and contractors’ IDIR access rights at regular intervals to ensure their access rights are current and valid. The answer for all – except the corporate accounting services branch of the Ministry of Finance – was no. “Users that should no longer have access may still have access to government computer resources and information. This could result in unauthorized access and sensitive information being used for fraudulent activities,” the Office of the Auditor General said in the report. “Keeping electronic data safe requires a robust method for identifying users, determining what they can access and then controlling access appropriately.” The B.C. government collects sensitive information such as personal health records, social insurance numbers, birth records, and personal and government financial information. “Even a single poorly managed IDIR account could lead to fraud or to compromised government information and systems,” the Office of the Auditor General wrote. [Canadian Underwriter | B.C. government information controls inconsistent: auditor general | Government accepts all recommendations of OAG audit of internal directory account management]
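The control the auditors tested amounts to a periodic reconciliation: cross-reference the accounts that are still enabled in the directory against an HR feed of employment status, and flag anything owned by a non-active (or unknown) person for disabling. A minimal sketch of that check, with invented account names and field values rather than the actual IDIR or HR schema:

```python
# Hypothetical data: directory accounts still enabled, and an HR feed
# mapping each account owner to an employment status. All values invented.
enabled_accounts = {"asmith", "bjones", "cwong", "dlee"}
hr_status = {
    "asmith": "active",
    "bjones": "non-active",   # left government; account never disabled
    "cwong": "active",
    "dlee": "non-active",
}

def accounts_to_disable(enabled, status):
    """Flag enabled accounts whose owner is non-active or unknown to HR."""
    return sorted(a for a in enabled if status.get(a, "unknown") != "active")

print(accounts_to_disable(enabled_accounts, hr_status))  # ['bjones', 'dlee']
```

Treating accounts that HR does not recognize as flagged (rather than ignoring them) is a deliberately conservative choice, since orphaned accounts are exactly the risk the audit describes.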
WW – Report Shows Spike in Unlawful Data Use, Access by Chinese Apps
A new half-year study in China has revealed that a large portion of mobile apps are illegally using and accessing personal information. The National Computer Network Emergency Response Technical Team/Coordination Center of China analyzed 1,000 Chinese apps and found they required an average of 25 permissions each, while 30% of apps demanded access to call logs despite that data being unrelated to their operations. The apps also collected an average of 20 data items relating to individuals or their devices, including chat logs and location data. [Caixin Global]
Electronic Records
WW – IAB Releases Second Iteration of Transparency and Consent Framework
IAB Europe and IAB Tech Lab have published the second version of the Transparency and Consent Framework. The latest iteration comes after the Interactive Advertising Bureau sought feedback on the framework in April. A group of 55 organizations and 10 national IAB chapters worked to draft the new version of the guide. The second TCF increases, from five to ten, the number of purposes for which publishers and vendors can process personal data, and updates the legitimate-interest basis for processing personal data. “It was essential that the evolution of the framework was handled sensitively, with the final specifications able to be adopted in a manner consistent with differing business models in a wide range of operational markets,” IAB Europe CEO Townsend Feehan said. [The Drum]
Encryption
CA – Canada’s New and Irresponsible Encryption Policy (Opinion)
[This well-researched six-thousand-word essay documents] how the Government of Canada’s new encryption policy threatens Charter rights, cybersecurity, economic growth, and foreign policy. The Government of Canada has historically opposed the calls of its western allies to undermine the encryption protocols and associated applications that secure Canadians’ communications and devices from criminal and illicit activities. In particular, over the past two years the Minister of Public Safety, Ralph Goodale, has communicated to Canada’s Five Eyes allies that Canada will neither adopt nor advance an irresponsible encryption policy that would compel private companies to deliberately inject weaknesses into cryptographic algorithms or the applications that facilitate encrypted communications. This year, however, the tide may have turned, with the Minister apparently deciding to adopt the very irresponsible encryption policy position he had previously steadfastly opposed. To be clear, should the Government of Canada, along with its allies, compel private companies to deliberately sabotage strong and robust encryption protocols and systems, then basic rights and freedoms, cybersecurity, economic development, and foreign policy goals will all be jeopardized. This article begins by briefly outlining the history and recent developments in the Canadian government’s thinking about strong encryption. Next, the article showcases how government agencies have failed to produce reliable information which supports the Minister’s position that encryption is significantly contributing to public safety risks. After outlining the government’s deficient rationales for calling for the weakening of strong encryption, the article shifts to discuss the rights which are enabled and secured as private companies integrate strong encryption into their devices and services, as well as why deliberately weakening encryption will lead to a series of deeply problematic policy outcomes.
The article concludes by summarizing why it is important that the Canadian government walk back from its newly adopted irresponsible encryption policy. [Transparency and Accountability (The Citizens Lab) SEE ALSO: Australia’s data encryption laws an oppression of freedom: Joseph Carson | U.K. Home Secretary warns about Facebook potentially encrypting Messenger | Five Eyes nations demand access to encrypted messaging | Privacy concerns over Five Eyes plan to open up private messages | Five Eyes alliance calls for access to encrypted Facebook messages | Facebook is threatening to hinder police by increasing encryption, warns Priti Patel | Calls for backdoor access to WhatsApp as Five Eyes nations meet | WhatsApp And Other Encryption Under Threat After ‘Five Eyes’ Demand Access | ‘Illegitimate’ internet use under the microscope at Five Eyes meeting: Goodale]
EU Developments
EU – Spanish Supreme Court Deems Electric-Use Info to Be Personal Data
The Contentious-Administrative Chamber of the Spanish Supreme Court ruled information gathered via an individual’s use of electricity constitutes personal data. The court determined data is protected by the Organic Law 3/2018, of December 5, on the Protection of Personal Data and the Guarantee of Digital Rights when it is accessed by a third party, such as an employee tasked with the measurement of electrical activity. The ruling stems from an appeal made by the electric utility company Iberdrola against a Secretary of State for Energy resolution that gave staff members the ability to transfer billing and liquidation information. (Original article is in Spanish.) [Confidencial Judicial]
EU – Facebook Confirms EU Citizens’ Data Transcribed in Audio Capture
EU regulators may open new privacy investigations into Facebook after EU citizens’ data showed up in the social network’s audio transcriptions. Facebook initially reported no EU users were involved in the transcriptions, but it has now revealed 48 EU users had audio messages collected and transcribed by hundreds of third-party contractors. Such nonconsensual data collections may violate the EU General Data Protection Regulation. “All EU supervisory authorities in whose jurisdiction data protection violations against persons who have used Facebook Messenger have occurred are responsible for investigating the respective violations,” Hamburg Commissioner for Data Protection Johannes Caspar said, adding that cases will be taken over by respective national data protection authorities. [Politico]
UK – ICO Discusses Data Minimization, Privacy-Preserving Tactics for AI
The U.K. Information Commissioner’s Office has published a blog post to better inform the public on data minimization and privacy-preserving techniques related to artificial intelligence systems. The post from AI Research Fellow Reuben Binns and Technology Policy Adviser Valeria Gallo is part of the ICO’s call for feedback on its AI Auditing Framework. In the piece, Binns and Gallo break down what organizations may face when adopting AI systems, as well as provide the techniques to meet data minimization requirements set forth in the framework. [Source]
Facts & Stats
WW – Data Breaches Hit Record High in First Half of 2019
The number of exposed data records has 2019 on track “to be the worst year on record for data breach activity.” According to the 2019 MidYear QuickView Data Breach Report from Risk Based Security, 4.1 billion records have been exposed in data breaches so far this year, with 3,813 incidents publicly reported — up 54% from this time last year. Three breaches have made the top 10 list of largest breaches of all time, each affecting more than 100 million records. Email addresses and passwords were exposed in approximately 70% and 65% of reported breaches, respectively. [Threatpost]
Finance
EU – EU Regulators Launch Investigation into Libra Cryptocurrency
Regulators in the European Union have launched an investigation into Facebook’s Libra cryptocurrency. The European Commission sent questionnaires as part of a preliminary information-gathering exercise to those involved with the project, per a pair of sources close to the situation. According to documents seen by Bloomberg, the commission is “currently investigating potential anti-competitive behaviour” over concerns of “possible competition restrictions” through the use of consumer data. The investigation comes after regulators from around the world asked Facebook for answers on the privacy concerns surrounding the cryptocurrency. The commission and Facebook did not comment on the probe. [Financial Times]
FOI
CA – ON OIPC Orders Government of Ontario to Share Mandate Letters
Ontario’s freedom of information law [here, here & wiki here] is based on the principle that every individual has a right to access government information. This right exists to ensure the public has the information it needs to participate meaningfully in the democratic process, and that politicians and bureaucrats remain accountable to the public. There are, understandably, some necessary exceptions to the law. Those exceptions, written into the Freedom of Information and Protection of Privacy Act as “exemptions,” are designed to strike a balance between Ontarians’ fundamental right to know and the privacy and safety of individuals. They are also meant to be limited and specific. Labour relations, solicitor-client, and certain law enforcement records are examples of information that may be exempt from disclosure. The law also allows (rightly so) for the Premier and his cabinet to engage in free discussion of sensitive issues, in private. As such, cabinet documents cannot be disclosed if they reveal the substance of deliberations of the Executive Council or its committees. Order PO-3973, which I issued on July 15 [see 37 pg PDF here], dealt with a request for the mandate letters sent by Premier Ford [here & wiki here] to all Ontario government ministers. Cabinet Office denied access to the letters based on the premise that, as cabinet documents, they are automatically exempt from disclosure. Mandate letters have become common across Canada as a means to provide direction to ministers of incoming governments. They are frequently made public. After reviewing the mandate letters, I determined that they do not reveal government deliberations, the substance of any meetings, discussions, or any other options considered by the Premier’s Office. That is why I found that the exemption did not apply, and in Order PO-3973, I directed Cabinet Office to disclose the letters by August 16. 
The purpose of our freedom of information law is to support the public’s ‘right to know.’ Unless government records are exempt, they should be disclosed to the public. In this case, the mandate letters do not qualify for exemption as cabinet documents. I ordered their release because Ontarians have a right to know what the government’s policy priorities are. On August 14, my office received notice that the government intends to challenge my decision in court and prevent the release of the letters. Because it is now subject to a judicial review, I will not comment further on Order PO-3973, except to say that I stand by my decision, and hope to see a swift resolution. [Information and Privacy Commissioner of Ontario Blog | Ontario fights order to release documents outlining cabinet minister priorities | Ford government sues privacy commissioner to block release of cabinet letters]
CA – Names of City Staff Who Get Bonuses Should Be Public: NS OIPC
Halifax Regional Municipality should release the names of employees who receive bonuses and the amounts of the awards, says provincial Information and Privacy Commissioner Catherine Tully [see here]. The municipality had argued that disclosure of the names of the employees and their corresponding bonus amounts would be an “unreasonable invasion” of the employees’ personal privacy. Tully concluded in her August 22 report [read REVIEW REPORT 19-07 – 8 pg PDF here] that “The public has the right to know the amount of bonuses paid to individuals even though the disclosure reveals personal information of those individual employees.” Tully said the law is intended to both protect personal privacy and to promote transparency and accountability. Tully’s report also says: “The law includes rules on how to evaluate the balance between these two interests. With respect to performance-based payments to municipal employees, the law makes clear that the balance falls in favour of accountability and transparency. I find the annual individual salary adjustment increases based on performance are bonuses or rewards and as such fall within the meaning of remuneration. HRM has provided no argument or evidence that the (bonus) payments do not fit this definition.” Tully further found that disclosure of the individual salary adjustment increases “would not be an unreasonable invasion” of employees’ personal privacy. … The municipality could appeal to the Nova Scotia Supreme Court. [The ChronicleHerald]
CA – Yukon OIPC Rules PSC Correctly Handled Access-to-Info Request
Yukon Information and Privacy Commissioner Diane McLeod-McKay ruled the Public Service Commission correctly handled an access-to-information request. An applicant sought information about a PSC employee from between Nov. 20, 2017, and June 30, 2018. The PSC refused to confirm or deny the existence of the records under Section 13(2)(c) of the Access to Information and Protection of Privacy Act, a decision McLeod-McKay found to be the correct one. “Looking at the [ATIPP Act] as a whole, and its purposes, it is clear that exceptions to access to information are carefully crafted to limit access only as much as necessary to protect certain interests,” McLeod-McKay said in an interview on the decision. [Yukon News]
Genetics
US – Genetic Privacy in Question with Law Enforcement’s Use of DNA Tests
Privacy advocates are taking issue with FamilyTreeDNA allowing law enforcement to use the 1.5 million records in the company’s genetic database without a warrant or proper consent from users. “Taking a DNA test does not just tell a story about me. DNA tests inevitably reveal information about many other people too, without their consent,” University of Maryland Francis King Carey School of Law Associate Professor Natalie Ram said. “Should genetic databases be allowed to make up the rules as they go along?” Meanwhile, Government Technology reports on the growing privacy concerns with increased use of facial-recognition software in U.S. airports. [The Wall Street Journal]
Health / Medical
US – ONC Working with Congress, White House on Health Care App Privacy
The U.S. Office of the National Coordinator for Health Information Technology is working with Congress and the White House on app privacy for patients. Health care groups have raised concerns about patients using third-party apps not protected by the Health Insurance Portability and Accountability Act as their data may be used in ways they are not aware of, “such as by monetizing it or using it to target advertisements,” according to the report. ONC CEO Donald Rucker said they are working with “a number of folks on better ways of doing consent” to ensure patients are aware of possible secondary uses of data, adding “most patients are actually going to be as protective of their medical information as they are of their banking information.” [Modern Healthcare]
US – Survey: Patients Trust Health Care Agencies the Most to Protect Data
A recent survey from Harvard T.H. Chan School of Public Health and Politico found that Americans trust health care and banking institutions to protect their personal data. Of the 1,009 respondents, 75% ranked health care organizations the highest when it comes to protecting personal data, despite 32 million breached health care data records this year, while social media companies and internet search engines ranked last with 10%. “Broadly, while many Americans express serious misgivings about data privacy when it comes to social media sites and internet search engines, they report substantially more trust that their private health information will remain secure,” the researchers wrote. [HealthITSecurity]
Identity Issues
EU – Irish DPC Orders End to Processing of 3.2M Citizens’ Data Tied to Public Services Cards
The Irish Data Protection Commission has ordered the Department of Social Protection to stop all processing of the personal information of 3.2 million citizens in connection with its issuance of Public Services Cards, where PSCs are issued solely for individual transactions with other public bodies. The DPC found there is no legal basis for State agencies to require citizens to have a PSC to access services, such as renewing a driver’s license or obtaining a passport. The Irish Times reports Commissioner Helen Dixon anticipates her report will give rise to public questions about the PSC. The Fianna Fáil party has called on the DPC to release all its findings from the investigation. [DataProtection.ie]
WW – Noise-Exploitation Attack May Break Through Differential Privacy Methods
Researchers from Imperial College London and Université Catholique de Louvain discovered a noise-exploitation attack to break through query-based databases that use aggregation and noise to mask personal data. Imperial College London Assistant Professor and co-author of the research paper Yves-Alexandre de Montjoye said a party could exploit differential privacy should they send enough queries to eventually figure out “every single thing that exists in the database because every time you give me a bit more information. We start from one query and then we do a variation of it and by studying the differences between the queries we know that some of the noise will disappear, some of the noise will not disappear and by studying noise that does not disappear basically we figure out the sensitive information.” [TechCrunch]
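The averaging effect at the heart of the attack described above can be sketched in a few lines. This is an illustrative toy, not the researchers’ actual method: it assumes a hypothetical query API that adds fresh, independent Laplace noise to a sensitive count on every call, so repeating the query lets an attacker watch the noise cancel out.

```python
import math
import random

# Toy setup (invented for illustration): a "protected" count that the
# query interface masks with fresh Laplace noise on every request.
TRUE_COUNT = 42   # the sensitive value the noise is meant to hide
SCALE = 5.0       # Laplace noise scale

def noisy_count(rng):
    # Inverse-CDF sampling of Laplace(0, SCALE) noise
    u = rng.random() - 0.5
    noise = -SCALE * math.copysign(1, u) * math.log(1 - 2 * abs(u))
    return TRUE_COUNT + noise

rng = random.Random(0)
one_answer = noisy_count(rng)   # a single answer is well masked
# ...but because the noise is independent per query, averaging many
# repeated queries makes it cancel, recovering roughly the true count.
estimate = sum(noisy_count(rng) for _ in range(10_000)) / 10_000
print(round(estimate, 1))
```

This is exactly why practical differential-privacy systems enforce a privacy budget or return consistent noise for repeated queries, rather than drawing fresh noise each time.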
US – More Ill. Employers Accused of Violating BIPA With Fingerprint Scans
Two class-action suits have been filed in Cook County Circuit Court alleging employers violated the Illinois Biometric Information Privacy Act by requiring fingerprint scans for employees. The first case involves Bolingbrook-based D&D Manufacturing, which is accused of installing the biometric time clock without due notice to employees, who also did not authorize the use of their fingerprints. The second suit against Whole Foods Market Group alleges that the identities of employees at a River Forest location were at risk from the use of a similar biometric time clock. [The Cook County Record]
US – Defendants Seek Dismissal of Privacy Suit Over EHRs
Google and the University of Chicago Medical Center have filed a motion to dismiss a class-action suit over allegations related to deidentified electronic health records. In their motion to a federal court in Illinois, Google and the medical center argued their collaboration and patient data sharing were in compliance with the Health Insurance Portability and Accountability Act. They also claimed that their data sharing did not cause the plaintiffs any harm. The initial complaint by the plaintiffs alleged the data sharing involved date stamps of when patients checked in and out of the hospital, which Google could trace back to patients. [GovInfoSecurity]
Intellectual Property
WW – Developers Accuse Apple of Anti-Competitive Behavior With Its Privacy Changes in iOS 13
A group of app developers has penned a letter to Apple CEO Tim Cook, arguing that certain privacy-focused changes to Apple’s iOS 13 operating system [see here & wiki here] will hurt their business. Specifically, the developers accuse Apple of anti-competitive behavior when it comes to how apps can access user location data. With iOS 13, Apple aims to curtail apps’ abuse of its location-tracking features as part of its larger privacy focus as a company. Today, many apps ask users upon first launch to give their app the “Always Allow” location-tracking permission. Users can confirm this with a tap, unwittingly giving apps far more access to their location data than is actually necessary, in many cases. In iOS 13, however, Apple has tweaked the way apps can request location data. There will now be a new option upon launch presented to users, “Allow Once,” which allows users to first explore the app to see if it fits their needs before granting the app developer the ability to continually access location data. This option will be presented alongside existing options “Allow While Using App” and “Don’t Allow.” The “Always” option is still available, but users will have to head to iOS Settings to manually enable it. (A periodic pop-up will also present the “Always” option, but not right away.) The app developers argue that this change may confuse less-technical users, who will assume the app isn’t functioning properly unless they figure out how to change their iOS Settings to ensure the app has the proper permissions. The letter was signed by Tile CEO CJ Prober; Arity (Allstate) president Gary Hallgren; CEO of Life360, Chris Hulls; CEO of dating app Happn, Didier Rappaport; CEO of Zenly (Snap), Antoine Martin; CEO of Zendrive, Jonathan Matus; and chief strategy officer of social networking app Twenty, Jared Allgood. It’s another example of how erring on the side of increased user privacy can lead to complications and friction for end users. 
One possible solution could be allowing apps to present their own in-app Settings screen, where users could toggle the app’s full set of permissions directly — including everything from location data to push notifications to the app’s use of cellular data or Bluetooth sharing. [TechCrunch | Developers Call Apple Privacy Changes Anti-Competitive]
Internet / WWW
CN – Chinese Regulator Says Apps Are Collecting Excessive Personal Data
China’s National Computer Network Emergency Response Technical Team has told app operators to reevaluate the types and amounts of personal data being collected from users. The regulator noted in its half-year report that apps were found to be over-collecting personal data, which is a problem that needs immediate attention. “A large number of apps exhibit abnormal behavior, such as detecting other apps or reading and writing user device files, posing a potential security threat to the user’s information security,” the report stated. The call for rectification comes as China’s digital services continue to grow and become more susceptible to data privacy issues. [Tech in Asia]
WW – Google Shutters Mobile Insights Service Over Privacy Concerns
Google has shut down its Mobile Network Insights service over privacy concerns. The service was used to show wireless carriers the strength of their signals around the world through the use of data collected via Android devices. Google used only data from users who opted in to sharing location data, and the data did not contain any identifying information; however, the tech company still decided to shutter the service over concerns of regulatory scrutiny, according to sources close to the decision. Google also released the results of a survey in which it found hackers use “password-spraying” attacks due to online patrons’ continued use of the same password, even when it was compromised in a previous instance. [Reuters]
WW – Apple to Make App Store Changes to Protect Children’s Privacy
Apple plans to implement new rules for its App Store in an effort to protect children’s privacy. The tech company plans to ban any app targeted to children from using external analytics software to monitor who interacts with an app and how. Developers have raised issues about the impact the changes will have on their business models, as well as whether it will expose children to more adult apps. In response to those concerns, Apple said it will delay the rule change for now. “We aren’t backing off on this important issue, but we are working to help developers get there,” Apple Spokesman Fred Sainz said in a statement. [The Washington Post]
Law Enforcement
US – Calif. Supreme Court Expands Rules on Police Officer Disclosures
Police officers will see their right to privacy carry less weight in court cases following a decision by the California Supreme Court. Justices overruled a lower court decision that prohibited the Los Angeles County Sheriff’s Department from giving prosecutors the names of deputies accused of improper conduct. California has historically been protective of officer privacy, but a new law requires more public disclosure of police misconduct. Law enforcement unions have been unsuccessful in arguing that the law shouldn’t apply retroactively to cases of misconduct that occurred before the new law took effect. [CBS San Francisco]
Online Privacy
WW – Malicious Websites Secretly Hacked into iPhones for Years, Says Google
Security researchers at Google say they’ve found a number of malicious websites which, when visited, could quietly hack into a victim’s iPhone by exploiting a set of previously undisclosed software flaws. Google’s Project Zero said in a deep-dive blog post published late on Thursday that the websites were visited thousands of times per week by unsuspecting victims, in what they described as an “indiscriminate” attack. “Simply visiting the hacked site was enough for the exploit server to attack your device, and if it was successful, install a monitoring implant,” said Ian Beer, a security researcher at Project Zero. He said the websites had been hacking iPhones over a “period of at least two years.” The researchers found five distinct exploit chains involving 12 separate security flaws, including seven involving Safari, the in-built web browser on iPhones. The five separate attack chains allowed an attacker to gain “root” access to the device — the highest level of access and privilege on an iPhone. In doing so, an attacker could gain access to the device’s full range of features normally off-limits to the user. That means an attacker could quietly install malicious apps to spy on an iPhone owner without their knowledge or consent. Google privately disclosed the vulnerabilities in February, giving Apple only a week to fix the flaws and roll out updates to its users. That’s a fraction of the 90 days typically given to software developers, giving an indication of the severity of the vulnerabilities. [TechCrunch | Sources say China used iPhone hacks to target Uyghur Muslims | Apple Just Gave 1.4 Billion Users A Reason To Quit Their iPads, iPhones | Why the latest iPhone hack should worry you no matter what phone you use | iPhone Hackers Caught By Google Also Targeted Android And Microsoft Windows, Say Sources]
WW – Facebook Rolls Out Tool to Limit Data Collection from Other Companies
Facebook unveiled a feature designed to allow users to limit the data collected by businesses and applications that are then sent to the tech company. The tools to control “Off-Facebook Activity” give users the opportunity to remove shopping habits, web-browsing histories and other activities that are used for targeted ads from their accounts. Facebook officials said the feature will first be available to those in Spain, Ireland and South Korea, with more countries to be added in the coming months. Facebook Product Manager David Baser called the effort “the most powerful and comprehensive tool ever launched in the industry for this kind of data.” Meanwhile, developers have sent a letter to Apple CEO Tim Cook over privacy changes made in iOS 13. [The Washington Post | Facebook’s New Privacy Feature Comes With a Loophole | Facebook unveils new tools to control how websites share your data for ad-targeting | Facebook Begins Rolling Out New Tool to See Which of Its Data Pals Are Monitoring You | Facebook launches long-awaited privacy tool to clear your browsing history | Facebook’s Clear History privacy tool finally begins rolling out in three countries]
WW – Google Announces New Privacy Strategy, Will Limit Chrome Tracking
Google has proposed a new privacy initiative that aims to curb tracking by digital marketers and advertisers. Google’s plan focuses on a privacy budget, which websites can use to pull user information from a given browser and put it into a larger group of anonymous data. The privacy budget would allow users to retain anonymity and put a limit on application programming interface calls a website can make to a browser. Google Engineering Director on Chrome Security and Privacy Justin Schuh said the new proposal aims to “have the same kind of big, bold vision for how we think privacy should work on the web, how we should make browsers and the web more private by default.” [TechCrunch | Google proposes new privacy standards to protect web browsing data | Google proposes new privacy and anti-fingerprinting controls for the web | Google Chrome proposes ‘privacy sandbox’ to reform advertising evils | As browser rivals block third-party tracking, Google pitches ‘Privacy Sandbox’ peace plan]
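The budget mechanic described above can be sketched as a simple per-site counter. The class and method names here are invented for illustration and are not part of Google’s actual proposal; the idea is only that fingerprinting-relevant API calls are granted until a site exhausts its budget, after which they are denied (or answered with coarse, anonymized data).

```python
# Hypothetical sketch of a per-site "privacy budget" for identifying
# browser API calls. Invented names; not Chrome's real implementation.

class PrivacyBudget:
    def __init__(self, limit=3):
        self.limit = limit   # max budgeted API calls per site
        self.spent = {}      # site -> number of budgeted calls made

    def request(self, site, api):
        """Grant or deny one fingerprinting-relevant API call."""
        used = self.spent.get(site, 0)
        if used >= self.limit:
            return None  # budget exhausted: deny the real value
        self.spent[site] = used + 1
        return f"{api}: real value"

budget = PrivacyBudget(limit=2)
results = [budget.request("ads.example", api)
           for api in ("userAgent", "screenSize", "fonts")]
print(results)  # the third call is denied once the budget is spent
```

A real browser would also need to decide what counts against the budget and how much identifying information each call reveals, which is the hard part of the proposal.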
WW – Facebook Releases Document on Cambridge Analytica Timeline
Facebook and the District of Columbia attorney general released a document to the public about Cambridge Analytica. The document states Facebook employees made a request for an investigation into Cambridge Analytica’s data practices in September 2015. According to the tech company, it did not find out app developer Aleksandr Kogan sold user data to Cambridge Analytica until December 2015. Facebook Vice President and Deputy General Counsel Paul Grewal wrote in a blog post the document was made public to remove confusion over two separate issues. “One involved unconfirmed reports of scraping — accessing or collecting public data from our products using automated means — and the other involved policy violations by Aleksandr Kogan, an app developer who sold user data to Cambridge Analytica,” Grewal writes. “This document proves the issues are separate; conflating them has the potential to mislead people.” [NBC News]
EU – Dutch DPA Asks Irish DPC to Look into Microsoft’s Data Collection Practices
The Dutch data protection authority, the Autoriteit Persoonsgegevens, said Microsoft has remotely collected data from those who use Windows Home and Windows Pro. The DPA announced it discovered the practices as it tested the privacy protection it asked the tech company to implement back in 2017. “Microsoft has complied with the agreements made,” the Dutch authority said. “However, the check also brought to light that Microsoft is remotely collecting other data from users. As a result, Microsoft is still potentially in breach of privacy rules.” As a result, the DPA has asked the Irish Data Protection Commission to take on the probe. Microsoft said in a statement it is committed to protecting privacy and that it welcomes “the opportunity to improve even more the tools and choices we offer to these users.” [Reuters | Microsoft’s lead EU data watchdog is looking into fresh Windows 10 privacy concerns]
EU – Hamburg DPA Lays Out Legal Requirements for Google to Resume Audio Transcriptions
The Hamburg Commissioner for Data Protection and Freedom of Information met with representatives from Google to discuss its audio transcribing practices. The tech company is not allowed to transcribe audio recordings from its Google Assistant devices unless it meets requirements laid out by the commissioner. Google must receive informed consent from users in order to transcribe any audio, and it must be transparent about instances when a device is incorrectly activated. Hamburg Commissioner for Data Protection and Freedom of Information Johannes Caspar said that should the tech company violate the EU General Data Protection Regulation after it resumes its practices, “urgent measures can be taken at any time to protect the privacy rights of the users.” [Datenschutz]
Other Jurisdictions
IN – Supreme Court Warns Indian Government on Connecting Aadhaar with Social Media
India’s Supreme Court has heard from social media platforms on the possibility of the government linking the country’s Aadhaar identification system to social media accounts. After hearing pleas from Facebook and WhatsApp, Justice Deepak Gupta said such a connection would infringe on citizens’ privacy, adding that the court will eventually have to balance fundamental rights to privacy and security. Talk of the grouping of Aadhaar and social media accounts together began when Attorney General KK Venugopal opined that such a move would boost preventative measures against crime and terrorism. The court will take more responses from stakeholders before a hearing on the matter Sept. 13. [The Economic Times]
Privacy (US)
US – Lawmakers Ask 50 Companies About Student Data Collection
U.S. Sens. Richard Blumenthal, D-Conn., Edward Markey, D-Mass., and Richard Durbin, D-Ill., have asked more than 50 companies about the student information they have gathered and how it is used. The three senators signed two different letters, one that went to education technology companies and another that was sent to data analytics firms. “Education technologies (EdTech) can be important learning tools that allow teachers to follow student progress and facilitate collaboration,” the letter to edtech companies reads. “However, this technology may put students, parents and educational institutions at risk of having massive amounts of personal information stolen, collected, or sold without their permission.” [The Washington Post]
US – Bail Bondsman Obtains Location Data With Fake Calls to Carriers
A Colorado bail bondsman coaxed Sprint, T-Mobile and Verizon into providing him with the location data of bail jumpers through fraudulent phone calls. Matthew Marre posed as law enforcement when he contacted the phone carriers, telling them he was a member of the Colorado Public Safety Task Force dealing with an emergency that required location data on certain individuals. U.S. Sen. Ron Wyden, D-Ore., a frequent critic of phone carriers’ privacy practices, took issue with the carriers’ role in Marre’s case. “If true, these allegations would mark a new low in the ongoing scandal of wireless carriers sharing Americans’ location data without our knowledge or consent,” Wyden said in a statement. [The Daily Beast]
US – State Sen. Plans to Reintroduce Biometric Privacy Law to Fla. Lawmakers
Florida state Sen. Gary Farmer, D-Fla., plans to propose a previously rejected biometric privacy bill when the state legislature reconvenes. “[Sen.] Farmer does plan on filing it again. He sees it as an issue that we not only expect to face in the future, but in many respects are facing now,” said Jay Shannon, a legislative assistant in Farmer’s office. Farmer’s bill is said to draw parallels to the Illinois Biometric Information Privacy Act. Critics in the legislature say Farmer’s proposal could expose businesses to costly lawsuits that would be especially negative for small businesses. [The Florida Record]
US – FTC Reaches $30M Settlement with Company Over Deceptive Use of Lead Generators
The U.S. Federal Trade Commission announced it has reached a $30 million settlement with Career Education Corporation over claims it used lead generators in a deceptive manner. The agency alleged CEC took sales leads from lead generators to tell consumers they were associated with the military in order to market different schools. The FTC claimed the company used this misdirection to entice individuals to turn over information in order to help them find jobs or obtain benefits. CEC was also accused of violating the Telemarketing Sales Rule when it reached out to people on the National Do Not Call Registry. In addition to the $30 million, CEC is ordered to investigate the complaints filed against it related to the lead generators. [FTC.gov]
Privacy Enhancing Technologies (PETs)
WW – Using AI, Researchers Mask Emotions from Other AI-Based Products
Researchers from Imperial College London have devised new artificial intelligence software that helps users hide their emotions from other AI-based voice assistants. The technology filters emotional speech into “normal” speech to add a layer between a user and the device they’re using. Lead researcher Ranya Aloufi said the new software may be one of the few forms of data protection there is against voice assistants’ emotion sensors, which may “significantly compromise their privacy.” [Vice]
Security
US – US Seeing Surge of Ransomware Attacks
More than 40 municipalities have been hit with ransomware attacks over the last year. This particular type of cyberattack is not new, but the success of ransomware has led hackers to invest in further research and development for more precise attacks. “The business model for the ransomware operators for the past several years has proved to be successful,” Department of Homeland Security Cybersecurity and Infrastructure Security Agency Director Chris Krebs said. “Years of fine-tuning these attacks have emboldened the actors, and you have seen people pay out — and they are going to continue to pay out.” The latest instance of ransomware activity came this week when 22 Texas agencies were attacked. [The New York Times]
WW – Cybersecurity Analysts Say Human Error Responsible for Most Cloud Breaches
Cybersecurity researchers have found most cloud data breaches occur due to a lack of proper data protection and security measures. Research and advisory firm Gartner estimates up to 95% of cloud breaches stem from human error. The recent Capital One data breach is an example, with a flawed firewall implementation opening access for the hacker. “I still report on average one or two misconfigured [Amazon] S3 buckets per month and the data there is not encrypted. I haven’t seen any encrypted data within an S3 bucket for a long time,” said Bob Diachenko, cyberthreat intelligence director of consulting firm Security Discovery. [The Wall Street Journal]
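A toy version of the automated misconfiguration checks that catch errors like the ones described above might look as follows. The bucket-config shape and field names here are invented for the sketch; real tooling would pull the equivalent settings from the provider’s API (for S3, calls such as boto3’s get_bucket_acl and get_bucket_encryption).

```python
# Minimal, hypothetical sketch of a cloud-storage misconfiguration audit.
# The config dict format is invented for illustration purposes only.

def audit_bucket(config):
    """Return a list of human-readable findings for one bucket config."""
    findings = []
    if config.get("acl") == "public-read":
        findings.append("bucket is publicly readable")
    if not config.get("default_encryption", False):
        findings.append("no default server-side encryption")
    return findings

buckets = {
    "backups": {"acl": "private", "default_encryption": True},
    "exports": {"acl": "public-read", "default_encryption": False},
}
report = {name: audit_bucket(cfg) for name, cfg in buckets.items()}
print(report)  # only the misconfigured bucket produces findings
```

Running a check like this on a schedule is one way organizations reduce the human-error breaches the researchers describe, since a flagged bucket can be fixed before anyone finds it.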
US – Ill. Passes Bill to Improve Student Data Protection
Gov. JB Pritzker, D-Ill., has signed off on amendments to the Illinois Student Online Personal Protection Act. With the changes, parents now hold more control over their children’s data as the law now requires parents be notified about the details of student data collection, including what kind of data is being collected and why the data is being collected. Parents must also be notified within 30 days if the school suffered a data breach and 60 days if a third party is responsible for a breach. The amendments will take effect July 1, 2021. [WBBM Newsradio]
WW – Data Breach Affects Cloud Firewall Users
Cybersecurity and distributed-denial-of-service mitigation firm Imperva notified customers of a data breach affecting its Cloud Web Application Firewall product, previously known as Incapsula. The data breach only affected customers who had accounts with Cloud WAF through Sept. 17, 2017. Exposed data included email addresses, hashed and salted passwords, along with application programming interface keys and customer-provided secure sockets layer certificates for a select number of customers. Meanwhile, Presbyterian Healthcare Services has notified 183,000 patients their personal information was exposed due to a phishing scam. [ZDNet]
Smart Cities and Cars
WW – Smart Cities May Be Vulnerable to Cyberattacks
Cities may be under-funding digital security as they build out the networks that underpin smart-city development. Of the more than $135 billion allotted for protecting digital infrastructure against cyberattacks in 2024, more than half is projected to go to other sectors such as the financial, IT and defense sectors. This will leave cities “woefully underfunded and incredibly vulnerable to cyberattacks,” according to ABI Research officials. [GCN]
WW – Carmaker Use of Tracking Sensors Raises Privacy Concerns
Mercedes-Benz is using sensors to track and repossess vehicles. According to a spokeswoman for Mercedes, drivers agree to location tracking when they purchase the car. Privacy advocates have raised concerns about the practice, saying it may expose the information to hackers or exploitation. As of March 2018, all new cars built in the European Union must come with location sensors that can transmit data to emergency services if an accident occurs. Meanwhile, MediaPost reports two Colorado residents dropped their location privacy lawsuits “without prejudice” against Google and Apple but not the suit filed against Facebook. [CNN]
HK – Hong Kong Protesters Take Down Smart Lampposts
Protesters in Hong Kong have torn down 20 smart lampposts over fears of government surveillance. The government previously noted the cameras in the lampposts were used for traffic monitoring and other simple monitoring functions, not for keeping tabs on citizens. “August 24 was a dark day for Hong Kong’s innovation and technology,” Secretary for Innovation and Technology Nicholas Yang Wei-hsiung said. “Some people ignored facts and used conspiracy theories to claim smart lamp posts are a privacy risk. We have been clear and transparent from the start, but in return, we get damage. We are rather disappointed.” [The South China Morning Post]
+++