1-15 October 2018


US – Feds Force Suspect to Unlock an Apple iPhone X With Their Face

A child abuse investigation unearthed by Forbes [PDF] includes the first known case in which law enforcement used Apple Face ID facial recognition technology to open a suspect’s iPhone. That’s by any police agency anywhere in the world, not just in America. It happened on August 10, when the FBI searched the house of 28-year-old Grant Michalski, a Columbus, Ohio, resident who would later that month be charged with receiving and possessing child pornography [see August 24 DoJ PR]. With a search warrant in hand, a federal investigator told Michalski to put his face in front of the phone, which he duly did. That allowed the agent to pick through the suspect’s online chats, photos and whatever else he deemed worthy of investigation. While the feds obtained a warrant, and appear to have done everything within the bounds of the law, concerns remain about the use of such tactics. “Traditionally, using a person’s face as evidence or to obtain evidence would be considered lawful,” said Jerome Greco, staff attorney at the Legal Aid Society. “But never before have we had so many people’s own faces be the key to unlock so much of their private information.” Thus far, there has been no challenge to the use of Face ID in this case or others. But Fred Jennings, a senior associate at Tor Ekeland Law, said challenges could come thanks to the Fifth Amendment, which protects individuals from incriminating themselves. [Forbes | Additional coverage at: Naked Security (Sophos), The Verge and Ars Technica]


CA – Draft Guidance Released Regarding Mandatory Breach Reporting Under PIPEDA

On September 17, 2018, the Office of the Privacy Commissioner of Canada (OPC) released draft guidance regarding PIPEDA’s new mandatory security and privacy breach notification requirements, which come into force on November 1, 2018. This guidance contains helpful information regarding how and when to report breaches of security safeguards to the OPC, the corresponding notice that must be provided to individuals, and record-keeping obligations associated with such breaches. Of particular note, this guidance provides the following key pieces of information and clarification:

  • Not all breaches must be reported to the OPC. Only those breaches that create a “real risk of significant harm” to an individual are the subject of mandatory reporting obligations;
  • Reporting should commence as soon as possible once the organization determines that a breach creates a real risk of significant harm;
  • The obligation to report resides with the organization in control of the personal information that is the subject of the breach;
  • A report made to the OPC must contain information regarding the date of the breach, the circumstances of the breach, the personal information involved, and the number of individuals affected;
  • When a breach creates a real risk of significant harm, the individuals whose personal information was the subject of the breach must also be notified of the breach;
  • If a breach may also be mitigated or the risk of harm reduced via notification of other government institutions or organizations, then notification of these bodies must also occur; and
  • The obligation to maintain records regarding breaches is not limited to only those breaches that are reportable to the OPC.

The draft guidance includes a PIPEDA breach report form, which can be used by organizations to report security and privacy breaches to the OPC following the effective date of the breach notification requirements. The draft guidance and breach report form are consultation documents, and as such, the OPC invited stakeholders to provide feedback on both documents by October 2, 2018. The final versions of both documents will be published in time for November 1, 2018. [Mondaq]

CA – OPC Seeks Federal Court Determination on Key Issue for Canadians’ Online Reputation

The Office of the Privacy Commissioner of Canada (OPC) is turning to the Federal Court to seek clarity on whether Google’s search engine is subject to federal privacy law when it indexes web pages and presents search results in response to queries of a person’s name. The OPC has asked the court to consider the issue in the context of a complaint involving an individual who alleges Google is contravening PIPEDA [OPC guidance] by prominently displaying links to online news articles about him when his name is searched. The complainant alleges the articles are outdated, inaccurate and disclose sensitive information about his sexual orientation and a serious medical condition. By prominently linking the articles to his name, he argues Google has caused him direct harm. Google asserts that PIPEDA does not apply in this context and that, if it does apply and requires the articles to be de-indexed, it would be unconstitutional. Following public consultations, the OPC took the view [see position paper] that PIPEDA provides for a right to de-indexing – which removes links from search results without deleting the content itself – on request in certain cases. This would generally refer to web pages that contain inaccurate, incomplete or outdated information. However, there is some uncertainty in the interpretation of the law. In the circumstances, the most prudent approach is to ask the Federal Court to clarify the law before the OPC investigates other complaints into issues over which the office may not have jurisdiction if the court were to disagree with the OPC’s interpretation of the legislation. A Notice of Application [see here], filed today in Federal Court, seeks a determination on the preliminary issue of whether PIPEDA applies to the operation of Google’s search engine. In particular, the reference asks whether Google’s search engine service collects, uses or discloses personal information in the course of commercial activities and is therefore subject to PIPEDA. 
It also asks whether Google is exempt from PIPEDA because its purposes are exclusively journalistic or literary. While Google has also raised the issue of whether a requirement to de-index under PIPEDA would be compliant with s. 2(b) of the Charter, the OPC has decided not to refer this issue to the Court at this stage. The Charter issue may not need to be addressed depending on how the reference questions are answered. The Charter issue is also highly fact based and would require an assessment of the facts of the complaint, making it inappropriate for a reference. Investigations into complaints related to de-indexing requests will be stayed pending the results of the reference. The Privacy Commissioner’s office will also wait until this process is complete before finalizing its position on online reputation. [Office of the Privacy Commissioner of Canada | Coverage at: Will Canadians soon have the ‘right to be forgotten’ online? Here’s what you need to know | Privacy czar asks Federal Court to settle ‘right to be forgotten’ issue | Privacy watchdog asks Federal Court to rule on Google de-indexing question]

CA – B.C. Political Parties Face Personal Data Collection Investigation

How British Columbia’s political parties harvest and use personal information from social media will be subject to an Office of the Information and Privacy Commissioner investigation within the next month, Commissioner Michael McEvoy said Sept. 28 in his comments in Vancouver to B.C. Information Summit 2018 delegates. McEvoy said reviews of how parties use information have already led to auditing in the United Kingdom, where he has assisted the work of that country’s information commissioner, his B.C. predecessor. “That is something we are going to be doing in British Columbia,” he said. “Politicians realize that uses, misuses and abuses of data in a personal context can change elections,” University of Victoria political science professor Colin Bennett [here] said. “Political affiliation is something that should only be captured with individual consent.” He said political parties “are the major organizations that fall between the cracks of a privacy regime that is either federal or provincial or is corporate or government.” Political parties identifying their voter bases can vacuum up personal information shared on social media. And that can start with something as simple as an election voters’ list readily available to political parties. Bennett said use of the list is excluded from the no-phone-call regulations of the Canadian Radio-television and Telecommunications Commission designed to prevent nuisance calls. As well, Bennett explained, parties are not covered by federal anti-spam legislation. He said the sections of the proposed federal Election Modernization Act [Bill C-76 here] that are supposed to deal with privacy are “basic and incomplete.” Further, Bennett said, parties do have privacy policies, but those are vague and don’t necessarily mesh with each other. Other speakers said greater oversight is needed over how Canadian political parties collect and use voters’ personal information. [Kamloops Matters]

US – U.S.-Mexico-Canada Pact Covers Data Privacy, Local Storage Rules

The U.S., Canada, and Mexico would have to adopt data protection measures under a deal aimed at replacing the North American Free Trade Agreement. Those measures should include provisions on data quality, collection restrictions, and transparency, according to text of the U.S.-Mexico-Canada Agreement released by the U.S. Trade Representative’s Office. Under the deal, governments would have to publish information on how businesses can comply with the rules and the remedies that individuals can pursue. The agreement reflects an increased awareness of data protection issues following the EU’s adoption of new privacy rules and the Cambridge Analytica scandal involving Facebook Inc. data. It would direct the three countries’ governments to exchange information on data protection policies and work together to promote digital trade. The agreement also would ban rules requiring data to be stored locally and prohibit restrictions on data flows for business purposes. Lawmakers in all three countries must approve the deal for it to take effect. Tech industry groups supported the pact’s digital trade and data privacy provisions. [Bloomberg BNA | See also: Key takeaways from the new U.S.-Mexico-Canada Agreement]

CA – USMCA Falls Short on Digital Trade, Data Protection and Privacy: Geist

The United States-Mexico-Canada Agreement (USMCA) is more than just an updated version of the North American Free Trade Agreement. With the inclusion of a digital trade chapter, the deal sets a new standard for e-commerce that seems likely to proliferate in similar agreements around the world. The chapter raises many concerns, locking in rules that will hamstring online policies for decades by restricting privacy safeguards and hampering efforts to establish new regulation in the digital environment. For example, the USMCA includes rules that restrict data localization policies that can be used to require companies to store personal information within the local jurisdiction. Jurisdictions concerned about lost privacy in the online environment have increasingly turned to data localization to ensure their local laws apply. These include the Canadian provinces of British Columbia and Nova Scotia, which have data localization requirements to keep sensitive health information at home that may be jeopardized by the agreement. It also bans restrictions on data transfers across borders. That means countries cannot follow the European model of data protection that uses data transfer restrictions as a way to ensure that the information enjoys adequate legal protections. In fact, countries could find themselves caught in a global privacy battle in which Europe demands limits on data transfers while the USMCA prohibits them. The chapter fails to reflect many global e-commerce norms, and its restrictions on policy flexibility over key privacy issues may quietly become established as the go-to international approach. [The Washington Post | Experts say USMCA frees Canadian data — but with unknown risks]


WW – Privacy Advocates Face Negative Stereotyping Online

New research from HideMyAss! has revealed that people around the world perceive privacy advocates as untrustworthy, paranoid, male loners with something to hide, despite their own professed views on privacy. [PR, blog post & report] The security software firm partnered with Censuswide to survey 8,102 people from the UK, US, France and Germany to compile its new report. Even though two fifths of those surveyed (41%) agreed that privacy is an indispensable human right, 80% believed their online history could be accessed without their knowledge by governments, hackers, police and partners. The research also highlighted a general apathy towards protecting privacy, as more than one in five admitted they take no action to protect it. Of those who do take action, 78% rely on some form of password protection as their main privacy measure. More than half (56%) of respondents claim to never share their password with anyone, and 22% do not save passwords on their browsers or devices. HideMyAss! also found that while there is overwhelming support for people using the Internet privately for legal activities (74%), 26% of respondents believe that people who aren’t willing to divulge what they do online have something to hide, with 24% expecting them to be untrustworthy and more than a fifth (22%) of the opinion they are more likely to have a criminal record. When it comes to the particular traits of privacy advocates, respondents said they could be paranoid (52%), loners (37%) or people partial to spying on their neighbours (36%). [TechRadar]


US – DOJ Releases “Best Practices for Victim Response and Reporting of Cyber Incidents,” Version 2.0

On September 27, 2018, the U.S. Department of Justice Computer Crime and Intellectual Property Section (CCIPS) Cybersecurity Unit released Version 2.0 of its “Best Practices for Victim Response and Reporting of Cyber Incidents” [PDF]. Originally issued in 2015, the updated guidance seeks to help organizations better equip themselves to be able to respond effectively and lawfully to cyber incidents. The updated version distills insights from private and public sector experts, incorporating new incident response considerations in light of technical and legal developments in the past three years. While the guidance is designed mostly to be applicable to small- and medium-sized businesses, it may be useful to larger organizations as well. Similar to Version 1.0 [PDF] (see previous analysis here), the updated guidance is divided into several parts, advising companies on steps to take before, during, and after a cybersecurity incident. While the document is not intended to have any regulatory effect, the guidance is a useful tool for organizations seeking to make sure their data security policies align with today’s best practices. [Privacy & Data Security Blog (Alston & Bird)]

Electronic Records

CA – Clinical Trial Data Not Quite Confidential: Federal Court

On July 9, 2018, the Federal Court released its decision ordering Health Canada to provide the results of certain clinical trials, including participant level datasets, to an American researcher: Doshi v Canada (Attorney General), 2018 FC 710 [PDF]. Health Canada requires researchers to sign a standard confidentiality agreement in order to release clinical trial data for the purpose of research. On the basis of the researcher’s refusal to sign the standard confidentiality agreement, Health Canada unsuccessfully attempted to keep confidential the requested reams of clinical trial data. At issue was the interpretation of subsection 21.1(3) of the Protecting Canadians from Unsafe Drugs Act (“Vanessa’s Law”) [Overview & FAQ]. The case is interesting not only because it was the first time the court was called upon to apply Vanessa’s Law, but also because the court was required to decide other important ancillary issues, such as the confidential nature of clinical trial data and the bearing such nature may have on freedom of expression under section 2(b) of the Canadian Charter of Rights and Freedoms. In light of administrative law principles concerning the exercise of discretionary powers, Justice Grammond held that it was unreasonable for Health Canada to impose a confidentiality requirement as a condition for the disclosure of the data requested by Dr. Doshi (para 87). Following the Federal Court decision, Health Canada indicated that it is working on regulations to publicly release a large amount of information in clinical trial reports for a wide range of medications. Stakeholders should watch out for new developments on this front. [CyberLex Blog (McCarthy Tetrault)]

EU Developments

EU – CNIL Publishes Initial Assessment on Blockchain and GDPR

Recently, the French Data Protection Authority (“CNIL“) published its initial assessment of the compatibility of blockchain technology with the EU General Data Protection Regulation (GDPR) and proposed concrete solutions for organizations wishing to use blockchain technology when implementing data processing activities [see 11 pg PDF in French]. The CNIL made it clear that its assessment does not apply to (1) distributed ledger technology (DLT) solutions and (2) private blockchains. In its assessment, the CNIL first examined the role of the actors in a blockchain network as a data controller or data processor. The CNIL then issued recommendations to minimize privacy risks to individuals (data subjects) when their personal data is processed using blockchain technology. In addition, the CNIL examined solutions to enable data subjects to exercise their data protection rights. Lastly, the CNIL discussed the security requirements that apply to blockchain. The CNIL made a distinction between the participants who have permission to write on the chain (called “participants”) and those who validate a transaction and create blocks by applying the blockchain’s rules so that the blocks are “accepted” by the community (called “miners”). According to the CNIL, the participants, who decide to submit data for validation by miners, act as data controllers when (1) the participant is an individual and the data processing is not purely personal but is linked to a professional or commercial activity; and (2) the participant is a legal person and enters data into the blockchain. According to the CNIL, the exercise of the right to information, the right of access and the right to data portability does not raise any particular difficulties in the context of blockchain technology (i.e., data controllers may provide notice of the data processing and may respond to data subjects’ requests for access to their personal data or data portability requests.)
However, the CNIL recognized that it is technically impossible for data controllers to meet data subjects’ requests for erasure of their personal data when the data is entered into the blockchain: once in the blockchain system, the data can no longer be rectified or erased. The CNIL considered that the security requirements under the GDPR remain fully applicable in the blockchain.  In the CNIL’s view, the challenges posed by blockchain technology call for a response at the European level. The CNIL announced that it will cooperate with other EU supervisory authorities to propose a robust and harmonized approach to blockchain technology. [Privacy & Information Security Law Blog (Hunton Andrews Kurth) with coverage at: JDSUPRA and PaymentsCompliance]
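The erasure problem the CNIL describes follows directly from how a blockchain is constructed: each block's hash commits to the previous block's hash, so rectifying or deleting data in an old block invalidates every block after it. The toy hash chain below (a minimal illustrative sketch, not drawn from the CNIL's paper; all names are invented) shows why in-place "erasure" is detectable:

```python
import hashlib
import json

def block_hash(index, data, prev_hash):
    """Hash a block's contents together with the previous block's hash."""
    payload = json.dumps({"index": index, "data": data, "prev": prev_hash},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    """Build a toy chain: each block commits to the one before it."""
    chain, prev = [], "0" * 64  # genesis block points at an all-zero hash
    for i, data in enumerate(records):
        h = block_hash(i, data, prev)
        chain.append({"index": i, "data": data, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain):
    """Recompute every hash; editing any earlier block breaks all later links."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        if block_hash(block["index"], block["data"], prev) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain(["alice->bob", "bob->carol", "carol->dan"])
assert verify(chain)               # intact chain validates
chain[1]["data"] = "[erased]"      # attempt to rectify/erase a stored record
assert not verify(chain)           # honest nodes now reject the whole chain
```

Because every honest node independently runs this kind of verification, personal data written into a block can only be made unreadable by workarounds (e.g., storing off-chain and keeping only a hash or encrypted reference on-chain), which is broadly the direction of the mitigations the CNIL discusses.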

Facts & Stats

WW – Data Breaches Compromised 4.5 Billion Records in the First Half of 2018

According to the latest figures from the Gemalto Breach Level Index, 4.5 billion records were compromised in just the first six months of this year [PR, infographic & download report]. The US comes out the worst, with 3.25 billion records affected and 540 breaches — an increase of 356% in the last month and 98% over the same period in 2017. A total of six social media breaches accounted for over 56% of total records compromised. Of the 945 data breaches, 189 (20% of all breaches) had an unknown or unaccounted number of compromised data records. Europe was well behind America, seeing 36% fewer incidents, but there was a 28% rise in the number of records breached, indicating the growing severity of attacks. The United Kingdom was the worst hit in its region, suffering 22 data incidents. [Information Age | Disclosure laws lead to spike in reported data breaches: Gemalto | A Massive Bump In Data Breaches Is Stoking Bot-Driven Attacks On Retailers | What Drives Tech Internet Giants To Hide Data Breaches Like The Google+ Breach]


CA – More Than a Dozen Federal Departments Flunked Credit Card Security Test

The Canada Revenue Agency, the RCMP, Statistics Canada and more than a dozen other federal departments and agencies have failed an international test of the security of their credit card payment systems. Altogether, half of the 34 federal institutions authorized by the banking system to accept credit-card payments from citizens and others have flunked the test — risking fines and even the revocation of their ability to accept credit and debit payments. Those 17 departments and agencies continue to process payments on Visa, MasterCard, Amex, the Tokyo-based JCB and China UnionPay cards, and federal officials say there have been no known breaches to date. These institutions all fell short of a global data-security standard, PCI DSS (“Payment Card Industry Data Security Standard”), established by five of the big credit-card firms. That’s meant to foil fraud artists and criminal hackers bent on stealing names, numbers and codes for credit and debit cards. Federal departments must self-assess against the standard annually. CBC News obtained the briefing note, to the deputy minister of Public Services and Procurement Canada (PSPC), under the Access to Information Act. The document suggests the main culprit is Shared Services Canada (SSC), the federal IT agency created in 2011 that operates and maintains data systems for 13 of the 17 non-compliant institutions. Eleven of the 13 SSC clients who fell short of the credit card security standard say the agency itself has not fixed the security problems. The institutions that failed the credit card security checks are: Health Canada, RCMP, Industry Canada, Transport Canada, National Research Council, Canada Border Services Agency, Natural Resources Canada, Immigration Refugees and Citizenship, Statistics Canada, Fisheries and Oceans, Canada Revenue Agency, Canadian Food Inspection Agency and Library and Archives Canada, all of which depend on SSC for their IT.
The Library of Parliament, National Defence, the National Film Board of Canada and the Canadian Centre for Occupational Health and Safety are also non-compliant, but are responsible for the security of their own IT systems. [CBC News]


CA – Bowing to Pressure, Feds Urge Senate to Change Access to Information Bill

After pushback from Indigenous groups and the information commissioner, the federal government is backing down on a number of changes proposed to the Access to Information Act that critics have called “regressive,” including the part of Bill C-58 that required access-to-information requesters to specify a requested document’s time period, subject, and type. Witnesses had warned that that level of detail, particularly with First Nations attempts to get land-claim records, would limit access to records where such detail is not known and almost certainly lead to departments denying requests. Information commissioner Caroline Maynard also successfully convinced the government to give her order-making power when the bill reaches royal assent and is formally approved, rather than a year after the bill becomes law, as it’s currently written. Critics have also raised alarms about adding the ability for government departments and agencies to decline “vexatious,” or overly broad, requests. At a Senate committee Oct. 3, Treasury Board President Scott Brison closed the door on removing that power from the bill, noting the government had already accepted changes from the House Ethics Committee to address fears it would limit access and “address any concerns” of “inappropriate” use. The House passed the changed bill in December 2017. Now, agencies won’t be able to give a request that label unless they have approval from the information commissioner at the beginning of the process. The Access to Information Act lets Canadians pay $5 to request government documents, but critics for years have said it’s dysfunctional, too slow, and allows for big loopholes that limit the information released. [The Hill Times]

CA – Privileged Records and Access to Information Reviews: When to Produce?

Solicitor-client privilege is intended to foster candid conversation between a client and legal counsel in order to ensure that the client receives appropriate legal advice and can make informed decisions. It protects the solicitor-client relationship. By comparison, litigation privilege attaches to records that are created for the dominant purpose of preparing for litigation. It offers protection for clients to investigate and prepare their case. Both privileges are vital to an effective legal system. Enter access to information legislation. Legislation in each Atlantic province provides some form of exception to disclosure for privileged records. In New Brunswick, see The Right to Information and Protection of Privacy Act, SNB 2009, c R-10.6 at s 27 [here]; in Newfoundland and Labrador, see Access to Information and Protection of Privacy Act, 2015, SNL 2015 c A-1.2 at s 30 [here]; in Nova Scotia, see Freedom of Information and Protection of Privacy Act, SNS 1993, c 5 at s 16 [here]; and in Prince Edward Island, see Freedom of Information and Protection of Privacy Act, RSPEI 1988, c 15.01 at s 25 [here]. But a public body’s application of access to information legislation is overseen by a statutory office in every jurisdiction. What happens when the public body’s application of the exception for privileged records is challenged? That question gave rise to the Supreme Court of Canada’s well-known decision in Alberta (Information and Privacy Commissioner) v University of Calgary [here] In that case, a delegate of the Alberta Information and Privacy Commissioner issued a notice to the University to produce records over which the University had claimed solicitor-client privilege. The majority of the Court agreed with the University and determined that the University was not obligated to produce solicitor-client privileged records to the delegate for review. The University of Calgary decision received a great deal of attention when it was released. 
But little attention has been paid to the majority’s closing comments regarding the appropriateness of the Alberta OIPC’s decision to seek production of records over which solicitor-client privilege was claimed. The Supreme Court emphasized that “even courts will decline to review solicitor-client documents to ensure that privilege is properly asserted unless there is evidence or argument establishing the necessity of doing so to fairly decide the issue” [see note 2 at para 68 here]. The Court was mindful of the fact that the University had identified the records in accordance with the practice in civil litigation in the province, and found that in the absence of evidence to suggest that the University had improperly claimed privilege, the delegate erred in determining that the documents had to be reviewed. While civil litigation practice can – and does – vary from province to province, should you find yourself in a position where the Commissioner is seeking review of records over which you have claimed solicitor-client or litigation privilege, the Supreme Court’s commentary and the Alberta approach may provide a means by which to have the Commissioner resolve the claim without risking privilege and requiring production of the records in issue. [Mondaq]


WW – How Researchers Are Using DNA to Create Images of People’s Faces

Advancements in facial recognition and DNA sequencing technology have allowed scientists to create a portrait of a person based on their genetic information [a process called DNA phenotyping – wiki]. A study published last year and co-authored by biologist Craig Venter [wiki], CEO of San Diego-based company Human Longevity, showed how the technology works. The research team took an ethnically diverse sample of more than 1,000 people of different ages and sequenced their genomes. They also took high-resolution, 3D images of their faces and measured their eye and skin color, age, height and weight. This information was used to develop an algorithm capable of working out what people would look like on the basis of their genes. Applying this algorithm to unknown genomes, the team was able to generate images that could be matched to real photos for eight out of ten people. The success rate fell to five out of ten when the test was restricted to those of a single race, which narrows facial differences. The authors of the paper said the research has ‘significant ethical and legal implications on personal privacy, the adequacy of informed consent, the potential for police profiling and more’. Researchers have already produced images of faces based on genetic material alone. For example, earlier this year, investigators in Washington State unveiled an image of a suspect created from DNA in the 30-year-old murder case of young Victoria (BC)-area couple Tanya Van Cuylenborg, 18, and Jay Cook, 20. [coverage here] And in Calgary in February police released a high-tech image they said was a likeness of the mother of a baby girl found dead in a dumpster on Christmas Eve. [CTV News]

Health / Medical

US – Fitbit Data Leads to Arrest of 90-Year-Old in Stepdaughter’s Murder

On Saturday, 8 September, at 3:20 pm, Karen Navarra’s Fitbit recorded her heart rate spiking. Within 8 minutes, the 67-year-old California woman’s heart beat rapidly slowed. At 3:28 pm, her heart rate ceased to register at all. She was, in fact, dead. Two pieces of technology have led the San Jose police to charge Ms. Navarra’s stepfather, Anthony Aiello, with her murder. Besides the Fitbit records, there are also surveillance videos that undercut Aiello’s version of the events. When police compared the dead woman’s Fitbit data with video surveillance from her home, they discovered that Aiello’s car was still there at the point when her Fitbit lost any traces of her heartbeat. Later, police found bloodstained clothing in Aiello’s home. If Aiello turns out to be guilty, he certainly won’t be the first to learn a harsh lesson in how much of the quotidian technology that surrounds us these days can be used to contradict our version of events. One example was in April 2017, when a murder victim’s Fitbit contradicted her husband’s version of events. In another case, we’ve seen pacemaker data used in court against a suspect accused of burning down his house. The title of a paper by Nicole Chauriye says it all: Wearable devices as admissible evidence: Technology is killing our opportunity to lie. [Naked Security (Sophos) | coverage at: The Mercury News, The New York Times, The Independent and Los Angeles Times]

US – Despite Patient Privacy Risks, More People Use Wearables for Health

Despite the patient privacy risks that collecting health data on insecure wearable devices could pose, the number of US consumers tracking their health data with wearables has more than doubled since 2013, according to the Deloitte 2018 Survey of US Health Care Consumers [PR – also see blog post]. The use of wearables and other tools for measuring fitness and health improvement goals jumped from 17% in 2013 to 42% in 2018. Of those who used wearables in the past year, 73% said they used them consistently. Sixty percent of the 4,530 respondents said they are willing to share PHI generated from wearable devices with their doctor to improve their health. 51% of respondents are comfortable using an at-home test to diagnose infections before seeing a doctor. More than one-third (35%) of respondents said they are interested in using a virtual assistant to identify symptoms and direct them to a caregiver. Close to one-third (31%) are interested in connecting with a live health coach that offers text messaging for nutrition, exercise, sleep, and stress management. “For health systems that are collecting this information, it is important that they safeguard the privacy of that information,” Sarah Thomas, managing director of Deloitte’s Center for Health Solutions, told HealthITSecurity.com. “If it is about their personal health, then it is clear that the information needs to be safeguarded and subject to HIPAA” [wiki here], she added. [HealthIT Security | Additional coverage at: Health Populi, For The Record and Patient Engagement HIT]

WW – Study Finds Medical Records Are Breached Worryingly Often

A new study by two physicians from Massachusetts General Hospital has concluded that breaches of people’s health data are alarmingly frequent and large in scale. Writing in the Journal of the American Medical Association [Temporal Trends and Characteristics of Reportable Health Data Breaches, 2010-2017], Dr Thomas McCoy Jr and Dr Roy Perlis state that 2,149 breaches comprising a total of 176.4 million records occurred between 2010 and 2017. Their data was drawn from the US Health and Human Services Office for Civil Rights breach database [last 24 months here & archive of earlier breaches], where all breaches of American patient records must be reported under US law. With the exception of 2015, the number of breach events increased every year during that period. Paper and film-based records were the most commonly compromised type of medical record, with 510 breaches involving 3.4 million records, though the frequency of this type of breach declined across the study period. The largest share of breached records – 139.9 million – came from infiltration of network servers storing electronic health records (EHRs), and the frequency of hacking-based breaches rose during the study period. The majority of breaches occurred due to the actions of health care providers, though compromised systems at health plan companies accounted for more total records infiltrated. The authors write that “Although networked digital health records have the potential to improve clinical care and facilitate learning [in] health systems, they also have the potential for harm to vast numbers of patients at once if data security is not improved.” [IFLScience! | Additional coverage at: Reuters and Healthcare Informatics]

US – Eight Healthcare Privacy Incidents in September

Eight privacy incidents at healthcare organizations captured public attention last month. While media outlets reported on the following breaches in September, healthcare organizations experienced breaches as early as 2014. Here are the eight incidents presented in order of number of patients affected: 1) The Fetal Diagnostic Institute of the Pacific in Honolulu notified 40,800 patients about a potential data breach after it fell victim to a ransomware attack in June; 2) Blue Cross Blue Shield of Rhode Island notified 1,567 members that an unnamed vendor responsible for sending members’ benefits explanations breached their personal health information; 3) An employee at Kings County Hospital’s emergency room stole nearly 100 patients’ private information and sold it through an encrypted app on his phone; 4) Claxton-Hepburn Medical Center in Ogdensburg, N.Y., terminated an undisclosed number of employees after hospital officials identified breaches of patient health information during a recent internal investigation; 5) Reliable Respiratory in Norwood, Mass., discovered unusual activity on an employee’s email account in July, which may have allowed hackers to access an undisclosed number of patients’ protected health information; 6) Independence Blue Cross in Pennsylvania notified an undisclosed number of plan members about a potential compromise of their protected health information after an employee uploaded a file containing personal data to a website that was publicly accessible for three months; 7) Nashville, Tenn.-based Aspire Health lost some patient information to an unknown cyberattacker who gained access to its internal email system in September, federal court records filed Sept. 25 show; and 8) Lutheran Hospital in Fort Wayne, Ind., canceled all remaining elective surgeries Sept. 18 after its IT team discovered a computer virus on its systems. [Becker’s Hospital Review]

Horror Stories

WW – Google Exposed User Data, Feared Repercussions of Disclosing to Public

Google exposed the private data of hundreds of thousands of users of the Google+ social network and then opted not to disclose the issue this past spring, in part because of fears that doing so would draw regulatory scrutiny and cause reputational damage, according to people briefed on the incident and documents reviewed by The Wall Street Journal. As part of its response to the incident, the Alphabet Inc. unit on Monday announced [see blog post] a sweeping set of data privacy measures that include permanently shutting down all consumer functionality of Google+. A software glitch in the social site gave outside developers potential access to private Google+ profile data including: full names, email addresses, birth dates, gender, profile photos, places lived, occupation and relationship status between 2015 and March 2018, when internal investigators discovered and fixed the issue. A memo prepared by Google’s legal and policy staff and shared with senior executives warned that disclosing the incident would likely trigger “immediate regulatory interest” and invite comparisons to Facebook’s leak of user information to data firm Cambridge Analytica. Chief Executive Sundar Pichai was briefed on the plan not to notify users after an internal committee had reached that decision. The question of whether to notify users went before Google’s Privacy and Data Protection Office, a council of top product executives who oversee key decisions relating to privacy. In weighing whether to disclose the incident, the company considered “whether we could accurately identify the users to inform, whether there was any evidence of misuse, and whether there were any actions a developer or user could take in response. None of these thresholds were met here,” a Google spokesman said in a statement. During a two-week period in late March, Google ran tests to determine the impact of the bug, one of the people said.
It found 496,951 users who had shared private profile data with a friend could have had that data accessed by an outside developer. Some of the individuals whose data was exposed to potential misuse included paying users of G Suite, a set of productivity tools including Google Docs and Drive. G Suite customers include businesses, schools and governments. In its contracts with paid users of G Suite apps, Google tells customers it will notify them about any incidents involving their data “promptly and without undue delay” and will “promptly take reasonable steps to minimize harm.” That requirement may not apply to Google+ profile data, however, even if it belonged to a G Suite customer. [The Wall Street Journal | Google exposed data for hundreds of thousands of users | Google+ shutting down after data leak affecting 500,000 users | Google+ Is Shutting Down After a Security Bug Exposed User Info | Google did not disclose security bug because it feared regulation, says report | Laughing at the Google+ bug? You’re making a big mistake | Here’s how to quickly check if you have a Google+ account — and delete it]

Online Privacy

WW – Instagram Prototypes Handing Your Location History to Facebook

Instagram has been spotted prototyping a new privacy setting that would allow it to share your location history with Facebook. That means your exact GPS coordinates collected by Instagram, even when you’re not using the app, would help Facebook target you with ads and recommend relevant content. The geo-tagged data would appear to users in their Facebook profile’s Activity Log, which includes creepy daily maps of the places you’ve been. This commingling of data could upset users who want to limit Facebook’s surveillance of their lives. A Facebook spokesperson tells TechCrunch that “To confirm, we haven’t introduced updates to our location settings. As you know, we often work on ideas that may evolve over time or ultimately not be tested or released. Instagram does not currently store Location History; we’ll keep people updated with any changes to our location settings in the future.” That effectively confirms Location History sharing is something Instagram has prototyped, and that it’s considering launching but hasn’t yet. Delivering the exact history of where Instagram users went could assist Facebook in targeting them with local ads across its family of apps. If users are found to visit certain businesses, countries, neighborhoods, or schools, Facebook could use that data to infer which products they might want to buy and promote those products. It could even show ads for restaurants or shops close to where users spend their days. Just yesterday, we reported that Facebook was testing a redesign of its Nearby Friends feature that replaces the list view of friends’ locations with a map. Pulling in Location History from Instagram could help keep that map up to date. [TechCrunch | Facebook tests Snapchat-like map for Nearby Friends]

WW – Google’s New Chrome Extension Rules Improve Privacy and Security

Google has announced several rules aimed at making Chrome extensions safer and more trustworthy. Many extensions request blanket access to your browsing data, but you’ll soon have the option to whitelist the sites they can view and manipulate, or opt to grant an extension access to your current page with a click. That feature is included in Chrome 70, which is scheduled to arrive later this month and includes other privacy-focused updates.  Developers can no longer submit extensions that include obfuscated code. Google says 70% of malicious and policy-violating extensions use such code. More easily accessible code should speed up the review process too. Developers have until January 1st to strip obfuscated code from their extensions and make them compliant with the updated rules. Additionally, there will be a more in-depth review process for extensions that ask you for “powerful permissions”, Google says. The company is also more closely monitoring those with remotely hosted code. Next year, developers will need to enable two-step verification on their Chrome Web Store accounts. Google also plans to introduce an updated version of the extensions platform manifest, with the aim of enabling “stronger security, privacy and performance guarantees.” Google says half of Chrome users actively employ extensions, so the changes could make browsing the web more secure for millions of people. [engadget – additional coverage at: TechCrunch, CNET News and VentureBeat]

US – Tim Cook Chides Tech Companies for Collecting Personal Data -But Apple Does it Too (Opinion)

Apple CEO Tim Cook took aim at the tech industry’s privacy practices. In an interview with Vice News, he said, “The narrative that some companies will try to get you to believe is, ‘I’ve got to take all your data to make my service better.’ Well, don’t believe that. Whoever’s telling you that, it’s a bunch of bunk.” Is this a case of the pot calling the kettle black? Apple has cultivated and established a reputation for concern over privacy. There’s a privacy webpage that lists the steps the company takes to safeguard user information and what it refrains from doing. And then there’s the legal privacy policy page that lists the things Apple can and does do with your information. Reading it is enlightening. The page, updated May 22, 2018, “covers how we collect, use, disclose, transfer, and store your personal information.” The details are important. The main one is the first definition: “Personal information is data that can be used to identify or contact a single person.” Is information about a person, such as activities on a website, personal in the sense of being able to identify an individual? No, but it is associated with personal information to become useful. According to Goldman Sachs analyst Rod Hall, Google pays Apple $9 billion a year to remain Safari’s default search engine [coverage]. At the very least, there is a financial incentive for Apple to allow Google access to all the search information.
Here is a partial list of “non-personal information” that Apple collects, according to its posted terms: a) Occupation, language, ZIP code, area code, unique device identifier, the URL where your browser was previously, your location and time zone when you used the Apple product; b) product name and device ID; c) details of how you use Apple services, including search queries; d) data stored in Apple log files includes “Internet protocol (IP) addresses, browser type and language, internet service provider (ISP), referring and exit websites and applications, operating system, date/time stamp, and clickstream data”; and e) Apple and its partners “may collect, use, and share precise location data, including the real-time geographic location of your Apple computer or device.”  Perhaps Apple is more concerned with privacy than other companies. Certainly, there’s been no news of a Facebook-style fiasco. Don’t necessarily assume that means you get real privacy. [Inc.com] Coverage at: Apple’s Tim Cook: ‘Don’t believe’ tech companies that say they need your data  | ‘It’s a Bunch of Bunk.’ Apple CEO Tim Cook on Why Tech Firms Don’t Need All Your Data—and Why Apple Expelled Alex Jones | Apple’s Tim Cook is sending a privacy bat-signal to US lawmakers | Apple chief says firm guards data privacy in China | Tim Cook: Don’t Get Hung Up on Where Apple Stores iCloud Data | Tim Cook to talk consumer privacy and data ethics at European data protection conference later this month

WW – Privacy Search Engine Duckduckgo Up 50% Searches in a Year

Privacy-focused search engine DuckDuckGo [wiki] has just announced it hit 30 million daily searches a year after reaching 20 million — a year-on-year increase of 50% [see traffic stats]. Hitting the first 10 million daily searches took the search engine a full seven years, and it was another two years to get to 20 million. DDG’s search engine offers a pro-privacy alternative to Google search that does not track and profile users in order to target them with ads. Instead it displays ads based on the keyword being searched for at the point of each search — dispensing with the need to follow people around the web, harvesting data on everything they do to feed a sophisticated adtech business, as Google does. By comparison, Google handles at least 3 billion searches daily. This year DDG expanded from its core search product to launch a tracker blocker that addresses wider privacy concerns by helping web users keep more of their online activity away from companies trying to spy on them for profit. [TechCrunch | Privacy: A Business Imperative and Pillar of Corporate Responsibility | DuckDuckGo, the privacy-focused search engine, grows daily searches by 50% to 30 million]

Other Jurisdictions

CA – APEC Cross-Border Privacy Rules Enshrined in U.S.-Mexico-Canada Trade Agreement

On September 30, 2018, the U.S., Mexico and Canada announced a new trade agreement (the “USMCA”) aimed at replacing the North American Free Trade Agreement. Notably, the USMCA’s chapter on digital trade will require the U.S., Canada and Mexico to each “adopt or maintain a legal framework that provides for the protection of the personal information of the users,” including key principles such as: limitations on collection, choice, data quality, purpose specification, use limitation, security safeguards, transparency, individual participation and accountability. Article 19.8(2) directs the Parties to consider the principles and guidelines of relevant international bodies, such as the APEC Privacy Framework [overview here] and the OECD Recommendation of the Council concerning Guidelines Governing the Protection of Privacy and Transborder Flows of Personal Data, and Article 19.8(6) formally recognizes the APEC Cross-Border Privacy Rules (the “APEC CBPRs”) [here] within their respective legal systems. In addition, Article 19.14(1)(b) provides that “the Parties shall endeavor to cooperate and maintain a dialogue on the promotion and development of mechanisms, including the APEC Cross-Border Privacy Rules, that further global interoperability of privacy regimes.” The USMCA must still pass the U.S. Congress, the Canadian Parliament, and the Mexican Senate. [Privacy & Information Security Law Blog (Hunton Andrews Kurth) | Additional coverage at: Womble Bond Dickinson via National Law Review, The Washington Post, Michael Geist Blog, Private Internet Access blog | Data localization concerns in USMCA may be overblown]

Privacy (US)

US – FTC Continues to Enforce EU-U.S. Privacy Shield

The U.S. Federal Trade Commission (FTC) recently settled enforcement actions [PR] against four companies accused of misleading consumers about their participation in the European Union-United States Privacy Shield framework [see here, here & wiki here], which allows companies to transfer consumer data from EU member states to the United States in compliance with EU law. These collective actions demonstrate the FTC’s ongoing commitment under new Chairman Joseph Simons to enforce U.S. companies’ filing obligations with the U.S. Department of Commerce as part of their efforts to comply with the Privacy Shield. These actions are also consistent with a recent statement [coverage here] by Gordon Sondland, U.S. Ambassador to the European Union, that the U.S. is complying with EU data protection rules. Key Takeaways:

  • The FTC will continue to hold companies accountable for the promises they make to consumers regarding their privacy policies, including participation in the Privacy Shield;
  • Companies participating in the Privacy Shield should re-evaluate their privacy procedures and policies regularly to ensure compliance with the various requirements of the Privacy Shield framework;
  • Once a company initiates the Privacy Shield certification process, it must complete that process to claim participation in the Privacy Shield framework; and
  • Companies looking to participate in the Privacy Shield or a similar privacy program should consult counsel to ensure the program is the best option for their particular business needs.

[Dechert LLP Blog | FTC continues aggressive enforcement of Privacy Shield Additional coverage at: Privacy & Information Security Law Blog (Hunton Andrews Kurth), Privacy and Cybersecurity Perspectives (Murtha Cullina), Legal News Line]

US – Google Faces Mounting Pressure from Congress Over Google+ Privacy Flaw

In March, Google discovered a flaw in its Google+ API that had the potential to expose the private information of hundreds of thousands of users. Officials at Google opted not to disclose the vulnerability to its users or the public for fear of bad press and potential regulatory action [in an internal memo first reported here]. Now, lawmakers are asking to see those communications firsthand. Republican leaders from the Senate Commerce Committee are demanding answers from Google CEO Sundar Pichai about a recently unveiled Google+ vulnerability, requesting the company’s internal communications regarding the issue in a letter [PR & PDF]. Some of the senators’ Democratic counterparts on the committee reached out to the Federal Trade Commission to demand that the agency investigate the Google+ security flaw, saying in a letter [3 pg PDF here] that if agency officials discover “problematic conduct, we encourage you to act decisively to end this pattern of behavior through substantial financial penalties and strong legal remedies.” Google has until October 30th to respond to the senators’ inquiries, just weeks before Pichai is scheduled to testify in front of the House Judiciary Committee following the November midterm elections. An exact date for that hearing has yet to be announced. [The Verge | Senators demand Google hand over internal memo urging Google+ cover-up | Senators Demand Memo Behind Google+ Privacy Debacle Cover-Up | Google Draws Bipartisan Criticism Over Data Leak Coverup | Senator Blumenthal Wants FTC To Investigate Google Over Data Leak | Google+ vulnerability comes under fire in Senate hearing | Google facing scrutiny from Australian regulator over Google+ data breach | Google+ Glitch Revelation Sparks German Probe | U.S., European regulators investigating Google glitch]

US – Privacy Advocates Tell Senators What They Want in a Data Protection Law

Privacy advocates and tech giants like Google, Amazon and Apple all want a federal privacy law. But while tech companies essentially want a federal privacy bill to be a ceiling that would limit how far states could go with their own privacy rules, privacy advocates want it to be more of a floor that states can build on. During the Oct 10 hearing before the Senate Committee on Commerce, Science and Transportation, privacy advocates stressed the need for a federal privacy law that could work in tandem with state laws instead of overwriting them. Representatives included Andrea Jelinek, the chair of the European Data Protection Board [statement]; Alastair Mactaggart, the advocate behind California’s Consumer Privacy Act [statement]; Laura Moy, executive director of the Georgetown Law Center on Privacy and Technology [statement]; and Nuala O’Connor, president of the Center for Democracy and Technology [statement]. [CNET News | Privacy Groups Urge Congress To Create New National Privacy Law | CDD to Senate: Privacy Legislation Should Be Tough, Comprehensive, Enforceable | Lawmakers Push to Rein In Tech Firms After Google+ Disclosure | Senator calls for FTC investigation into Google+ data exposure]

US – Facebook Accused of Violating Children’s Privacy Law

Several US groups advocating public and children’s health have urged the FTC to take action against social media giant Facebook for allegedly violating children’s privacy law. The 18-member group led by the Campaign for a Commercial-Free Childhood (CCFC) has filed a complaint asserting that Facebook’s Messenger Kids, a controversial messaging application for children as young as five, collects kids’ personal information without obtaining verifiable parental consent [PR & Complaint]. Messenger Kids is the first major social platform designed specifically for young children, but the complaint argues that Facebook’s parental consent mechanism does not meet the requirements of the Children’s Online Privacy Protection Act (COPPA) because any adult user can approve any account created in the app and “even a fictional ‘parent’ holding a brand-new Facebook account could immediately approve a child’s account without proof of identity.” The complaint further accused Facebook of disclosing data to unnamed third parties for “broad, undefined business purposes.” In January the CCFC, on behalf of the advocacy groups, sent Facebook CEO Mark Zuckerberg a letter signed by over 100 experts and advocates asking him to remove Messenger Kids from its platform. Critics have been skeptical of Messenger Kids’ security measures for protecting children’s privacy, and have been pushing for its closure since its debut last year [see CCFC petition]. [Financial Express]

Privacy Enhancing Technologies (PETs)

WW – Blockchain’s Role as a Privacy Enhancing Technology

Many of us hear the word “blockchain” [wiki & beginner’s guide], mentally file it under “something to do with Bitcoin,” and then swiftly move on. But there is more to this new technology than the cryptocurrencies. Top of mind is blockchain’s potential to enable greater data privacy and data security, says Florian Martin-Bariteau, who runs the University of Ottawa’s Blockchain Legal Lab [here], a research team investigating the practical uses of the technology — and the legal issues those uses raise. He’s also on a panel at the forthcoming CBA Access to Information and Privacy Law Symposium in Ottawa (Oct. 19 and 20) that will compare uses of blockchain in other industries. “The blockchain technology is actually a protocol for information or asset exchange, and an infrastructure for data storage and management,” he says. “It is literally a chain of blocks of information which are interlinked in a secure way.” It was conceived as a kind of secure spreadsheet — a way to timestamp documents in a ledger that could not be edited or tampered with.  Martin-Bariteau describes it as a digital notary system. The technology has since developed to become “a secure, immutable database shared by all parties in a distributed network.” Its utility where privacy is an issue is plain to see.  But part of the attraction of blockchain — the notion that data can’t be edited, altered or erased — is also part of the challenge it creates. For example, in the European Union and elsewhere, GDPR compliance includes the right to erasure. This has enormous implications for any system that requires registered users as part of its design. Martin-Bariteau is clear about the risks involved. “You need to be very careful about the information you register on an immutable ledger,” he notes. 
“You want to avoid including any personal information, so you need to design your implementation, or advise your clients to design it, in a way that it can use personal information without storing it.” [CBA National | see also: CNIL Publishes Initial Assessment on Blockchain and GDPR]
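The pattern Martin-Bariteau describes — using personal information without storing it on an immutable ledger — is commonly implemented by anchoring only a cryptographic hash on-chain while the data itself stays in an erasable off-chain store. Here is a minimal Python sketch of the idea; the `Ledger` class, the record values, and the erasure flow are illustrative assumptions for this article, not any production blockchain API:

```python
import hashlib
import json


def sha256(data: bytes) -> str:
    """Hex digest used both for chain linkage and data commitments."""
    return hashlib.sha256(data).hexdigest()


class Ledger:
    """A toy hash-linked ledger: each block commits to the previous block."""

    def __init__(self):
        self.blocks = []

    def append(self, payload: str) -> dict:
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        body = {"payload": payload, "prev_hash": prev_hash}
        block = dict(body, hash=sha256(json.dumps(body, sort_keys=True).encode()))
        self.blocks.append(block)
        return block

    def verify(self) -> bool:
        """Recompute every link; any edit to an earlier block breaks the chain."""
        prev = "0" * 64
        for b in self.blocks:
            body = {"payload": b["payload"], "prev_hash": b["prev_hash"]}
            if b["prev_hash"] != prev or b["hash"] != sha256(
                json.dumps(body, sort_keys=True).encode()
            ):
                return False
            prev = b["hash"]
        return True


# Personal data lives off-chain in an erasable store; only its hash goes on-chain.
off_chain = {}  # record_id -> personal data (deletable for a GDPR erasure request)
ledger = Ledger()

record = "Alice, alice@example.com"     # hypothetical personal record
off_chain["r1"] = record
ledger.append(sha256(record.encode()))  # commit only the fingerprint

# Erasure: delete the off-chain copy; the ledger retains only a digest.
del off_chain["r1"]
assert ledger.verify()
```

Deleting the off-chain record honours an erasure request while the ledger’s integrity check still passes, because the blocks commit only to digests, never to the personal data itself.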

RFID / Internet of Things

US – NIST Seeks Public Comment on Managing Internet of Things Cybersecurity and Privacy Risks

The U.S. Department of Commerce’s National Institute of Standards and Technology recently announced that it is seeking public comment on Draft NISTIR 8228, Considerations for Managing Internet of Things (“IoT”) Cybersecurity and Privacy Risks (the “Draft Report”). The document is to be the first in a planned series of publications that will examine specific aspects of the IoT topic. The Draft Report identifies three high-level considerations with respect to the management of cybersecurity and privacy risks for IoT devices as compared to conventional IT devices: (1) many IoT devices interact with the physical world in ways conventional IT devices usually do not; (2) many IoT devices cannot be accessed, managed or monitored in the same ways conventional IT devices can; and (3) the availability, efficiency and effectiveness of cybersecurity and privacy capabilities are often different for IoT devices than conventional IT devices. The Draft Report also identifies three high-level risk mitigation goals: (1) protect device security; (2) protect data security; and (3) protect individuals’ privacy. Comments are due by October 24, 2018 [download the NIST Comment Template for submitting your comments]. [Privacy & Information Security Law Blog (Hunton Andrews Kurth)]


WW – Two-Thirds of Data Security Pros Looking to Change Jobs

Nearly two-thirds of security pros are looking to leave their current jobs. That is one of the findings of a new study on IT security trends by staffing firm Mondo [PR & report], which says that 60% of these workers can be easily hired away. Lack of growth opportunities and job satisfaction are tied as the top reasons to leave a job, according to the survey. The study found several other top reasons why IT security experts leave a job. They include: 1) Unhealthy work environment (cited by 53%); 2) Lack of IT security prioritization from C-level or upper management (cited by 46%); 3) Unclear job expectations (cited by 37%); and 4) Lack of mentorship (cited by 30%). To help retain IT security experts, the study recommends that organizations offer the following benefits, based on responses from security pros: 1) Promoting work-life balance; 2) Taking worker security concerns seriously; 3) Sponsorship of certifications or courses; 4) Increased investment in emerging tech; and 5) CISO leadership/defined ownership of security needs. Mondo gathered this data by surveying more than 9,000 IT security professionals and decision-makers. [Information Management]

Smart Cars / Cities

WW – Google’s Plans for First Wired Urban Community Raise Data-Privacy Concerns

A unit of Google’s parent company Alphabet is proposing to turn a rundown part of Toronto’s waterfront into what may be the most wired community in history — to “fundamentally refine what urban life can be.” [see overview here] Sidewalk Labs [here] has partnered with a government agency known as Waterfront Toronto [here] with plans to erect mid-rise apartments, offices, shops and a school on a 12-acre site — a first step toward what it hopes will eventually be an 800-acre development. But some Canadians are rethinking the privacy implications of giving one of the most data-hungry companies on the planet the means to wire up everything from streetlights to pavement. And some want the public to get a cut of the revenue from products developed using Canada’s largest city as an urban laboratory. “The Waterfront Toronto executives and board are too dumb to realize they are getting played,” said former BlackBerry Chief Executive Jim Balsillie who also said the federal government is pushing the board to approve it. “Google knew what they wanted. And the politicians wanted a PR splash and the Waterfront board didn’t know what they are doing. And the citizens of Toronto and Canada are going to pay the price,” Balsillie said. Julie Di Lorenzo, a prominent Toronto developer who resigned from the Waterfront Toronto board over the project [see coverage], said data and what Google wants to do with it should be front and center in the discussions. She also believes the government agency has given the Google affiliate too much power over how the project develops. “How can (Waterfront Toronto), a corporation established by three levels of democratically elected government, have shared values with a limited, for-profit company whose premise is embedded data collection?” Di Lorenzo asked.  Bianca Wylie, an advocate of open government, said it remains deeply troubling that Sidewalk Labs still hasn’t said who will own data produced by the project or how it will be monetized. 
Google is here to make money, she said, and Canadians should benefit from any data or products developed from it. “We are not here to be someone’s research and development lab,” she said, “to be a loss leader for products they want to sell globally.” Ottawa patent lawyer Natalie Raffoul said the fact that the current agreement leaves ownership of data issues for later shows that it wasn’t properly drafted and means patents derived from the data will default to Google. [The Seattle Times]


US – That Sign Telling You How Fast You’re Driving May Be Spying on You

According to recently released US federal contracting data, the Drug Enforcement Administration will be expanding the footprint of its nationwide surveillance network with the purchase of “multiple” trailer-mounted speed displays “to be retrofitted as mobile License Plate Reader (LPR) platforms.” The DEA is buying them from RU2 Systems Inc., a private Mesa, Arizona company. [For overviews of LPRs see EFF.] Two other, apparently related contracts show that the DEA has hired a small machine shop in California, and another in Virginia, to conceal the readers within the signs. An RU2 representative said the company providing the LPR devices themselves is a Canadian firm called Genetec. The DEA expects to take delivery of its new license plate-reading speed signs by October 15. The DEA launched its National License Plate Reader Program in 2008; it was publicly revealed for the first time during a congressional hearing four years after that. The DEA’s most recent budget describes the program as “a federation of independent federal, state, local, and tribal law enforcement license plate readers linked into a cooperative system, designed to enhance the ability of law enforcement agencies to interdict drug traffickers, money launderers or other criminal activities on high drug and money trafficking corridors and other public roadways throughout the U.S.” What is a game-changing crime-fighting tool to some is a privacy overreach of near-existential proportions to others. License plate readers, which can capture somewhere in the neighborhood of 2,000 plates a minute, cast an astonishingly wide net that has made it far easier for cops to catch serious criminals. On the other hand, the indiscriminate nature of the real-time collection, along with the fact that it is then stored by authorities for later data mining, is highly alarming to privacy advocates. [QUARTZ | How roadside speed signs in the U.S. could be tracking you using Canadian-made tech]




