16–30 September 2018


US – Use of Facial-Recognition Technology Fuels Debate at Seattle School

RealNetworks is offering schools a new, free security tool. Secure, Accurate Facial Recognition — or SAFR, pronounced “safer” — is a technology that the company began offering free to K-12 schools this summer. It took three years, 8 million faces and more than 8 billion data points to develop the technology, which can identify a face with near-perfect accuracy. The software is already in use at one Seattle school, and RealNetworks is in talks to expand it to several others across the country. But as the technology moves further into public spaces, it’s raising privacy concerns and calls for regulation — even from the technology companies that are inventing the biometric software. Privacy advocates wonder if people fully realize how often their faces are being scanned, and advocates and the industry alike question where the line is between the benefits to the public and the cost to privacy. “There’s a general habituation of people to be tolerant of this kind of tracking of their face,” said Adam Schwartz, a lawyer with digital privacy group Electronic Frontier Foundation. “This is especially troubling when it comes to schoolchildren. It’s getting them used to it.” School security is a serious issue, he agreed, but he said the benefits of facial recognition in this case are largely unknown, and the damage to privacy could be “exceedingly high.” Clare Garvie, an associate at the Center on Privacy and Technology at Georgetown Law Center, finds the lack of transparency into how the technology is being used and the lack of federal laws troubling. Garvie was on a team that conducted a widespread study that found 54% of U.S. residents are in a facial-recognition database accessible by law enforcement [see PR here & study report here] — usually in the form of a driver’s license photo. “It is unprecedented to have a biometric database that is composed primarily of law-abiding citizens,” Garvie said. 
“The current trajectory might fundamentally change the relationship between police and the public,” she said. “It could change the degree to which we feel comfortable going about our daily lives in public spaces.” Alessandro Acquisti [here & here], a professor of information technology and public policy at Carnegie Mellon University, pointed out that facial recognition can be used for good — to combat child trafficking — and for bad — to track law-abiding citizens anywhere they go. That doesn’t mean it’s neutral, he said. Anonymity is becoming more scarce with the proliferation of photos on social media and the technology that can recognize faces. [Seattle Times, See also: Are You on Board with Using Facial Recognition in Schools? | Is Facial Recognition in Schools Worth the High Price?]

Big Data / Analytics

WW – ‘Predictive Policing’: Law Enforcement Revolution or Spin on Old Biases?

Los Angeles has been put on edge by the LAPD’s use of an elaborate data collection centre, a shadowy data analysis firm called Palantir, and predictive algorithms to try to get a jump on crime. Los Angeles isn’t the only place where concerns are flaring over how citizens’ data is collected and used by law-enforcement authorities. Police forces across the U.S. are increasingly adopting the same approach as the LAPD: employing sophisticated algorithms to predict crime in the hope they can prevent it. Chicago, New York City and Philadelphia use similar predictive programs and face similar questions from the communities they are policing, and even legal challenges over where the information is coming from and how police are using it. A sophisticated program called PredPol, short for predictive policing, is used to varying degrees by 50 police forces across the United States. The genesis of the program came from a collaboration between LAPD deputy chief Sean Malinowski and Canadian Jeff Brantingham, an anthropology professor at UCLA. Canadian police forces are very aware of what their U.S. counterparts are doing, but they are wary of jumping in with both feet due to concerns over civil liberties issues. Sarah Brayne, a Canadian sociologist, spent two years inside the LAPD studying its use of predictive policing. She says the LAPD has been using predictive policing since 2012, and crunching data on a wide range of activities — from “where to allocate your resources, where to put your cars, where to put your personnel, to helping investigators solve a crime. And even for some risk management, like tracking police themselves, for performance reviews and different accountability reasons.” But PredPol is just one of the police systems that community watchdogs are concerned about. The Rampart division of the LAPD uses another program to pinpoint individuals who are at risk of committing crimes in the future. This is known as person-based predictive policing. 
… The program is called Los Angeles Strategic Extraction and Restoration (LASER). At the moment it generates a list of approximately 20 “chronic offenders” that is updated monthly. LAPD documents show how LASER gives people specific scores, which increase with each police encounter. You get five points if you are a gang member. Five points if you are on parole or probation. Five points for arrests with a handgun. And one point for every “quality” police contact in the past two years, which includes what the LAPD calls “Field Interviews.” In Canada, field interviews are called “carding,” referring to the cards police use to record information about the people they have stopped — even when there are no grounds to think they’ve committed an offence. On the chronic offender bulletin there are names, addresses, scores ranging from six to 28, dates of birth and gang affiliations (Crazy Riders, Wanderers, 18th Street, and so on). The police try to track down the people on the bulletin and hand-deliver an “At Risk Behaviour” letter to each one — if they can find them. Officers are given instructions to contact the offenders on the list every month “to check their status” and to remind them to use the community services. They are also encouraged to door-knock on adjacent residences to “spark interest and gather info.” [CBC News]
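The point scheme reported in the LAPD documents can be sketched as a simple additive score. This is a minimal illustration of the scheme as described above, not the LAPD's actual system; all field names, and the assumption that handgun-arrest points accrue per arrest, are illustrative only.

```python
# Illustrative sketch of the LASER-style point scheme described above.
# Field names and per-arrest scoring are assumptions for illustration.

def chronic_offender_score(person: dict) -> int:
    """Sum points per the scheme reported in LAPD documents."""
    score = 0
    if person.get("gang_member"):
        score += 5                      # five points for gang membership
    if person.get("on_parole_or_probation"):
        score += 5                      # five points for parole/probation
    score += 5 * person.get("handgun_arrests", 0)   # five per handgun arrest
    # one point per "quality" police contact in the past two years
    score += person.get("quality_contacts_past_2_years", 0)
    return score

example = {
    "gang_member": True,
    "on_parole_or_probation": True,
    "handgun_arrests": 1,
    "quality_contacts_past_2_years": 4,
}
print(chronic_offender_score(example))  # 19
```

Note how quickly routine stops inflate such a score: because every "quality" contact adds a point, the very act of policing someone on the list raises their score, which is one of the feedback-loop concerns watchdogs raise.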

CA – Q&A: Data Ownership Conundrum in the Data Driven World

Modern society is increasingly reliant upon data and driven by data gathering and data analytics. This leads to many questions that need to be unraveled relating to privacy, data rights and smart cities. One person well-placed to tackle these issues is Teresa Scassa [University of Ottawa law professor & fellow at the Waterloo-based Centre for International Governance Innovation]. In her latest research paper, Data Ownership, Scassa describes how in most jurisdictions the ownership of data is often based in copyright law or protected as confidential information. In Europe, database protection laws also play a role. However, there are limitations and major areas where laws fall short. For example, “Copyright protection requires a human author. Works that are created by automated processes in which human authorship is lacking cannot, therefore, be copyright protected. This has raised concerns that the output of artificial intelligence processes will not be capable of copyright protection,” warns Scassa. To discuss these important issues further, Digital Journal recently asked Teresa Scassa the following questions: 1) How important has data become for businesses?; 2) Are consumers too willing to provide personal data?; 3) How concerned should people be about what is done with personal data?; 4) How about data security issues: how secure is most personal data that is held by companies?; and 5) How are new technologies, like artificial intelligence, affecting data privacy? [Digital Journal] In a follow-up interview, Teresa Scassa discusses data privacy laws, considering the recent changes affecting Europe and the possible implications for the U.S. [here]


CA – OPC Publishes Draft Guidelines for Mandatory Breach Reporting

On September 17, 2018, the Office of the Privacy Commissioner of Canada (OPC) published draft guidelines on mandatory breach reporting under the “Personal Information Protection and Electronic Documents Act” (PIPEDA). The guidelines are intended to assist organizations in meeting their breach reporting and record-keeping obligations under PIPEDA’s mandatory breach reporting regime, which comes into force on November 1, 2018. Organizations have until October 2, 2018 to provide feedback on these draft guidelines. In April 2018, the federal government published the Breach of Security Safeguards Regulations setting out the requirements of the new regime, and announced that the Regulations would come into force on November 1, 2018. … Organizations will be required to notify the OPC and affected individuals of “a breach of security safeguards” involving personal information under the organization’s control where it is reasonable in the circumstances to believe that the breach creates a “real risk of significant harm” to affected individuals. Other organizations and government institutions must also be notified where such organization or institution may be able to mitigate or reduce the risk of harm to affected individuals. Organizations must also keep and maintain records of all breaches of security safeguards regardless of whether they meet the harm threshold for reporting. Failure to report a breach or to maintain records as required is an offence under PIPEDA, punishable by a fine of up to C$100,000. The draft guidelines are intended to assist organizations in meeting their breach reporting and record-keeping obligations under PIPEDA. Unfortunately for stakeholders, much of the information in the draft guidelines is simply a reiteration of the legal requirements as set out in PIPEDA and the Regulations. 
However, the draft guidelines provide additional guidance in certain areas, including: 1) Who Is Responsible for Reporting a Breach?; 2) When Does a Breach Create a Real Risk of Significant Harm?; 3) Form of Report; and 4) What Information Must Be Included in a Breach Record? [Business Class (Blakes) Additional coverage at: BankInfo Security]
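The reporting logic the regime describes can be reduced to a small decision rule: records are kept for every breach, while notification to the OPC and to affected individuals turns on the “real risk of significant harm” threshold. The sketch below is a hypothetical illustration of that rule only; the function and field names are invented for this example and nothing here is legal guidance.

```python
# Hypothetical sketch of PIPEDA's mandatory breach-reporting rule as
# summarized above; names are illustrative, not an official checklist.

def pipeda_obligations(real_risk_of_significant_harm: bool) -> dict:
    """Return the obligations triggered by a breach of security safeguards."""
    return {
        # Records must be kept for every breach, harmful or not.
        "keep_record": True,
        # The OPC and affected individuals are notified only when the
        # breach creates a "real risk of significant harm".
        "report_to_opc": real_risk_of_significant_harm,
        "notify_individuals": real_risk_of_significant_harm,
    }

print(pipeda_obligations(False)["keep_record"])    # True
print(pipeda_obligations(False)["report_to_opc"])  # False
```

The asymmetry is the point stakeholders often miss: even a breach judged below the harm threshold still triggers the record-keeping obligation, and failing at either duty is an offence punishable by a fine of up to C$100,000.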

CA – Upcoming Canadian Breach Notification Requirements Still in Flux

Canada’s national breach notification requirements are coming online November 1st, meaning companies experiencing a data breach will soon have new reporting obligations. These requirements were created in 2015 by the Digital Privacy Act, which amended the Personal Information Protection and Electronic Documents Act (PIPEDA), Canada’s main privacy statute. In April 2018, in preparation for the national implementation of the new law, the Office of the Privacy Commissioner of Canada (OPC), with authority to promulgate regulations under PIPEDA, issued Regulations that establish detailed requirements regarding the content and methodology of breach notifications to the OPC and affected individuals. After issuing those Regulations, the OPC continued to receive requests for further clarity and guidance regarding the breach notification requirements under PIPEDA and the OPC Breach Regulations. In response to those further requests for guidance, the OPC announced that it would issue further guidance (“What You Need To Know About Mandatory Reporting Of Breaches Of Security Safeguards”) on breach notification and reporting. On September 17th, the OPC invited public feedback on the draft guidance. The OPC will accept feedback until October 2, 2018. Comments can be sent to OPC-CPVPconsult2@priv.gc.ca and must be either in the body of the email or attached as a Word or PDF document. The OPC will publish the final guidance soon after the October 2nd deadline to ensure guidance is in place when the amendment becomes effective in November. … the OPC’s September 17th announcement indicates there is still uncertainty around what exactly will be required of companies that experience a breach. Companies that hold or control information on Canadian residents have one more opportunity to impact the final requirements or pose questions for clarity in the OPC’s guidance, and should submit their views before the October 2nd deadline. 
[Eye on Privacy (SheppardMullin) and at: BankInfo Security]

CA – OPC Denounces Slow Progress on Fixing Outdated Privacy Laws

Federal Privacy Commissioner Daniel Therrien’s annual report to Parliament was tabled. [see here, Commissioner’s Message here & 103 pg PDF here] It outlines the work of the Office of the Privacy Commissioner of Canada (OPC) as it relates to both the Personal Information Protection and Electronic Documents Act (PIPEDA), Canada’s federal private sector privacy law, and the Privacy Act, which applies to the federal public sector. It covers important initiatives over the last year, including key investigations, work on reputation and privacy, new consent guidance as well as work on national security and Bill C-59 [here]. In his report, Therrien also reiterated calls for the government to increase his office’s resources. “My office needs a substantial budget increase to keep up our knowledge of the technological environment and improve our capacity to inform Canadians of their rights and guide organizations on how to comply with their obligations,” he says. “Additional resources are also needed to meet our obligations under the new breach reporting regulations that come into force in November.” [see here] Under the regulations, companies will be required to report all privacy breaches presenting a real risk of significant harm. While imperfect, Therrien calls the regulations “a step in the right direction.” As breach notification regulations come into force on the private sector side, serious concerns have also emerged about the federal government’s ability to prevent, detect and manage privacy breaches within its own institutions. An OPC review of privacy breach reporting by federal government institutions found thousands of breaches occur annually, and while some go unreported, others likely go entirely unnoticed at many institutions. Therrien [also] warns privacy concerns are reaching crisis levels and is calling on the federal government to take immediate action by giving his office new powers to more effectively hold organizations to account. 
“Unfortunately, progress from government has been slow to non-existent … There’s no need to further debate whether to give my office new powers to make orders, issue fines and conduct inspections to ensure businesses respect the law. It’s not enough for the government to ask companies to do more to live up to their responsibilities. To increase trust in the digital economy, we must ensure Canadians can count on an independent regulator with the necessary tools to verify compliance with privacy law. If my Office had order making powers, our guidelines would be more than advice that companies can choose to ignore. They would become real standards that ensure real protection for Canadians.” Therrien says. [Office of the Privacy Commissioner of Canada Also see the OPC’s “Alert” Key lessons for public servants from the 2017-18 Annual Report Coverage: Canada’s privacy laws ‘sadly falling behind’ other countries: Privacy commissioner | Privacy commissioner slams ‘slow to non-existent’ federal action in light of major data breaches | Watchdog says Ottawa moving too slowly on privacy threats | Watchdog slams government’s ‘slow to non-existent’ action to protect Canadians’ privacy | Time of ‘self-regulation’ is over, privacy czar says in push for stronger laws]

CA – ‘Right to Be Forgotten’ Could Trigger Battle Over Free Speech in Canada

A push by some for a “right to be forgotten” for Canadians is setting up what could be a landmark battle over the conflict between privacy and freedom of expression on the internet. In his annual report issued September 27 [PR, Report, Commissioner’s Message & 103 pg PDF], Privacy Commissioner Daniel Therrien served notice he intends to seek clarity from the Federal Court on whether existing laws already give Canadians the right to demand that search engines remove links to material that is outdated, incomplete or incorrect, a process called “de-indexing.” Following a round of consultations he launched in 2016, Therrien concluded in a draft report earlier this year that Canadians do have that right under PIPEDA. Google disagrees — and warns that a fundamental charter right is being threatened. [Section 2 (b) — expression & press freedom, wiki here, Charter here, guidance here] “The right to be forgotten impinges on our ability to deliver on our mission, which is to provide relevant search results to our users,” said Peter Fleischer [here], Google’s global privacy counsel. “What’s more, it limits our users’ ability to discover lawful and legitimate information.” University of Ottawa law professor Michael Geist [also blog posts here & here], who specializes in internet and e-commerce law, said “Given the complexity, given the freedom of expression issues that arise out of this, I think the appropriate place is within Parliament to explicitly go through the policy process and decide what’s right for Canada on this.” Internet lawyer Allen Mendelsohn [blog posts here & here] worries about the “slippery slope” implied in a right to be forgotten. With no easy answers on how to move forward, he said it’s Parliament’s duty to debate the concept and decide on appropriate standards. “Parliament represents the people, and if the will of the people think this is a good thing to do, then there’s no good reason why they shouldn’t go ahead and do it,” he said. 
Google argues that freedom of expression is a fundamental human right. While the European court upheld the right to be forgotten, Chile, Colombia and the U.S. have all rejected it. According to Peter Fleischer, “As the privacy commissioner considers translating the European model to Canada, it will also have to confront the challenges of how to balance one person’s right to privacy with another’s right to know, and whether the European right to be forgotten would be consistent with the rights outlined in Canada’s Charter of Rights and Freedoms, which assures Canadians ‘freedom of thought, belief, opinion and expression, including freedom of the press and other media of communication.’” [CBC News | Privacy watchdog to seek ruling on ‘right to be forgotten’]

CA – Liberals Won’t Put Political Parties Under Privacy Laws

The Liberal government will not accept a recommendation — endorsed by MPs from the three major parties on the Access to Information, Privacy and Ethics Committee [see here & report here also 56 pg PDF] — to develop a set of privacy rules for political parties or bring them under existing laws. Instead, under the Liberals’ electoral rule changes, parties will simply have to post a privacy policy online. Bill C-76 [here] does not allow for any independent oversight, however, to ensure parties are actually following their policies. Because they’re specifically exempted from federal privacy laws, parties are also not required to report if they’ve been hacked or suffered a data breach involving sensitive information about Canadians. The decision means federal political parties can continue to collect, store and use the personal information of Canadian citizens without limitations, laws or independent oversight. Federal Privacy Commissioner Daniel Therrien — along with his counterparts at the provincial and territorial levels — issued a joint statement calling on all levels of government to put some form of restrictions on parties’ data operations — an increasingly crucial aspect of electioneering in Canadian politics [see PR here & Joint Resolution here]. In exempting political parties from privacy laws, Canada is largely an outlier. The United Kingdom, New Zealand, and much of the European Union subject parties to privacy rules. [Toronto Star coverage at: Toronto Star Editorial | Political parties excused from privacy laws: Why Albertans’ personal information is at risk]

CA – Buyers’ Privacy Top Priority, Says Ontario’s Online Pot Retailer

Ontario’s government-run cannabis retailer is assuring its future customers that their privacy is the top priority, an issue a recent report ranked among the top demands of Canadian marijuana consumers, with one in five listing privacy and data security as the most important feature [see Deloitte’s 2018 cannabis report, PR]. Critics have raised concerns about how Ontario Cannabis Store (OCS) [here] customers’ data will be used and stored after the online delivery service launches on Oct. 17. There are worries the data may be stored in the United States, where American border agents could access it and ban travellers from entering the U.S. for using a drug that’s illegal there under federal law. The OCS this week announced it’s taking steps to safeguard customers’ privacy and keep their buying history confidential. Ensuring data is stored within Canada and other privacy considerations were key factors in deciding to partner with Shopify, the Ottawa-based e-commerce platform. All information collected will be deleted and no information will be sold to third parties after it’s held for a minimum time, the company says. While dispensaries across the country are getting ready to open their doors on Oct. 17 — when Canada becomes the second country in the world to legalize recreational marijuana — Ontario residents will be able to legally buy pot only through a government-run delivery service. However, new Ontario Premier Doug Ford has rejected the government monopoly on cannabis sales — a model set up under the previous Liberal government — [and] storefront pot sales are to begin on April 1. [The London Free Press]

CA – TREB CEO Concerned About Homeowner Privacy, Security

The Toronto Real Estate Board is “pressing ahead” with the Competition Bureau’s demand to make home sales data available on realtors’ password-protected websites, but that doesn’t mean the board’s concerns around privacy are gone. In his first interview since the Supreme Court of Canada refused in August to hear TREB’s seven-year fight [read Competition Bureau PR here & TREB PR here] to keep the numbers under wraps – effectively forcing them to be made public – the board’s chief executive officer John DiMichele told The Canadian Press, “the element of privacy in our opinion hasn’t been settled completely yet.” DiMichele is particularly concerned because he claims to have seen evidence of brokers’ remarks about homeowners being posted online, information that is not included in the home sales data feed TREB had to make available to realtors. DiMichele wouldn’t reveal how he discovered such violations [and he did not] discuss in detail what kind of action will be taken against anyone who is caught posting unauthorized information or home sales data without password protections – conditions mandated in a Competition Tribunal ruling [5 pg PDF here] that came into effect recently, after the Competition Bureau argued that TREB’s refusal to release the data was anti-competitive and stifled innovation. In early September, the board sent cease-and-desist letters to real estate companies warning it will revoke data access and TREB memberships or bring legal action against members it believes are violating its user agreement by posting sales numbers online “in an open and unrestricted fashion.” [The Globe & Mail Additional coverage at: The Toronto Star]


WW – Yes, Facebook Is Using Your 2FA Phone Number to Target You With Ads

Facebook has confirmed it does in fact use phone numbers that users provided it for security purposes to also target them with ads. Specifically, a phone number handed over for two-factor authentication (2FA) — a security technique that adds a second layer of authentication to help keep accounts secure. Facebook’s confession follows a story Gizmodo ran related to research work carried out by academics at two U.S. universities [Northeastern University and Princeton University] who ran a study [see Investigating sources of PII used in Facebook’s targeted advertising – 18 pg PDF here] in which they say they were able to demonstrate the company uses pieces of personal information that individuals did not explicitly provide it to, nonetheless, target them with ads. Some months ago Facebook did say that the spam notifications some users received at the number they provided for 2FA were a bug. “The last thing we want is for people to avoid helpful security features because they fear they will receive unrelated notifications,” Facebook then-CSO Alex Stamos wrote in a blog post at the time. Apparently not thinking to mention the rather pertinent additional side-detail that it’s nonetheless happy to repurpose the same security feature for ad targeting. [TechCrunch coverage at: DeepLinks Blog (EFF), The Mercury News and Tom’s Hardware]

Facts & Stats

CA – Federal Workers Cited 3,075 Times for Lapses in Document Security

Office workers at Public Services and Procurement Canada were cited 3,075 times last year for failing to lock up documents, USB keys and other storage devices containing sensitive information, says a new security report. And six of those employees were found to be chronic offenders during a “security sweep” at the department in 2017-2018, with each of them leaving confidential material unsecured at least six times over the 12-month period, according to a June 2018 briefing note obtained by CBC News under the Access to Information Act. [CBC News]

WW – Cyber Crime’s Toll: $1.1 Million in Losses and 1,861 Victims per Minute

Every minute more than $1.1 million is lost to cyber crime and 1,861 people fall victim to such attacks, according to a new report [Evil Internet Minute 2018] from threat management company RiskIQ [see PR, Blog Post & Infographic]. Despite the best efforts of organizations to guard against external cyber threats, spending up to $171,000 every 60 seconds, attackers continue to proliferate and launch successful campaigns online, the study said. Attacker methods range from malware to phishing to supply chain attacks aimed at third parties. Their motives include monetary gain, large-scale reputational damage, politics and espionage. One of the biggest security threats is ransomware. The report said 1.5 organizations fall victim to ransomware attacks every minute, with an average cost to businesses of $15,221. [Information Management]


CA – N.S. Premier Calls Election Promise to Increase OIPC Powers “a Mistake”

In 2013, Stephen McNeil said that if he became premier, he would “expand the powers and mandate of the Office of the Information and Privacy Commissioner, particularly through granting her order-making power.” At the time he responded to a report by the Centre of Law and Democracy [12 pg PDF] that recommended a complete overhaul of the province’s freedom-of-information policy, writing “If elected Premier, I will expand the powers and mandate of the Review Officer, particularly through granting her order-making power.” Nearly five years later, with no follow-through on that commitment, he says the pledge was a “mistake.” He said that he thinks the office is functioning “properly” the way it is and that it has all the power it needs. But experts say that McNeil’s failure to institute meaningful reforms in government transparency five years after taking office indicates a larger failure to take government transparency seriously. Catherine Tully, the province’s current privacy commissioner, has issued her own calls to update the legislation, including giving her order-making power. She has said that legislation written in 1993 is outdated for the current digital world. [Global News]

US – Privacy Group Sues Archives for Kavanaugh Surveillance Records

The Electronic Privacy Information Center [EPIC] has filed a federal Freedom of Information Act lawsuit seeking records related to U.S. Supreme Court nominee Brett Kavanaugh’s involvement in the George W. Bush administration’s government surveillance programs between 2001 and 2006 during enactment of the Patriot Act and while the administration was conducting warrantless surveillance for counter-terrorism purposes. [see announcement here & 21 pg PDF claim here] The group alleged that Kavanaugh said in 2006 Senate testimony on his nomination to the U.S. Court of Appeals for the District of Columbia Circuit that he didn’t know anything about the warrantless wiretapping program, which was carried out in secret until 2005. His White House email communications and records related to the program have not been made available to the public, the group alleged. [Bloomberg BNA]


WW – Please Don’t Give Your Genetic Data to AncestryDNA as Part of Their Spotify Playlist Partnership

Ancestry, the world’s largest for-profit genealogy company, has announced a new partnership with Spotify to create playlists based on your DNA. The partnership combines Spotify’s personalized recommendations with Ancestry’s patented DNA home kit data to give users recommendations based on both their Spotify habits and their ancestral place of origin. A ThinkProgress investigation last year found that buried in their terms of service, Ancestry claims ownership of a “perpetual, royalty-free, worldwide license” that may be used against “you or a genetic relative” as the company and its researchers see fit. Upon agreeing to the company’s terms of service, you and any genetic relatives appearing in the data surrender partial legal rights to the DNA, including any damages that Ancestry may cause unintentionally or purposefully. At the same time, maybe their mission isn’t all that different from Spotify’s, who’ve spent the last few years preaching the Big Data gospel in their aim to deliver the most highly-personalized experience to users through data collection. However you feel about data privacy, the Ancestry partnership feels like another big move for Spotify, who have continued to partner with auto manufacturers, telecom behemoths, video providers and more in recent months. [SPIN coverage at: Jezebel, Quartzy, Complex and Campaign]

Health / Medical

US – Congress Urged To Align 42 CFR Part 2 with HIPAA Privacy Rule

The Partnership to Amend 42 CFR Part 2 is urging Congress to include the Overdose Prevention and Patient Safety Act (HR 6082), which would align 42 CFR Part 2 with the HIPAA Privacy Rule, in compromise opioid legislation that the House and Senate are considering. HR 6082 would allow the sharing of information about a substance abuse patient without the patient’s consent. The House passed its comprehensive opioid crisis legislation (HR 6) [here & 9 pg PDF overview here] in June, while the Senate just passed its legislation (S 2680). The two chambers are working on compromise legislation that they hope to pass before the mid-term elections. Currently, 42 CFR Part 2 prevents providers from sharing any information on a patient’s substance abuse history unless the patient gives explicit consent. The Partnership to Amend 42 CFR Part 2 wants current law to be amended because, it argues, the stricter confidentiality requirements have a negative effect on medical treatment of individuals undergoing treatment for addiction. They emphasized their case in a Sept. 18 letter to the Senate and House majority and minority leaders. Not everyone in healthcare favors changing 42 CFR Part 2. The American Medical Association (AMA) has come out against the effort to change current law [arguing in a letter sent to Congress – coverage here] that amending 42 CFR Part 2 would discourage addicted individuals from seeking treatment out of concern that their addiction treatment information will be shared without their permission. [HealthIT Security]

Horror Stories

CA – Proposed Class Action Lawsuit Launched After Alleged NCIX Data Breach

Kipling Warner, a Vancouver software engineer, has launched a proposed class action lawsuit in the wake of an alleged data breach involving personal information belonging to former customers of bankrupt computer retailer NCIX. [The issue is being investigated by the RCMP and the BC OIPC; see here.] The notice of civil claim filed in B.C. Supreme Court [here] says he gave the company his name and address along with his debit and credit card details in the course of purchasing computer products. He is seeking to certify a lawsuit against NCIX and the company tasked with auctioning off the computer firm’s old equipment. Warner claims NCIX failed to properly encrypt the information of at least 258,000 people, and that the auctioneer failed to take “appropriate steps to protect the private information on its premises.” Warner is suing for losses including damage to credit reputation, mental distress, “wasted time, frustration and anxiety” and time lost “engaging in precautionary communication” with banks, credit agencies and credit card companies. His lawyer, David Klein [here], told CBC that customers dealing with a technology company would expect anyone who comes into contact with their information to take steps to ensure confidentiality. The provincial privacy act says organizations doing business in British Columbia have a duty to protect the personal information entrusted to them. The federal regulation says personal information that is “no longer required to fulfil the identified purposes should be destroyed, erased or made anonymous.” The proposed class action lawsuit says millions of customers could be affected. [CBC News]

US – Uber Agrees to $148M Settlement With States Over Data Breach

Uber will pay $148 million and tighten data security after the ride-hailing company failed for a year to notify drivers that hackers had stolen their personal information, according to a settlement, announced Wednesday, reached with all 50 states and the District of Columbia over a massive 2016 data breach [here]. [see California AG PR here, Illinois AG PR here, Alaska AG PR here, New York AG PR here & New Mexico AG PR here] Instead of reporting the breach, Uber hid evidence of the theft and paid ransom to ensure the data wouldn’t be misused. “This is one of the most egregious cases we’ve ever seen in terms of notification; a yearlong delay is just inexcusable,” Illinois Attorney General Lisa Madigan [wiki here] told The Associated Press. “And we’re not going to put up with companies, Uber or any other company, completely ignoring our laws that require notification of data breaches.” Uber, whose GPS-tracked drivers pick up riders who summon them from cellphone apps, learned in November 2016 that hackers had accessed personal data, including driver’s license information, for roughly 600,000 Uber drivers in the U.S. The company acknowledged the breach in November 2017, saying it paid $100,000 in ransom for the stolen information to be destroyed. The hack also took the names, email addresses and cellphone numbers of 57 million riders around the world. The settlement requires Uber to comply with state consumer protection laws safeguarding personal information and to immediately notify authorities in case of a breach; to establish methods to protect user data stored on third-party platforms; and to create strong password-protection policies. The company also will hire an outside firm to conduct an assessment of Uber’s data security and implement its recommendations. The settlement payout will be divided among the states based on the number of drivers each has. [The Washington Post coverage at: TechCrunch, PYMNTS, The Wall Street Journal and engadget]

US – Wendy’s Faces Lawsuit for Unlawfully Collecting Employee Fingerprints

A class-action lawsuit has been filed in Illinois against fast food restaurant chain Wendy’s, accusing the company of breaking state law in the way it stores and handles employee fingerprints. The lawsuit was filed on September 11 in a Cook County court [here], according to a copy of the complaint obtained by ZDNet. [The case is: Martinique Owens and Amelia Garcia v. Wendy’s International LLC, et al., Case No. 2018­-ch-­11423, in the Circuit Court of Cook County — complaint here.] The complaint centers on Wendy’s practice of using biometric clocks that scan employees’ fingerprints when they arrive at work, when they leave, and when they use the point-of-sale and cash register systems. The plaintiffs, former Wendy’s employees Martinique Owens and Amelia Garcia, claim that Wendy’s breaks state law — the Illinois Biometric Information Privacy Act (BIPA) [here] — because the company does not make employees aware of how it handles their data. Wendy’s does not inform employees in writing of the specific purpose and length of time for which their fingerprints are collected, stored, and used, as required by BIPA, nor does it obtain a written release from employees giving explicit consent to collect and handle the fingerprints in the first place. Nor does it provide a publicly available retention schedule and guidelines for permanently destroying employees’ fingerprints after they leave the company, the plaintiffs said. The class action also names Discovery NCR Corporation [here], the software provider that supplies Wendy’s with the biometric clocks and POS and cash register access systems used in restaurants. The plaintiffs said they believe NCR may hold fingerprint information on other Wendy’s employees. [ZDNet coverage at: Top Class Actions, The Daily Dot, Human Capital (HRD), Gizmodo and Biometric Update]

WW – Facebook Forces Mass Logout After Breach

Facebook logged 90 million users out of their accounts after the company discovered that hackers had been exploiting a flaw in Facebook code that allowed them to steal Facebook access tokens and take over other people’s accounts. The stolen tokens could also be used to access apps and websites linked to the Facebook accounts. The hackers exploited a trio of flaws that affected the “View As” feature, which lets users see how their profiles appear to other people. Facebook has fixed the security issue; it has also reset the access tokens for 90 million accounts. Facebook became aware of the issue on September 16, when it noticed an unusual spike in people accessing Facebook. [newsroom.fb.com: Security Update | Wired: The Facebook Security Meltdown Exposes Way More Sites Than Facebook | Wired: Everything We Know About Facebook’s Massive Security Breach | eWeek: Facebook Data Breach Extended to Third-Party Applications | ZDNet: Facebook discloses network breach affecting 50 million user accounts | Krebs on Security: Facebook Security Bug Affects 90M Users | The Register: Facebook: Up to 90 million addicts’ accounts slurped by hackers, no thanks to crappy code]

WW – Facebook Says Big Breach Exposed 50 Million Accounts to Full Takeover

Facebook Inc said [notice & details here] that hackers stole digital login codes allowing them to take over nearly 50 million user accounts, in its worst security breach ever given the unprecedented level of potential access. The incident adds to what has been a difficult year for the company’s reputation. Facebook has yet to determine whether the attacker misused any accounts or stole private information. It also has not identified the attacker’s location or whether specific victims were targeted. Its initial review suggests the attack was broad in nature. Chief Executive Mark Zuckerberg described the incident as “really serious” in a conference call with reporters [see transcript]. His account was affected along with that of Chief Operating Officer Sheryl Sandberg, a spokeswoman said. The vulnerability had existed since July 2017, but the company first identified it on Tuesday after spotting a “fairly large” increase in use of its “view as” [here] privacy feature on Sept. 16, executives said. “View as” allows users to verify their privacy settings by seeing what their own profile looks like to someone else. The flaw inadvertently gave the devices of “view as” users the wrong digital code, which, like a browser cookie, keeps users signed in to a service across multiple visits. That code could allow the person using “view as” to post and browse from someone else’s Facebook account, potentially exposing private messages, photos and posts. The attacker also could have gained full access to victims’ accounts on any third-party app or website where they had logged in with Facebook credentials. Facebook fixed the issue. It also notified the U.S. Federal Bureau of Investigation, Department of Homeland Security, Congressional aides and the Data Protection Commission in Ireland, where the company has European headquarters.
Facebook reset the digital keys of the 50 million affected accounts, and as a precaution temporarily disabled “view as” and reset those keys for another 40 million that have been looked up through “view as” over the last year. About 90 million people will have to log back into Facebook or any of their apps that use a Facebook login, the company said. [Reuters See also: Facebook Security Bug Affects 90M Users | Facebook’s spam filter blocked the most popular articles about its 50m user breach | Here’s what to do if you were affected by the Facebook hack | Facebook Says Three Different Bugs Are Responsible For The Massive Account Hacks | Facebook warns that recent hack could have exposed other apps, including Instagram, Tinder, and Spotify | Facebook Faces Class Action Over Security Breach That Affected 50 Million Users | Facebook Could Face Up to $1.63 Billion Fine for Latest Hack Under the GDPR | Facebook could be fined up to $1.63 billion for a massive breach which may have violated EU privacy laws | Until data is misused, Facebook’s breach will be forgotten]
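The access token described in these reports behaves like a bearer credential: whoever holds it can act as the account, which is why Facebook's remedy was a server-side token reset rather than a password change. The sketch below is a generic illustration of that mechanism only; the class and method names are invented for this example and are not Facebook's actual implementation.

```python
import secrets

class TokenStore:
    """Toy model of server-side access tokens (illustrative names only)."""

    def __init__(self):
        self._tokens = {}  # token -> user_id

    def issue(self, user_id):
        # Like a long-lived cookie: keeps the user signed in across visits.
        token = secrets.token_hex(16)
        self._tokens[token] = user_id
        return token

    def whoami(self, token):
        # Any holder of a valid token acts as that user, which is why a
        # leaked token amounts to full account takeover.
        return self._tokens.get(token)

    def revoke_all(self, user_ids):
        # The remedy described above: invalidate tokens server-side,
        # forcing the affected users to log in again.
        self._tokens = {t: u for t, u in self._tokens.items()
                        if u not in user_ids}

store = TokenStore()
t = store.issue("alice")
assert store.whoami(t) == "alice"   # valid token acts as the account
store.revoke_all({"alice"})
assert store.whoami(t) is None      # after the reset, the stolen token is useless
```

This also shows why resetting tokens, rather than passwords, was the appropriate fix: the attackers never learned any passwords, only tokens, and revocation makes every stolen token worthless at once.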

Internet / WWW

EU – Report Warns of Smart Home Tech Impact on Children’s Privacy

Dr. Veronica Barassi of Goldsmiths, University of London, leads the Child Data Citizen research project and has submitted a report on “Home Life Data and Children’s Privacy” to the Information Commissioner’s Office (ICO), arguing that data collected from children by home automation devices is both personal data and “home life data,” which is made up of family, household, biometric and highly contextual data. She calls on the ICO to launch a review of the impact of home life data on children’s privacy, and to include the concept in future considerations. [Biometric Update coverage at: TechCrunch]

Law Enforcement

CA – RCMP’s Ability to Police Digital Realm ‘Rapidly Declining’

Privacy watchdogs have warned against any new encryption legislation. A note tucked into the briefing binder prepared for RCMP Commissioner Brenda Lucki when she took over the top job earlier this year, obtained by CBC News, may launch a renewed battle between the national police service and privacy advocates. “Increasingly, criminality is conducted on the internet and investigations are international in nature, yet investigative tools and RCMP capacity have not kept pace. Growing expectations of policing responsibilities and accountability, as well as complexities of the criminal justice system, continue to overwhelm the administrative demands within policing” [says the memo]. Encryption of online data has been a persistent thorn in the RCMP’s side. “Approximately 70% of all communications intercepted by CSIS and the RCMP are now encrypted. 80 organized crime groups were identified as using encryption in 2016 alone,” according to the 274-page [briefing binder]. Lucki’s predecessor lobbied the government for new powers to bypass digital roadblocks, including tools to get around encryption and warrantless access to internet subscriber information. Some critics have noted that non-criminals — journalists, protesters and academics, among others — also use encryption tools online and have warned any new encryption legislation could undermine the security of financial transactions and daily online communication. Ann Cavoukian called the RCMP’s push for more online policing power “appalling.” “I guess we should remind them that we still live in a free and democratic society where people have privacy rights, which means that they should be in control of their personal information … If you’re a law abiding citizen, you get to decide how your information is used and to whom it’s disclosed. The police have no right to access your personal information online, unless of course they have a warrant” she said. [CBC News]

Online Privacy

US – Facebook Scolds Police for Using Fake Accounts to Snoop on Citizens

In a September 19 letter, addressed to Memphis Police Department Director Michael Rallings, Facebook’s Andrea Kirkpatrick, director and associate general counsel for security, scolded the police for creating multiple fake Facebook accounts and impersonating legitimate Facebook users as part of its investigations into “alleged criminal conduct unrelated to Facebook.” Facebook’s letter was sent following a civil rights lawsuit filed by the American Civil Liberties Union (ACLU) of Tennessee that accused the MPD of illegally monitoring activists to stifle their free speech and protests. The lawsuit claimed that Memphis police violated a 1978 consent decree that prohibits infiltration of citizen groups to gather intelligence about their activities. After two years of litigation, the city of Memphis had entered into a consent decree prohibiting the government from “gathering, indexing, filing, maintenance, storage or dissemination of information, or any other investigative activity, relating to any person’s beliefs, opinions, associations or other exercise of First Amendment rights.” Before the trial even began over the ACLU’s lawsuit last month, US District Judge Jon McCalla issued a 35-page order agreeing with the plaintiffs, but he also ruled that police can use social media to look for specific threats: a ruling that, one imagines, would condone the use of fake profiles during undercover police work, but not the illegal surveillance of legal, constitutionally protected activism. The ACLU lawsuit uncovered evidence that Memphis police used a fake “Bob Smith” account to befriend and gather intelligence on Black Lives Matter activists. According to the Electronic Frontier Foundation (EFF), Facebook deactivated “Bob Smith” after the organization gave it a heads-up. Then, Facebook went on to identify and deactivate six other fake accounts managed by Memphis police. [Naked Security (Sophos)]

WW – Google Promises Chrome Changes After Privacy Complaints

Google, on the defensive from concerns raised about how Chrome tracks its users, has promised changes to its web browser. Complaints in recent days involve how Google stores data about browsing activity in files called cookies and how it syncs personal data across different devices. Google representatives said there’s nothing to be worried about, but that they’ll be changing Chrome nevertheless. In a recent blog post [here], Zach Koch, Chrome product manager, said Google will add new options and explanations to Chrome’s interface and reverse one cookie-hoarding policy that undermined people’s attempts to clear their cookies. [CNET News; coverage of complaints at: Bloomberg (video), CNBC, WIRED, TechCrunch, Forbes and Popular Mechanics]

WW – Privacy and Anonymity in the Modern World — CyberSpeak Podcast

On this episode of the CyberSpeak with InfoSec Institute podcast [YouTube here], Lance Cottrell, chief scientist at Ntrepid, talks about the evolution of privacy and anonymity on the Internet, the impact of new regulations and laws, and a variety of other privacy-related topics. In the podcast, Cottrell and host Chris Sienko discuss:

  • What about the early Internet drove you to focus on online anonymity and security? (1:45)
  • Do the early privacy tools and concepts hold up in today’s environment? (3:50)
  • When did it become apparent that fraudsters and phishers were taking over the Internet? (5:00)
  • What are some of the most effective social engineering attacks being used? (8:10)
  • Have you ever been scammed or phished? (11:35)
  • Why is online anonymity important? (14:50)
  • What are some examples of privacy and security issues while traveling? (20:50)
  • How will GDPR and California’s new privacy law affect anonymity and privacy? (23:25)
  • What would be your dream privacy regulation or law? (24:55)
  • What are your thoughts on privacy certifications? (28:50)
  • What’s the future of online privacy and anonymity? (29:40)

[Security Boulevard]

Privacy (US)

US – In Senate Hearing, Tech Giants Push Lawmakers for Federal Privacy Rules

A recent hearing at the Senate Commerce Committee [here] with Apple, Amazon, Google and Twitter, alongside AT&T and Charter, marked the latest in a string of hearings in the past few months. This time, privacy was at the top of the agenda. The problem, lawmakers say, is that consumers have little of it. Senators at the hearing noted that the U.S. was lagging behind Europe’s new GDPR privacy rules and California’s recently passed privacy law, which goes into effect in 2020, and lawmakers were edging toward introducing their own federal privacy law. Here are the key takeaways: 1) Tech giants want new federal legislation, not least to upend California’s privacy law; 2) Google made “mistakes” on privacy, but evaded questioning about search in China; and 3) Startups might struggle under GDPR-ported rules, companies claim. Committee chairman Sen. John Thune (R-SD) said [here] that the committee won’t “rush through” legislation, and will ask privacy advocates for their input in a coming hearing. [Watch the full hearing here and read witness statements: Len Cali of AT&T – 6 pg PDF here; Andrew DeVore of Amazon – 5 pg PDF here; Keith Enright of Google – 6 pg PDF here & 3 pg PDF here; Damien Kieran of Twitter – 5 pg PDF here; Guy (Bud) Tribble of Apple – 2 pg PDF here; and Rachel Welch of Charter Communications – 5 pg PDF here. TechCrunch coverage: During Senate Hearing, Tech Companies Push for Lax Federal Privacy Rules | Tech Execs Offer Senate Help Writing a Toothless National Privacy Law | US privacy law is on the horizon. Here’s how tech companies want to shape it | Here’s why tech companies are in favor of *federal* regulation | Google confirms Dragonfly project in Senate hearing, dodges questions on China plans | Google confirms secret Dragonfly project, but won’t say what it is]

US – EFF Opposes Industry Efforts to Have Congress Roll Back State Privacy Protections

The Senate Commerce Committee is holding a hearing on consumer privacy [here & PR here], but consumer privacy groups like EFF were not invited. Instead, only voices from big tech and Internet access corporations will have a seat at the table. In the lead-up to this hearing, two industry groups (the Chamber of Commerce and the Internet Association) have suggested that Congress wipe the slate clean of state privacy laws in exchange for weaker federal protections. EFF opposes such preemption, and has submitted a letter to the Senate Commerce Committee to detail the dangers it poses to user privacy. Current state laws across the country have already created strong protections for user privacy. Our letter identifies three particularly strong examples from California’s Consumer Privacy Act, Illinois’ Biometric Privacy Act, and Vermont’s Data Broker Act. If Congress enacts weaker federal data privacy legislation that preempts such stronger state laws, the result will be a massive step backward for user privacy. … The companies represented at Wednesday’s hearing rely on the ability to monetize information about everything we do, online and elsewhere. They are not likely to ask for laws that restrain their business plans. [DeepLinks Blog (Electronic Frontier Foundation)]

US – NTIA Seeks Comment on New Approach to Consumer Data Privacy

The U.S. Department of Commerce’s National Telecommunications and Information Administration (NTIA) issued a Request for Comments on a proposed approach to consumer data privacy designed to provide high levels of protection for individuals, while giving organizations legal clarity and the flexibility to innovate [see PDF]. The Request for Comments is part of a transparent process to modernize U.S. data privacy policy for the 21st century. In parallel efforts, the Commerce Department’s National Institute of Standards and Technology is developing a voluntary privacy framework [here & here] to help organizations manage risk, and the International Trade Administration is working to increase global regulatory harmony. The proposed approach focuses on the desired outcomes of organizational practices, rather than dictating what those practices should be. With the goal of building better privacy protections, NTIA is seeking comment on the following outcomes: 1) Organizations should be transparent about how they collect, use, share, and store users’ personal information; 2) Users should be able to exercise control over the personal information they provide to organizations; 3) The collection, use, storage and sharing of personal data should be reasonably minimized in a manner proportional to the scope of privacy risks; 4) Organizations should employ security safeguards to protect the data that they collect, store, use, or share; 5) Users should be able to reasonably access and correct personal data they have provided; 6) Organizations should take steps to manage the risk of disclosure or harmful uses of personal data; and 7) Organizations should be accountable for the use of personal data that has been collected, maintained or used by their systems. Comments are due by October 26, 2018. [Newsroom (National Telecommunications and Information Administration) coverage at: Multichannel News, Reuters, CBS News and engadget]

US – NTIA Seeks Comment on New, Outcome-Based Privacy Approach

The U.S. Department of Commerce’s National Telecommunications and Information Administration (NTIA) [here] issued a Request for Comments [4 pg PDF Federal Register post — also here & PR here] on a new consumer privacy approach that is designed to focus on outcomes instead of prescriptive mandates. The RFC presents an important opportunity for organizations to provide legal and policy input to the administration, and comments are due October 26. The RFC proposes seven desired outcomes that should underpin privacy protections: 1) Transparency, 2) control, 3) reasonable minimization (of data collection, storage length, use, and sharing), 4) security, 5) access and correction, 6) risk management, and 7) accountability. According to the RFC, the outcome-based approach will provide greater flexibility, consumer protection, and legal clarity. Additionally, the RFC describes eight overarching goals for federal action on privacy: 1) Regulatory harmonization; 2) Legal clarity while maintaining the flexibility to innovate; 3) Comprehensive application; 4) Risk and outcome-based approach; 5) Interoperability; 6) Incentivize privacy research; 7) FTC enforcement; and 8) Scalability. The NTIA is seeking comments on the listed outcomes and goals, as well as other issues, such as whether the FTC needs additional resources to achieve the goals. [Chronicle of Data Protection (Hogan Lovells) coverage at: Multichannel News, Reuters, CBS News and engadget]

US – SEC Brings First Enforcement Action for Violation of ID Theft Rule

On September 26, 2018, the SEC brought its first ever enforcement action [PR] for violations of Regulation S-ID (the “Identity Theft Red Flags Rule”), 17 C.F.R. § 248.201 [here & here also guidance here], in addition to violations of Regulation S-P, 17 C.F.R. § 248.30(a) (the “Safeguards Rule”) [see here & here]. Regulation S-ID and Regulation S-P apply to SEC-registered broker-dealers, investment companies, and investment advisers, and require those entities to maintain written policies and procedures to detect, prevent and mitigate identity theft, and to safeguard customer records and information, respectively. The SEC’s action against Voya Financial Advisors (“Voya”) cements the SEC’s focus on investment adviser and broker-dealer cybersecurity compliance, both in terms of its examination program—which referred the matter to Enforcement—as well as the Division of Enforcement’s Cyber Unit, which investigated and resolved the matter with Voya. The SEC’s enforcement action against Voya arose out of an April 2016 “vishing” (voice phishing) intrusion that allowed one or more persons impersonating Voya representatives to gain access to personal identifying information of approximately 5,600 Voya customers. The SEC’s action against Voya was resolved through a settled administrative order, in which Voya neither admitted nor denied the SEC’s findings, but agreed to engage and follow the recommendations of an independent compliance consultant for two years, certify its compliance with the consultant’s recommendations, and pay a $1 million fine. Voya was also enjoined from future violations of Regulation S-P or Regulation S-ID and was censured by the SEC. The SEC noted that, in reaching the settlement, it considered the remedial actions that Voya promptly undertook following the attack. [Privacy & Data Security (Alston & Bird) and at: Reuters, Infosecurity Magazine, Business Record, InvestmentNews and Law 360]

US – Google Releases Framework to Guide Data Privacy Legislation

Google released a set of privacy principles [3 pg PDF & blog post here] to guide Congress as it prepares to write legislation aimed at governing how websites collect and monetize user data. The framework largely consists of privacy principles that Google already abides by or could easily bring itself into compliance with. It calls for allowing users to easily access and control the data that’s collected about them and requiring companies to be transparent about their data practices. The set of proposals is designed to be a baseline for federal rules regarding data collection. Google appears to be the first internet giant to release such a framework, but numerous trade associations have published their own in recent weeks. The industry has gotten on board with the idea of a national privacy law in the weeks since California passed its own strict regulations aimed at cracking down on data collection and increasing user control. Internet companies have universally opposed the measure and have begun pushing Congress to establish a national law that would block states from implementing their own. [The Hill coverage at: AdWeek | Charter: Parity Is Key to Online Privacy Protection | In Reversal, IAB Says Congress Should Consider Privacy Legislation]


US – Revealed: DoJ Secret Rules for Targeting Journalists With FISA Court Orders

Revealed for the first time are the Justice Department’s rules for targeting journalists with secret FISA court orders. The documents [PDF] were obtained as part of a Freedom of Information Act lawsuit brought by Freedom of the Press Foundation and the Knight First Amendment Institute at Columbia University. While civil liberties advocates have long suspected secret FISA court orders may be used (and abused) to conduct surveillance on journalists, the government—to our knowledge—has never acknowledged even contemplating doing so before the release of these documents today. These DOJ FISA court rules are entirely separate from, and much less stringent than, the rules for obtaining subpoenas, court orders, and warrants against journalists as laid out in the Justice Department’s “media guidelines,” which were strengthened in 2015 after scandals involving surveillance of journalists during the Obama era. The DOJ need only follow its regular FISA court procedures (which can be less strict than getting a warrant in a criminal case) and get additional approval from the Attorney General or Assistant Attorney General. FISA court orders are also inherently secret, and targets are almost never informed that they exist. The documents raise several concerning questions: 1) How many times have FISA court orders been used to target journalists? 2) Why did the Justice Department keep these rules secret — even their very existence — when it updated its “media guidelines” in 2015 with great fanfare? 3) If these rules can now be released to the public, why are the FBI’s very similar rules for targeting journalists with due-process-free National Security Letters still considered classified? And is the Justice Department targeting journalists with NSLs and FISA court orders to get around the stricter “media guidelines”? [Freedom of the Press Foundation coverage at: The Intercept]

CA – Cameras on School Buses Are an Option, Says N.L. Privacy Commissioner

The privacy commissioner of Newfoundland and Labrador says the English School District has the right to put cameras on school buses. The issue came up last week when CBC News reported on allegations of sexual assault on a school bus in Western Newfoundland, where a teenaged boy has been charged and faces three counts in relation to incidents involving two alleged victims. The family of one of the alleged victims — an eight-year-old girl — is calling on the school district to install cameras on school buses. “The school district has the ability to put cameras on school buses. They have lots of cameras in many schools across the province,” information and privacy commissioner Donovan Molloy told CBC’s Corner Brook Morning Show [listen here]. School board CEO Tony Stack has said cameras would only be considered as a “last resort” due to privacy reasons. But Privacy Commissioner Molloy says there’s nothing in the law that says cameras are not allowed. He did say, however, that other measures should be attempted first, such as assigned seating to separate younger and older students, and the use of student monitors, which is permitted under the law. Molloy emphasized that the Office of the Information and Privacy Commissioner has not forbidden the use of cameras on school buses. At the same time he cautioned that he is not advocating for such a change, because constant surveillance may do more harm than good, taking away children’s sense of independence. [CBC News; see also: Teenage boy charged with sexual assaults after incidents on school bus | Renewed Calls for Cameras After Alleged School Bus Sexual Assault | North Shore parent starts petition over safety concerns for children riding school buses | School Bus Cameras Not a Cure-All, says Privacy Commissioner]

CA – Maps Show All Secret Surveillance Cameras Spying on Canadians

Canadian police agencies have taken part in the increasingly intense law enforcement protocols that have become common across North America and Europe. The most controversial of these efforts, of course, is public surveillance. While Canada’s public surveillance system is less famous than those in the United States and United Kingdom, it does exist. Road cameras are the most well-known and there are potentially thousands of them across the country, all of which are regularly if not constantly monitored. The cameras are designed to catch traffic violations, but they can also be used as a method of public surveillance more broadly, according to Wired. The cameras, of course, also capture activity on sidewalks and public open spaces. According to the Office of the Privacy Commissioner of Canada, Canadian law enforcement agencies “increasingly view it as a legitimate tool to combat crime and ward off criminal activity—including terrorism … however, they present a challenge to privacy, to freedom of movement and freedom of association.” [see here] While the locations of the cameras are (now) public information, most Canadians are unaware that authorities have placed them so extensively in every Canadian city. To give you a sense of the scope of road surveillance in Canada, we’ve compiled these maps, which depict the exact locations of road cameras in every major Canadian city, including Vancouver, Calgary, Edmonton, Winnipeg, Toronto, Ottawa and Montreal. [MTL Blog coverage at: CBC News]

US Legislation

US – California Approves Bills Tightening Security, Privacy of IoT Devices

Gov. Jerry Brown has signed two bills, Assembly Bill 1906 and Senate Bill 327, that could make manufacturers of Internet-connected devices more responsible for ensuring the privacy and security of California residents. Brown’s office announced the signings on September 28. Each bill specified that it could become law only if the other was also signed. Both take effect in about 15 months, on Jan. 1, 2020. Senate Bill 327, the older of the two, was introduced in February 2017 by state Sen. Hannah-Beth Jackson [wiki here]; as currently amended, the senator told Government Technology, it is “pretty much a mirror” of AB 1906, introduced in January by Assemblywoman Jacqui Irwin [wiki here]. Both require manufacturers of connected devices to equip them with a “reasonable security feature or features” that are appropriate to their nature and function, and the information they may collect, contain or transmit — and are designed to protect the device and its information from “unauthorized access, destruction, use, modification or disclosure.” The bills also specify that if such a device has a “means for authentication outside a local area network,” that will be considered a reasonable security feature if either the preprogrammed password is unique to each device made, or the device requires a user to create a new “means of authentication” before initial access is granted. The question of what defines a “reasonable security feature or features” is one of several that industry groups cited in their opposition to AB 1906.
In a statement provided to GT, the CMTA [California Manufacturers and Technology Association] said the bills are an attempt to “create a cybersecurity framework by imposing undefined rules on California manufacturers,” but instead create a loophole allowing imported devices to “avoid implementing any security features.” This, it said, makes the state less attractive to manufacturers, less competitive and increases the risk of cyberattacks. The Entertainment Software Association, in opposition to SB 327, said existing law already requires manufacturers to set up “reasonable privacy protections appropriate to the nature of the information they collect.” [Government Technology; see also: California governor signs country’s first IoT security law | Hey, Alexa, California’s New IoT Law Requires Data Protections]
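The authentication requirement described above can be sketched in code. This is a minimal illustration of the two compliance paths in SB 327/AB 1906 for devices reachable from outside a local area network; the `Device` class, its method names, and the default password are illustrative assumptions, not language from either bill.

```python
# Sketch of the two "reasonable security feature" paths for remote
# authentication under SB 327 / AB 1906 (illustrative, not legal advice):
#   Path 1: the preprogrammed password is unique to each device made.
#   Path 2: a shared default is tolerated only if the user must create
#           new credentials before initial access is granted.
import secrets

class Device:
    def __init__(self, unique_preprogrammed_password=True):
        if unique_preprogrammed_password:
            # Path 1: factory assigns a per-device random password.
            self.password = secrets.token_urlsafe(12)
            self.must_change = False
        else:
            # Path 2: shared default, but first access is gated on a reset.
            self.password = "admin"
            self.must_change = True

    def login(self, password, new_password=None):
        if password != self.password:
            return "denied"
        if self.must_change:
            # Refuse access until the user sets a genuinely new credential.
            if not new_password or new_password == self.password:
                return "change-password-required"
            self.password = new_password
            self.must_change = False
        return "granted"
```

A device shipped with the shared `"admin"` default would return `"change-password-required"` on first login until the user supplies a new password, which is the behavior the bills describe as an acceptable alternative to per-device unique passwords.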

US – Amendments to the California Consumer Privacy Act of 2018

Amendments to California’s expansive Consumer Privacy Act of 2018 [AB – 375 here] include new provisions that may significantly impact the timing of enforcement and provide exemptions for large amounts of personal data regulated by other laws. Because the Act was hastily passed [in June, 2018] … it was expected that the Act would undergo significant amendments before it takes effect on January 1, 2020. The first amendments were passed by the California State Legislature on August 31, 2018, in the form of SB-1121, and Governor Brown [signed it into law September 23, 2018 – see here]. While SB-1121 is labeled as a “technical corrections” bill designed to address drafting errors, ambiguities, and inconsistencies in the Act, it in fact creates new provisions in addition to those already contained within the Act. One notable provision of the Bill is that it grants a grace period before enforcement actions can be brought: enforcement cannot begin until six months after the California AG issues regulations, or July 1, 2020, whichever is earlier. Another key effect of the Bill is that it fully exempts data that is regulated by the Gramm-Leach-Bliley Act, the California Financial Information Privacy Act, HIPAA, the California Confidentiality of Medical Information Act, the clinical trials Common Rule, and the Driver’s Privacy Protection Act from the privacy requirements of the Act. However, these industries are still subject to the privacy provisions of the Act if they engage in activities falling outside of their applicable privacy regulations (except for the health care industry: if it treats all data as PHI, then it remains exempt as to all data). As we previously predicted, the Act will continue to evolve prior to its January 1, 2020 effective date. While the current Bill attempts to clarify the Act, it does not address all of the ambiguities and uncertainties. We anticipate further changes and guidance regarding the Act and will continue to monitor the latest developments.
[Security & Privacy Bytes (Squire Patton Boggs); additional coverage at: Privacy and Cybersecurity Perspectives (Murtha Cullina), Workplace Privacy Report (Jackson Lewis), Privacy & Data Security (Alston & Bird) and Data Privacy Monitor (BakerHostetler)]

US – California Consumer Privacy Act: What to Expect

This is the fourth installment in Hogan Lovells’ series [here] on the California Consumer Privacy Act [see installment 1 here, installment 2 here and installment 3 here]. It discusses litigation exposure that businesses collecting personal information about California consumers should consider in the wake of the California Legislature’s passage of the California Consumer Privacy Act of 2018 (CCPA). [AB – 375 here] For several years, the plaintiffs’ bar increasingly has relied on statutes like the Confidentiality of Medical Information Act, Cal. Civ. Code § 56 et seq. [here], and the Customer Records Act, Cal. Civ. Code § 1798.81, et seq. [here], to support individual and classwide actions for purported data security and privacy violations. The CCPA creates a limited private right of action for suits arising out of data breaches. At the same time, it also precludes individuals from using it as a basis for a private right of action under any other statute. Both features of the law have potentially far-reaching implications and will garner the attention of an already relentless plaintiffs’ bar when it goes into effect January 1, 2020. [This post covers] what you need to know [under two headings]: 1) The CCPA Provides a Limited Private Right of Action for Data Breach Suits; and 2) Plaintiffs Likely Will Argue the CCPA Provides a Basis for Unfair Competition Law Claims. [Chronicle of Data Protection (Hogan Lovells)]

Workplace Privacy

WW – Many Employee Work Habits Seem Innocent but Invite Security Threats

While most employees are generally risk averse, many engage in behaviors that could lead to security incidents, according to a new report from Spanning Cloud Apps LLC [here], a provider of cloud-based data protection. [see Trends in U.S. Worker Cyber Risk-Aversion and Threat Preparedness here] The company surveyed more than 400 full-time U.S. employees, and found that more than half (55%) admitted to clicking links they didn’t recognize, while 45% said they would allow a colleague to use their work computer and 34% were unable to identify an insecure e-commerce site. The results paint a picture of a workforce that has a general understanding of security risks, but is underprepared for the increasing sophistication and incidence of ransomware and phishing attacks, the report said. Employees would rather be “nice” than safe, the study said. Of workers with administrative access, only 35% responded that they would refuse to allow a colleague to access their device. And they like to shop from work, with more than 52% saying they shop online from their work computer. Workers are underprepared for sophisticated phishing emails. When presented with a visual example, only 36% correctly identified a suspicious link as being the key indicator of a phishing email, the study said. [Information Management coverage at: BetaNews]
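The suspicious-link indicator the survey tested can be made concrete with a common heuristic: a phishing link often shows one domain in its visible text while its underlying href points somewhere else. The function below is an illustrative sketch of that single check (it is not from the Spanning report, and real mail filters combine many such signals); the function name and examples are assumptions for illustration.

```python
# Illustrative heuristic: flag a link when the domain shown in its
# display text differs from the domain the href actually points to.
from urllib.parse import urlparse

def looks_suspicious(display_text, href):
    # Tolerate display text without a scheme so urlparse finds a host.
    text = display_text.strip()
    if "://" not in text:
        text = "http://" + text
    shown = urlparse(text).hostname
    actual = urlparse(href).hostname
    # Only compare when the visible text actually looks like a domain;
    # plain words like "Click here" carry no domain to mismatch.
    if not shown or "." not in shown or " " in shown:
        return False
    return actual is not None and shown != actual
```

For example, a link whose text reads `www.mybank.com` but whose href is `http://phish.example.net/login` would be flagged, while a link whose text and href share a host would not.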



