26 August – 06 September 2016

Biometrics

WW – Hackers Trick Facial-Recognition Logins With Photos From Facebook

Researchers have demonstrated a disturbing new method of stealing a face: one that’s based on 3-D rendering and some light Internet stalking. Security and computer vision specialists from the University of North Carolina presented a system that uses digital 3-D facial models based on publicly available photos and displayed with mobile virtual reality technology to defeat facial recognition systems. A VR-style face, rendered in three dimensions, gives the motion and depth cues that a security system is generally checking for. The researchers used a VR system shown on a smartphone’s screen for its accessibility and portability. Their attack, which successfully spoofed four of the five systems they tried, is a reminder of the downside to authenticating your identity with biometrics. By and large your bodily features remain constant, so if your biometric data is compromised or publicly available, it’s at risk of being recorded and exploited. Faces plastered across the web on social media are especially vulnerable. [Wired]

UK – Met Police Rolls Out Real-Time Live Face-Spotting Tech

London’s Metropolitan Police will trial an automated facial recognition system to identify people at this weekend’s Notting Hill Carnival. This is only the second time that British cops have openly trialled live automated facial recognition (AFR) systems in the UK. Last year, Leicestershire Police also trialled AFR at Download Festival – though this was found not to have been part of the policing plan for the event, and its effectiveness was never assessed afterwards. According to the Met, the AFR system “involves the use of overt cameras which scan the faces of those passing by and flag up potential matches against a database of custody images. The database has been populated with images of individuals who are forbidden from attending Carnival, as well as individuals wanted by police who it is believed may attend Carnival to commit offences.” The government’s Surveillance Camera Commissioner, Tony Porter, said that “the Surveillance Camera Code of Practice requires relevant authorities such as Local Authorities and Police Forces to ensure they use surveillance cameras effectively, efficiently and proportionately.” Even if the use of AFR complies with the code, the Met’s collection of custody images has been a greater source of controversy. In his annual report earlier this year, the Biometrics Commissioner warned that the Home Office was cruising for a lawsuit in this area, particularly after a High Court ruling in 2012, R (RMC and FJ) v MPS, in which Lord Justice Richards found: “[T]he just and appropriate order is to declare that the [Metropolitan Police’s] existing policy concerning the retention of custody photographs … is unlawful. It should be clear in the circumstances that a ‘reasonable further period’ for revising the policy is to be measured in months, not years.” According to a Freedom of Information request made by pressure group Liberty last year, however, in the three years since the ruling the Met confessed it had deleted only 560 persons’ images because “the current I.T. system which holds MPS custody images was not designed or built to accommodate a complex retention policy.” In response to a Parliamentary question reported in the Birmingham Mail, Baroness Williams of Trafford reported that by 15 July this year there were “over 19 million custody images, which may include images other than of faces, uploaded by forces onto the PND (Police National Database).” “Of these, 16,644,143 had been enrolled in the facial image recognition gallery and are searchable using automated facial recognition software,” Williams revealed – a figure representing roughly a quarter of the UK’s entire population. This area is expected to receive enhanced attention when the Home Office publishes its long-awaited Biometrics Review, as well as its Custody Images Review. Though both have been completed, the Home Office has not published them, which The Register’s sources attribute to the “rubbish” reports being redrafted. [The Register]

Big Data

WW – Tech Giants Explore AI Ethics Standards Group

With the rise of artificial intelligence, some of the world’s biggest tech companies are commencing informal talks on how best to develop an ethical and self-policing framework for the burgeoning technology. Alphabet, Amazon, Facebook, IBM and Microsoft have been meeting to discuss AI’s impact on jobs, transportation and warfare. Though a name for the standards group has not yet come to light, four people familiar with the meetings said the group intends to “ensure AI research is focused on benefiting people, not hurting them.” Stanford University has also released a report funded by Microsoft researcher Eric Horvitz. The report, titled “Artificial Intelligence and Life in 2030,” contends that it would be impossible for government to regulate AI: “The study panel’s consensus is that attempts to regulate AI in general would be misguided, since there is no clear definition of AI (it isn’t any one thing), and the risks and considerations are very different in different domains.” [The New York Times]

WW – Are Algorithms ‘Weapons of Math Destruction’?

Remember the 2008 financial crisis and the “dark financial arts” that caused it? Cathy O’Neil sees parallels between those calamitous days and the use of big data today. In her new book, “Weapons of Math Destruction,” O’Neil, a Harvard-trained mathematician who used to ply her talents on Wall Street, argues that the “discriminatory and even predatory way in which algorithms are being used in everything from our school system to the criminal justice system is really a silent financial crisis.” To solve the problem, O’Neil has proposed a Hippocratic Oath for mathematicians and a host of regulatory reforms. [Time]

Canada

CA – NL OIPC Issues Guidelines on Legal Advice Exemption

The Newfoundland and Labrador (NL) OIPC has issued guidelines on applying the legal advice exception found in section 30 of the Access to Information and Protection of Privacy Act, 2015.

  • The guidelines rely heavily on the decision by the NL Supreme Court in Newfoundland and Labrador (Information and Privacy Commissioner) v. Eastern Regional Integrated Health Authority, 2015 NLTD(G) 183 (Eastern Health case).
  • The Court in the Eastern Health case reviewed the current state of the law regarding solicitor and client privilege.
  • The guidance document annotates and summarizes the court’s review of both solicitor-client and litigation privileges, both of which are covered by the legal advice exception.
  • The NL OIPC noted that, when relying on these exceptions, “public bodies should consider the scope and intention of the privilege.”
  • The NL OIPC affirms that a public body relying on the solicitor-client (legal advice) exception must be able to show that:
  1.   the document was a communication between a solicitor, acting in his or her professional capacity, and the client;
  2.   the communication entailed the seeking or giving of legal advice; AND
  3.   the communication was intended to be confidential.
  • A public body relying on litigation privilege must be able to show that:
  1.   the dominant purpose for the preparation of the document was the litigation in question; AND
  2.   litigation was in reasonable contemplation at the time the document was prepared.

Source: [OIPC NFLD – Section 30 – Legal Advice]

E-Mail

US – Yahoo Email Scanning Settlement Garners Criticism

Yahoo has agreed to a settlement over its alleged scanning of user emails, but is making no plans to stop the practice. The tech giant was accused of scanning emails without user consent; the lawsuit was one of six asking Yahoo to halt its monitoring activities. The settlement awarded $4 million, but none of it will go to the public, with the entirety of the award going to lawyers. The settlement also allows Yahoo to continue to look over user emails without non-Yahoo users’ consent. Yahoo now agrees to scan emails only when they are on its servers, not while they are in transit. [Ars Technica]

Encryption

US – Tech Companies Use Encryption as Marketing Tool, Not a Security One: FBI Director

At the 2016 Symantec Government Symposium, FBI Director James Comey discussed the problems of encryption by default and the need for a backdoor, maintaining that tech companies tout encryption not for security’s sake but for marketing’s. “What has happened in the three years I’ve been Director [of the FBI], post-Snowden, is that that dark corner of the room, especially through default encryption, especially through default encryption on devices, that shadow is spreading through more and more of the room,” he said. Technologists countered that his comments over-simplify the issue. “But when you look into it, what they’re really asking for is dramatic, it’s a huge thing,” said Errata Security CEO Robert Graham. “They’d need to outlaw certain kinds of code.” [The Daily Dot]

EU Developments

EU – EU Regulators to Look at Facebook-WhatsApp Changes

Fallout from recently announced plans for WhatsApp to share user data with parent company Facebook continues. The Wall Street Journal reports the Article 29 Working Party said it is following changes to WhatsApp’s privacy policy “with great vigilance.” Additionally, privacy advocates, including the Electronic Privacy Information Center and the Center for Digital Democracy, have filed a complaint with the U.S. Federal Trade Commission, arguing the proposed change allowing Facebook to use WhatsApp user data for “marketing purposes” is an “unfair and deceptive trade practice.” Delhi’s High Court in India has asked the government, specifically the Telecom Regulatory Authority of India, for its response to the privacy policy changes. The New York Post reports that, in addition to individual users, businesses are also concerned about the changes, particularly how they can protect corporate and user data that is shared when companies communicate with their consumers via WhatsApp. [Full Story]

Facts & Stats

WW – Airbnb Releases First Transparency Report on Law Enforcement Requests

Airbnb has released its first transparency report on the law enforcement data requests it has received. Airbnb provided data in response to 82 of the 188 requests sent to it by law enforcement agencies during the first six months of 2016. The report is published as part of Airbnb’s Community Compact initiative, through which the home-sharing company works to become more transparent to the public and local governments in the cities where it operates. “We’re building a more transparent community and sharing data about our community with the general public,” said Airbnb spokesman Christopher Nulty. “We felt that this is an important first step. In the future, we’ll look to share additional sorts of data about our community.” [TechCrunch]

Filtering

WW – Google to Tweak Search Result Algorithm to Favor Sites that Make Content Readily Accessible

Google plans to alter its search result ranking algorithms so sites that have pop-up advertisements or interstitial pages that interfere with users’ ability to view content are less favored. Google cites examples of techniques that interfere with viewing content: pop-ups that cover portions of the main content; interstitial pages that must be closed before being able to view content; and advertisements that fill web browsers’ screens so users must scroll down to access content. Exceptions will include pop-ups that tell users about the use of cookies, and pages that require login information. [BBC: Google punishes sites with pop-up adverts | Google Blog: Helping users easily access content on mobile]

Finance

US – FTC Opens Public Comment on Safeguards Rule

The Federal Trade Commission is asking for public comment on its Safeguards Rule as the agency reviews its rules and guidelines. The Safeguards Rule requires financial institutions to create and maintain comprehensive information security programs for handling customer data. “The FTC seeks comments on a number of questions, including the economic impact and benefits of the Rule; possible conflict between the Rule and state, local or other federal laws or regulations; and the effect on the Rule of any technological, economic or other industry changes,” the agency’s announcement said. In another blog post, FTC Chief Technologist Lorrie Cranor previews the agency’s “Putting Disclosures to the Test” workshop. The event will cover topics including measuring disclosure effectiveness and whether consumers actually pay attention to a disclosure. [FTC]

WW – Google, Amazon Offer to Build Wall Street Database

Major tech companies are vying for the right to build a new database for the Securities and Exchange Commission designed to track stock and options trading from exchanges and broker-dealers on a daily basis, Bloomberg reports. Amazon and Google’s parent company, Alphabet Inc., are looking to help build the Consolidated Audit Trail database, which will be hosted in the cloud and will hold personal information on more than 100 million customer accounts. Brokers and bankers are concerned about the database’s construction, fearing problems from data breaches and technology firms asserting themselves within the financial industry. “This is a huge opportunity for Amazon and Google,” said Harvard University Senior Fellow Jo Ann Barefoot. “Their involvement in this project I do think is a threat to the incumbents. If big tech firms can win more trust in Washington, that’s one of the biggest challenges facing banks.” [Full Story]

CA – 80,000 People Suffer Pay Crisis in Canada After IBM System Debacle

No-one in Canada can accuse public servants of being overpaid these days. The crisis affects 80,000 employees or almost one third of Canada’s federal public servants. Thanks to a massive breakdown of the Federal Government’s new, privatised pay system, tens of thousands of Canadian public servants have been going weeks, even months with reduced pay — or in many cases, no pay at all. It is a crisis on a huge scale for Prime Minister Justin Trudeau’s new Government, and the cause of thousands of crises on an individual level, with people forced to borrow money or max out their credit cards to make ends meet. [ABC News]

FOI

CA – AB OIPC Probes ‘Chronic’ Delays in Meeting Access Requests

Alberta’s privacy commissioner has launched an investigation into the justice department for what she calls “chronic” delays in responding to freedom of information requests. The Information and Privacy Commissioner’s office said it has issued eight orders since February, after it found instances where the department did not meet the 30-day time limit for responding to access requests under the Freedom of Information and Protection of Privacy Act. “Essentially, there has just been no response to the applicant to those requests, which is a significant compliance issue within the legislation.” Included in the orders issued by the OIPC are requests for communication records between an individual and Crown counsel, the entire file of a named individual with an Alberta Serious Incident Response Team file number, emails relating to a named individual, and an applicant’s request for records of his employment. Time extension requests, along with delays in responding to requests, have also become an issue within the justice department. Privacy Commissioner Jill Clayton said the justice department’s “apparent systemic issue” of not responding to access requests within the time limit is a “significant” compliance issue. There is no penalty under the act for failing to comply with the time limit. The investigation will review the department’s process for dealing with access requests to determine the reasons for the delays, and will make recommendations to improve its compliance. An emailed statement from a press secretary on behalf of Justice Minister Kathleen Ganley said the government takes the concerns raised by the Information and Privacy Commissioner “extremely seriously.” [CBC News]

CA – Secretive Drug Policies Putting Injured Workers at Risk, Critics Say

It’s the body tasked with recommending which drug treatments are covered for tens of thousands of injured workers across the province. But no minutes are taken at its meetings, its members are a secret, possible conflicts of interest are not publicly reported and the full list of drugs subsidized by Ontario’s worker compensation board is unknown. Critics say the lack of transparency surrounding the Workplace Safety and Insurance Board’s Drug Advisory Committee and its overall drug policies is compromising the care of often-vulnerable injured workers — who sometimes have no idea whether drugs prescribed by their doctor will be paid for by the board until they’re out of pocket at the pharmacy. [The Star]

UK – New UK Commissioner Sets Out FOI Plans

In her first interview since becoming UK Information Commissioner in July, Elizabeth Denham told the BBC about her plans for the FOI side of her new responsibilities. Ms Denham particularly wants to improve the transparency of public services delivered by private companies, as more and more national and local state functions are outsourced. She says she will be raising this issue with ministers. “Private contractors above a certain threshold for a contract or doing some specific types of work could be included under the FOI Act. The government could do more to include private bodies that are basically doing work on behalf of the public,” she says. The new commissioner also plans to review how her office tackles public authorities with a poor track record of handling FOI requests. [BBC]

Genetics

CA – Free DNA Tests Offered After Two Cases of Manitoba Men Switched at Birth

After four men revealed they were switched at birth at Norway House, Health Canada is offering free DNA tests to others born there in the mid-1970s. Two men from Norway House announced last week — and two men from nearby Garden Hill revealed last year — that they had been switched at birth at the federally run hospital in 1975. David Tait Jr. and Leon Swanson cried in front of news cameras Friday after receiving initial DNA test results. Tests last November showed Luke Monias and Norman Barkman also went home from the hospital with each other’s families. The two cases have raised the question of whether there could be more. Health Canada spokesman Eric Morrissette said Tuesday that the department is offering free DNA tests to anyone born at the Norway House hospital in the mid-1970s. [The Canadian Press]

Health / Medical

US – Facebook Argues No Concrete Harm from Disclosure of Health-Related Internet Communications

Facebook Inc. et al. have filed a reply in support of their motion to dismiss a class action complaint by Winston Smith et al. alleging unlawful collection, use and disclosure of personal information. The social media company argued that its targeted advertising, based on medical websites’ disclosure of static links to public web pages, did not violate user privacy; the URLs disclosed indicated only that someone had visited a website (sensitive medical information was not disclosed), many of the websites’ policies and procedures expressly stated that the URLs would be disclosed, and the individuals failed to take available measures to safeguard their information (e.g. by opting out). [Smith et al. v. Facebook Inc. et al. – Defendants’ Joint Reply in Support of Motion to Dismiss the Complaint – US District Court for the Northern District of California, San Jose Division]

US – EHR Burden Weighs Heavily on Physicians, Leads to Burnout

Physicians are spending more time with patients’ electronic health records (EHRs) than they are with the patients themselves, according to an observational study looking at the allocation of physician time in ambulatory practice. For every hour of clinic time they spend with patients, physicians spend approximately 2 additional hours on EHR and desk work during office hours, Christine Sinsky, MD, vice president of professional satisfaction at the American Medical Association (AMA), and colleagues report in an article published online September 6 in the Annals of Internal Medicine. In addition to the time physicians spend at the office, they also spend another 1 to 2 hours on computer and other clerical work during their personal time each day. This finding adds to the growing body of evidence suggesting that the current generation of EHRs adds to physicians’ daily administrative burden and, as a result, may be increasing rates of professional burnout. [MedScape] SEE ALSO: [Recent study | Another study | Medscape EHR Report 2016]

CA – Computer Medical Records Breached at Grey Bruce Health Services

An investigation was initiated after four individuals came forward with concerns about access to personal information within their electronic medical records. Results of the investigation indicate that a former employee inappropriately accessed the electronic medical records of 246 individuals over a seven-year period, from January 2008 to September 2015. All individuals involved in the privacy breach have been notified in writing, and a summary of the investigation has been given to the province’s Information and Privacy Commissioner. This privacy breach does not impact any test results or diagnosis. The breach involves one individual who accessed electronic medical records for no work-related reason, apparently out of personal curiosity. [Blackburn News]

Horror Stories

WW – Hackers Dump Data from Dropbox’s 2012 Hack Online

Unidentified hackers have dumped the stolen user passwords and emails from more than 68 million Dropbox users online. The data was from a 2012 hack that Dropbox had at the time reported as involving only email addresses, and which compromised more than two-thirds of its customer base, the report states. “The hack highlights the need for tight security, both at the user end — the use of strong passwords, two-step authentication and no reuse of passwords — and for the companies storing user data,” the report adds. “Even with solid encryption practices for securing users’ passwords, Dropbox fell [a]foul of password reuse and entry into its company network.” [The Guardian]

UK – Reported UK Data Breaches Soar 88% in a Year

The volume of data breach incidents reported to the Information Commissioner’s Office (ICO) has almost doubled in the space of a year, according to a new Freedom of Information (FoI) request. The figure rose from 1,089 in the period April 2014-March 2015 to 2,048 in virtually the same period a year later, according to Huntsman Security. Health, local government and education were the worst performing sectors in terms of the volume of breaches disclosed, accounting for 64% of the total in 2015-16. However, financial organizations were the worst hit by ICO fines. Despite accounting for fewer than 6% of incidents they were on the receiving end of 33% of the watchdog’s financial penalties during the period, which hints at the severity of these breaches. In three-quarters of the total number of cases, no action was taken by the ICO, either suggesting that the incidents themselves were fairly innocuous or that the watchdog needs to grow some sharper teeth. It’s believed that incoming commissioner Elizabeth Denham may be less forgiving of organizations in this regard than her predecessor. Data disclosed in error accounted for the vast majority of reported breaches (67%), followed by security incidents (30%). [InfoSecurity]

Identity Issues

US – NIST Publishes Major Revisions to Digital Authentication Guidance

The National Institute of Standards and Technology has released a major update to Special Publication 800-63 on digital authentication. The third version was published Aug. 30 and divides the digital authentication document into four sections: digital authentication guidelines, enrollment and identity proofing, authentication and lifecycle management, and federation and assertions. The third revision has already received more than 200 comments. Michael Garcia, deputy director at NIST’s National Strategy for Trusted Identities in Cyberspace, said identity proofing is “a complete re-write,” based on good-practices guidance like the kind seen in Canada and the UK. “It’s much more about the characteristics of quality evidence and the outcomes of the event itself,” Garcia said, pointing out that the federation and assertions document is practically all new. According to the draft, this type of system “is preferred over a number of siloed identity systems that each serve a single agency or RP [relying party].” The benefits of “federated identity architecture,” NIST says in its draft, include enhanced privacy, data minimization, cost reduction and enhanced user experience. Garcia said the third iteration reflects a better understanding of the digital authentication space, though “we’re not there yet.” [Federal News Radio]

WW – Identity Governance Red Flags Identified

SecurityIntelligence identifies five of the most common warning signs that a company is struggling with identity governance: orphaned accounts, poorly defined certification processes, inadequate access request approvals, lack of segregation-of-duties controls, and independent processes across the organization. These issues are common and can lead to employee-catalyzed breaches. “Fortunately, the right identity governance and intelligence solution can solve these issues to minimize your security risks and help you systematically achieve and manage your regulatory compliance.” [SecurityIntelligence]

UK – One in Five Mothers Say They Chose Wrong Name for Their Child: Poll

One in five mothers feels “namer’s remorse” and would pick another name for their child if they had the choice, according to a survey before this week’s annual announcement on baby names. Names most frequently regretted were Charlotte, Amelia, Anne, Daniel, Jacob, James and Thomas. Of the 245 mothers who regretted the names they gave their children, 12% “always knew it was the wrong choice”, 3% knew from the moment the child was born, 8% knew within a couple of days, 32% knew within the first six weeks and 23% began to regret their choice when their children first started nursery or school. The main reason for regretting the name was that it was too commonly used (25%). Just over one in five mothers who regretted their choice said it “just doesn’t feel right”. One in five said they had never liked the name but had been pressured into using it. Just over 10% of mothers said the name did not suit their child. Another 11% said it was not distinctive enough. A further 11% said it caused their child problems with spelling or pronunciation. Six percent regretted their choice because they disliked the shortened version of the name their child ended up being called. Only 3% pinned their regret on the fact there had been a change in public perception of the name since their child was born. Just 1% regretted their choice because a celebrity had used the name for their child. The consolation is that most children grow into their names – and those who don’t can always fall back on middle names, nicknames or (in extremis) deed polls. Just 6% of mothers, however, have changed any of their children’s names, although one in three has considered it. [The Guardian]

Law Enforcement

CA – Ottawa Police Introduce Automatic Licence Plate Scanners, Privacy Concerns Raised

Technology that will allow Ottawa police to scan up to 5,000 licence plates per hour has already netted results in the city, while privacy advocates are voicing their concerns over how the data will be collected and safeguarded. Police unveiled the first Ottawa Police Service cruiser to implement the Automatic Licence Plate Recognition technology – a device with three all-weather infrared cameras mounted to the roof, with the ability to scan and record licence plates in multiple lanes of traffic and in multiple directions. The readings are fed into a database, and the officer is alerted to potential offenders within seconds if the plate number matches the police “hot list.” In accordance with the Ontario Information and Privacy Commissioner’s stringent guidelines, Ottawa police have agreed to track data only for offenders – one of the ACLU’s primary recommendations. That information will be stored for five years, while licence plates of “non-hit” vehicles are immediately purged from the data bank. [Ottawa Citizen]
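
By way of illustration only, here is a minimal Python sketch of the match-and-purge flow described above; the plate values, hot list and retention constant are hypothetical, not Ottawa police data or software:

```python
from datetime import datetime, timedelta

# Hypothetical hot list of plates flagged by police (illustrative values only).
HOT_LIST = {"ABCD123", "WXYZ789"}

RETENTION_FOR_HITS = timedelta(days=5 * 365)  # hits retained for five years
hit_log = []  # records are kept for matched plates only


def process_scan(plate: str, scanned_at: datetime) -> bool:
    """Return True (and retain a record) only if the plate is on the hot list.

    Non-hit plates are never stored, mirroring the "immediate purge"
    approach described for non-matching vehicles.
    """
    if plate in HOT_LIST:
        hit_log.append({
            "plate": plate,
            "scanned_at": scanned_at,
            "purge_after": scanned_at + RETENTION_FOR_HITS,
        })
        return True  # alert the officer
    return False  # nothing is written for non-hits


# Example: only the first scan produces a stored record and an alert.
print(process_scan("ABCD123", datetime.now()))  # True
print(process_scan("QRST456", datetime.now()))  # False
print(len(hit_log))  # 1
```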

US – Alaskan Police Force Removes Body Cameras, Citing Privacy Fears

The Kodiak, Alaska, police department has stopped using body cameras, citing privacy and effectiveness concerns. While the department’s initial use of the technology in February 2015 “appeared beneficial to the community,” issues arose, said Kodiak Police Chief Rhonda Wallace. Along with technological concerns and attachment problems, officers feared the cameras were hurting citizen privacy, especially when they interacted with people on their “worst days” or when they had to deliver sensitive information. The police removed the cameras in December 2015, a move that has caused some controversy, as body camera footage had successfully bolstered an autistic man’s lawsuit earlier that year. “Once a person’s right to privacy has been addressed, we’ll work toward getting the program back up and using them again,” said City Manager Aimee Kniaziowski. [Govtech]

CA – Cape Breton Prostitution Sting Raises Public Shaming Concerns

Experts in privacy and civil rights are raising questions about a police news conference that identified 27 men caught in a Cape Breton prostitution sting, saying the move amounted to unnecessary “public shaming.” “Public shaming is not something that our justice system should promote … [and] when you release names to try to deter others that sounds like public shaming to me,” said Abby Deshman, a spokeswoman for the Canadian Civil Liberties Association. “Deterrence is a feature of our criminal justice system, but we usually leave that to the sentencing process.” Last week, provincial court Judge Brian Williston rejected a legal challenge from one of the accused, saying police have the discretion to release personal information to the media, so long as it does not jeopardize a fair trial. However, the lawyer for John Russell Mercer, 73, argued in court that the news conference last September was akin to “locking someone in the stocks” — a form of public humiliation that violated his client’s rights under Section 7 of the Charter of Rights and Freedoms. But the judge disagreed, saying the information released by Cape Breton Regional Police was “limited to what was already accessible to the media and the public.” Deshman said that line of reasoning doesn’t recognize the impact of holding a news conference to draw attention to the accused. [The Canadian Press]

Location

US – Location Privacy and Use of ALPR at Airports

I’d just finished parking my car in the covered garage at Reagan National Airport when I noticed a dark green minivan slowly creeping through the row behind me. The vehicle caught my attention because its driver didn’t appear to be looking for an open spot. What’s more, the van had what looked like two cameras perched atop its roof — one on each side, both pointed down and slightly off to the side. I had a few hours before my flight boarded, so I delayed my walk to the terminal and cut through several rows of cars to snag a video of the guy moving haltingly through another line of cars. I approached the driver and asked what he was doing. He smiled and tilted the lid on his bolted-down laptop so that I could see the pictures he was taking with the mounted cameras: He was photographing every license plate in the garage (for the record, his plate was Virginia tag number 36-646L). The man said he was hired by the airport to keep track of the precise location of every car in the lot, explaining that the data is most often used by the airport when passengers returning from a trip forget where they parked their vehicles. I checked with the Metropolitan Washington Airports Authority (MWAA), which manages the garage, and they confirmed the license plate imaging service was handled by a third-party firm called HUB Parking. “Reagan National uses this service to assist customers in finding their lost vehicles,” said MWAA spokesperson Kimberly Gibbs. “If the customer remembers their license plate it can be entered into the system to determine what garages and on what aisle their vehicle is parked.” What does HUB Parking do with the information its clients collect? Ilaria Riva, marketing manager for HUB Parking, says the company does not sell or share the data it collects, and that it is up to the client to decide how that information is stored or shared. “It is true the solution that HUB provides to our clients may collect data, but HUB does not own the data nor do we have any control over what the customer does with it,” Riva said. Gibbs said MWAA does not share parking information with outside organizations. [Krebs on Security]

Online Privacy

US – Online Tool Allows Users to Inspect Banks’ Privacy Notices

Computer scientists at Carnegie Mellon have developed an online tool designed to help users examine banks’ privacy notices. The tool, simply titled “Bank Privacy,” inspects the notices of a user’s bank and of other banks in the area, giving the user the opportunity to find a bank with a privacy notice they prefer. “We collected lists of financial institutions in the United States and wrote a computer program that automatically queries Google in search of companies’ standardized notices on their websites,” Carnegie Mellon wrote in a paper on the subject. “Upon finding such a notice, the program automatically parses the standardized notice and feeds the extracted information into a database, enabling a large-scale comparison of financial institutions’ privacy practices.” [Motherboard]
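
The researchers’ parser isn’t described in detail here, but as a rough sketch of the kind of parsing step the quote describes, the snippet below pulls the yes/no sharing answers out of a standardized notice using simple pattern matching; the notice text, row layout and category names are assumptions for illustration:

```python
import re

# Hypothetical excerpt of a standardized bank privacy notice, flattened to text.
# Real notices follow a tabular "model form" layout; this text is illustrative.
NOTICE_TEXT = """
For our everyday business purposes: Yes | No
For our marketing purposes: Yes | Yes
For joint marketing with other financial companies: No | We don't share
For nonaffiliates to market to you: No | We don't share
"""

# Each row: "<reason>: <does the bank share?> | <can you limit this sharing?>"
ROW = re.compile(r"^(?P<reason>[^:]+):\s*(?P<shares>Yes|No)\s*\|\s*(?P<limit>.+)$")


def parse_notice(text):
    """Parse 'reason: shares | limit' rows into structured records."""
    rows = []
    for line in text.strip().splitlines():
        match = ROW.match(line.strip())
        if match:
            rows.append(match.groupdict())
    return rows


for row in parse_notice(NOTICE_TEXT):
    print(row["reason"].strip(), "->", row["shares"])
```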

WW – Survey: Indians Most Likely to Share Sensitive Info on Public Wi-Fi Hubs During Vacation

An Intel Security survey of 13,960 respondents across 14 nations found that, at 31%, India has the highest share of leisure travelers comfortable with sharing personal information over public Wi-Fi. The personal information shared includes credit card data, usernames and passwords, the report states. “More than one out of three Indians (36%) share their personal data even when they realize that this will make them vulnerable,” the survey states. This is potentially problematic, as cyber thieves target public Wi-Fi with increasing frequency. [Business Standard]

US – Facebook in Privacy Fail as Psychiatrist’s Patients Are Recommended to Become Friends With Each Other

Facebook’s mission, as defined by its founder Mark Zuckerberg, has always been to ‘connect the world’, but now it seems the social media giant has gotten too good at doing just that. Every Facebook user is familiar with the ‘People You May Know’ section of the site, which lists people with whom you have friends in common, or in whose photos you’ve been tagged. But Facebook seemingly takes other factors into account when suggesting whom you should friend, including phone contacts and possibly geographical proximity. Fusion writer Kashmir Hill reports she was recently contacted by a psychiatrist named Lisa who discovered that Facebook had started recommending her own patients as potential friends. The mental health professional, who lives in a small town, was surprised and troubled by this development, since she was an infrequent Facebook user and had not granted the app access to her phone contacts. However, upon reviewing her Facebook profile, Lisa realized that she had shared her own phone number on the social media site. The matter took a more disturbing turn when one of her patients, a snowboarder in his 30s, came to her saying that he had begun getting recommendations to ‘friend’ septuagenarians with whom he had nothing in common, and whom he had never met. Sometime later, another patient of Lisa’s got a friend suggestion on Facebook for a person she recognized from a chance encounter in the office’s elevator. Now the woman had another patient’s full name and other personal information listed on his social media profile. ‘It’s a massive privacy fail,’ said Lisa, who asked Fusion not to use her real name. ‘I have patients with HIV, people that have attempted suicide and women in coercive and violent relationships.’ As a precaution, the psychiatrist and her colleagues in the medical community now urge their patients not to go on Facebook while at the office, or even to leave their phones at home when going for an appointment. However, Facebook says its friend-finding algorithm does not rely on geographic proximity. An alternative theory is that Lisa’s patients began popping up on each other’s Facebook pages because they have her phone number in their own phones, which the social network’s algorithm then possibly used to link them up. In a statement to Fusion, Facebook could not confirm that hypothesis, but a spokesperson said that the ‘People You May Know’ function uses a variety of data to source its suggestions, including mutual friends, phone contacts, school and work information, and networks to which users belong. [Daily Mail]

Other Jurisdictions

AU – NSW Gov’t Rejects Legal Remedies for Invasions of Privacy

The NSW government has knocked back the advice of its law and justice committee to adopt new legal protections that would allow residents to take court action against serious invasions of their privacy. Attorney-General Gabrielle Upton rejected the recommendation following the committee’s nine-month investigation into the remedies available to individuals who feel their personal privacy has been breached. The laws recommended by the NSW committee could have seen individuals handed the ability to sue people and organisations alike for serious breaches of privacy. Existing privacy laws only apply to government agencies and businesses with a turnover greater than $3 million per annum, and govern how they must store and manage personal data. Instead of introducing a privacy tort, the NSW government has indicated it will tweak existing criminal legislation to outlaw the ‘non-consensual sharing of intimate images’ or ‘revenge porn’. Upton said that in the absence of a uniform national law addressing the issue – which to date has been ignored by the Commonwealth – a NSW-only course of action would create inter-jurisdictional headaches for business and would open the Australian courts system up to “forum shopping” for preferential conditions by litigants. However, NSW has not ruled out continuing to lobby for a federal law with the help of its fellow states and territories. [Source]

WW – The World is Looking to the US for Third Party Risk Guidance

As more organizations in North America and overseas utilize third party vendors with a global presence to perform critical functions, process key transactions and gain access to sensitive proprietary information, those organizations with mature third party risk (TPR) programs are receiving a loud call to provide assistance to those new to the TPR field. Nor is this a US-centric challenge; organizations globally are struggling with standardization as well. Robin Jones, of the UK’s Financial Conduct Authority (FCA), discussed the fact that innovation in technology is receiving the strongest emphasis in the prudential specialists unit and that the unit is focused on issues surrounding events that involve an organization’s third parties. He added that his unit is placing renewed focus on technology resiliency and outsourcing (termed “TRO”) and that the FCA’s Cyber Risk Team is monitoring these elements of soundness and risk within the industry. [Huffington Post]

Privacy (US)

US – Google to Pay $5.5 Million for Sneaking Around Apple’s Privacy Settings to Scoop User Data

Google has agreed to pay a $5.5 million settlement in a class-action suit over cookie placement that worked around the Safari browser’s privacy settings for blocking third-party cookies. The lawsuit alleged Google collected the user data to boost ad revenue. “Behaviorally targeted advertisements based on a user’s tracked internet activity generally sell for at least twice as much as non-targeted, run-of-network ads,” the suit said. The settlement money will be sent to six technology and privacy groups, including the Berkeley Center for Law & Technology and the Center for Internet & Society at Stanford. [SiliconBeat]

US – Court Ruling Is A ‘Fatal Blow’ to Consumer Protections, Advocates Say

Companies such as Google and Facebook thrive on your personal data — the bits of information that tell advertisers how old you are, what brands you like and how long you lingered on that must-see cat video. Historically, how these companies use this data has been subject to oversight by the Federal Trade Commission, the government’s top privacy watchdog. A big court defeat for the FTC this week is putting the agency’s power to protect consumers in jeopardy, analysts say. The ruling could wind up giving Google and Facebook — not to mention other companies across the United States — the ability to escape all consumer-protection actions from the FTC, and possibly from the rest of government, too, critics claim, unless Congress intervenes. In the wake of the setback, the FTC is mulling an appeal — which would mean either asking for a rehearing at the U.S. Court of Appeals for the 9th Circuit, or escalating to the Supreme Court, according to a person close to the agency. But unless regulators can persuade the courts to overturn Monday’s decision, the result will be “a fatal blow” to consumer protection, said Jeffrey Chester, executive director of the Center for Digital Democracy. [Washington Post]

US – Clinton Campaign Switching to ‘Snowden-Approved’ Signal Messaging App

Following suspected Russian hacks of the DNC and the subsequent release of email messages through WikiLeaks, the Hillary Clinton campaign is said to be taking security advice from an unusual source: Edward Snowden. According to a new Vanity Fair article, campaign staffers were told: “If anyone was going to communicate about Donald Trump over e-mail or text message, especially if those missives were even remotely contentious or disparaging, it was imperative that they do so using an application called Signal…Signal, staffers in the meeting were told, was ‘Snowden-approved.’“ Signal is a messaging app for iOS and Android that allows for encrypted communication. The Clinton campaign has not yet responded to a request for comment about what messaging apps staffers are using. [CNET]

Privacy Enhancing Technologies (PETs)

WW – HP Builds First Laptop with Built-In Privacy Screen

Yahoo reports HP has built the first laptop to have a built-in privacy screen. Previously, consumers had to bolt on physical privacy screens designed to prevent anyone 35 degrees away from the center from seeing the contents of the monitor. Now, 3M’s solution will be built in. “Designed with more than 20 years of 3M optical films technology experience incorporated into the privacy screen, HP Sure View helps address the concern of protecting sensitive information through a world-class solution tailor-made for open work environments and for the mobile worker,” said 3M’s Vice President and General Manager of display materials and systems division, Makoto Ishii. [Full Story]

RFID / IoT

WW – Industrial IoT Groups Working Together to Develop Industrywide Standards

The Organization for Machine Automation and Control, OPC Foundation, and PLCopen have announced plans to band together and create industrial internet of things standards for data sharing and “seamless … interoperability.” This alliance comes on the heels of each group’s individual IIoT developments, like creating a global taskforce charged with developing a companion specification for industry tools. However, industrywide “standards are needed to support communications from machine-to-machine and from the plant floor to interfaces that will allow large scale data analytics and information transfer,” said OMAC’s John Kowal. “It just makes sense for these organizations which have individually done so much to advance automated manufacturing to collaborate and avoid redundant developments.” [AutomationWorld]

US – Chicago’s New Data-Collecting Sensors Stir Privacy Concerns

The Array of Things made its live debut in Chicago, where the city installed two 10-pound nodes on traffic posts last week. The nodes contain low-resolution cameras, microphones and various air quality sensors, along with sensors that detect the use of WiFi and Bluetooth devices within a 100-foot range. The Array of Things is a collaborative project between the University of Chicago, Argonne National Laboratory and the School of the Art Institute of Chicago that was originally launched in 2014 and is designed to be a “fitness tracker” for the city. But for privacy-minded citizens, there are glaring holes in the project that have yet to be addressed. The cameras are low resolution and the sound sensors are meant only to monitor sound levels, not record audio, although audio and image files will be collected to calibrate the sensors. A written response from project managers explained, “These images will contain no sensitive PII, but some may show faces or license plate numbers.” All information gathered by AoT will be available to the public – except for data containing PII. In an attempt to maintain transparency, the Department of Innovation and Technology fielded questions from residents about their concerns. PII data will not be made public but will be stored in a separate, safe facility, where access to this data is “restricted to operator employees, contractors and approved scientific partners who need to process the data for instrument design and calibration purposes, and who are subject to strict contractual confidentiality obligations and will be subject to discipline and/or termination if they fail to meet these obligations. … The privacy and governance policies nevertheless limit who will have access to data, under what circumstances, and for the limited purpose of research and development.” When it comes to warrants, the project managers were even vaguer, saying, “The University of Chicago, as copyright holder of the data, would be responsible for responding to law enforcement requests.” [rt.com]

WW – The Internet of Things: A 101 Guide to Privacy in The Digitized World

According to a new report by Altimeter, ‘Consumer Perception in the Internet of Things’, there is growing consumer anxiety concerning the ‘digitization of our physical world’. At the same time, the report states that 87% of American citizens in one study didn’t even know what IoT is. They were worried about their privacy, but weren’t exactly sure how or why it was being plundered in the digital world. Other respondents in the study were aware their cookies were being tracked, but had little idea why, or at least asked for more transparency from those collecting the information. The gist of the study: “Roughly 60% of all respondents report such heightened discomfort in the sharing/selling of their data.” So what should the consumer be thinking about right now in terms of his/her privacy? “At a minimum, you need to be aware of two facts: (1) people and companies will want to collect data about you and might do so without your permission, and (2) there is no total security, and every system can be hacked. Follow some simple rules: be mindful about what data you share and ask yourself what somebody could do with it. If in doubt, reject to share and ask the vendor questions, and ask yourself if the vendor is trustworthy. For the security aspect, always keep your software and devices updated; don’t use weak passwords, be mindful of the risks, and encrypt your data wherever possible.” [siliconangle.com]

Security

US – FTC Cautions that Developing Secure APIs Remains a Challenge

The FTC examines the ongoing challenge of developing secure application programming interfaces (“APIs”) in light of the InMobi settlement. Consumers are unaware that app developers or third party ad networks can use legitimately collected unique identifiers (e.g. BSSIDs) and other Wi-Fi network information to infer and track consumers’ location; despite related changes made to Android and iOS, app developers should ensure that their use of APIs is consistent with their privacy promises and consider contractual terms to ensure that their third party service providers (e.g. ad networks and analytics firms) do not circumvent consumers’ privacy choices. [FTC – A Deep Dive Into Mobile App Location Privacy Following The InMobi Settlement]
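
To make concrete why collecting BSSIDs amounts to location tracking, here is a minimal sketch of the underlying idea; the BSSIDs, coordinates and lookup table are invented for illustration, and no real geolocation service is used:

```python
# Hypothetical mapping of Wi-Fi access point BSSIDs to coordinates,
# of the kind built up from wardriving-style databases. Values are made up.
BSSID_LOCATIONS = {
    "00:11:22:33:44:55": (41.8781, -87.6298),   # e.g. a Chicago coffee shop
    "66:77:88:99:aa:bb": (40.7128, -74.0060),   # e.g. a New York office
}


def infer_location(observed_bssids):
    """Return the first known coordinate for any BSSID the device has seen.

    This is the core of the practice the FTC flagged: no GPS permission is
    needed if nearby network identifiers can be resolved to places.
    """
    for bssid in observed_bssids:
        if bssid.lower() in BSSID_LOCATIONS:
            return BSSID_LOCATIONS[bssid.lower()]
    return None


print(infer_location(["66:77:88:99:AA:BB"]))  # (40.7128, -74.006)
```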

WW – Data Science Helping Organizations Stop Insider Threats

With the physical boundaries of corporate networks and digital assets no longer as clearly defined as they once were, the focus in fighting insider threats needs to shift toward protecting user accounts. “Now that the traditional security perimeter has been erased by mobile and cloud computing, identities have become both an attack vector and security perimeter.” The truth is that credential theft does happen, and it happens a lot. In fact, Verizon’s 2015 data breach report found that the majority of confirmed security incidents occur as a result of compromised user accounts. Massive lists of user credentials and passwords are being sold on the Dark Web at low prices, and, for a small fee, anyone can obtain access to all sorts of enterprise networks and cloud services and impersonate legitimate users. Therefore, fighting insider attacks hinges on detecting anomalous user behavior. But this again presents its own set of challenges, because defining normal and malicious behavior is not an exact science and involves a lot of intricacies. Data science is helping organizations crack down on insider threats: it is used to extract knowledge and detect patterns, and the information it produces can help an organization define normal user behavior based on identities, roles, and working circumstances. Using data science can help point out abnormal user behavior, stop insider threats, and lower the number of false positives. “Most users have rather clean and repeating patterns in their work from a statistics point of view,” said F-Secure Labs Lead Researcher Jarno Niemelä. “Thus, alarming changes in the users’ behavior can be detected with suitable near real-time statistics analysis tools, supported by heuristics and machine learning systems.” [TechCrunch]
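
As a toy illustration of the “near real-time statistics” approach described above (not F-Secure’s tooling; the login history and threshold are made up), the sketch below builds a per-user baseline of login hours and flags logins that deviate sharply from it:

```python
from statistics import mean, pstdev

# Hypothetical history of login hours (0-23) for one user; values are made up.
login_history = [8, 9, 9, 8, 10, 9, 8, 9, 10, 8, 9, 9]


def is_anomalous(new_hour, history, threshold=3.0):
    """Flag a login whose hour is more than `threshold` standard deviations
    from the user's historical mean - a crude stand-in for the behavioural
    baselines described in the article."""
    mu = mean(history)
    sigma = pstdev(history) or 1.0  # avoid division by zero for flat histories
    return abs(new_hour - mu) / sigma > threshold


print(is_anomalous(9, login_history))   # False: consistent with the baseline
print(is_anomalous(3, login_history))   # True: a 3 a.m. login stands out
```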

MX – Mexican DPA Says Lost, Stolen or Improperly Discarded Devices Are a Common Cause of Data Breaches

The Mexican data protection authority has issued its “Guide to Securely Erasing Personal Data”. Individuals seeking to retrieve personal data for improper purposes collect discarded documents and tape them back together, find broken equipment parts and reuse them, and use specialized software to retrieve data from a “wiped” device; proper destruction methods include crushing, incinerating, pulverizing, shredding or chemical processes (physical media), and degaussing, over-writing or cryptographic erasure (electronic media). [DPA Mexico – Lost, Stolen or Improperly Discarded Devices the Primary Cause of Data Breaches | Press release]

Smart Cards

WW – Apple’s New Patent Shows Future iPhones and iPads Will Capture the Biometrics, Photos, Videos and Audio of the Thief

Theft of smartphones is still rampant, despite current security measures such as fingerprint technology and Apple’s Touch ID. Thieves always find a way around these security protocols. However, a patent application by Apple will make life difficult for iPhone and iPad thieves in the future. Apple filed a patent with the USPTO on 25 August 2016. The patent details a technology that would allow a “trigger condition” to start recording the biometrics, photos, audio and video of an unauthorized user of a “computing device”, in this case an iPhone or iPad, which are currently the only Apple devices that can capture biometrics. The technology would then store the acquired data, which may be fingerprints, photos, and so on. The computing device may then provide the stored data for identification of the unauthorized user. From the information in the filed patent, the trigger conditions are unclear. Perhaps the trigger is a report by the authorized user to law enforcement authorities or Apple. Or maybe a single failed attempt to unlock the device using Touch ID will be the trigger. However, there is a slight problem with Apple’s Touch ID: the technology requires a user to place the finger at different angles for verification. It is, therefore, a little unclear how Apple will register a failed unlock attempt (or attempts) as a trigger condition. The fact that the patent suggests Apple will stealthily capture personal identifier data already raises security concerns. A more practical move would be to make the technology optional in future iOS releases. But even then, we are not sure that would not cost the company credibility among customers who care about their privacy. [Mobipicker.com]

US – Delta Air Lines Introduces Tracking Tags to Combat Lost Luggage

Radio frequency identification (RFID) is widely used in our daily lives, from keyless cars to pet microchips. Delta Air Lines, which says the amount of luggage it mishandles is low, has spent $50 million (U.S.) on new technology to keep better track of the 120 million bags it checks each year. The system launched this month. It replaces an old barcode system with RFID tags, which allow data to be read at a distance, easily pinpointing a single bag if it needs to come off a plane. The airline has deployed 4,600 scanners and 3,800 bag tag printers at airports around the world. Conveyer belt loaders have sensors that give a green light if the suitcase is headed to the right plane, and a red light if it’s not, so a baggage handler can redirect it. Australia’s Qantas Airways has used similar technology for its automatic bag drop system on domestic flights, which the airline says has shortened lines. Elite frequent flyers receive a reusable RFID bag tag, and other passengers can buy one. An estimated 1.5 million permanent tags have been issued in the past two years. In Canada, no airline has plans to adopt the tags yet, though Air Canada is running a test in its Montreal and Frankfurt warehouses for cargo shipments. WestJet Airlines spokeswoman Lauren Stewart said the airline has reviewed the technology but has no plans to run any trials. “As a low-cost carrier we are highly aware of the expense of such tools,” she said. “In addition, the hardware and infrastructure would require installation at each airport.” Porter Airlines spokesman Brad Cicero said the carrier’s baggage mishandling rate for the last two years is 0.4 per 1,000 passengers, “so we’re very comfortable with this standard and our current processes.” [Toronto Star]
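
As a rough sketch of the belt-loader check described above (not Delta’s actual system; the tag IDs and flight numbers are invented), the logic amounts to comparing the flight a bag is tagged for against the flight being loaded:

```python
# Hypothetical mapping from RFID bag tag IDs to the flights they belong on.
BAG_MANIFEST = {
    "TAG-0001": "DL123",
    "TAG-0002": "DL456",
}


def loader_light(tag_id, flight_being_loaded):
    """Return 'green' if the scanned bag belongs on this flight, else 'red'
    so a baggage handler can pull it aside and redirect it."""
    return "green" if BAG_MANIFEST.get(tag_id) == flight_being_loaded else "red"


print(loader_light("TAG-0001", "DL123"))  # green
print(loader_light("TAG-0002", "DL123"))  # red: bag is tagged for DL456
```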

Surveillance

WW – Transit Systems Have their Eyes on You but Surveillance Footage Isn’t Always There When it Counts

Security cameras are ubiquitous on public transit across the country, but when it comes to using them to investigate sexual harassment or assault, what they record is often gone before it can be used. While victims might take weeks or even months to report an incident, surveillance footage can be erased in a matter of days. In Canada’s largest city, Toronto, security camera footage from streetcars, buses, subway trains and stations is kept for three days. It was the report of an alleged assault on a city bus that prompted Toronto’s transit agency to extend the amount of time that it holds on to footage a year ago. At the time, footage from streetcars and buses was held for only 15 hours, but after a teenage girl went to police to report an assault a few days after it allegedly happened and found there was no video evidence available, the Toronto Transit Commission extended that to 72 hours across its whole system. Some women who have experienced harassment or assault say 72 hours still doesn’t give victims enough time to report an incident. The TTC used to be allowed to hold on to surveillance camera footage for a week, but that changed eight years ago when it expanded the use of cameras throughout the transit network. At the time, Ontario’s privacy commissioner, Ann Cavoukian, approved the addition of 12,000 cameras on condition that images be held for a maximum of 72 hours to protect riders’ right to privacy. The exception to the 72-hour limit is the TTC’s wheelchair transportation service, which holds on to footage for seven days. The transit agency’s justification is that riders with handicaps or cognitive impairments might need more time to report incidents. An investigation by the city’s ombudsman also found that the footage has been used by the TTC to reassess whether riders are still eligible for the service. Transit agencies in some other Canadian cities keep their security footage for longer than the TTC. In Edmonton, footage from trains on the light rail transit system is retained for 48 hours, but footage from stations is held for 21 days. Bus system surveillance is held for 18 days. Vancouver used to keep surveillance footage from its SkyTrain system for only two hours when it was using video tapes, but since moving to a digital system in 2008, footage can be held for up to a week. Reports of sexual assaults on the Toronto subway system are down, according to police, with 67 reported in 2014 compared to 56 last year, but police say that’s not necessarily a good thing given that the majority of sex assaults never get reported. [CBC] SEE ALSO: [‘Harassment on TransLink’ website lets victims speak out | TTC votes on whether to retain surveillance video longer | Sexual harassment on the rise on transit, say police | Ontario privacy chief gives green light to TTC surveillance plans | ETS starts ‘zero tolerance’ campaign to curb sexual harassment | TransLink safe despite recent assaults, officials say | TTC to develop new app that would enable riders to report harassment | New OnDuty transit police smartphone app released | Considering Privacy in the Age of the Camera]

WW – Ambient Light Sensors an Up-and-Coming Privacy Issue

New research from University College London’s Lukasz Olejnik maintains that online ambient light sensors could pose a threat to privacy. “Lighting conditions in the user’s surrounding convey rich and sensitive data describing users and their behavior,” Olejnik writes. “This information could be hijacked and abused, applied to profile the users and perhaps discriminate them.” The information at stake includes data about “the user, the user’s environment, the user’s behavior and life patterns,” as well as information about the user’s home, he adds. Olejnik cautions users not to be fearful online, noting that projects like SensorsPrivacy.com work to increase the safety of the technology; however, he does encourage websites to limit the amount of ambient light sensor data they collect. [The Daily Dot] [Privacy problems on the Web: Even your device’s battery life can be used to track you]

Workplace Privacy

EU – Spying on an Employee in France Breaches His Right to Privacy, Even Where He is Committing Breaches of His Employment Contract

The French Supreme Court recently ruled that an employer could not rely on the report of a private detective it had hired to spy on one of its employees to obtain an injunction against him, because this was a breach of the employee’s privacy that could not be justified, however legitimate the employer’s concerns. The first instance Court accepted that the employer had legitimate reasons to secure evidence, but on appeal the employee disputed the validity of the Order on the ground that the employer had breached the employee’s right to a private life as protected by Article 9 of the French Civil Code and Article 8 of the European Convention on Human Rights. The Supreme Court ruled that the first instance Court should have rejected the employer’s application because it had relied on unlawfully obtained evidence to sustain it (i.e. the report from the detective). It was immaterial that the report clearly showed the employee to be in breach of his obligations to the employer and that he could now potentially destroy evidence of that guilt before a trial on the issue. This decision is consistent with earlier case law of the French Supreme Court, which declares inadmissible any evidence collected by employers through covert surveillance of employees, whether the spying is done by someone hired by the employer or by the employer itself, on the ground that it breaches the employee’s privacy rights. More generally, the Supreme Court also usually rejects as inadmissible any other evidence that has been collected through clandestine means (i.e. without the employee having been informed of the control/surveillance methods), with the consequence that an employee’s dismissal based on that evidence will be deemed automatically unfair, however guilty of the misconduct in question he may actually be. [The National Law Review]

WW – Research: Customer Monitoring Also Affects Employees

Solon Barocas and Karen Levy discuss how retailers’ efforts to monitor customer behavior also affects their employees, a consequence they refer to as refractive surveillance. “This effect of data collection is often overlooked. Debates about consumer privacy have largely missed the fact that firms’ abilities to develop a better understanding of consumers also impacts workers’ day-to-day experiences, their job security, and their financial well-being,” they write. Barocas and Levy detail the repercussions of the tracking, saying it impacts employees’ relationships with customers, when they work, and the evaluation process. Since these are “still early days for in-store tracking,” Barocas and Levy contend that managers “have an opportunity to explore how to collect customer data in ways that both respect consumers’ privacy and advance the legitimate interests of workers.” [Harvard Business Review]

+++

 

 
