CDD

Publications

  • The COVID-19 pandemic is a global public health emergency that requires a coordinated and large-scale response by governments worldwide. However, States’ efforts to contain the virus must not be used as a cover to usher in a new era of greatly expanded systems of invasive digital surveillance.

We, the undersigned organizations, urge governments to show leadership in tackling the pandemic in a way that ensures that the use of digital technologies to track and monitor individuals and populations is carried out strictly in line with human rights.

Technology can and should play an important role during this effort to save lives, such as to spread public health messages and increase access to health care. However, an increase in state digital surveillance powers, such as obtaining access to mobile phone location data, threatens privacy, freedom of expression and freedom of association, in ways that could violate rights and degrade trust in public authorities – undermining the effectiveness of any public health response. Such measures also pose a risk of discrimination and may disproportionately harm already marginalized communities.

These are extraordinary times, but human rights law still applies. Indeed, the human rights framework is designed to ensure that different rights can be carefully balanced to protect individuals and wider societies. States cannot simply disregard rights such as privacy and freedom of expression in the name of tackling a public health crisis. On the contrary, protecting human rights also promotes public health. Now more than ever, governments must rigorously ensure that any restrictions to these rights are in line with long-established human rights safeguards.

This crisis offers an opportunity to demonstrate our shared humanity. We can make extraordinary efforts to fight this pandemic that are consistent with human rights standards and the rule of law.
The decisions that governments make now to confront the pandemic will shape what the world looks like in the future. We call on all governments not to respond to the COVID-19 pandemic with increased digital surveillance unless the following conditions are met:

Surveillance measures adopted to address the pandemic must be lawful, necessary and proportionate. They must be provided for by law and must be justified by legitimate public health objectives, as determined by the appropriate public health authorities, and be proportionate to those needs. Governments must be transparent about the measures they are taking so that they can be scrutinized and, if appropriate, later modified, retracted, or overturned. We cannot allow the COVID-19 pandemic to serve as an excuse for indiscriminate mass surveillance.

If governments expand monitoring and surveillance powers, then such powers must be time-bound and only continue for as long as necessary to address the current pandemic. We cannot allow the COVID-19 pandemic to serve as an excuse for indefinite surveillance.

States must ensure that increased collection, retention, and aggregation of personal data, including health data, is used only for the purposes of responding to the COVID-19 pandemic. Data collected, retained, and aggregated to respond to the pandemic must be limited in scope, time-bound in relation to the pandemic, and must not be used for commercial or any other purposes. We cannot allow the COVID-19 pandemic to serve as an excuse to gut individuals’ right to privacy.

Governments must make every effort to protect people’s data, including ensuring sufficient security of any personal data collected and of any devices, applications, networks, or services involved in collection, transmission, processing, and storage. Any claims that data is anonymous must be based on evidence and supported with sufficient information regarding how it has been anonymized.
We cannot allow attempts to respond to this pandemic to be used as justification for compromising people’s digital safety.

Any use of digital surveillance technologies in responding to COVID-19, including big data and artificial intelligence systems, must address the risk that these tools will facilitate discrimination and other rights abuses against racial minorities, people living in poverty, and other marginalized populations, whose needs and lived realities may be obscured or misrepresented in large datasets. We cannot allow the COVID-19 pandemic to further increase the gap in the enjoyment of human rights between different groups in society.

If governments enter into data sharing agreements with other public or private sector entities, the agreements must be based on law, and the existence of these agreements, along with the information necessary to assess their impact on privacy and human rights, must be publicly disclosed, in writing, with sunset clauses, public oversight and other safeguards by default. Businesses involved in efforts by governments to tackle COVID-19 must undertake due diligence to ensure they respect human rights, and ensure any intervention is firewalled from other business and commercial interests. We cannot allow the COVID-19 pandemic to serve as an excuse for keeping people in the dark about what information their governments are gathering and sharing with third parties.

Any response must incorporate accountability protections and safeguards against abuse. Increased surveillance efforts related to COVID-19 should not fall under the domain of security or intelligence agencies and must be subject to effective oversight by appropriate independent bodies. Further, individuals must be given the opportunity to know about and challenge any COVID-19 related measures to collect, aggregate, retain, and use data.
Individuals who have been subjected to surveillance must have access to effective remedies.

COVID-19 related responses that include data collection efforts should include means for free, active, and meaningful participation of relevant stakeholders, in particular experts in the public health sector and the most marginalized population groups.

Signatories:

7amleh – Arab Center for Social Media Advancement
Access Now
African Declaration on Internet Rights and Freedoms Coalition
AI Now
Algorithm Watch
Alternatif Bilisim
Amnesty International
ApTI
ARTICLE 19
Asociación para una Ciudadanía Participativa, ACI Participa
Association for Progressive Communications (APC)
ASUTIC, Senegal
Athan - Freedom of Expression Activist Organization
Australian Privacy Foundation
Barracón Digital
Big Brother Watch
Bits of Freedom
Center for Advancement of Rights and Democracy (CARD)
Center for Digital Democracy
Center for Economic Justice
Centro De Estudios Constitucionales y de Derechos Humanos de Rosario
Chaos Computer Club - CCC
Citizen D / Državljan D
CIVICUS
Civil Liberties Union for Europe
CódigoSur
Coding Rights
Coletivo Brasil de Comunicação Social
Collaboration on International ICT Policy for East and Southern Africa (CIPESA)
Comité por la Libre Expresión (C-Libre)
Committee to Protect Journalists
Consumer Action
Consumer Federation of America
Cooperativa Tierra Común
Creative Commons Uruguay
D3 - Defesa dos Direitos Digitais
Data Privacy Brasil
Democratic Transition and Human Rights Support Center "DAAM"
Derechos Digitales
Digital Rights Lawyers Initiative (DRLI)
Digital Rights Watch
Digital Security Lab Ukraine
Digitalcourage
EPIC
epicenter.works
European Digital Rights - EDRi
Fitug
Foundation for Information Policy Research
Foundation for Media Alternatives
Fundación Acceso (Centroamérica)
Fundación Ciudadanía y Desarrollo, Ecuador
Fundación Datos Protegidos
Fundación Internet Bolivia
Fundación Taigüey, República Dominicana
Fundación Vía Libre
Hermes Center
Hiperderecho
Homo Digitalis
Human Rights Watch
Hungarian Civil Liberties Union
ImpACT International for Human Rights Policies
Index on Censorship
Initiative für Netzfreiheit
Innovation for Change - Middle East and North Africa
International Commission of Jurists
International Service for Human Rights (ISHR)
Intervozes - Coletivo Brasil de Comunicação Social
Ipandetec
IPPF
Irish Council for Civil Liberties (ICCL)
IT-Political Association of Denmark
Iuridicum Remedium z.s. (IURE)
Karisma
La Quadrature du Net
Liberia Information Technology Student Union
Liberty
Luchadoras
Majal.org
Masaar "Community for Technology and Law"
Media Rights Agenda (Nigeria)
MENA Rights Group
Metamorphosis Foundation
New America's Open Technology Institute
Observacom
Open Data Institute
Open Rights Group
OpenMedia
OutRight Action International
Pangea
Panoptykon Foundation
Paradigm Initiative (PIN)
PEN International
Privacy International
Public Citizen
Public Knowledge
R3D: Red en Defensa de los Derechos Digitales
RedesAyuda
SHARE Foundation
Skyline International for Human Rights
Sursiendo
Swedish Consumers’ Association
Tahrir Institute for Middle East Policy (TIMEP)
Tech Inquiry
TechHerNG
TEDIC
The Bachchao Project
Unwanted Witness, Uganda
Usuarios Digitales
WITNESS
World Wide Web Foundation
  • By Jeffrey Chester

The COVID-19 pandemic is a profound global public health crisis that requires our utmost attention: to stem its deadly tide and rebuild the global health system so we do not experience such a dire situation in the future. It also demands that we ensure the U.S. has a digital media system that is democratic, accountable, and one that both provides public services and protects privacy.

The virus is profoundly accelerating our reliance on digital media worldwide, ushering in “a new landscape in terms of how shoppers are buying and how they are behaving online and offline.” Leading platforms—Amazon, Facebook and Google—as well as many major ecommerce and social media sites, video streaming services, gaming apps, and the like—are witnessing a flood of people attempting to research health concerns, order groceries and supplies, view entertainment and engage in communication with friends and family. According to a marketing industry report, “nearly 90% of consumers have changed their behavior because of COVID-19.” More data about our health concerns, kids, financial status, products we buy and more are flowing into the databases of the leading digital media companies.

The pandemic will further strengthen these companies’ power as they leverage all the additional personal information they are now capturing. This also poses a further threat to the privacy of Americans, who are especially dependent on online services if they are to survive.

The pandemic is accelerating societal changes in our relationship to the Internet. For example, marketers predict that we are witnessing the emergence of an experience they call the “fortress home”—as “consumer psychology shifts into an extreme form of cocooning.” The move to online buying via ecommerce—versus going to a physical store—will become an even more dominant consumer behavior.
So, too, will in-home media consumption increase, especially reliance on streaming (“OTT”) video. Marketers are closely examining all these pandemic-related developments through a global lens—since the digital behaviors of consumers, from China to the U.S., have so many commonalities. For example, Nielsen has identified six “consumer behavior thresholds” that reveal virus-influenced consumer buying behaviors, such as “quarantined living preparation” and “restricted living.” A host of sites now regularly report on how the pandemic impacts the public and what it means for marketing and major brands. See, for example, Ipsos, Comscore, Nielsen, Kantar, and the Advertising Research Foundation (ARF).

In addition to the expanded market power of the giants, there are also growing threats to our privacy from surveillance by both government and the commercial sector. Marketers are touting how the real-time geolocation data continuously mined from our mobile devices, wearables and “apps” can help public health experts better respond to the virus and similar threats. At a recent Advertising Research Foundation town hall on the virus, it was noted that “the location-based data that brand stewards have found useful in recent years to deliver right-time/right-place messages has ‘gone from being useful that helps businesses sell a little bit more’ to truly being a community and public-health tool.” Marketers will claim that they have to track all our moves because it’s in the national interest, in order to sanction the rapid expansion of geo-surveillance in all areas of our lives.
They are positioning themselves to be politically rewarded for their work on the pandemic, hoping it will immunize them from the growing criticism of their monopolistic and anti-consumer privacy behaviors. Amazon, Facebook, Google, Snapchat and various “Big Data” digital marketing companies announced, for example, a COVID-19 initiative with the White House and CDC. Brokered by the Ad Council, it will unleash various data-profiling technologies, influencer marketing, and powerful consumer targeting engines to ensure Americans receive information about the virus. (At the same time, brands are worried about having their content appear alongside information about the coronavirus, adopting new “brand safety” tools that can “blacklist” news and other online sites. This means that the funding for journalism and public safety information becomes threatened because advertisers wish to place their own interests first.) But the tactics now sanctioned by the White House are the very ones that must be addressed in any legislation that effectively protects our privacy online.

We believe that the leading online companies should not be permitted to excessively enrich themselves during this moment by gathering even more information on the public. They will mine this information for insights that enable them to better understand our private health needs and financial status. They will know more about the online behaviors of our children, grandparents and many others.

Congress should enact protections that ensure that the data gathered during this unprecedented public health emergency are limited in how they can be used. It should also examine how the pandemic is furthering the market power of a handful of platforms and ecommerce companies, to ensure there is a fair marketplace accessible to the public. It’s also evident there must be free or inexpensively priced broadband for all.
How well we address the role of the large online companies during this period will help determine our ability to respond to future crises, as well as the impact of these companies on our democracy.
  • Press Release

    Children’s privacy advocates call on FTC to require Google, Disney, AT&T and other leading companies to disclose how they gather and use data to target kids and families

    Threats to young people from digital marketing and data collection are heightened by home schooling and increased video and mobile streaming in response to COVID-19

    Contact: Jeffrey Chester, CDD, jeff@democraticmedia.org, 202-494-7100; Josh Golin, CCFC, josh@commercialfreechildhood.org, 339-970-4240

    WASHINGTON, DC and BOSTON, MA – March 26, 2020 – With children and families even more dependent on digital media during the COVID-19 crisis, the Campaign for a Commercial-Free Childhood (CCFC) and the Center for Digital Democracy (CDD) called on the Federal Trade Commission (FTC) to require leading digital media companies to turn over information on how they target kids, including the data they collect. In a letter to the FTC, the advocates proposed a series of questions to shed light on the array of opaque data collection and digital marketing practices the tech companies employ to target kids. The letter includes a proposed list of numerous digital media, marketing, and edtech companies that should be the targets of the FTC’s investigation—among them Google, Zoom, Disney, Comcast, AT&T, Viacom, and the edtech companies Edmodo and Prodigy. The letter—sent by the Institute for Public Representation at Georgetown Law, attorneys for the advocates—is in response to the FTC’s early review of the rules protecting children under the Children’s Online Privacy Protection Act (COPPA).
    The groups said “children’s privacy is under siege more than ever,” and urged the FTC “not to take steps that could undermine strong protections for children’s privacy without full information about a complex data collection ecosystem.” The groups ask the Commission to request vital information from two key sectors that greatly impact the privacy of children: the edtech industry, which provides information and technology applications in the K-12 school setting, and the commercial digital data and marketing industry, which provides the majority of online content and communications for children, including apps, video streaming, and gaming. The letter suggests numerous questions for the FTC to get to the core of how digital companies conduct business today, including contemporary Big Data practices that capture, analyze, track, and target children across platforms.

    “With schools closed across the country, American families are more dependent than ever on digital media to educate and occupy their children,” said CCFC’s Executive Director, Josh Golin. “It’s now urgent that the FTC use its full authority to shed light on the business models of the edtech and children’s digital media industries so we can understand what Big Tech knows about our children and what they are doing with that information. The stakes have never been higher.”

    “Although children’s privacy is supposed to be protected by federal law and the FTC, young people remain at the epicenter of a powerful data-gathering and commercial online advertising system,” said Dr. Katharina Kopp, Deputy Director of the Center for Digital Democracy. “We call on the FTC to investigate how companies use data about children, how these data practices work against children’s interests, and also how they impact low-income families and families of color. Before it proposes any changes to the COPPA rules, the FTC needs to obtain detailed insights into how contemporary digital data practices pose challenges to protecting children.
    Given the outsize intrusion of commercial surveillance into children’s and families’ lives via digital services for education, entertainment, and communication, the FTC must demonstrate it is placing the welfare of kids as its highest priority.”

    In December, CCFC and CDD led a coalition of 31 groups—including the American Academy of Pediatrics, Center for Science in the Public Interest, Common Sense Media, Consumer Reports, Electronic Privacy Information Center, and Public Citizen—in calling on the FTC to use its subpoena authority. The groups said the Commission must better assess the impacts on children of today’s digital data-driven advertising system, and of features such as cross-device tracking, artificial intelligence, machine learning, virtual reality, and real-time measurement.

    “Childhood is more digital than ever before, and the various ways that children’s data is collected, analyzed, and used have never been more complex or opaque,” said Lindsey Barrett, Staff Attorney and Teaching Fellow at IPR’s Communications and Technology Law Clinic at Georgetown Law. “The Federal Trade Commission should shed light on how children’s privacy is being invaded at home, at school, and throughout their lives by investigating the companies that profit from collecting their data, and cannot undertake an informed and fact-based revision of the COPPA rules without doing so.”

    “Children today, more than ever, have an incredible opportunity to learn, play, and socialize online,” said Celia Calano, student attorney at the Institute for Public Representation. “But these modern playgrounds and classrooms come with new safety concerns, including highly technical and obscure industry practices. The first step to improving the COPPA Rule and protecting children online is understanding the current landscape—something the FTC can achieve with a 6(b) investigation.”

    ###
  • Google’s (i.e., Alphabet, Inc.) proposed acquisition of Fitbit, a leading health wearable device company, is just one more piece illustrating how the company is actively engaged in shaping the future of public health. It has assembled a sweeping array of assets in the health field, positioned its advertising system to better take advantage of health information, and is playing a proactive role lobbying for significant public policy changes for medical data at the federal level that will have major implications for Americans and their health.

Google understands that there are tremendous revenues to be made gathering data—from patients, hospitals, medical professionals and consumers interested in “wellness”—through the various services the company offers. It sees a lucrative future as a powerful presence in our health system, able to bill Medicare and other government programs. In reviewing the proposed takeover, regulators should recognize that, given today’s “connected” economy and Google’s capability and intention to generate monetizable insights about individuals across product categories (health, shopping, financial services, etc.), the deal should not be examined solely within a narrow framework. While the acquisition directly bolsters Google’s growing clout in what is called the “connected-health” marketplace, the move is also designed to maintain the company’s dominance in search, video and other digital marketing applications. It’s also a deal that raises privacy concerns, questions about the future direction of the U.S. health system, and what kinds of safeguards—if any at all—will be in place to protect health consumers and patients.
As health venture capital fund Rock Health explained in a recent report, “Google acquired Fitbit in a deal that gives the tech giant access to troves of personal health data and healthcare partnerships, in addition to health tracking software.” Fitbit reports that “28 million active users” worldwide use its wearable device products. For Google, Fitbit brings a rich layer of personal data, expertise in fitness tracking software and heart-rate sensors, as well as relationships with health-service and employee-benefit providers. Wearable devices can provide a stream of ongoing data on our activities, physical condition, geolocation and more.

In a 2018 presentation to investors, Fitbit claimed to be the “number one health and fitness” app in the U.S. for both the Android and Apple app stores, and considered itself the “number one wearable brand globally,” available in 47,000 stores, with “direct applications for health and wellness categories such as diabetes, heart health, and sleep apnea.” “Driving behavior change” is cited as one of the company’s fundamental capabilities, such as its “use of data…to provide insights and guidance.” Fitbit developed a “platform for innovative data collection” for clinical researchers, designed to help advance “the use of wearable devices in research and clinical applications.” Fitbit also has relationships with pharmacies, including those that serve people with “complex health conditions.” Fitbit has also “made a number of moves to expand its Health Services division,” such as its 2018 acquisition of Twine Health, a “chronic disease management platform.” In 2018, it also unveiled a “connected health platform that enables payers and health systems to deliver personalized coaching” to individuals.
The company’s Fitbit Health Solutions division is working with more than 100 insurance companies in the U.S., and “both government sponsored and private plans” work with the company. Fitbit Premium, launched last year, “mines consumer data to provide personalized health insights” for health care delivery. According to Business Insider Intelligence, “Fitbit plans to use the Premium service to get into the management of costly chronic conditions like diabetes, sleep apnea, and hypertension.” The company has dozens of leading “enterprises” and “Fortune 500” companies as customers. It also works with thousands of app developers and other third parties (think of Google’s dominance in the app marketplace, such as its Play store). Fitbit has conducted research to understand “the relationship between activity and mood” of people, which offers an array of insights with applications for health and numerous other “vertical” markets.

Even prior to the formal takeover by Google, Fitbit had developed strong ties to the digital data marketing giant. It has been a Google Cloud client since 2018, using its machine learning prowess to insert Fitbit data into a person’s electronic health record (EHR). In 2018, Fitbit said that it was going to transfer its “data infrastructure” to the Google Cloud platform. It planned to “leverage Google’s healthcare API” to generate “more meaningful insights” on consumers and “collaborate on the future of wearables.” Fitbit’s data might also assist Google in forging additional “ties with researchers who want to unlock the constant stream of data” its devices collect.
When considering how regulators and others should view this—yet again—significant expansion by Google in the digital marketplace, the following issues must be addressed:

Google Cloud and its use of artificial intelligence and machine learning in a new data pipeline for health services, including marketing

Google’s Cloud service offers “solutions” for the healthcare and life sciences industry, helping to “personalize patient experiences,” “drive data interoperability,” and “improve commercialization and operations”—including for “pharma insights and analytics.” Google Cloud has developed a specific “API” (application programming interface) that enables health-related companies to process and analyze their data, using machine learning technologies, for example. The Health Care Cloud API also provides a range of other data functionalities for clinical and other uses. Google is now working to help create a “new data infrastructure layer via 3 key efforts,” according to a recent report on the market. It is creating “new data pipes for health giants,” pushing the Google Cloud and building “Google’s own healthcare datasets for third parties.” (See, for example, the “G Suite for Healthcare Businesses” products as well as its “Apigee API Platform,” which works with the Cleveland Clinic, Walgreens, and others.)

Illustrating the direct connection between the Google Cloud and Google’s digital marketing apparatus is their case study of the leading global ad conglomerate, WPP. “Our strong partnership with Google Cloud is key,” said WPP’s CEO, who explained that “their vast experience in advertising and marketing combined with their strength in analytics and AI helps us to deliver powerful and innovative solutions for our clients” (which include “369 of the Fortune Global 500, all 30 of the Dow Jones 30 and 71 of the NASDAQ 100”).
WPP links the insights and other resources it generates from the Google Cloud to Google’s “Marketing Platform” so its clients can “deliver better experiences for their audiences across media and marketing.” Google has made a significant push to incorporate machine learning into marketing across product categories, including search and YouTube. It is using machine learning to “anticipate needs” of individuals to further its advertising business. Fitbit will bring in a significant amount of additional data for Google to leverage in its Cloud services, which affect a number of consumer and commercial markets beyond health care.

The Fitbit deal also involves Google’s ambitions to become an important force providing healthcare providers access to patient, diagnostic and other information. Currently the market is dominated by others, but Google has plans for it. For example, it has developed a “potential EHR tool that would empower doctors with the same kind of intuitive and snappy search functionality they’ve come to expect from Google.” According to Business Insider Intelligence, Google could bundle such applications along with Google Cloud and data analytics support that would help hospitals more easily navigate the move to data-heavy, value-based care (VBC) reimbursement models.

Google Health already incorporates a wide range of health-related services and investments

“Google is already a health company,” according to Dr. David Feinberg, the company’s vice president at Google Health. Feinberg explains that they are making strides in organizing and making health data more useful thanks to work being done by Cloud and AI teams. And looking across the rest of Google’s portfolio of helpful products, we’re already addressing aspects of people’s health.
Search helps people answer everyday health questions, Maps helps get people to the nearest hospital, and other tools and products are addressing issues tangential to health—for instance, literacy, safer driving, and air pollution…. and in response, Google and Alphabet have invested in efforts that complement their strengths and put users, patients, and care providers first. Look no further than the promising AI research and mobile applications coming from Google and DeepMind Health, or Verily’s Project Baseline, which is pushing the boundaries of what we think we know about human health.

Among Google Health’s initiatives are “studying the use of artificial intelligence to assist in diagnosing cancer, predicting patient outcomes, preventing blindness…, exploring ways to improve patient care, including tools that are already being used by clinicians…, [and] partnering with doctors, nurses, and other healthcare professionals to help improve the care patients receive.” Through its AI work, Google is developing “deep learning” applications for electronic health records. Google Health is expanding its team, including specifically to take advantage of the wearables market (and has also hired a former FDA commissioner to “lead health strategy”).

Google is the leading source of search information on health issues, and health-related ad applications are integrated into its core marketing apparatus

A billion health-related questions are asked every day on Google’s search engine, some 70,000 every minute (“around 7 percent of Google’s daily searches”). “Dr. Google,” as the company has been called, is asked about conditions, medication, symptoms, insurance questions and more, say company leaders. Google’s ad teams in the U.S.
promote how health marketers can effectively use its ad products, including YouTube, as well as understand how to take advantage of what Google has called “the path to purchase.” In a presentation on “The Role of Digital Marketing in the Healthcare Industry,” Google representatives reported that, after conducting various studies and surveys, Google has concluded that consumers consult 12.4 resources prior to a hospital visit. When consumers are battling a specific disease or condition, they want to know everything about it: whether it is contagious, how it started, the side effects, the experiences of others who have had the same condition, etc. When doing this research, they will consult YouTube videos, read patient reviews of specific doctors, read blog articles on healthcare websites, and read reviews, side effects, and uses of particular medicines. They want to know everything! When consuming this information, they will choose the business that has established its online presence, has positive reviews, and provides a great customer experience, both online and offline.

Among the data shared with marketers was information that “88% of patients use search to find a treatment center,” “60% of patients use a mobile device,” “60% of patients like to compare and validate information from doctors with their own online research,” “56% of patients search for health-related concerns on YouTube,” “5+ videos are watched when researching hospitals or treatment centers,” and “2 billion health-related videos are on YouTube.” The “Internet is a Patient/Caregiver’s #1 confidant,” they noted.
They also discussed how mobile technologies have triggered “non-linear paths to purchase,” and that mobile devices are “now the main device used for health searches.” “Search and video are vital to the patient journey,” and “healthcare videos represent one of the largest, fastest growing content segments on YouTube today.” Their presentation demonstrated how health marketers can take advantage of Google’s ability to know a person’s location, as well as how other information related to their behaviors and interests can help them “target the right users in the right context.” To understand the impact of all of Google’s marketing capabilities, one also should review the company’s restructured (and ever-evolving) “Marketing Platform.”

Google’s Maps product will be able to leverage Fitbit data

Google is using health-related data gathered by Google Maps, such as when we search for needed care services (think ERs, hospitals, pharmacies, etc.). “The most popular mapping app in the U.S…. presents a massive opportunity to connect its huge user base with healthcare services,” explains Business Insider Intelligence. Google has laid the groundwork with its project addressing the country’s opioid epidemic, linking “Google Maps users with recovery treatment centers,” as well as identifying where Naloxone (the reversal drug for opioid overdoses) is available. Last year, Google Maps launched a partnership with CVS “to help consumers more easily find places to drop off expired drugs.” Through its Waze subsidiary, which provides navigation information for drivers, Google sells ads to urgent care centers, which gain new patients as a result of map-based, locally tailored advertisements.

Google’s impact on the wearables marketplace, including health, wellness and other apps

The acquisition of Fitbit will bolster Google’s position in the wearables market, as well as its direct and indirect role in providing access to its own and third-party apps. 
Google Fit, which “enables Android users to pair health-tracking devices with their phone to monitor activity,” already has partnerships with a number of wearable device companies, such as Nike, Adidas and Noom. Business Insider Intelligence noted in January 2020 that Google Fit was “created to ensure Android devices have a platform to house user-generated health data” (making it more competitive with Apple products). In 2019, Google acquired smartwatch technology from Fossil. Fitbit will play a role in Google’s plans for its Fit service, such as providing additional data that can be accessed by third parties and made available to medical providers through patients’ electronic health records. The transaction, said one analyst, “is partly a data play,” and is also intended to keep customers from migrating from Google’s Android platform to Apple’s. It is designed, they suggest, to ensure that Google can benefit from the sales of health-related services during consumers’ peak earning years. The Google Play app store offers access to an array of health and wellness apps that will be affected by this deal. Antitrust authorities in the EU have already sanctioned Google for leveraging its Android platform for anti-competitive behavior.

Google’s health-related investments, including its use of artificial intelligence, and the role of Fitbit data

Verily is “where Alphabet is doing the bulk of its healthcare work,” according to a recent report on the role AI plays in Google’s plans to “reinvent the $3 Trillion U.S. healthcare industry.” Verily is “focused on using data to improve healthcare via analytics tools, interventions, research” and other activities, partnering with “existing healthcare institutions to find areas to apply AI.” One of these projects is the “Study Watch, a wearable device that captures biometric data.” Verily has also made significant investments globally as it seeks to expand. 
DeepMind works on AI research, including its applications to healthcare. Notably, DeepMind is working with the UK’s National Health Service. Another subsidiary, Calico, uses AI as part of its focus on aging and age-related illnesses. Additionally, “GV” (Google Ventures) makes health-related investments. According to the CB Insights report, Google’s strategy involves an end-to-end approach to healthcare, including:
- Data generation — digitizing and ingesting data produced by wearables, imaging, and MRIs, among other methods; this data stream is critical to AI-driven anomaly detection;
- Disease detection — using AI to detect anomalies in a given dataset that might signal the presence of some disease; and
- Disease/lifestyle management — tools that help people who have been diagnosed with a disease, or are at risk of developing one, go about their day-to-day lives and/or make positive lifestyle modifications.

Google has also acquired companies that directly further its health business capabilities, such as Apigee, Senosis Health and others. Google’s continuous quest to gather more health data, such as “Project Nightingale,” has already raised concerns. There are now also investigations of Google by the Department of Justice and state attorneys general. The Department of Justice, which is currently reviewing the Google/Fitbit deal, should not approve it without first conducting a thorough review of the company’s health-related business operations, including the impact (including for privacy) that Fitbit data will have on the marketplace. This should be made part of the ongoing antitrust investigation into Google by both federal and state regulators. Congress should also call on the DoJ, as well as the FTC, to review this proposed acquisition in light of the changes that digital applications are bringing to health services in the U.S. 
This deal accompanies lobbying from Google and others that is poised to open the floodgates of health data that can be accessed by patients and an array of commercial and other entities. The Department of Health and Human Services has proposed a rule on data “interoperability” that, while ostensibly designed to help empower health services users to have access to their own data, is also a “Trojan Horse” designed to enable app developers and other commercial entities to harvest that data as an important new profit center. “The Trump Administration has made the unfettered sharing of health data a health IT priority,” explained one recent news report. Are regulators really ready to stop further digital consolidation? The diagnosis is still out! For a complete annotated version, please see the attached PDF.
  • A new report on how political marketing insiders and platforms such as Facebook view the “ethical” issues raised by the role of digital marketing in elections illustrates why advocates and others concerned about election integrity should make this issue a public-policy priority. We cannot afford to leave it in the hands of “Politech” firms and political campaign professionals, who appear unable to acknowledge the consequences to democracy of their unfettered use of powerful data-driven online-marketing applications. “Digital Political Ethics: Aligning Principles with Practice” reports on a series of conversations and a two-day meeting last October that included representatives of firms (such as Blue State, Targeted Victory, WPA Intelligence, and Revolution Messaging) that work either for Democrats or Republicans, as well as officials from both Facebook and Twitter. The goal of the project was to “identify areas of agreement among key stakeholders concerning ethical principles and best practices in the conduct of digital campaigning in the U.S.” Perhaps it should not be a surprise that this group of people appears to be incapable of critically examining (or even candidly assessing) all of the problems connected with the role of digital marketing in political campaigns. Missing from the report is any real concern about how today’s electoral process takes advantage of the absence of any meaningful privacy safeguards in the U.S. A vast commercial surveillance apparatus that has no bounds has been established. This same system that is used to market goods and services, and which is driven by data-brokers, marketing clouds, real-time ad-decision engines, geolocation identification and other AI-based technologies—along with the clout of leading platforms and publishers—is now also used for political purposes. 
All of us are tracked and profiled 24/7, including where we go and what we do—with little location privacy anymore. Political insiders and data-driven ad companies such as Facebook, however, are unwilling to confront the problem of this loss of privacy, given how valuable all this personal data is to their business models or political goals. Another concern is that these insiders now view digital marketing as a normative, business-as-usual process—nothing out of the ordinary. But anyone who knows how the system operates should be deeply concerned about the nontransparent and often far-reaching ways digital marketing is constructed to influence our decision-making and behaviors, including at emotional and subconscious levels. The report demonstrates that campaign officials have largely accepted as reasonable the various invasive and manipulative technologies and techniques that the ad-tech industry has developed over the past decade. Perhaps these officials are simply being pragmatic. But society cannot afford such a cynical position. Today’s political advertising is not yesterday’s TV commercial—nor is it purely an effort to “microtarget” sympathetic market segments. Today’s digital marketing apparatus follows all of us continuously, Democrats, Republicans, and independents alike. The marketing ecosystem is finely tuned to learn how we react, transforming itself depending on those reactions and making decisions about us in milliseconds in order to use—and refine—various tactics to influence us, including entirely new ad formats, each tested and measured to have us think and behave one way or another. And this process is largely invisible to voters, regulators and the news media. But for the insiders, microtargeting helps get out the vote and encourages participation. Nothing much is said about what happened in the 2016 U.S. 
election, when some political marketers sought to suppress the vote among communities of color, while others engaged in disinformation. Some of these officials now propose that political campaigns should be awarded a digital “right of way” that would guarantee them unfettered access to Facebook, Google and other sites, as well as ensure favorable terms and support. This is partly in response to the recent and much-needed reforms adopted by Twitter and Google that either eliminate or restrict how political campaigns can use their platforms—reforms that many in the politech industry dislike. Some campaign officials see FCC rules regulating political TV ads as an appropriate model on which to build policies for digital campaigning. That notion should be alarming to those who care about the role that money plays in politics, let alone the nature of today’s politics (as well as those who know the myriad failures of the FCC over the decades). The U.S. needs to develop a public policy for digital data and advertising that places the interests of the voter and of democracy before those of political campaigns. Such a policy should include protecting the personal information of voters; limiting deceptive and manipulative ad practices (such as lookalike modeling); and prohibiting those contemporary ad-tech practices (e.g., algorithm-based real-time programmatic ad systems) that can unfairly influence election outcomes. Also missing from the discussion is the impact of the never-ending expansion of “deep-personalization” digital marketing applications designed to influence and shift consumer behavior more effectively. 
The use of biodata, emotion recognition, and other forms of what’s being called “precision data”—combined with a vast expansion of always-on sensors operating in an Internet of Things world—will provide political groups with even more ways to help transform electoral outcomes. If civil society doesn’t take the lead in reforming this system, powerful insiders with their own conflicts of interest will be able to shape the future of democratic decision-making in the U.S. We cannot afford to leave it to the insiders to decide what is best for our democracy.
  • Press Release

    Popular Dating, Health Apps Violate Privacy

    Leading Consumer and Privacy Groups Urge Congress, the FTC, State AGs in California, Texas, Oregon to Investigate

    For Immediate Release: Jan. 14, 2020

    Contact: David Rosen, drosen@citizen.org, (202) 588-7742; Angela Bradbery, abradbery@citizen.org, (202) 588-7741

    WASHINGTON, D.C. – Nine consumer groups today asked the Federal Trade Commission (FTC), congressional lawmakers and the state attorneys general of California, Texas and Oregon to investigate several popular apps available in the Google Play Store. A report released today by the Norwegian Consumer Council (NCC) alleges that the apps are systematically violating users’ privacy. The report found that 10 well-known apps – Grindr, Tinder, OkCupid, Happn, Clue, MyDays, Perfect365, Qibla Finder, My Talking Tom 2 and Wave Keyboard – are sharing information they collect on users with third-party advertisers without users’ knowledge or consent. The European Union’s General Data Protection Regulation forbids sharing information with third parties without users’ knowledge or consent. When it comes to drafting a new federal privacy law, American lawmakers cannot trust input from companies that do not respect user privacy, the groups maintain. Congress should use the findings of the report as a roadmap for a new law that ensures that the flagrant privacy violations found in the EU are not acceptable in the U.S. The new report alleges that these apps (and likely a great many others) are allowing commercial third parties to collect, use and share sensitive consumer data in a way that is hidden from the user and involves parties that the consumer neither knows about nor is familiar with. Although consumers can limit some tracking on desktop computers through browser settings and extensions, the same cannot be said for smartphones and tablets. 
As consumers use their smartphones throughout the day, the devices are recording information about sensitive topics such as their health, behavior, religion, interests and sexuality. “Consumers cannot avoid being tracked by these apps and their advertising partners because they are not provided with the necessary information to make informed choices when launching the apps for the first time. In addition, consumers are unable to make an informed choice because the extent of tracking, data sharing, and the overall complexity of the adtech ecosystem is hidden and incomprehensible to average consumers,” the letters sent to lawmakers and regulators warn. The nine groups are the American Civil Liberties Union of California, Campaign for a Commercial-Free Childhood, the Center for Digital Democracy, Consumer Action, Consumer Federation of America, Consumer Reports, the Electronic Privacy Information Center (EPIC), Public Citizen and U.S. PIRG. In addition to calling for an investigation, the groups are calling for a strong federal digital privacy law that includes a new data protection agency, a private right of action and strong enforcement mechanisms. Below are quotes from groups that signed the letters:

“Every day, millions of Americans share their most intimate personal details on these apps, upload personal photos, track their periods and reveal their sexual and religious identities. But these apps and online services spy on people, collect vast amounts of personal data and share it with third parties without people’s knowledge. Industry calls it adtech. We call it surveillance. We need to regulate it now, before it’s too late.” Burcu Kilic, digital rights program director, Public Citizen

“The NCC’s report makes clear that any state or federal privacy law must provide sufficient resources for enforcement in order for the law to effectively protect consumers and their privacy. 
We applaud the NCC’s groundbreaking research on the adtech ecosystem underlying popular apps and urge lawmakers to prioritize enforcement in their privacy proposals.” Katie McInnis, policy counsel, Consumer Reports

“U.S. PIRG is not surprised that U.S. firms are not complying with laws giving European consumers and citizens privacy rights. After all, the phalanx of industry lobbyists besieging Washington, D.C., has been very clear that its goal is simply to perpetuate a 24/7/365 surveillance capitalism business model, while denying states the right to protect their citizens better and denying consumers any real rights at all.” Ed Mierzwinski, senior director for consumer programs, U.S. PIRG

“This report reveals how the failure of the U.S. to enact effective privacy safeguards has unleashed an out-of-control and unaccountable monster that swallows up personal information in the EU and elsewhere. The long unregulated business practices of digital media companies have shredded the rights of people and communities to use the internet without fear of surveillance and manipulation. U.S. policymakers have been given a much-needed wake-up call by Norway: the enactment of laws that bring meaningful change to the now lawless digital marketplace is long overdue.” Jeff Chester, executive director, Center for Digital Democracy

“For those of us in the U.S., this research by our colleagues at the Norwegian Consumer Council completely debunks the argument that we can protect consumers’ privacy in the 21st century with the old notice-and-opt-out approach, which some companies appear to be clinging to in violation of European law. 
Business practices have to change, and the first step to accomplish that is to enact strong privacy rights that government and individuals can enforce.” Susan Grant, director of consumer protection and privacy, Consumer Federation of America

“The illuminating report by our EU ally the Norwegian Consumer Council highlights just how impossible it is for consumers to have any meaningful control over how apps and advertising technology players track and profile them. That’s why Consumer Action is pressing for comprehensive U.S. federal privacy legislation and subsequent strong enforcement efforts. Enough is enough already! Congress must protect us from ever-encroaching privacy intrusions.” Linda Sherry, director of national priorities, Consumer Action

“For families who wonder what they’re trading off for the convenience of apps like these, this report makes the answer clear. These companies are exploiting us – surreptitiously collecting sensitive information and using it to target us with marketing. It’s urgent that Congress pass comprehensive legislation which puts the privacy interests of families ahead of the profits of businesses. Thanks to our friends at the Norwegian Consumer Council for this eye-opening research.” David Monahan, campaign manager, Campaign for a Commercial-Free Childhood

“This report highlights the pervasiveness of corporate surveillance and the failures of the FTC notice-and-choice model for privacy protection. Congress should pass comprehensive data protection legislation and establish a U.S. Data Protection Agency to protect consumers from the privacy violations of the adtech industry.” Christine Bannan, consumer protection counsel, EPIC
  • Contact: Jeff Chester, CDD (jeff@democraticmedia.org; 202-494-7100); David Monahan, CCFC (david@commercialfreechildhood.org; 617-896-9397)

Groups Praise Sen. Markey and Google for Ensuring Children on YouTube Receive Key Safeguards

BOSTON, MA & WASHINGTON, DC—December 18, 2019—The organizations that spurred the landmark FTC settlement with Google over COPPA violations applauded the announcement of additional advertising safeguards for children on YouTube today. The Campaign for a Commercial-Free Childhood (CCFC) and the Center for Digital Democracy (CDD) commended Google for announcing it would apply most of its robust marketing protections from YouTube Kids, including no advertising of food or beverages or harmful products, to all child-directed content on its main YouTube platform. The groups also lauded Senator Markey for securing a public commitment from Google to implement these long-overdue safeguards. The advocates expressed disappointment, however, that Google did not agree to prohibit paid influencer marketing and product placement to children on YouTube as it does on YouTube Kids.

“Sen. Ed Markey has long been and remains the champion for kids,” said Jeff Chester, CDD’s executive director. “Through the intervention of Sen. Markey, Google has finally committed to protecting children whether they are on the main YouTube platform or using the YouTube Kids app. Google has acted responsibly in announcing that its advertising policies now prohibit any food and beverage marketing on YouTube Kids, as well as ads involving ‘sexually suggestive, violent or dangerous content.’ However, we remain concerned that Google may try to weaken these important child- and family-friendly policies in the near future. Thus we call on Google to commit to keeping these rules in place, and to implement other needed safeguards that children deserve,” added Chester. 
Josh Golin, Executive Director of CCFC, said, “We are so grateful to Senator Markey for his leadership on one of the most crucial issues faced by children and families today. And we commend Google for implementing a robust set of advertising safeguards on the most popular online destination for children. We urge Google to take another critical step and prohibit child-directed influencer content on YouTube; if this manipulative marketing isn’t allowed on children’s TV or YouTube Kids, it shouldn’t be targeted to children on the main YouTube platform either.” ###
  • In the aftermath of Google’s settlement with the FTC over its COPPA violations, some independent content producers on YouTube have expressed unhappiness with the decision. They are unclear how to comply with COPPA, and believe their revenue will diminish considerably. Some also worry that Google’s recently announced system for meeting the FTC settlement—under which producers must identify whether their content is child-directed—will affect their overall ability to “monetize” their productions even if they aren’t aiming primarily to serve a child audience. These YouTubers have focused their frustration on the FTC and have mobilized to file comments in the current COPPA proceedings. As Google has rolled out its new requirements, it has abetted this misdirected focus on the FTC and created much confusion and panic among YouTube content producers. Ultimately, their campaign, designed to weaken the lone federal law protecting children’s privacy online, could create even more violations of children’s privacy. While we sympathize with many of the YouTubers’ concerns, we believe their anger and sole focus on the FTC is misplaced. It is Google that is at fault here, and it needs finally to own up and step up. The truth is, Google’s YouTube has violated the 2013 COPPA rule pretty much since the rule’s inception. The updated rule made it illegal to collect persistent identifiers from children under 13 without parental consent. Google did so while purposefully developing YouTube as the leading site for children. It encouraged content creators to go all in and to be complicit in the fiction that YouTube is only for those aged 13 and above. Even though Google knew that this new business model was a violation of the law, it benefitted financially by serving personalized ads to children (and especially by creating the leading online destination for children in the U.S. and worldwide). 
All the while, small, independent YouTube content creators built their livelihoods on this illegitimate revenue stream. The corporate content brand channels of Hasbro, Mattel and the like, which do not rely on YouTube revenue, as well as corporate advertisers, also benefitted handsomely from this arrangement, which allowed them to market to children unencumbered by COPPA regulations. But let’s review further how Google is handling the post-settlement world. Google chose to structure the solution to its own COPPA violation in a way that continues to place the burden and consequences of COPPA compliance on independent content creators. Rather than acknowledging wrongdoing and culpability in the plight of content creators who built their livelihoods on the sham that Google had created, Google produced an instructional video for content creators that emphasizes the consequences of non-compliance and the potential negative impact on the creators’ ability to monetize. It also appears to have scared those who do not create “for kids” content. Google requires content creators to self-identify their content as “for kids,” and it will use automated algorithms to detect and flag “for kids” content. Google appears to have provided little useful information to content providers on how to comply, and confusion now seems rampant. Some YouTubers also fear that the automated flagging of content is a blunt instrument “based on oblique overly broad criteria.” Also, Google declared that content designated as “for kids” will no longer serve personalized ads. The settlement and Google’s implementation are designed to assume the least risk for Google while maximizing its monetary benefits. Google will start limiting the data it collects on “made for kids” content – something it should have done a long time ago, obviously. As a result, Google said it will no longer show personalized ads on such content. 
However, the incentives for content creators to self-identify as “for kids” are not great, given that disabling behavioral ads “may significantly reduce your channel’s revenue.” Although Google declares that it is “committed to help you with this transition,” it has shown no willingness to reduce its own significant cut of the ad revenue when it comes to children’s content. While child-directed content creators have strong incentives to mislabel their content, and Google has equally strong incentives to encourage them in this subterfuge, the consequences for non-compliance now rest squarely with content creators alone. Let’s be clear here. Google should comply with COPPA as soon as possible where content is clearly child-directed. Google has already developed a robust set of safeguards and policies on YouTube Kids to protect children from advertising for harmful products and from exploitative influencer marketing. It should apply the same protections to all child-directed content, regardless of which YouTube platform kids are using. When CCFC and CDD filed our COPPA complaint in 2018, we focused on how Google was shirking its responsibilities under the law by denying that portions of YouTube were child-directed (and thus governed by COPPA). The channels we cited in our complaint were not gray-area channels that might be child-attractive but also draw lots of teen and adult viewers. Our complaint discussed such channels as Little Baby Bum, ChuChu TV Nursery Rhymes and Kids Songs, and Ryan’s Toy Reviews. We did not ask the FTC to investigate or sanction any channel owners, because Google makes the rules on YouTube, particularly with regard to personal data collection and use, and therefore it was the party that chose to violate COPPA. (Many independent content creators concur indirectly when they say that they should not be held accountable under COPPA. 
They maintain that they actually don’t have access to detailed audience data and do not know whether their YouTube audience is under 13 at all. Google structures what data they have access to.) For other content in the so-called “gray zone”—such as content for general audiences that children under 13 also watch, or content that cannot be easily classified—we need more information about Google’s internal data practices. Do content creators have detailed access to demographic audience data, and are they thus accountable, or does Google hold on to that data? Should accountability for COPPA compliance be shifted more appropriately to Google? Can advertising restrictions be applied at the user level once a user is identified as likely to be under thirteen, regardless of what content they watch? We need Google to open up its internal processes, and we are asking the FTC to develop rules that share accountability appropriately between Google and its content creators. The Google settlement has been a significant victory for children and their parents. For the first time, Google has been forced to take COPPA seriously, a U.S. law that was passed by Congress to express the will of the majority of the electorate. Of course, the FTC is also complicit in this problem, as it waited six years to enforce the updated law. It watched Google’s COPPA violations increase over time, allowing a monster to grow. What’s worse, the quality of the kids’ content on YouTube was, to most observers, particularly parents, more than questionable, and at times it even placed children seriously at risk. What parents saw in the offering for their children was quantity rather than quality. Now, however, after six years, the FTC is finally requiring Google and creators to abide by the law. Just like that. Still, this change should not come as a complete surprise to content creators. 
We sympathize with the independent YouTube creators and understand their frustration, but they have been complicit in this arrangement as well. The children’s digital entertainment industry has discussed compliance with COPPA for years behind closed doors, and many knew that YouTube was not in compliance with COPPA. The FTC has failed to address the misinformation that Google is propagating among content creators, its recent guidance notwithstanding. Moreover, the FTC has allowed Google to settle its COPPA violation by developing a solution that lets Google abdicate responsibility for COPPA compliance while continuing to maximize revenue. It’s time for the FTC to better study Google’s data practices and capabilities, and to put the onus squarely on Google to comply with COPPA. As a result of the current COPPA proceedings, rules must be put in place to hold platforms like YouTube accountable.
  • SUBJECT: CCFC and CDD statement on today’s YouTube inquiry by Senator Markey

Campaign for a Commercial-Free Childhood and the Center for Digital Democracy, whose complaint led to the FTC settlement that requires YouTube to change its practices to comply with federal children’s privacy law, applaud Senator Ed Markey for writing to Google to inquire about YouTube’s child-directed advertising practices.

“To its credit, Google has developed a robust set of safeguards and policies on YouTube Kids to protect children from advertising for harmful products and exploitative influencer marketing. Now that Google has been forced to abandon the fiction that the main YouTube platform is exclusively for ages 13 and up, it should apply the same protections on all child-directed content, regardless of which YouTube platform kids are using.” Josh Golin, Campaign for a Commercial-Free Childhood

“Google should treat all children fairly on YouTube and apply the same set of advertising and content safeguards it has especially developed for YouTube Kids. When young people view child-directed programming on YouTube, they should also be protected from harmful and unfair practices such as ‘influencer’ marketing, exposure to ‘dangerous’ material, violent content, and exposure to food and beverage marketing.” Jeff Chester, Center for Digital Democracy
  • Press Release

    Grading Digital Privacy Proposals in Congress

    Which digital privacy proposals in Congress make the grade?

    Subject: Which digital privacy proposals in Congress make the grade? Nov. 21, 2019 Contact: David Rosen, drosen@citizen.org, (202) 588-7742 Susan Grant, sgrant@consumerfed.org, (202) 387-6121 Caitriona Fitzgerald, fitzgerald@epic.org, (617) 945-8409 Katharina Kopp, kkopp@democraticmedia.org, (202) 836-4621 Campaign for a Commercial-Free Childhood · Center for Digital Democracy · Color of Change · Consumer Federation of America · Consumer Action · Electronic Privacy Information Center · Parent Coalition for Student Privacy · Privacy Rights Clearinghouse · Public Citizen · U.S. PIRG NOTE TO REPORTERS Grading Digital Privacy Proposals in Congress When it comes to digital privacy, we’re facing an unprecedented crisis. Tech giants are spying on our families and selling the most intimate details about our lives for profit. Bad actors, both foreign and domestic, are targeting personal data gathered by U.S. companies – including our bank details, email messages and Social Security numbers. Algorithms used to determine eligibility for jobs, housing, credit, insurance and other life necessities are having disparate, discriminatory impacts on disadvantaged groups. We need a new approach. Consumer, privacy and civil rights groups are encouraged by some of the bills that recently have been introduced in Congress, many of which follow recommendations in the groups’ Framework for Comprehensive Privacy Protection and Digital Rights in the United States. 
The framework calls for baseline federal privacy legislation that:
- Has a clear and comprehensive definition of personal data;
- Establishes an independent data protection agency;
- Establishes a private right of action allowing individuals to enforce their rights;
- Establishes individual rights to access, control and delete data;
- Puts meaningful privacy obligations on companies that collect personal data;
- Requires the establishment of algorithmic governance to advance fair and just data practices;
- Requires companies to minimize privacy risks and minimize data collection;
- Prohibits take-it-or-leave-it or pay-for-privacy terms;
- Limits government access to personal data; and
- Does not preempt stronger state laws.

Three bills attained the highest marks in the recent Privacy Legislation Scorecard compiled by the Electronic Privacy Information Center (EPIC):
- The Online Privacy Act (H.R. 4978), introduced by U.S. Reps. Anna Eshoo (D-Calif.) and Zoe Lofgren (D-Calif.), takes a comprehensive approach and is the only bill that calls for a U.S. Data Protection Agency. The bill establishes meaningful rights for individuals and clear obligations for companies. It does not preempt state law, but it lacks explicit anti-preemption language, which would make it more effective.
- The Mind Your Own Business Act (S. 2637), introduced by U.S. Sen. Ron Wyden (D-Ore.), requires companies to assess the impact of the automated systems they use to make decisions about consumers and how well their data protection mechanisms are working. It has explicit anti-preemption language and holds companies accountable when they fail to protect privacy. The private right of action should be broader, and the bill needs clear limits on data uses.
- The Privacy Rights for All Act (S. 1214), introduced by U.S. Sen.
Ed Markey (D-Mass.), has important provisions minimizing data collection and delinking user identities from collected data, and prohibits bias and discrimination in automated decision-making. It also includes a strong private right of action and bans forced arbitration for violations. It does not preempt state law, but it lacks explicit anti-preemption language, which would make it more effective. Two bills are plainly anti-privacy. The Information Transparency & Personal Data Control Act (H.R. 2013), introduced by U.S. Rep. Suzan DelBene (D-Wash.), falls woefully short. It provides few protections for individuals, contains overly broad exemptions and preempts stronger state laws. The Balancing the Rights of Web Surfers Equally and Responsibly (BROWSER) Act (S. 1116), introduced by U.S. Sen. Marsha Blackburn (R-Tenn.), is based on the old, ineffective take-it-or-leave-it terms of use model, does not allow agency rulemaking, is weak on enforcement and preempts state laws. Both are bad, anti-privacy bills. Future federal privacy bills must make the grade. Additional privacy bills are expected to be introduced by U.S. Sen. Maria Cantwell (D-Wash.) and U.S. Rep. Jan Schakowsky (D-Ill.). Separately, U.S. Sens. Richard Blumenthal (D-Conn.), Roger Wicker (R-Miss.) and Josh Hawley (R-Mo.) may release their own bills. These leaders should strive to meet the standards that the framework lays out. Baseline privacy legislation must not preempt stronger state protections and laws – such as the California Consumer Privacy Act that takes effect in 2020, biometric data protection laws such as those in Illinois and Texas, and data breach notification laws that exist in every state. States must be allowed to continue serving as “laboratories of democracy,” pioneering innovative new protections to keep up with rapidly changing technologies. 
In addition, federal privacy legislation must include a strong private right of action – a crucial tool consumers need to enforce their rights and change the behavior of powerful corporations – and establish safeguards against data practices that lead to unjust, unfair, manipulative and discriminatory outcomes. For more information, see these fact sheets. Please contact any of the individuals listed above to speak with an expert. ###
  • Big Tech companies are lobbying to undermine the only federal online privacy law in the US – one which protects children--and we need your help to stop them. Along with the Campaign for a Commercial-Free Childhood (CCFC), we ask for your help to urge the Federal Trade Commission to strengthen—not weaken—the Children’s Online Privacy Protection Act (COPPA). Please sign this petition because your voice is essential to a future where children’s privacy is protected from marketers and others. Take action to protect the privacy of children now and in the future! Commercialfreechildhood.org/coppa
  • Press Release

    Will the FTC Weaken Children’s Privacy Rules?

    Invited Advocates Raise Concerns About Upcoming COPPA Workshop, Plans to Undermine Federal Protections for Kids; October 7 D.C. lineup dominated by tech industry supporters

    Contact: David Monahan, CCFC (david@commercialfreechildhood.org; 617-896-9397) Jeff Chester, CDD (jeff@democraticmedia.org; 202-494-7100) Will the FTC Weaken Children’s Privacy Rules? Invited Advocates Raise Concerns About Upcoming COPPA Workshop, Plans to Undermine Federal Protections for Kids October 7 D.C. lineup dominated by tech industry supporters WHAT: The Future of the Children’s Online Privacy Protection Act Rule (COPPA): An FTC Workshop WHEN: October 7, 2019, 9:00 am ET WHERE: Constitution Center, 400 7th St SW, Washington, DC WORKSHOP PRESENTERS FOR CAMPAIGN FOR A COMMERCIAL-FREE CHILDHOOD (CCFC) AND CENTER FOR DIGITAL DEMOCRACY (CDD): THE CHALLENGE: In 2012, the FTC approved new safeguards to protect children’s privacy in the digital era, heeding the advice of child advocates, consumer groups, privacy experts and health professionals. But now the Commission has called for comments on COPPA three years before a new review is mandated by statute. The questions posed by the Commission, as well as public comments made by FTC staff, make privacy advocates wary that the FTC’s goal is to roll back COPPA safeguards rather than strengthen protections for children. Concerns about the FTC creating new loopholes or supporting industry calls to weaken the rules are heightened by the FTC’s speaker list for this workshop, replete with tech and marketing companies and their lawyers and lobbyists, with just a few privacy and children’s advocates at the table. The advocates are also concerned that the FTC is contemplating this action just weeks after its most significant COPPA enforcement action to date—requiring major changes to Google’s data collection practices on YouTube—a move that could result in rules being changed before those new practices have even been implemented. Children and families need increased COPPA enforcement, not weaker rules. 
The key problems, the advocates note, are the lack of enforcement of the law by the FTC; the failure of the agency to protect children from unfair marketing practices, such as influencers; and the need to maintain the strongest possible safeguards—whether in the home, school or on mobile devices. Speakers at the workshop include: Josh Golin, Executive Director, CCFC Will participate in a panel entitled Scope of the COPPA Rule. Katharina Kopp, Ph.D., Deputy Director, Director of Policy, CDD Will participate in a panel entitled Uses and Misuses of Persistent Identifiers. Laura M. Moy, Associate Professor of Law, Director, Communications & Technology Law Clinic, Georgetown University Law Center Will participate in a panel entitled State of the World in Children’s Privacy. Josh, Katharina, and Laura are available for questions in advance of the workshop, and will also be available to speak with press on site. See video of the Future of COPPA Workshop here: https://www.ftc.gov/news-events/audio-video/video/future-coppa-rule-ftc-... ###
  • CDD, EPIC, USPIRG Opposition to Google/Doubleclick "Big Data" Merger

    2007 FTC filings: an example of groups calling for antitrust, privacy, and other safeguards for the digital marketplace

    Working closely with the Electronic Privacy Information Center (epic.org) and US PIRG, CDD led a campaign to oppose the acquisition of Doubleclick by Google. CDD opposed the deal on privacy, consumer protection and competition grounds. We all foresaw what would happen if Google was allowed to swallow a leading digital marketing giant--more data collection, industry consolidation, and a weakening of consumer and privacy rights. It all happened, of course, in part because the FTC has never been able to effectively oversee the digital marketplace. Here are two of the filings from this case.
    Jeff Chester
  • I played a key role helping get the Children’s Online Privacy Protection Act (COPPA) passed by Congress in 1998 (when I was executive director of the Center for Media Education). Since then, I have tried to ensure that the country’s only federal law addressing commercial privacy online was taken seriously. That’s why it has been especially egregious to have witnessed Google violating COPPA for many years, as it deliberately developed YouTube as the leading site for children. Google disingenuously claimed in its terms of service that YouTube was only meant for those 13 and older, while it simultaneously unleashed programming and marketing strategies designed to appeal directly to kids. Google’s behavior sent a message that any powerful and well-connected corporation could ignore U.S. privacy law, even when that law was specifically designed to protect young people. In collaboration with our colleagues at the Campaign for Commercial-Free Childhood (CCFC), our attorneys at the Institute for Public Representation (IPR) at Georgetown University Law Center, and a broad coalition of consumer, privacy, public health and child rights groups, we began filing complaints at the FTC in 2015 concerning Google’s child-directed practices (on YouTube, its YouTube Kids app, and elsewhere). We also told top officials at the commission that Google was not abiding by COPPA, and repeatedly provided them documentation of Google’s child-directed business operations. CCFC, CDD and IPR kept up the pressure on the FTC, in Congress and with the news media (see attached, for example). For a variety of reasons, the FTC, under the leadership of Chairman Joe Simons, finally decided to take action. The result was last week’s decision—which in many ways is both historic and highly positive. 
Google was fined $170 million for its violations of children’s privacy, a record amount in terms of previous COPPA-connected financial sanctions. The FTC’s action also implemented important new policies protecting children: Children will no longer be targeted with data-driven marketing and advertising on YouTube programming targeted to kids: This is the most important safeguard. Google announced that starting around January 2020, there would no longer be any form of personalized “behavioral” marketing permitted on YouTube’s programming that targets children. The “Official” YouTube blog post explained that Google “will limit data collection and use on videos made for kids only to what is needed to support the operation of the service. We will also stop serving personalized ads on this content entirely….” Google will require video producers and distributors to self-identify that their content is aimed at kids; it also committed to “use machine learning to find videos that clearly target young audiences, for example those that have an emphasis on kids characters, themes, toys, or games.” Google also explained that child-directed programming on YouTube will receive an additional safeguard—it won’t permit any personalized targeting on its child-directed content. Google committed to make substantial investments in its YouTube Kids service: Google launched the YouTube Kids “app” in 2015, claiming it was “the first Google product built from the ground up with little ones in mind.” But the app never rivaled the main YouTube platform’s hold on children, and was plagued with a number of problems (such as harmful content). 
Now, as a result of the FTC investigation, Google announced that it will bring “the YouTube Kids experience to the desktop,” increase its promotion of the service to parents, and more effectively curate different programming that will appeal to more young people—with new tiers of content suitable for “Preschool (ages 4 & under); Younger (ages 5-7); and Older (ages 8-12).” Google created a $100 million fund for “quality kids, family and educational content.” This is another proposal CCFC and CDD made, and we are gratified Google acknowledged it bears responsibility to support programming that enriches the lives of children. This is to be a three-year program that is designed for “the creation of thoughtful, original children’s content on YouTube and YouTube Kids globally.” Google has made changes to make YouTube a “safer platform for children”: The company is proactively promoting “quality” children’s programming by revising the algorithm used to make recommendations. It is also not permitting comments and notifications on its YouTube child-directed content. There are questions that still need to be answered about how Google will implement these new policies. For example, will the company prohibit the data targeting of children on YouTube worldwide? (It should.) How will it treat programming classified as “family viewing”—will that be exempt from the new data targeting safeguards? (It should not be permitted to do so.) Will the new $100 million production fund commit to supporting child-directed non-commercial content (instead of serving as a venture investment strategy for Google to expand its marketing-to-kids plans)? Will Google ensure that its other child-directed commercial activities—such as its Play Store—also reflect the new safeguards the company has adopted for YouTube? 
Google also targets young people via so-called “influencers,” including videos where toys and other products are “unboxed.” Google needs to declare such content child-directed (and should refrain from these practices as well). CCFC, CDD and our allies intend to play a proactive role holding Google, its programmers, advertisers and the FTC accountable to make sure that these new policies are implemented effectively. Doing so is part of our commitment to ensuring that young people around the world grow up in a media environment that respects and promotes their health, privacy, and well-being.
    Jeff Chester
  • Contact: Jeff Chester, CDD (jeff@democraticmedia.org; 202-494-7100) David Monahan, CCFC (david@commercialfreechildhood.org; 617-896-9397) Advocates Who Filed the Privacy Complaint Against Google/YouTube Laud Improvements, But Say FTC Settlement Falls Far Short BOSTON, MA & WASHINGTON, DC—September 4, 2019—The advocates who triggered the Federal Trade Commission’s (FTC) investigation into YouTube’s violations of the Children’s Online Privacy Protection Act (COPPA) say the FTC’s settlement with Google will likely significantly reduce behavioral marketing to children on YouTube, but doesn’t do nearly enough to ensure children will be protected or to hold Google accountable. In April 2018, Campaign for a Commercial-Free Childhood (CCFC) and the Center for Digital Democracy (CDD), through their attorneys at Georgetown Law’s Institute for Public Representation (IPR), filed an FTC complaint detailing YouTube’s COPPA violations. Twenty-one other privacy and consumer groups signed on to CCFC and CDD’s complaint, which detailed how Google profits by collecting personal information from kids on YouTube, without first providing direct notice to parents and obtaining their consent as required by law. Google uses this information to target advertisements to children across the internet and across devices, in clear violation of COPPA. Today, the FTC and the New York Attorney General announced a settlement with Google, fining the company $170 million. The settlement also “requires Google and YouTube to develop, implement, and maintain a system that permits channel owners to identify their child-directed content on the YouTube platform so that YouTube can ensure it is complying with COPPA.” Content creators will be asked to disclose if they consider their videos to be child-directed; if they do, no behavioral advertising will be served to viewers of those videos. 
“We are pleased that our advocacy has compelled the FTC to finally address YouTube’s longstanding COPPA violations and that there will be considerably less behavioral advertising targeted to children on the number one kids’ site in the world,” said CCFC’s Executive Director Josh Golin. “But it’s extremely disappointing that the FTC isn’t requiring more substantive changes or doing more to hold Google accountable for harming children through years of illegal data collection. A plethora of parental concerns about YouTube – from inappropriate content and recommendations to excessive screen time – can all be traced to Google’s business model of using data to maximize watch time and ad revenue.” In a July 3, 2019 letter to the FTC, the advocates specifically warned that shifting the burden of COPPA compliance from Google and YouTube to content creators would be ineffective. The letter noted many children’s channels were unlikely to become COPPA compliant by turning off behavioral advertising, since Google warns that turning off these ads “may significantly reduce your channel’s revenue.” The letter also detailed Google’s terrible track record of ensuring COPPA compliance on its platforms; a 2018 study found that 57% of apps in the Google Play Store’s Designed for Families program were violating COPPA despite Google’s policy that apps in the program must be COPPA compliant. And as Commissioner Rebecca Slaughter wrote in her dissent, many children’s content creators are not U.S.-based and therefore are unlikely to be concerned about FTC enforcement. 
“We are gratified that the FTC has finally forced Google to confront its longstanding lie that it wasn’t targeting children on YouTube,” said CDD’s executive director Jeff Chester, who helped spearhead the campaign that led to the 1998 passage of COPPA. “However, we are very disappointed that the Commission failed to penalize Google sufficiently for its ongoing violations of COPPA and failed to hold Google executives personally responsible for the roles they played. A paltry financial penalty of $170 million—from a company that earned nearly $137 billion in 2018 alone—sends a signal that if you are a politically powerful corporation, you do not have to fear any serious financial consequences when you break the law. Google made billions off the backs of children, developing a host of intrusive and manipulative marketing practices that take advantage of their developmental vulnerabilities. More fundamental changes will be required to ensure that YouTube is a safe and fair platform for young people.” Echoing Commissioner Rohit Chopra’s dissent, the advocates noted that unlike smaller companies sanctioned by the FTC, Google was not forced to pay a penalty larger than its “ill-gotten gains.” In fact, with YouTube earning a reported $750 million annually from children’s content alone, the $170 million fine amounts to less than three months of advertising revenue from kids’ videos. With a maximum fine of $41,484 per violation, the FTC easily could have sought a fine in the tens of billions of dollars. “I am pleased that the FTC has made clear that companies may no longer avoid complying with COPPA by claiming their online services are not intended for use by children when they know that many children in fact use their services,” said Angela Campbell, Director Emeritus of IPR’s Communications and Technology Clinic at Georgetown Law, which researched and drafted the complaint. 
Campbell, currently chair of CCFC’s Board, served as lead counsel to CCFC and CDD on the YouTube and other complaints alleging COPPA violations. She, along with Chester, was responsible for filing an FTC complaint in 1996 against a child-directed website that led to Congress’s passage of COPPA in 1998. COPPA gave the FTC expanded authority to implement and enforce the law, for example, by including civil penalties. About the proposed settlement, Campbell noted: “It’s disappointing that the FTC has not fully used its existing authority to hold Google and YouTube executives personally liable for adopting and continuing to utilize a business model premised on ignoring children’s privacy protection, to adopt a civil penalty substantial enough to deter future wrongdoing, or to require Google to take responsibility for ensuring that children’s content on YouTube platforms complies with COPPA.” On the heels of a sweetheart settlement with Facebook, the advocates said the deal with Google was further proof the FTC wasn’t up to the task of protecting consumers’ privacy. Said Campbell, “I support Commissioner Slaughter’s call for state attorneys general to step up and hold Google accountable.” 
Added Chester, “The commission’s inability to stop Google’s cynically calculated defiance of COPPA underscores why Congress must create a new consumer watchdog that will truly protect Americans’ privacy.” Organizations which signed on to the CCFC/CDD 2018 FTC complaint were Berkeley Media Studies Group; Center for Media Justice; Common Sense; Consumer Action; Consumer Federation of America; Consumer Federation of California; Consumers Union, the advocacy division of Consumer Reports; Consumer Watchdog; Corporate Accountability; Defending the Early Years; Electronic Privacy Information Center (“EPIC”); New Dream; Obligation, Inc.; Parent Coalition for Student Privacy; Parents Across America; Parents Television Council; Privacy Rights Clearinghouse; Public Citizen; The Story of Stuff Project; TRUCE (Teachers Resisting Unhealthy Childhood Entertainment); and USPIRG. ###
  • Press Statement Google YouTube FTC COPPA Settlement Statement of Katharina Kopp, Ph.D. Deputy Director Center for Digital Democracy August 30, 2019 It has been reported that Google has agreed to pay between $150 million and $200 million to resolve an FTC investigation into YouTube over alleged violations of a children's privacy law. A settlement amount of $150-200 million would be woefully low, considering the egregious nature of the violation, how much Google profited from violating the law, and given Google’s size and revenue. Google’s unprecedented violation requires an unprecedented FTC response. A small amount like this would effectively reward Google for engaging in massive and illegal data collection without any regard to children’s safety. In addition to assessing substantial civil penalties, the FTC must enjoin Google from committing further violations of COPPA and impose effective means for monitoring compliance; the FTC must impose a 20-year consent decree to ensure Alphabet Inc. acts responsibly when it comes to serving children and parents. ------ In April 2018, the Center for Digital Democracy (CDD) and the Campaign for Commercial-Free Childhood (CCFC), through their attorneys at Georgetown Law’s Institute for Public Representation (IPR), filed an FTC complaint detailing YouTube’s COPPA violations. Twenty-one other privacy and consumer groups signed on to CCFC and CDD’s complaint, which detailed how Google profits by collecting personal information from kids on YouTube, without first providing direct notice to parents and obtaining their consent as required by law. Google uses this information to target advertisements to children across the internet and across devices, in clear violation of COPPA.
  • Blog

    CDD Memo to FTC on Facebook Consent Decree Violations--2013

    FTC has long ignored how the market operates; it still does in 2019

  • News

    Groups Join Legal Battle to Fight Ineffective FTC Privacy Decision on Facebook

    Statements from Campaign for Commercial-Free Childhood, CDD, Color of Change, Common Sense Media, Consumer Action, Consumer Federation of America, Open Markets, Public Citizen, USPIRG

    FOR RELEASE July 26, 2019 Consumer Privacy Organizations to Challenge Facebook Settlement Statement from Groups --------- “The Settlement Fails to Provide Meaningful Relief to Facebook Users” WASHINGTON, DC – Many of the nation’s leading consumer privacy organizations are urging a federal court in Washington, DC to consider public comments before finalizing a proposed settlement between the Federal Trade Commission and Facebook. “The Facebook settlement is both historic and controversial. Many believe the FTC failed to establish meaningful safeguards for consumer privacy. We believe the court overseeing the case should consider the views of interested parties,” said Marc Rotenberg, President of the Electronic Privacy Information Center. Under the terms of the settlement, Facebook will pay a record-breaking $5 billion fine to the United States Treasury, but there will be no significant changes in Facebook’s business practices and the FTC will release all pending complaints against the company. Typically in a proposed FTC settlement, the public would be provided an opportunity to provide comments to the agency before finalizing the deal. But no such opportunity was provided in the Facebook settlement. Many of the organizations that are joining the effort have also filed detailed complaints with the Federal Trade Commission, alleging that Facebook has violated privacy laws, including the Children’s Online Privacy Protection Act. A Freedom of Information Act case revealed that there are more than 26,000 complaints against Facebook currently pending at the Commission. In a similar case in 2012, the privacy group Consumer Watchdog challenged the FTC settlement with Google regarding the Safari hack. In other consumer privacy cases, courts have created opportunities for interested parties to file papers and be heard prior to a final determination on a proposed settlement. The case is In the Matter of Facebook, No. 19-cv-2184 (D.D.C. 
Filed July 24, 2019). EPIC filed with the court today: https://epic.org/2019/07/epic-challenges-ftc-facebook-s.html Statements of Support: Brandi Collins-Dexter, Senior Campaign Director, Color of Change: “Despite the large price tag, the FTC settlement provides no meaningful changes to Facebook’s structure or financial incentives. It allows Facebook to continue to set its own limits on how much user data it can collect and it gives Facebook immunity for unspecified violations. The public has a right to know what laws Facebook violated. Corporations should face consequences for violating the public trust, not be given a rubber stamp to carry out business as usual. This settlement limits the ability of Black users to challenge Facebook’s misuse of their data and force real accountability, which is why the courts must review the fairness of this settlement.” Susan Grant, Director of Consumer Protection and Privacy, Consumer Federation of America: “The FTC’s settlement with Facebook sells consumers short by failing to change the company’s mass surveillance practices and wiping away other complaints that deserved to be addressed. It needs to be stronger to truly protect our privacy.” Linda Sherry, Director of National Priorities, Consumer Action: “The FTC’s pending Facebook settlement does not take adequate measures to limit the collection and sharing of consumers’ personal information, but appears to provide the company with extensive protections from even future violations. Consumer Action respectfully urges the court to consider positions from interested parties who have related complaints filed with the FTC to ensure that the most fair and comprehensive agreement is approved.” Sally Hubbard, Director of Enforcement Strategy, Open Markets: “The FTC’s settlement is woefully insufficient in light of Facebook’s persistent privacy violations. The fine is a mere cost of doing business that makes breaking the law worth it for Facebook. 
Remedies must curb Facebook’s widespread data collection and promote competition. Otherwise Facebook will continue to fortify its monopoly power by surveilling users both on Facebook and off, and users can’t vote with their feet when Facebook violates their privacy. The public must have the opportunity to be heard on this negligent settlement.” Robert Weissman, President, Public Citizen: “The FTC’s settlement amounts to Facebook promising yet again to adhere to its own privacy policy, while reserving the right to change that policy at any time. That approach will fail to protect users’ privacy. The court should reject the settlement and order the FTC to try again and do better.” Josh Golin, Executive Director, Campaign for Commercial-Free Childhood: “Facebook has been exploiting kids for years, and this proposed settlement is essentially a get-out-of-jail-free card. It potentially extinguishes our children’s privacy complaints against Facebook, but offers absolutely no protections for kids’ privacy moving forward. It also sweeps under the rug a complaint detailing how Facebook knowingly and intentionally tricked kids into spending money on mobile games over several years, sometimes to the tune of thousands of dollars per child.” James P. Steyer, CEO and Founder of Common Sense Media: “On behalf of families across the country, Common Sense fully stands behind EPIC’s motion. The proposed settlement is a ‘get out of jail free’ card for Facebook, purporting to absolve Facebook not only of liability for privacy abuses but for other -- completely unaddressed and unexplored -- Section 5 abuses. One such abuse that the FTC is aware of and that court documents confirm includes tricking kids into making in-app purchases that have put families out hundreds and even thousands of dollars—something the company has yet to meaningfully change its policies on to this day. Such a broad release is unprecedented, unjustified and unacceptable.” 
Edmund Mierzwinski, Senior Director for Federal Consumer Programs, U.S. PIRG: “This laughable $5 billion settlement with the category-killer social media giant Facebook makes the much smaller Equifax settlement for sloppy security look harsh. Facebook intentionally collects and shares an ever-growing matrix of information about consumers, their friends and their interests in a mass surveillance business model. It routinely changes its previous privacy promises without consent. It doesn’t adequately audit its myriad business partners. The FTC essentially said to Facebook: ‘Pay your parking ticket, but don’t ever change. Your fast-and-loose practices are okay with 3 of the 5 of us.’ Not changing those practices will come back to haunt the FTC, consumers and the world.”

Jeff Chester, Executive Director, Center for Digital Democracy: “The 3-2 Facebook decision by the FTC leaves millions of Americans vulnerable to all the problems unleashed by the Cambridge Analytica scandal. The commission adopted a woefully inadequate remedy that does nothing to stem the fundamental loss of user privacy, which led to our original 2009 complaint.”
  • Press Release

    FTC Fails to Protect Privacy in Facebook Decision

    Instead of serious structural and behavioral change, the 3-2 deal is a huge giveaway. By dismissing all other claims, Simons’ FTC does a disservice to the public.

    Statement of Jeff Chester, executive director, Center for Digital Democracy. CDD helped bring the 2009 FTC complaint that is the subject of today’s decision on the Consent Order.

Once again, the Federal Trade Commission has shown itself incapable of protecting the privacy of the public and preventing ongoing consumer harms. Today’s announcement of a fine and -- yet again! -- an improved system of internal compliance and other auditing controls doesn’t address the fundamental problems.

First, the FTC should have required Facebook to divest both its Instagram and WhatsApp platforms. By doing so, the commission would have prevented the tremendous expansion of Facebook’s data gathering activities that is now certain to follow. By failing to require this corporate break-up, the FTC has set the stage for “Groundhog Day” violations of privacy for years to come.

The FTC should also have insisted that an independent panel of experts -- consumer groups, data scientists, civil rights groups, etc. -- be empaneled to review all the company’s data-related products and to decide which ones are to be modified, eliminated, or allowed to continue (such as lookalike modeling, the role of influencers, cross-device tracking, etc.). This group should have been given the authority to review all new products proposed by the company for a period of at least five years.

What was needed here was a serious change in the corporate culture, along with serious structural remedies, if the FTC really wanted to ensure that Facebook would act more responsibly in the future. The dissents by Commissioners Chopra and Slaughter illustrate that the FTC majority could have taken another path, instead of supporting a decision that will ultimately enable the problems to continue. Today’s decision also dismisses all other complaints and requests for investigation related to Facebook’s consent decree failures -- a huge giveaway.
The FTC should be replaced by a new data protection agency to protect privacy. The commission has repeatedly demonstrated that--regardless of who is in charge--it is incapable of confronting the most powerful forces that undermine our privacy--and digital rights.