CDD


  • Press Release

    Against surveillance-based advertising

    CDD joins an international coalition of more than 50 NGOs and scholars calling on the EU to ban surveillance-based advertising in its Digital Services Act, and on the U.S. to enact a federal digital privacy and civil rights law

    International coalition calls for action against surveillance-based advertising

    Every day, consumers are exposed to extensive commercial surveillance online. This leads to manipulation, fraud, discrimination and privacy violations. Information about what we like, our purchases, our mental and physical health, sexual orientation, location and political views is collected, combined and used under the guise of targeted advertising.

    In a new report, the Norwegian Consumer Council (NCC) sheds light on the negative consequences that this commercial surveillance has on consumers and society. Together with [XXX] organizations and experts, the NCC is asking authorities on both sides of the Atlantic to consider a ban. In Europe, the upcoming Digital Services Act can lay the legal framework to do so. In the US, legislators should seize the opportunity to enact comprehensive privacy legislation that protects consumers.

    “The collection and combination of information about us not only violates our right to privacy, but renders us vulnerable to manipulation, discrimination and fraud. This harms individuals and society as a whole,” says Finn Myrstad, director of digital policy at the NCC.

    In a Norwegian population survey conducted by YouGov on behalf of the NCC, consumers clearly state that they do not want commercial surveillance. Only one in ten respondents was comfortable with commercial actors collecting personal information about them online, and only one in five thought that ads based on personal information are acceptable.

    “Most of us do not want to be spied on online, or to receive ads based on tracking and profiling. These results mirror similar surveys from Europe and the United States, and should be a powerful signal to policymakers looking at how to better regulate the internet,” Myrstad says.

    Policymakers and civil society organisations on both sides of the Atlantic are increasingly standing up against these invasive practices.
For example, the European Parliament and the European Data Protection Supervisor (EDPS) have already called for phasing out and banning surveillance-based advertising. A coalition of consumer and civil rights organizations in the United States has called for a similar ban.

Significant consequences

The NCC report “Time to ban surveillance-based advertising” exposes a variety of harmful consequences that surveillance-based advertising can have on individuals and on society:

1. Manipulation. Companies with comprehensive and intimate knowledge about us can shape their messages in attempts to reach us when we are susceptible, for example to influence elections or to advertise weight-loss products, unhealthy food or gambling.

2. Discrimination. The opacity and automation of surveillance-based advertising systems increase the risk of discrimination, for example by excluding consumers based on income, gender, race, ethnicity, sexual orientation or location, or by making certain consumers pay more for products or services.

3. Misinformation. The lack of control over where ads are shown can promote and finance false or malicious content. This also poses significant challenges to publishers and advertisers regarding revenue, reputational damage, and opaque supply chains.

4. Undermining competition. The surveillance business model favours companies that collect and process information across different services and platforms. This makes it difficult for smaller actors to compete, and negatively impacts companies that respect consumers’ fundamental rights.

5. Security risks. When thousands of companies collect and process enormous amounts of personal data, the risk of identity theft, fraud and blackmail increases. NATO has described this data collection as a national security risk.

6. Privacy violations. The collection and use of personal data is happening with little or no control, both by large companies and by companies that are unknown to most consumers. Consumers have no way to know what data is collected, who the information is shared with, and how it may be used.

“It is very difficult to justify the negative consequences of this system. A ban will contribute to a healthier marketplace that helps protect individuals and society,” Myrstad comments.

Good alternatives

In the report, the NCC points to alternative digital advertising models that do not depend on the surveillance of consumers, and that give advertisers and publishers more oversight and control over where ads are displayed and which ads are being shown.

“It is possible to sell advertising space without basing it on intimate details about consumers. Solutions already exist to show ads in relevant contexts, or where consumers self-report what ads they want to see,” Myrstad says. “A ban on surveillance-based advertising would also pave the way for a more transparent advertising marketplace, diminishing the need to share large parts of ad revenue with third parties such as data brokers. A level playing field would give advertisers and content providers more control, and let them keep a larger share of the revenue.”

The coordinated push behind the report and letter illustrates the growing determination of consumer, digital rights, human rights and other civil society groups to end the widespread business model of spying on the public.
  • The Center for Digital Democracy and 23 other leading civil society groups sent a letter to President Biden today asking his Administration to ensure that any new transatlantic data transfer deal is coupled with the enactment of U.S. laws that reform government surveillance practices and provide comprehensive privacy protections.
  • Contact: Jeff Chester, CDD (jeff@democraticmedia.org; 202-494-7100); David Monahan, CCFC (david@commercialfreechildhood.org)

    Advocates Ask FTC to Protect Youth From Manipulative “Dark Patterns” Online

    BOSTON, MA and WASHINGTON, DC — May 28, 2021 — Two leading advocacy groups protecting children from predatory practices online filed comments today asking the FTC to create strong safeguards to ensure that internet “dark patterns” don’t undermine children’s well-being and privacy. Campaign for a Commercial-Free Childhood (CCFC) and the Center for Digital Democracy (CDD) cited leading authorities on the impacts of internet use on child development in their comments, prepared by the Communications & Technology Law Clinic at Georgetown University Law Center. These comments follow testimony given by representatives of both groups last month at an FTC workshop spearheaded by FTC Acting Chair Rebecca Slaughter.

    CCFC and CDD say tech companies are preying upon vulnerable kids: capitalizing on their fear of missing out, their desire to be popular, and their inability to understand the value of misleading e-currencies, as well as putting them on an endless treadmill on their digital devices. They urged the FTC to take swift and strong action to protect children from the harms of dark patterns. Key takeaways include:

    - A range of practices, often called “dark patterns,” are pervasive in the digital marketplace, manipulate children, are deceptive and unfair, and violate Section 5 of the FTC Act. They take advantage of a young person’s psycho-social development, such as the need to engage with peers.

    - The groups explained the ways children are vulnerable to manipulation and other harms from dark patterns, including that they have “immature and developing executive functioning,” which leads to impulsive behaviors.
    - The FTC should prohibit the use of dark-pattern practices in the children’s marketplace; issue guidance to companies to ensure they do not develop or deploy such applications; and include new protections under its Children’s Online Privacy Protection Act (COPPA) rulemaking authority to better regulate them. The commission must bring enforcement actions against developers using child-directed dark patterns.

    - The FTC should prohibit the use of micro-transactions in apps serving children, including the buying of virtual currency to participate in game playing.

    - The FTC should adopt a definition of dark patterns that includes all “nudges” designed to use a range of behavioral techniques to foster desired responses from users.

    The groups’ filing was in response to the FTC’s call for comments on the use of digital “dark patterns” — deceptive and unfair user interface designs — on websites and mobile apps.

    Comment of Jeff Chester, Executive Director of the Center for Digital Democracy: “‘Dark patterns’ are being used in the design of child-directed services to manipulate children to spend more time and money on games and other applications, as well as to give up more of their data. It’s time the FTC acted to protect young people from being unfairly treated by online companies. The commission should issue rules that prohibit the use of these stealth tactics that target kids, and bring legal action against the companies promoting their use.”

    Comment of Josh Golin, Executive Director of the Campaign for a Commercial-Free Childhood: “In their rush to monetize children, app and game developers are using dark patterns that take advantage of children’s developmental vulnerabilities. The FTC has all the tools it needs to stop unethical, harmful, and illegal conduct. Doing so would be a huge step forward towards creating a healthy media environment for children.”
    Comment of Michael Rosenbloom, Staff Attorney & Clinical Teaching Fellow, Communications and Technology Law Clinic, Georgetown University Law Center: “Software and game companies are using dark patterns to pressure children into playing more and paying more. Today, many apps and games that children play use dark patterns like arbitrary virtual currencies, encouragement from in-game characters, and ticking countdown timers to get children to spend more time and money on microtransactions. These dark patterns harm children and violate Section 5 of the FTC Act, and we urge the FTC to act to stop these practices.”

    ###
  • To watch the full FTC Dark Patterns Workshop online, visit the FTC website.
  • Contact: Jeff Chester, CDD (jeff@democraticmedia.org; 202-494-7100); David Monahan, CCFC (david@commercialfreechildhood.org)

    Advocates say Google Play continues to disregard children’s privacy law and urge FTC to act

    BOSTON, MA and WASHINGTON, DC — March 31, 2021 — Today, advocacy groups Campaign for a Commercial-Free Childhood (CCFC) and the Center for Digital Democracy (CDD) called on the Federal Trade Commission (FTC) to investigate Google’s promotion of apps that violate the Children’s Online Privacy Protection Act (COPPA). In December 2018, CCFC and CDD led a coalition of 22 consumer and health advocacy groups in asking the FTC to investigate these same practices. Since then Google has made changes to the Play Store, but the advocates say these changes fail to address the core problem: Google is certifying as safe and appropriate for children apps that violate COPPA and put children at risk. Recent studies found that a significant number of apps in Google Play violate COPPA by collecting and sharing children’s personal information without getting parental consent. For instance, a JAMA Pediatrics study found that 67% of apps used by children aged 5 and under were transmitting personal identifiers to third parties.

    Comment of Angela Campbell, Chair of the Board of Directors, Campaign for a Commercial-Free Childhood, and Professor Emeritus, Communications & Technology Law Clinic, Georgetown University Law Center: “Parents reasonably expect that Google Play Store apps designated as ‘Teacher approved’ or appropriate for children under age 13 comply with the law protecting children’s privacy. But far too often, that is not the case. The FTC failed to act when this problem was brought to its attention over two years ago.
    Because children today are spending even more time using mobile apps, the FTC must hold Google accountable for violating children’s privacy.”

    Comment of Jeff Chester, Executive Director of the Center for Digital Democracy: “The Federal Trade Commission must swiftly act to stop Google’s ongoing disregard of the privacy and well-being of children. For too long, the Commission has allowed Google’s app store, and the data marketing practices that are its foundation, to operate without enforcing the federal law that is designed to protect young people under 13. With children using apps more than ever as a consequence of the pandemic, the FTC should enforce the law and ensure Google engages with kids and families in a responsible manner.”

    ###
  • Contextual Advertising—Now Driven by AI and Machine Learning—Requires Regulatory Review for Privacy and Marketing Fairness

    What’s known as contextual advertising is receiving a big boost from marketers and some policymakers, who claim that it provides a more privacy-friendly alternative to the dominant global surveillance-based “behavioral” marketing model. Google’s plans to eliminate cookies and other third-party trackers used for much of online ad delivery are also spurring greater interest in contextual marketing, which is being touted especially as safe for children.

    Until several years ago, contextual ads meant that you would see an ad based on the content of the page you were on—so there might be ads for restaurants on web pages about food, or cars would be pitched if you were reading about road trips. The ad tech involved was basic: keywords found on the page would help trigger an ad.

    Today’s version of what’s called “contextual intelligence,” “Contextual 2.0,” or Google’s “Advanced Contextual” is distinct. Contextual marketing now uses artificial intelligence (AI) and machine-learning technologies, including computer vision and natural language processing, to provide “targeting precision.” AI-based techniques, the industry explains, allow marketers to read “between the lines” of online content. Contextual advertising is now capable of comprehending “the holistic and subtle meaning of all text and imagery,” enabling predictions and decisions on ad design and placement by “leveraging deep neural networks” and “proprietary data sets.” AI is used to decipher the meaning of visuals “on a massive scale, enabling advertisers to create much more sophisticated links between the content and the advertising.” Computer vision technologies identify every visual element, and natural language processing minutely classifies all the concepts found on each page.
    Millions of “rules” are applied in an instant, using software that helps advertisers take advantage of the “multiple meanings” that may be found on a page. For example, one leading contextual marketing company, GumGum, explains that its “Verity” algorithmic and AI-based service “combines natural language processing with computer vision technology to execute a multi-layered reading process. First, it finds the meat of the article on the page, which means differentiating it from any sidebar and header ads. Next, it parses the body text, headlines, image captions with natural language processing; at the same time, it uses computer vision to parse the main visuals.… [and then] blends its textual and visual analysis into one cohesive report, which it then sends off to an adserver,” which determines whether “Verity’s report on a given page matches its advertisers’ campaign criteria.”

    Machine learning also enables contextual intelligence services to make predictions about the best ways to structure and place marketing content, taking advantage of real-time events and the ways consumers interact with content. It enables segmentation of audience targets to be fine-tuned. It also incorporates a number of traditional behavioral marketing concepts, gathering a range of data “signals” that ensure more effective targeting. There are advanced measurement technologies and custom methods to influence what marketers term our “customer journey,” structuring ad buying in ways similar to behavioral, data-driven approaches, as “bids” are made to target—and retarget—the most desirable people. And, of course, once the contextual ad “works” and people interact with it, additional personal and other information is then gathered.

    Contextual advertising, estimated to generate $412 billion in spending by 2025, requires a thorough review by the FTC and data regulators.
Regulators, privacy advocates and others must carefully examine how the AI and machine-learning marketing systems operate, including for Contextual 2.0. We should not accept marketers’ claims that it is innocuous and privacy-appropriate. We need to pull back the digital curtain and carefully examine the data and impact of contextual systems.
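The contrast drawn above between classic keyword-triggered contextual ads and today's AI-driven "Contextual 2.0" can be made concrete with a toy sketch. The Python below implements only the "basic" keyword model the article describes; the campaign names and keyword lists are invented for illustration, and real ad servers add ranking, auctions, and far richer page analysis.

```python
# A minimal sketch of classic keyword-based contextual ad matching:
# an ad campaign is triggered when its keywords appear in the page
# content. Campaign names and keyword lists are hypothetical.
def match_campaigns(page_text: str, campaigns: dict[str, set[str]]) -> list[str]:
    """Return the campaigns whose keywords overlap the words on the page."""
    words = set(page_text.lower().split())
    return [name for name, keywords in campaigns.items() if words & keywords]

campaigns = {
    "restaurant_ads": {"food", "recipe", "restaurant"},
    "auto_ads": {"car", "road", "engine"},
}
print(match_campaigns("a weekend road trip in a rented car", campaigns))
# prints ['auto_ads']
```

"Contextual 2.0," by contrast, replaces this keyword lookup with neural text and image models that infer a page's meaning, which is why the article argues the newer systems deserve the same regulatory scrutiny as behavioral targeting.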
    Jeff Chester
  • The Whole World will Still be Watching You: Google & Digital Marketing Industry “Death-of-the-Cookie” Privacy Initiatives Require Scrutiny from Public Policymakers

    Jeff Chester

    One would think, in listening to the language used by Google, Facebook, and other ad and data companies to discuss the construction and future of privacy protection, that they are playing some kind of word game. We hear terms such as “TURTLEDOVE,” “FLEDGE,” “SPARROW” and “FLoC.” Such claims should be viewed with skepticism, however. Although some reports make it appear that Google and its online marketing compatriots propose to reduce data gathering and tracking, we believe that their primary goal is still to perfect the vast surveillance system they have long since established.

    A major data marketing industry effort is now underway to eliminate—or diminish—the role of the tracking software known as “third-party” cookies. Cookies were developed in the very earliest days of the commercial World Wide Web, and have served as the foundational digital tether connecting us to a sprawling and sophisticated data-mining complex. Through cookies—and later mobile device IDs and other “persistent” identifiers—Google, Facebook, Amazon, Coca-Cola and practically everyone else have been able to surveil and target us—and our communities. Tracking cookies have literally helped engineer a “sweet spot” for online marketers, enabling them to embed spies into our web browsers, which help them understand our digital behaviors and activities and then take action based on that knowledge. Some of these trackers—placed and used by a myriad of data marketing companies on various websites—are referred to as “third-party” cookies, to distinguish them from what online marketers claim, with a straight face, are more acceptable forms of tracking software—known as “first-party” cookies.
    According to the tortured online-advertiser explanation, “first-party” trackers are placed by websites on which you have affirmatively given permission to be tracked while you are on that site. These “we-have-your-permission-to-use” first-party cookies would increasingly become the foundation for advances in digital tracking and targeting. Please raise your hand if you believe you have informed Google or Amazon, to cite the two most egregious examples, that they can surveil what you do via these first-party cookies, including engaging in an analysis of your actions, background, interests and more.

    What the online ad business has developed behind its digital curtain—such as various ways to trigger your response, measure your emotions, knit together information on device use, and employ machine learning to predict your behaviors (just to name a few of the methods currently in use)—has played a fundamental role in personal data gathering. Yet these and other practices—which have an enormous impact on privacy, autonomy, fairness, and so many other aspects of our lives—will not be affected by the “death-of-the-cookie” transition currently underway. On the contrary, we believe a case can be made that the opposite is true. Rather than strengthening data safeguards, we are seeing unaccountable platforms such as Google actually become more dominant, as so-called “privacy preserving” systems actually enable enhanced data profiling.

    In a moment, we will briefly discuss some of the leading online marketing industry work underway to redefine privacy. But the motivation for this post is to sound the alarm that we should not—once again—allow powerful commercial interests to determine the evolving structure of our online lives. The digital data industry has no serious track record of protecting the public.
    Indeed, it was the failure of regulators to rein in this industry over the years that led to the current crisis. In the process, the growth of hate speech, the explosion of disinformation, and the highly concentrated control over online communications and commerce—to name only a few consequences—now pose serious challenges to the fate of democracies worldwide. Google, Facebook and the others should never be relied on to defer their principal pursuit of monetization out of respect for any democratic ideal—let alone consumer protection and privacy.

    One clue to the likely end result of the current industry effort is how they frame it. It isn’t about democracy, the end of commercial surveillance, or strengthening human rights. It’s about how best to preserve what they call the “Open Internet.” Some leading data marketers believe we have all consented to a trade-off: that in exchange for “free” content we’ve agreed to a pact enabling them to eavesdrop on everything we do—and then make all that information available to anyone who can pay for it—primarily advertisers. Despite its rhetoric about curbing tracking cookies, the online marketing business intends to continue to colonize our devices and monitor our online experiences.

    This debate, then, is really about who can decide—and under what terms—the fate of the Internet’s architecture, including how it operationalizes privacy, at least in the U.S. It raises questions that deserve a better answer than the “industry-knows-best” approach we have allowed so far. That’s why we call on the Biden Administration, the Federal Trade Commission (FTC) and the Congress to investigate these proposed new approaches to data use, and to ensure that the result is truly privacy-protective, supports democratic governance, and incorporates mechanisms of oversight and accountability.
    Here’s a brief review of some of the key developments, which illustrate the digital “tug-of-war” ensuing over the several industry proposals involving cookies and tracking. In 2019, Google announced that it would end the role of what are known as “third-party cookies.” Google has created a “privacy sandbox” where it has researched various methods it claims will protect privacy, especially for people who rely on its Chrome browser. It is exploring “ways in which a browser can group together people with similar browsing habits, so that ad tech companies can observe the habits of large groups instead of the activity of individuals. Ad targeting could then be partly based on what group the person falls into.”

    This is its “Federated Learning of Cohorts” (FLoC) approach, in which people are placed into “clusters” based on the use of “machine learning algorithms” that analyze the data generated from the sites a person visited and their content. Google says these clusters would “each represent thousands of people,” and that the “input features” used to generate the targeting algorithm, such as our “web history,” would be stored on our browsers. Other techniques would also be deployed, to add “noise” to the data sets and to apply various “anonymization methods” so that the exposure of a person’s individual information is limited. Google’s TURTLEDOVE initiative is designed to enable more personalized targeting, where web browsers will be used to help ensure our data is available for the real-time auctions that sell us to advertisers. The theory is that by allowing the data to remain within our devices, as well as by using clusters of people for targeting, our privacy is protected. But the goal—to have sufficient data and effective digital marketing techniques—is still at the heart of this process.
    Google recently reported that “FLoC can provide an effective replacement signal for third-party cookies. Our tests of FLoC to reach in-market and affinity Google Audiences show that advertisers can expect to see at least 95% of the conversions per dollar spent when compared to cookie-based advertising.”

    Google’s 2019 announcement caused an uproar in the digital marketing business. It was also perceived (correctly, in my view) as a Google power grab. Google operates basically as a “walled garden” and has so much data that it doesn’t really need third-party cookies to home in on its targets. The potential “death of the cookie” ignited a number of initiatives from the Interactive Advertising Bureau, as well as from competitors and major advertisers, who feared that Google’s plan would undermine their lucrative business model. These include such groups as the Partnership for Responsible Addressable Media (PRAM), whose 400 members include Mastercard, Comcast/NBCU, P&G, the Association of National Advertisers, the IAB and other ad and data companies. PRAM issued a request to review proposals that would ensure the data marketing industry continues to thrive while becoming less reliant on third-party cookies. Leading online marketing company The Trade Desk is playing a key role here. It submitted its “Unified ID 2.0” plan to PRAM, saying that it “represents an alternative to third party cookies that improves consumer transparency, privacy and control, while preserving the value exchange of relevant advertising across channels and devices.” There are also a number of other offerings that claim both to protect privacy and to take advantage of our identity, such as various collaborative data-sharing efforts.
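The cohort idea described above can be illustrated with a toy sketch. Chrome's FLoC trial derived a cohort ID from a locality-sensitive hash (SimHash) of the domains a browser visited, so similar histories tend to produce similar IDs; the simplified Python below shows only that core idea, with invented domain names, and omits the clustering, "noise," and anonymity thresholds Google says it applies.

```python
import hashlib

def cohort_id(visited_domains: set[str], bits: int = 8) -> int:
    """Toy SimHash: collapse a browsing history into a short cohort ID.
    Each domain's hash casts a +1/-1 vote on every bit; the majority
    vote per bit forms the ID, so overlapping histories yield similar IDs."""
    votes = [0] * bits
    for domain in visited_domains:
        h = int(hashlib.sha256(domain.encode()).hexdigest(), 16)
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i, v in enumerate(votes) if v > 0)

# Two largely overlapping (hypothetical) browsing histories:
a = cohort_id({"news.example", "recipes.example", "travel.example"})
b = cohort_id({"news.example", "recipes.example", "cars.example"})
print(a, b)  # ad tech would see only these short IDs, not the histories
```

The privacy claim rests on each ID standing in for "thousands of people"; the article's caution is that the signal is nonetheless engineered to remain useful enough for targeting.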
    The Internet standards body, the World Wide Web Consortium (W3C), has created a sort of neutral meeting ground where the industry can discuss proposals and potentially seek some sort of unified approach. The rationale for the [get ready for this statement] “Improving Web Advertising Business Group” is “to provide monetization opportunities that support the open web while balancing the needs of publishers and the advertisers that fund them, even when their interests do not align, with improvements to protect people from the individual and societal impacts of tracking content consumption over time.” Its participants are another “Who’s Who” in data-driven marketing, including Google, AT&T, Verizon, the NYT, the IAB, Apple, GroupM, Axel Springer, Facebook, Amazon, the Washington Post, and Criteo. DuckDuckGo is also a member (and both Google and Facebook have multiple representatives in this group). The sole NGO listed as a member is the Center for Democracy and Technology.

    The W3C’s ad business group has a number of documents about the digital marketing business that illustrate why the future of privacy, data collection and targeting should be a public—and not just a data industry—concern. In an explainer on digital advertising, the group makes the paradigm so many are working to defend very clear: “Marketing’s goal can be boiled down to the ‘5 Rights’: Right Message to the Right Person at the Right Time in the Right Channel and for the Right Reason. Achieving this goal in the context of traditional marketing (print, live television, billboards, et al.) is impossible. In the digital realm, however, not only can marketers achieve this goal, they can prove it happened.
    This proof is what enables marketing activities to continue, and is important for modern marketers to justify their advertising dollars, which ultimately finance the publishers sponsoring the underlying content being monetized.” Nothing I’ve read says it better.

    Through a quarter century of work to perfect the harvesting of our identity for profit, the digital ad industry has created a formidable complex of data clouds, real-time ad auctions, cross-device tracking tools and advertising techniques that further commodify our lives, shred our privacy, and transform the Internet into a hall of mirrors that can amplify our fears and splinter democratic norms. It’s people, of course, who decide how the Internet operates—especially those from companies such as Google, Facebook and Amazon, and those working for trade groups such as the IAB. We must not let them decide how cookies may or may not be used, or what new data standard should be adopted by the most powerful corporate interests on the planet to profit from our “identity.” It’s time for action by the FTC and Congress.

    Part 1.

    (1) For the uninitiated, TURTLEDOVE stands for “Two Uncorrelated Requests, Then Locally-Executed Decision On Victory”; FLEDGE is short for “First Locally-Executed Decision over Groups Experiment”; SPARROW is “Secure Private Advertising Remotely Run On Webserver”; and FLoC is “Federated Learning of Cohorts.”

    (2) In January 2021, the UK’s Competition and Markets Authority (CMA) opened an investigation into Google’s privacy sandbox and cookie plans.
    Jeff Chester
  • CONSUMER AND CITIZEN GROUPS CONTINUE TO HAVE SERIOUS CONCERNS ABOUT GOOGLE FITBIT TAKEOVER: Joint Statement on Possible Remedies
  • Press Release

    USDA Online Buying Program for SNAP Participants Threatens Their Privacy and Can Exacerbate Racial and Health Inequities, Says New Report

    Digital Rights, Civil Rights and Public Health Groups Call for Reforms from USDA, Amazon, Walmart, Safeway/Albertson’s and Other Grocery Retailers - Need for Safeguards Urgent During Covid-19 Crisis

    Contact: Jeff Chester, jeff@democraticmedia.org, 202-494-7100; Katharina Kopp, kkopp@democraticmedia.org; https://www.democraticmedia.org/

    Washington, DC, July 16, 2020—A pilot program designed to enable the tens of millions of Americans who participate in the USDA’s Supplemental Nutrition Assistance Program (SNAP) to buy groceries online is exposing them to a loss of their privacy through “increased data collection and surveillance,” as well as risks involving “intrusive and manipulative online marketing techniques,” according to a report from the Center for Digital Democracy (CDD). The report reveals how online grocers and retailers use an orchestrated array of digital techniques—including granular data profiling, predictive analytics, geolocation tracking, personalized online coupons, AI and machine learning—to promote unhealthy products, trigger impulsive purchases, and increase overall spending at check-out. While these practices affect all consumers engaged in online shopping, the report explains, “they pose greater threats to individuals and families already facing hardship.” E-commerce data practices “are likely to have a disproportionate impact on SNAP participants, which include low-income communities, communities of color, the disabled, and families living in rural areas.
The increased reliance on these services for daily food and other household purchases could expose these consumers to extensive data collection, as well as unfair and predatory techniques, exacerbating existing disparities in racial and health equity.”

    The report was funded by the Robert Wood Johnson Foundation, as part of a collaboration among four civil rights, digital rights, and health organizations: Color of Change, UnidosUS, Center for Digital Democracy, and Berkeley Media Studies Group. The groups issued a letter today to Secretary of Agriculture Sonny Perdue, urging the USDA to take immediate action to strengthen online protections for SNAP participants.

    USDA launched its e-commerce pilot last year in a handful of states, with an initial set of eight retailers approved for participation: Amazon, Dash’s Market, FreshDirect, Hy-Vee, Safeway, ShopRite, Walmart, and Wright’s Market. The program has rapidly expanded to a majority of states, in part as a result of the current Covid-19 health crisis, in order to enable SNAP participants to shop more safely from home while following “shelter-in-place” rules.

    Through an analysis of the digital marketing and grocery e-commerce practices of the eight companies, as well as an assessment of their privacy policies, CDD found that SNAP participants and other online shoppers confront an often manipulative and nontransparent online grocery marketplace, which is structured to leverage the tremendous amounts of data gathered on consumers via their mobile devices, loyalty cards, and shopping transactions. E-commerce grocers deliberately foreground the brands and products that partner with them (which include some of the most heavily advertised, processed foods and beverages), making them highly visible on store home pages and on “digital shelves,” as well as through online coupons and well-placed reminders at the point of sale.
Grocers working with the SNAP pilot have developed an arsenal of “adtech” (advertising technology) techniques, including those that use machine learning and behavioral science to foster “frictionless shopping” and impulsive purchasing of specific foods and beverages. The AI and Big Data operations documented in the report may also lead to unfair and discriminatory data practices, such as targeting low-income communities and people of color with aggressive promotions for unhealthy food. Data collected and profiles created during online shopping may be applied in other contexts as well, leading to increased exposure to additional forms of predatory marketing, or to denial of opportunities in housing, education, employment, and financial services.

    “The SNAP program is one of our nation’s greatest success stories because it puts food on the table of hungry families and money in the communities where they live,” explained Dr. Lori Dorfman, Director of the Berkeley Media Studies Group. “Shopping for groceries should not put these families in danger of being hounded by marketers intent on selling products that harm health. Especially in the time of coronavirus, when everyone has to stay home to keep themselves and their communities safe, the USDA should put digital safeguards in place so SNAP recipients can grocery shop without being manipulated by unfair marketing practices.”

    CDD’s research also found that the USDA relied on the flawed and misleading privacy policies of the participating companies, which fail to provide sufficient data protections. According to the pilot’s requirements for participating retailers, privacy policies should clearly explain how a consumer’s data is gathered and used, and provide “optimal” protections. A review of these long, densely worded documents, however, reveals the companies’ failure to identify the extent and impact of their actual data operations, or the risks to consumers.
The pilot’s requirements also do not adequately limit the use of SNAP participants’ data for marketing. In addition, CDD tested the companies’ data practices for tracking customers’ behavior online, and compared them to the USDA’s requirements. The research found widespread use of so-called “third-party” tracking software (such as cookies), which can expose an individual’s personal data to others.

    “In the absence of strong baseline privacy and e-commerce regulations in the US, the USDA’s weak safeguards are placing SNAP recipients at substantial risk,” explained Dr. Katharina Kopp, one of the report’s authors. “The kinds of e-commerce and Big Data practices we have identified through our research could pose even greater threats to communities of color, including increased commercial surveillance and further discrimination.”

    “Being on SNAP, or any other assistance program, should not give corporations free rein to use intrusive and manipulative online marketing techniques on Black communities,” said Jade Magnus Ogunnaike, Senior Campaign Director at Color of Change. “Especially in the era of COVID, where online grocery shopping is a necessity, Black people should not be further exposed to a corporate surveillance system with unfair and predatory practices that exacerbate disparities in racial and health equity just because they use SNAP. The USDA should act aggressively to protect SNAP users from unfair, predatory, and discriminatory data practices.”

    “The SNAP program helps millions of Latinos keep food on the table when times are tough, and our nation’s public health and economic crises have highlighted that critical role,” said Steven Lopez, Director of Health Policy at UnidosUS. “Providing enhanced access to healthy and nutritious foods at the expense of the privacy and health of communities of color is too high a price. Predatory marketing practices have been linked to increased health disparities for communities of color.
The USDA must not ignore that fact and should take strong and meaningful steps to treat all participants fairly, without discriminatory practices based on the color of their skin.”

    The report calls on the USDA to “take an aggressive role in developing meaningful and effective safeguards” before moving the SNAP online purchasing system beyond its initial trial. The agency needs to ensure that contemporary e-commerce, retail, and digital marketing applications treat SNAP participants fairly, with strong privacy protections and safeguards against manipulative and discriminatory practices. The USDA should work with SNAP participants, civil rights, consumer, and privacy groups, as well as retailers like Amazon and Walmart, to restructure its program to ensure the safety and well-being of the millions of people enrolled in the program.

    ###