Newsroom
-
"Surveillance" Marketing meets what Google calls "embedded finance" online [excerpt]USPIRG and CDD believe the U.S. is at an especially critical inflection point regarding digital platforms, digital payment services and online consumer protection: the pervasive tracking of data on individuals, families and groups, online and off; the nearly real-time ability to target a consumer with financial and other product offers regardless of where they are or device they use; and the development of a highly sophisticated and now machine-driven apparatus to deliver personalized marketing and communications, have all led to a largely unaccountable digital marketplace. A handful of digital platform giants and their partners stealthily operate what is known as a “surveillance marketing” system, which now pervades every aspect of our lives—increasingly affecting how the public engages with the financial services sector. As the Bureau’s Request for Comment illustrates, it is aware of the serious ramifications to consumers and small businesses as the U.S. accelerates its transition to what Google calls “embedded finance.” The leading platforms and online services, as they accelerate their roles as America’s new bankers and lenders, bring with them a host of critical issues that the Bureau must address. Moreover, the industry’s “closed-loop” business model, where platforms and online data and ad practices are able to operate in a non-transparent manner, which has already caused an uproar from global marketers, is poised to have even greater consequences as it assumes greater control over our daily financial experiences. The growing role of platforms to leverage their market positions to shape the digital payment system is now disintermediating “banks and credit card companies from consumers.” These platforms are poised to dominate consumer and small business financial markets as much as they now do ecommerce, entertainment, and communications. In the process, these platforms and their online financial and other service partners will pose a series of threats. Their operating model, as we discuss below, engages in far-reaching forms of consumer manipulation, relying on a host of online marketing tactics designed to trigger a range of responses. There is a very real risk that without Bureau action, the digital payments and platform complex will aggressively push Americans to new levels of debt, as the Big Data and artificial intelligence (AI) apparatus now at the core of the consumer digital economy encourages impulse buying and other potentially consequential practices. These entities have so much information on individuals, communities and commerce, they easily dislodge smaller and locally-based businesses. Since data analyzed by the platforms is used to identify commercial opportunities across the range of their product offerings—which is basically making everything available for sale—a consumer will not be aware, let alone control, how this information can be used to target them with other products and services. As Alphabet/Google highlighted in a recent report on “embedded finance,” “online finance” has altered what people think is banking and managing finances: “now, most financial transactions happen via mobile apps, websites, email, text messages and other digital communications.” Google’s “white paper” suggests that embedded finance may be “the new gold rush” for financial services. 
Among the competitive benefits claimed by Google are that embedded finance “enables business to reach new customers at the moment when they need your services.” There is also an added “bonus”: “the data you collect from each transaction….”bigtechpaymentplatforms121921final.pdf
-
CDD's Jeff Chester contributed to the report's focus on the online marketing practices, including the use of big data analytics, of alcoholic beverage companies.

(Excerpt from WHO release): Just as with tobacco, a global and comprehensive approach is required to restrict digital marketing of alcohol. Children and young people are especially at risk from the invasion of their social spaces by communications promoting alcohol consumption, normalising alcohol in all social contexts and linked to the development of adult identities.

"Current policies across the WHO European Region are insufficient to protect people from new formats of alcohol marketing. Age verification schemes, where they exist, are usually inadequate to protect minors from exposure to alcohol marketing. The fact that the vast majority of alcohol advertising online is 'dark', in the sense that it is only visible to the consumer to whom it is marketed, is challenging for policy makers, thus requiring new mechanisms and a new approach," said Dr Carina Ferreira-Borges, Acting Director for Noncommunicable Diseases and Programme Manager for Alcohol and Illicit Drugs at WHO/Europe.

Link to release | Link to report
-
Groups urge Congress to stop Big Tech's manipulation of young people

BOSTON – Thursday, December 2, 2021 – Today a coalition of leading advocacy groups launched Designed With Kids in Mind, a campaign demanding a design code in the US to protect young people from online manipulation and harm. The campaign seeks to secure protections for US children and teens similar to the UK's groundbreaking Age-Appropriate Design Code (AADC), which went into effect earlier this year. The campaign brings together leading advocates for child development, privacy, and a healthier digital media environment, including Fairplay, Accountable Tech, American Academy of Pediatrics, Center for Digital Democracy, Center for Humane Technology, Common Sense, ParentsTogether, RAINN, and Exposure Labs, creators of The Social Dilemma. The coalition will advocate for legislation and new Federal Trade Commission rules that protect children and teens from a business model that puts young people at risk by prioritizing data collection and engagement.

The coalition has launched a website that explains how many of the most pressing problems faced by young people online are directly linked to platforms' design choices. They cite features that benefit platforms at the expense of young people's wellbeing, such as:

- Autoplay: increases time on platforms; excessive time on screens is linked to mental health challenges and physical risks like less sleep, and promotes family conflict.
- Algorithmic recommendations: risk exposure to self-harm, racist content, pornography, and mis/disinformation.
- Location tracking: makes it easier for strangers to track and contact children.
- Nudges to share: lead to loss of privacy and risks of sexual predation and identity theft.

The coalition is promoting three bills which would represent a big step forward in protecting US children and teens online: the Children and Teens' Online Privacy Protection Act (S. 1628); the Kids Internet Design and Safety (KIDS) Act (S. 2918); and the Protecting the Information of our Vulnerable Children and Youth (PRIVCY) Act (H.R. 4801). Taken together, these bills would expand privacy protections to teens for the first time and incorporate key elements of the UK's AADC, such as requiring the best interests of children to be a primary design consideration for services likely to be accessed by young people. The legislation backed by the coalition would also protect children and teens from manipulative design features and harmful data processing.

Members of the coalition on the urgent need for a US Design Code to protect children and teens:

Josh Golin, Executive Director, Fairplay: We need an internet that helps children learn, connect, and play without exploiting their developmental vulnerabilities; respects their need for privacy and safety; helps young children disconnect at the appropriate time rather than manipulating them into spending even more time online; and prioritizes surfacing high-quality content instead of maximizing engagement. The UK's Age-Appropriate Design Code took an important step towards creating that internet, and children and teens in the US deserve the same protections and opportunities. It's time for Congress and regulators to insist that children come before Big Tech's profits.

Nicole Gill, Co-Founder and Executive Director of Accountable Tech: You would never put your child in a car seat that wasn't designed for them and didn't meet all safety standards, but that's what we do every day when our children go online using a network of apps and websites that were never designed with them in mind. Our children should be free to learn, play, and connect online without manipulative platforms like Facebook and Google's YouTube influencing their every choice. We need an age-appropriate design code that puts kids and families first and protects young people from the exploitative practices and the perverse incentives of social media.

Lee Savio Beers, MD, FAAP, President of the American Academy of Pediatrics: The American Academy of Pediatrics is proud to join this effort to ensure digital spaces are safe for children and supportive of their healthy development. It is in our power to create a digital ecosystem that works better for children and families; legislative change to protect children is long overdue. We must be bold in our thinking and ensure that government action on technology addresses the most concerning industry practices while preserving the positive aspects of technology for young people.

Jeff Chester, Executive Director, Center for Digital Democracy: The "Big Tech" companies have long treated young people as just a means to generate vast profits – creating apps, videos and games designed to hook them to an online world designed to surveil and manipulate them. It's time to stop children and teens from being victimized by the digital media industry. Congress and the Federal Trade Commission should adopt commonsense safeguards that ensure America's youth reap all the benefits of the online world without having to constantly expose themselves to the risks.

Randima Fernando, Executive Director, Center for Humane Technology: We need technology that respects the incredible potential – and the incredible vulnerability – of our kids' minds. And that should guide technology for adults, who can benefit from those same improvements.

Irene Ly, Policy Counsel, Common Sense: This campaign acknowledges harmful features of online platforms and apps like autoplay, algorithms amplifying harmful content, and location tracking for what they are: intentional design choices. For too long, online platforms and apps have chosen to exploit children's vulnerabilities through these manipulative design features. Common Sense has long supported designing online spaces with kids in mind, and strongly supports US rules that would finally require companies to put kids' well-being first.

Julia Hoppock, The Social Dilemma Partnerships Director, Exposure Labs: For too long, Big Social has put profits over people. It's time to put our kids first and build an online world that works for them.

Dalia Hashad, Online Safety Director, ParentsTogether: From depression to bullying to sexual exploitation, tech companies knowingly expose children to unacceptable harms because it makes the platforms billions in profit. It's time to put kids first.

Scott Berkowitz, President of RAINN (Rape, Abuse & Incest National Network): Child exploitation has reached crisis levels, and our reliance on technology has left children increasingly vulnerable. On our hotline, we hear from children every day who have been victimized through technology. An age-appropriate design code will provide overdue safeguards for children across the U.S.

Attachment: launch_-_design_code_to_protect_kids_online.pdf
-
Time for the FTC to intervene as marketers create new ways to leverage our "identity" data as cookies "crumble"

For decades, the U.S. has allowed private actors to basically create the rules for how our data is gathered and used online. A key reason we do not have any real privacy in digital media is precisely because online marketing interests have principally shaped how the devices, platforms and applications we use ensnare us in the commercial surveillance complex. The Interactive Advertising Bureau (IAB) has long played this role through an array of standards committees that address everything from mobile devices to big data-driven targeting to ads harnessing virtual reality, to name a few.

As this blog has previously covered, U.S. commercial online advertising, spearheaded by Google, The Trade Desk and others, is engaged in a major transformation of how it processes and characterizes the data used for targeted marketing. For various reasons, the traditional ways we are profiled and tracked through the use of "cookies" are being replaced by a variety of schemes that enable advertisers to know and take advantage of our identities, but which the industry believes will (somehow!) pass muster with any privacy regulations now in force or potentially enacted. What's important is that, regardless of industry rhetoric about empowering a person's privacy, at the end of the day these approaches are designed to ensure that the comprehensive tracking and targeting system remains firmly in place.

As an industry trade organization, the IAB serves as a place to generate consensus, or agreed-upon formats, for digital advertising practices. To support the industry's search for a way to maintain its surveillance business model, it has created what's called "Project Rearc" to "re-architect digital marketing." The IAB explains that Project Rearc "is a global call-to-action for stakeholders across the digital supply chain to re-think and re-architect digital marketing to support core industry use cases, while balancing consumer privacy and personalization." It has set up a number of industry-run working groups to advance various components of this "re-architecting," including what's called an "Accountability Working Group." Its members include Experian, Facebook, Google, Axel Springer, Nielsen, Pandora, TikTok, Publicis, GroupM, Amazon, IABs from the EU, Australia, and Canada, Disney, Microsoft, Adobe, News Corp., Roku and many more (including specialist companies with their own "identity" approaches for digital marketing, such as Neustar and LiveRamp).

The IAB Rearc effort has put out for "public comment" a number of proposed approaches addressing elements of the new ways to target us via identifiers, cloud processing, and machine learning. Earlier this year, for example, it released for comment proposed standards on a "Global Privacy Platform"; an "Accountability Platform"; "Best Practices for User-Enabled Identity Tokens"; and a "Taxonomy and Data Transparency Standards to Support Seller-defined Audience and Context Signaling."

Now it has released for public comment (due by November 12, 2021) a proposed method to "Increase Transparency Across Entire Advertising Supply Chain for New ID usage." This proposal involves critical elements concerning the data collected about us and how it can be used. It is designed to "provide a standard way for companies to declare which user identity sources they use" and "ease ad campaign execution between advertisers, publishers, and their chosen technology providers…." This helps online advertisers use "different identity solutions that will replace the role of the third-party cookie," explains the IAB. While developed in part for a "transparent supply chain" and to help build "auditable data structures to ensure consumer privacy," its ultimate function is to enable marketers to "activate addressable audiences." In other words, it's all about continuing to ensure that digital marketers are able to build and leverage numerous individual and group identifiers to empower their advertising activities and withstand potential regulatory threats over privacy violations.

The IAB's so-called public comment system is primarily designed for the special interests whose business model is the mass monetization of all our data and behaviors. We should not allow these actors to define how our everyday experiences with data operate, especially when privacy is involved. The longstanding role of the IAB and online marketers in setting many of the standards for our online lives should be challenged—by the FTC, Congress, state AGs and everyone else working on these issues.

We—the public—should be determining our "digital destiny"—not the same people that gave us surveillance marketing in the first place.
-
Blog
The Big Data Merger Gold Rush to Control Your “Identity” Information
Will the DoJ ensure that both competition and consumer protection in data markets are addressed?
There is a digital data "gold rush" fever sweeping the data and marketing industry, as the quest to use data to determine a person's "identity" for online marketing becomes paramount. This has been triggered, in part, by the moves by Google and others to replace "cookies" and other online identifiers with new, allegedly pro-privacy data-profiling methods that deliver the same results. We've addressed this privacy charade in other posts. To better position themselves in a world where knowing who we are and what we do is a highly valuable global currency, companies in the digital marketing and advertising sector are pursuing a growing number of mergers and acquisitions.

For example, last week data-broker giant TransUnion announced it is buying identity data company Neustar for $3.1 billion, to further expand its "powerful digital identity capabilities." This is the latest in TransUnion's buying spree to acquire data services companies that give it even more information on the U.S. public, including what we do on streaming media, via its 2020 takeovers of connected and streaming video data company Tru Optik and the data-management-focused Signal.

In reviewing some of the business practices touted by TransUnion and Neustar, it's striking how little has changed in the decades CDD has been sounding the alarm about the impacts data-driven online marketing services have on society. These include ever-growing privacy threats, as well as the machine-driven sorting of people and the manipulation of our behaviors. So far, nothing has derailed commercial Big Data marketing.

With this deal, TransUnion is obtaining a treasure trove of data assets and capabilities. For Neustar, "identity is an actionable understanding of who or what is on the other end of every interaction and transaction." Neustar's "OneID system provides a single lens on the consumer across their dynamic omnichannel journey." This involves:

data management services featuring the collection, identification, tagging, tracking, analyzing, verification, correcting and sorting of business data pertaining to the identities, locations and personal information of and about consumers, including individuals, households, places, businesses, business entities, organizations, enterprises, schools, governments, points of interest, business practice characteristics, movements and behaviors of and about consumers via media devices, computers, mobile phones, tablets and internet connected devices.

Neustar keeps close track of people, noting that "the average person has approximately 15 distinct identifiers with an average of 8 connected devices" (and that an average household has more than 45 such distinct identifiers). Neustar has an especially close business partnership with Facebook, which enables marketers to better analyze how their ads translate into sales made on and spurred by that platform.
Its "Customer Scoring and Segmentation" system enables advertisers to identify and classify targets so they can "reach the right customer with the right message in the right markets." Neustar has a robust data-driven ad-targeting system called AdAdvisor, which reaches 220 million adults in "virtually every household in the U.S." AdAdvisor "uses past behavior to predict likelihood of future behavior" and involves "thousands of data points available for online targeting" (including the use of "2 billion records a month from authoritative offline sources"). Its "Propensity Audiences" service helps marketers predict the behaviors of people, incorporating such information as "customer-level purchase data for more than 230 million US consumers; weekly in-store transaction data from over 4,500 retailers; actual catalog purchases by more than 18 million households"; and "credit information and household-level demographics, used to build profiles of the buying power, disposable income and access to credit a given household has available." Neustar offers its customers the ability to reach "propensity audiences" in order to target such product categories as alcohol, automotive, education, entertainment, grocery, life events, personal finance, and more. For example, companies can target people who have used their debit or credit cards, by the amount of insurance they have on their homes or cars, or by their "level of investable assets," including whether they have a pension or other retirement funds. One can also find people who buy a certain kitty litter or candy bar—the list of AdAdvisor possibilities is far-reaching.

Another AdAdvisor application, "ElementOne," comprises 172 segments that can be "leveraged in real time for both online and offline audience targeting." The targeting categories should be familiar to anyone who is concerned about how groups of people are characterized by data brokers and others. For example, one can select "Segment 058"—high income rural younger renters with and without children—or "Segment 115"—middle income city older home owners without children; or any segment from 151 to 172 to reach "low income" Americans who are renters, homeowners, have or don't have kids, live in rural or urban areas, and the like.

Marketers can also use AdAdvisor to determine the geolocation behaviors of their targets, through partnerships that provide Neustar with "10 billion daily location signals from 250+ million opted-in consumers." In other words, Neustar knows whether you walked into that liquor store, grocery chain, hotel, entertainment venue, or shop. It also has data on what you view on TV, streaming video, and gaming. And it's not just consumers whom Neustar tracks and targets. Companies can access its "HealthLink Dimensions Doctor Data" to target 1.7 million healthcare professionals who work in more than 400 specialties, including acute care, family practice, pediatrics, and cardiovascular surgery.

TransUnion is already a global data and digital marketing powerhouse, with operations in 30 countries and 8,000 clients, including 60 of the Fortune 100. What it calls its "TruAudience Marketing Solutions" is built on a foundation of "insight into 98% of U.S. adults and more than 127 million homes, including 80 million connected homes." Its "TruAudience Identity" product provides "a three-dimensional, omnichannel view of individuals, devices and households… [enabling] precise, scalable identity across offline, digital and streaming environments." It offers marketers and others a method to secure what it terms an "identity resolution," defined as "the process of matching identifiers across devices and touchpoints to a single profile [that] helps build a cohesive, omnichannel view of a consumer…." (A toy sketch of what this matching process looks like appears at the end of this post.)

TransUnion, known historically as one of the Big Three credit bureaus, has pivoted to become a key source of data and applications for digital marketing. It isn't the only company expanding what is called an "ID graph"—the way all our data are gathered for profiling. However, given its already vast storehouse of information on Americans, it should not be allowed to devour another major data-focused marketing enterprise.

Since this merger is now before the U.S. Department of Justice—as opposed to the Federal Trade Commission—it is unlikely that the review will look beyond the competitive implications of the deal to what it really means for people: the further loss of privacy and autonomy, and their potential vulnerability to manipulative and stealthy marketing applications that classify and segment us in a myriad of invisible ways. The use of such data systems to identify communities of color and other groups that confront historic and current obstacles to their well-being should also be analyzed by any competition regulator.

In July, the Biden Administration issued an Executive Order on competition that called for a more robust regime to deal with mergers such as TransUnion and Neustar. According to that order, "It is also the policy of my Administration to enforce the antitrust laws to meet the challenges posed by new industries and technologies, including the rise of the dominant Internet platforms, especially as they stem from serial mergers, the acquisition of nascent competitors, the aggregation of data, unfair competition in attention markets, the surveillance of users, and the presence of network effects."

We hope the DOJ will live up to this call to address mergers such as this one, and other data-driven deals; the data involved are a key reason these kinds of buyouts happen with such regularity. There should also be a way for the FTC—especially under the leadership of Chair Lina Khan—to play an important role evaluating this and similar transactions. There's more at stake than competition in the data-broker or digital advertising markets. Who controls our information and how it is used are the fundamental questions that will determine our freedom and our economic opportunities. As the Big Data marketplace undergoes a key transition, this moment is vitally important for developing effective policies that protect both privacy and competition.
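To make the "identity resolution" idea above concrete, here is a toy sketch (in Python) of how observed linkages between identifiers can be merged into a single profile using a simple union-find structure. It is a generic illustration, not TransUnion's or Neustar's actual system; every identifier and linkage shown is invented.

```python
from collections import defaultdict


class IdentityGraph:
    """Toy identity-resolution graph: observed links between identifiers
    (cookie IDs, device IDs, hashed emails) are merged into one profile."""

    def __init__(self):
        self.parent = {}

    def _find(self, x):
        # Union-find lookup with path halving.
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def link(self, a, b):
        """Record that identifiers a and b were observed together."""
        ra, rb = self._find(a), self._find(b)
        if ra != rb:
            self.parent[rb] = ra

    def profiles(self):
        """Return the resolved profiles: sets of identifiers judged to
        belong to the same person or household."""
        groups = defaultdict(set)
        for x in list(self.parent):
            groups[self._find(x)].add(x)
        return list(groups.values())


graph = IdentityGraph()
graph.link("cookie:abc123", "hashed_email:7f9c")    # seen in the same browser session
graph.link("hashed_email:7f9c", "device:ios-55f2")  # same login on a phone
graph.link("cookie:zzz999", "device:tv-0a1b")       # a different household
print(graph.profiles())  # two resolved profiles
```

Commercial ID graphs perform the same kind of merging across billions of identifiers, typically with probabilistic matching layered on top, which is what makes the privacy stakes of a deal like this so high.

-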
Blog
Who is Really “Pushing” Your Online Shopping Cart as You Buy Groceries? It’s Big Data, Machine Learning and Lots of Cash from Advertisers
The FTC and States Should Investigate “Grocery Tech”
Online grocery shopping became a pandemic-response necessity for those who could afford it, with revenues expected to exceed $100 billion in 2021. Leading supermarket chains such as Kroger, big box stores like Walmart, online specialists such as Instacart, and the ubiquitous Amazon have all experienced greater demand from the public to have groceries ordered and then quickly delivered or made available for pickup. The pandemic has spurred "record" downloads of grocery shopping apps from Instacart, Walmart Grocery and Target, among others. Consequently, this marketplace is now rapidly expanding its data collection and digital marketing operations to generate significant revenues from advertisers and food and beverage brand sponsors.

So it's not a surprise that Instacart's new CEO comes from Facebook, and that the company has also just hired that social network's former head of advertising. Walmart, Kroger, Amazon and others are also adding adtech and data marketing experts. There has been a spate of announcements involving new grocery-focused alliances to expand the role digital data play in marketing and sales, including by Albertson's (involving Google) and Hy-Vee (also with Google). Albertson's (which includes the Safeway, Vons and Jewel-Osco divisions) deal with Google is designed to include "shoppable digital maps to make it easier for consumers to find and purchase products online; AI-powered conversation commerce" technologies for shopping, and "predictive grocery list building…." Similarly, Hy-Vee's work with the Google Cloud will help enable "predictive shopping carts," among other services. (Hy-Vee is also one of the supermarket chains participating in the USDA's online SNAP pilot project, raising questions about how its alliance with Google will affect the privacy and well-being of people enrolled in SNAP.)

All the data flowing into these companies, how it is being analyzed, its use by advertisers and product sponsors, and how it impacts the products we see and purchase should all be subject to scrutiny from consumer protection and privacy regulators.

A good example is Instacart. Its Instacart Advertising service allows brands to pay to become "featured products" and more. Featured Product ads are a form of paid search advertising. As Instacart tells its clients, if a consumer is searching for "chocolate ice cream" or just "ice cream," and you have bought such ads, "your product can appear as one of the first products in the search result." And even after "consumers place an order, we'll make some suggestions for last-minute additions to the order that the consumer might be interested in. Among these suggestions, the system can include Featured Product ads."

But it's all the data and connections to their consuming customers that is the real "secret sauce" for Instacart's ad-targeting and influence operations.
The company knows what's in and out of everyone's shopping carts, explaining that "Instacart tracks the source or 'path to cart' for all items purchased through Instacart marketplace, differentiating between three main groups—items bought from search results, browsing departments, aisles, and other discovery areas of Instacart, or from a list of previous purchases, which we call 'buy it again.'" As it explains, the "Instacart Ads solution" offers "a full suite of advertising products that animate the entire customer journey—from search through purchase." These include opportunities for marketers to become part of "the 'buy it again' lists where consumers are shown a list of products that they have bought on previous orders. These can act as reminders of meals or recipes they've made before and items they tend to stock up on. Our brand partners can leverage Instacart Ads products to appear on 'buy it again' lists so they stay top of mind with their customers and aid in retaining valuable customers."

The company's advertising blog discusses its use of what is increasingly the most valuable information a company can have—known as "first-party" data, where (allegedly) a consumer has given consent for all their information to be used. Instacart explains that this data "encompasses intent and purchase signals… from users signed up to an online grocery app," and that it can be leveraged by its brand and advertising clients. The online ordering company explains that its "rich and diverse data sources" include "access to millions of orders over time from over 600 retail partners, across 45,000 stores, involving millions of households…. [A] tremendously valuable data set…this data is updated every night…." Instacart analyzes this trove of data, including point-of-sale, transaction-log and out-of-stock information, to help it zero in on a key goal—enabling its advertisers to better understand and take advantage of what it terms "basket affinities." In a webinar, Instacart defined that concept (a simple sketch of this kind of analysis appears at the end of this post):

Basket affinities helps Instacart, and its brand partners, to create consumer 'types', discover product interactions, understand mission-dominant baskets, and identify trigger products — ultimately building a picture of how brands are bought online that we then share with our partners to build a better plan to acquire and retain valuable consumers. The term 'basket affinity' covers examining the one-to-one, group-to-group, or many-to-many relationships between products, as well as identifying consumer shopping missions and consumer profiles.

[We note that Instacart says it "aggregates and anonymizes" this information. However, given its ability to target individuals, we will rely on regulators to determine how well such personal information is actually handled.]

Instacart also touts its ability to track the actual impact of advertising on its site, noting that it is a "closed loop" service "where ads are served to consumers in the same 'space' any resulting sales occur." In a blog post on how it empowers its advertiser partners, the company notes that:

Our advertising products provide options that can put their products in front of consumers on every 'discovery surface' on the platform to catch their eye when they are in this mode. Plus, our data shows that the majority of add to carts from these discovery surfaces are the first time the consumer has added that item to their cart — meaning it's a great place for brands to acquire new customers.

As with other major data-driven digital marketers, Instacart has many grocery tech and ecommerce specialist partners, who provide brands and advertisers with a myriad of ways to promote, sell and otherwise "optimize" their products on its platform (such as Perpetua, Tinuiti, Skai and Commerce IQ, to name just a few).

The dramatic recent growth of what's called grocery tech is shaping the way consumers buy products and what prices they pay. With companies such as Amazon, Kroger, Walmart, Albertson's and Instacart now in essence Big Data-driven digital advertising companies, the public is being subjected to practices that warrant regulatory scrutiny, oversight and public policy. We call on the Federal Trade Commission and state regulators to act. Among the key questions: how, if at all, are racial, ethnic and income data being used to target consumers; are health data, including purchases of drugs and over-the-counter medications, being leveraged; and what measurement and performance information is being made available to partners and advertisers? We shouldn't have to "drop" our privacy and autonomy when we shop in the 21st century.
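As a rough illustration of the kind of "basket affinity" analysis described above, the sketch below computes a simple lift score for product pairs across a handful of made-up orders. It is a generic co-occurrence calculation, not Instacart's actual methodology; the product names and numbers are invented.

```python
from collections import Counter
from itertools import combinations


def basket_affinities(baskets):
    """Compute a simple 'lift' score for every pair of products:
    lift > 1 means the pair shows up in the same basket more often
    than independent purchasing would predict."""
    n = len(baskets)
    item_counts = Counter()
    pair_counts = Counter()
    for basket in baskets:
        items = set(basket)  # ignore duplicates within a single order
        item_counts.update(items)
        pair_counts.update(combinations(sorted(items), 2))
    lifts = {}
    for (a, b), together in pair_counts.items():
        p_a, p_b = item_counts[a] / n, item_counts[b] / n
        lifts[(a, b)] = (together / n) / (p_a * p_b)
    return lifts


orders = [
    ["salsa", "tortilla chips", "soda"],
    ["tortilla chips", "salsa"],
    ["soda", "candy bar"],
    ["tortilla chips", "salsa", "candy bar"],
]
for pair, lift in sorted(basket_affinities(orders).items(), key=lambda kv: -kv[1]):
    print(pair, round(lift, 2))
```

A pair with lift well above 1 (here, salsa and tortilla chips) is exactly the kind of "product interaction" an advertiser could pay to exploit with a well-timed suggestion or "featured product" placement.

-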
Blog
Surveillance Marketing Industry Claims Future of an “Open Internet” Requires Massive Data Gathering
New ways to take advantage of your “identity” raise privacy, consumer-protection and competition issues
The Trade Desk is a leading AdTech company, providing data-driven digital advertising services to major brands and agencies. It is also playing an outsized role in responding to the initiative led by Google to create new, allegedly "privacy-friendly" approaches to ad targeting, which include ending the use of what are called "third-party" cookies. These cookies enable the identification and tracking of individuals, and have been an essential building block for surveillance advertising since the dawn of the commercial Internet.

As we explained in a previous post about the so-called race to "end" the use of cookies, the online marketing industry is engaged in a full-throated effort to redefine how our privacy is conceptualized and privately governed. Pressure from regulators (such as the EU's GDPR) and growing consumer concern about privacy are among the reasons this is happening now. But the real motivation, in my view, is that the most powerful online ad companies and global brands (such as Google, Amazon and The Trade Desk) don't need these antiquated cookies anymore. They have so much of our information that they collect directly, plus what is available from countless partners (such as global brands). Additionally, they now have many new ways to determine who we are—our "identity"—including through the use of AI, machine learning and data clouds.

"Unified ID 2.0" is what The Trade Desk calls its approach to harvesting our identity information for advertising. Like Google, it claims to be respectful of data protection principles. Some of the most powerful companies in the U.S. are supporting the Unified ID standard, including Walmart, the Washington Post, P&G, Comcast, CBS, Home Depot, Oracle, and Nielsen.

But more than our privacy is at stake as data marketing giants fight over how best to reap the financial rewards of what is predicted eventually to become a trillion-dollar global ad marketplace. This debate is increasingly focused on the very future of the Internet itself, including how it is structured and governed. Only by ensuring that advertisers can continue to operate powerful data-gathering and ad-targeting systems, argues Trade Desk CEO Jeff Green, can the "Open Internet" be preserved. His argument, of course, is a digital déjà vu version of what media moguls have said in the U.S. since commercial radio in the 1930s: only with a full-blown, ad-supported (and regulation-free) electronic media system, whether broadcast radio, broadcast TV, or cable TV, could the U.S. be assured of a democratic and robust communications environment. (I was in the room at the Department of Commerce back in the mid-1990s when advertisers were actually worried that the Internet would be largely ad-free; the representative from P&G leaned over to tell me that they never would let that happen—and he was right.) Internet operations are highly influenced to serve the needs of advertisers, who have reworked its architecture to ensure we are all commercially surveilled. For decades, the online ad industry has continually expanded the ways it monetizes our behaviors, emotions, location and much more.

Last week, The Trade Desk unveiled its latest iteration using Unified ID 2.0—called Solimar (see video here).
Solimar uses "an artificial intelligence tool called Koa, which makes suggestions" to help ensure effective marketing campaigns. Reflecting the serial partnerships that provide marketers with a gold mine of information on any individual, The Trade Desk has a "Koa Identity Alliance," a "cross-device graph that incorporates leading and emerging ID solutions such as LiveRamp Identity Link, Oracle Cross Device, Tapad Device Graph, and Adbrain Device Graph." This system, the company says, creates an effective way for marketers to develop a data portrait of individual consumers.

It's useful to hear what companies such as The Trade Desk say as we evaluate claims that "big data" consumer surveillance operations are essential for a democratically structured Internet. In its most recent Annual Report, the company explains that "Through our self-service, cloud-based platform, ad buyers can create, manage, and optimize more expressive data-driven digital advertising campaigns across ad formats and channels, including display, video, audio, in-app, native and social, on a multitude of devices, such as computers, mobile devices, and connected TV ('CTV')…. We use the massive data captured by our platform to build predictive models around user characteristics, such as demographic, purchase intent or interest data. Data from our platform is continually fed back into these models, which enables them to improve over time as the use of our platform increases."

And here's how Koa's process is described in the trade publication Campaign Asia:

…clients can specify their target customer in the form of first-party or third-party data, which will serve as a seed audience that Koa will model from to provide recommendations. A data section provides multiple options for brands to upload first-party data including pixels, app data, and IP addresses directly into the platform, or import data from a third-party DMP or CDP. If a client chooses to onboard CRM data in the form of email addresses, these will automatically be converted into UID2s. Once converted, the platform will scan the UID2s to evaluate how many are 'active UID2s', which refers to how many of these users have been active across the programmatic universe in the past week. If the client chooses to act on those UID2s, they will be passed into the programmatic ecosystem to match with the publisher side, building the UID2 ecosystem in tandem. For advertisers that don't have first-party data… an audiences tab allows advertisers to tap into a marketplace of second- and third-party data so they can still use interest segments, purchase intent segments and demographics.

In other words, these systems have a ton of information about you. They can easily get even more data and engage in the kinds of surveillance advertising that regulators and consumer advocates around the world are demanding be stopped. There are now dozens of competing "identity solutions"—including those from Google, Amazon, data brokers, telephone companies, and others (see the visual at the bottom of the page here). (A rough sketch of how an email address is turned into such an identifier appears at the end of this post.)

The stakes here are significant—how will the Internet evolve in terms of privacy, and will its core "DNA" be ever-growing forms of surveillance and manipulation? How do we decide the most privacy-protective ways to ensure meaningful monetization of online content—and must funding for such programming be advertising-based only? In what ways are some of these identity proposals a way for powerful platforms such as Google to further expand their monopolistic control of the ad market? These and other questions require a thoughtful regulator in the U.S. to help sort them out and make recommendations that ensure the public truly benefits.

That's why it's time for the U.S. Federal Trade Commission to step in. The FTC should analyze these advertising-focused identity efforts; assess their risks and benefits; and address how to govern the collection and use of data where a person has supposedly given a brand or store permission to use it (known as "first-party" data). A key question, given today's technologies, is whether meaningful personal consent for data collection is even possible in a world driven by sophisticated, real-time AI systems that personalize content and ads. The commission should also investigate the role of data-mining clouds and other so-called "clean" rooms, where privacy is said to prevail despite their compilation of personal information for targeted advertising.

The time for private, special-interest (and conflicted) actors to determine the future of the Internet, and how our privacy is to be treated, is over.
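For readers unfamiliar with how an email address becomes an "identifier" like a UID2, here is a minimal sketch of the general approach: normalize the address, hash it, and encode the result as an opaque token that different companies can match against one another. This is an illustration of the concept only, assuming SHA-256 hashing of a lowercased address; it is not The Trade Desk's production pipeline, and the function names are made up.

```python
import base64
import hashlib


def normalize_email(email: str) -> str:
    """Simplified normalization: trim whitespace and lowercase. (Real-world
    rules also handle provider-specific conventions, e.g. dots in Gmail.)"""
    return email.strip().lower()


def email_to_token(email: str) -> str:
    """Hash a normalized email address into an opaque identifier that can be
    passed between ad-tech systems without exposing the raw address.
    Illustrative only: production systems layer salting, encryption and
    rotation on top of a raw hash like this."""
    digest = hashlib.sha256(normalize_email(email).encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")


print(email_to_token("  Jane.Doe@Example.com "))
print(email_to_token("jane.doe@example.com"))  # same token: the two records now "match"
```

The point for policymakers is that such a token is deterministic: anyone who holds the same email address can compute the same identifier, which is precisely what makes cross-company matching possible even after the "cookie" disappears.

-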
Press Release
Against surveillance-based advertising
CDD joins an international coalition of more than 50 NGOs and scholars calling on the EU to ban surveillance-based advertising in its Digital Services Act and on the U.S. to enact a federal digital privacy and civil rights law
International coalition calls for action against surveillance-based advertising

Every day, consumers are exposed to extensive commercial surveillance online. This leads to manipulation, fraud, discrimination and privacy violations. Information about what we like, our purchases, our mental and physical health, sexual orientation, location and political views is collected, combined and used under the guise of targeted advertising.

In a new report, the Norwegian Consumer Council (NCC) sheds light on the negative consequences that this commercial surveillance has on consumers and society. Together with [XXX] organizations and experts, the NCC is asking authorities on both sides of the Atlantic to consider a ban. In Europe, the upcoming Digital Services Act can lay the legal framework to do so. In the US, legislators should seize the opportunity to enact comprehensive privacy legislation that protects consumers.

"The collection and combination of information about us not only violates our right to privacy, but renders us vulnerable to manipulation, discrimination and fraud. This harms individuals and society as a whole," says Finn Myrstad, director of digital policy at the NCC.

In a Norwegian population survey conducted by YouGov on behalf of the NCC, consumers clearly state that they do not want commercial surveillance. Just one out of ten respondents was positive about commercial actors collecting personal information about them online, while only one out of five thought that ads based on personal information are acceptable.

"Most of us do not want to be spied on online, or receive ads based on tracking and profiling. These results mirror similar surveys from Europe and the United States, and should be a powerful signal to policymakers looking at how to better regulate the internet," Myrstad says.

Policymakers and civil society organisations on both sides of the Atlantic are increasingly standing up against these invasive practices. For example, the European Parliament and the European Data Protection Supervisor (EDPS) have already called for phasing out and banning surveillance-based advertising. A coalition of consumer and civil rights organizations in the United States has called for a similar ban.

Significant consequences

The NCC report "Time to ban surveillance-based advertising" exposes a variety of harmful consequences that surveillance-based advertising can have for individuals and society:

1. Manipulation: Companies with comprehensive and intimate knowledge about us can shape their messages in attempts to reach us when we are susceptible, for example to influence elections or to advertise weight loss products, unhealthy food or gambling.
2. Discrimination: The opacity and automation of surveillance-based advertising systems increase the risk of discrimination, for example by excluding consumers based on income, gender, race, ethnicity, sexual orientation or location, or by making certain consumers pay more for products or services.
3. Misinformation: The lack of control over where ads are shown can promote and finance false or malicious content. This also poses significant challenges to publishers and advertisers regarding revenue, reputational damage, and opaque supply chains.
4. Undermining competition: The surveillance business model favours companies that collect and process information across different services and platforms. This makes it difficult for smaller actors to compete, and negatively impacts companies that respect consumers' fundamental rights.
5. Security risks: When thousands of companies collect and process enormous amounts of personal data, the risk of identity theft, fraud and blackmail increases. NATO has described this data collection as a national security risk.
6. Privacy violations: The collection and use of personal data happens with little or no control, both by large companies and by companies that are unknown to most consumers. Consumers have no way to know what data is collected, who the information is shared with, and how it may be used.

"It is very difficult to justify the negative consequences of this system. A ban will contribute to a healthier marketplace that helps protect individuals and society," Myrstad comments.

Good alternatives

In the report, the NCC points to alternative digital advertising models that do not depend on the surveillance of consumers, and that give advertisers and publishers more oversight and control over where ads are displayed and which ads are being shown.

"It is possible to sell advertising space without basing it on intimate details about consumers. Solutions already exist to show ads in relevant contexts, or where consumers self-report what ads they want to see," Myrstad says.

"A ban on surveillance-based advertising would also pave the way for a more transparent advertising marketplace, diminishing the need to share large parts of ad revenue with third parties such as data brokers. A level playing field would contribute to giving advertisers and content providers more control, and let them keep a larger share of the revenue."

The coordinated push behind the report and letter illustrates the growing determination of consumer, digital rights, human rights and other civil society groups to end the widespread business model of spying on the public.

-
The Center for Digital Democracy and 23 other leading civil society groups sent a letter to President Biden today asking his Administration to ensure that any new transatlantic data transfer deal is coupled with the enactment of U.S. laws that reform government surveillance practices and provide comprehensive privacy protections.
-
Contact: Jeff Chester, CDD (jeff@democraticmedia.org; 202-494-7100); David Monahan, CCFC (david@commercialfreechildhood.org)

Advocates Ask FTC to Protect Youth From Manipulative "Dark Patterns" Online

BOSTON, MA and WASHINGTON, DC — May 28, 2021 — Two leading advocacy groups protecting children from predatory practices online filed comments today asking the FTC to create strong safeguards to ensure that internet "dark patterns" don't undermine children's well-being and privacy. Campaign for a Commercial-Free Childhood (CCFC) and the Center for Digital Democracy (CDD) cited leading authorities on the impacts of internet use on child development in their comments, prepared by the Communications & Technology Law Clinic at Georgetown University Law Center. These comments follow testimony given by representatives of both groups last month at an FTC workshop spearheaded by FTC Acting Chair Rebecca Slaughter.

CCFC and CDD say tech companies are preying upon vulnerable kids, capitalizing on their fear of missing out, their desire to be popular, and their inability to understand the value of misleading e-currencies, as well as putting them on an endless treadmill on their digital devices. They urged the FTC to take swift and strong action to protect children from the harms of dark patterns. Key takeaways include:

- A range of practices, often called "dark patterns," are pervasive in the digital marketplace; they manipulate children, are deceptive and unfair, and violate Section 5 of the FTC Act. They take advantage of a young person's psycho-social development, such as the need to engage with peers.
- The groups explained the ways children are vulnerable to manipulation and other harms from "dark patterns," including that they have "immature and developing executive functioning," which leads to impulsive behaviors.
- The FTC should prohibit the use of dark pattern practices in the children's marketplace; issue guidance to companies to ensure they do not develop or deploy such applications; and include new protections under its Children's Online Privacy Protection Act (COPPA) rulemaking authority to better regulate them. The commission must bring enforcement actions against developers using child-directed dark patterns.
- The FTC should prohibit the use of micro-transactions in apps serving children, including the buying of virtual currency to participate in game play.
- The FTC should adopt a definition of dark patterns that includes all "nudges" designed to use a range of behavioral techniques to foster desired responses from users.

The groups' filing was in response to the FTC's call for comments on the use of digital "dark patterns" — deceptive and unfair user interface designs — on websites and mobile apps.

Comment of Jeff Chester, Executive Director of the Center for Digital Democracy: "Dark patterns" are being used in the design of child-directed services to manipulate children into spending more time and money on games and other applications, as well as giving up more of their data. It's time the FTC acted to protect young people from being unfairly treated by online companies. The commission should issue rules that prohibit the use of these stealth tactics that target kids, and bring legal action against the companies promoting their use.
Comment of Josh Golin, Executive Director of the Campaign for a Commercial-Free Childhood: In their rush to monetize children, app and game developers are using dark patterns that take advantage of children's developmental vulnerabilities. The FTC has all the tools it needs to stop unethical, harmful, and illegal conduct. Doing so would be a huge step forward towards creating a healthy media environment for children.

Comment of Michael Rosenbloom, Staff Attorney & Clinical Teaching Fellow, Communications and Technology Law Clinic, Georgetown University Law Center: Software and game companies are using dark patterns to pressure children into playing more and paying more. Today, many apps and games that children play use dark patterns like arbitrary virtual currencies, encouragement from in-game characters, and ticking countdown timers to get children to spend more time and money on microtransactions. These dark patterns harm children and violate Section 5 of the FTC Act, and we urge the FTC to act to stop these practices.

###
-
Press Release
“Big Food” and “Big Data” Online Platforms Fueling Youth Obesity Crisis as Coronavirus Pandemic Rages
New Report Calls for Action to Address Saturation of Social Media, Gaming Platforms, and Streaming Video with Unhealthy Food and Beverage Products
Contact: Jeff Chester (202-494-7100)

For Immediate Release
Washington, DC, May 12, 2021

A report released today calls for federal and global action to check the growth of digital marketing of food and beverage products that targets children and teens online. Tech platforms especially popular with young people—including Facebook's Instagram, Amazon's Twitch, ByteDance's TikTok, and Google's YouTube—are working with giant food and beverage companies, such as Coca-Cola, KFC, Pepsi and McDonald's, to promote sugar-sweetened soda, energy drinks, candy, fast food, and other unhealthy products across social media, gaming, and streaming video. The report offers fresh analysis of, and insight into, the most recent industry practices, documenting how "Big Food" and "Big Tech" are using AI, machine learning, and other data-driven techniques to ensure that food marketing permeates all of the online cultural spaces where children and teenagers congregate. The pandemic has dramatically increased exposure to these aggressive new forms of marketing, further increasing young people's risks of becoming obese. Black and Brown youth are particularly vulnerable to the new online promotional strategies. Noting that concerns about youth obesity have recently fallen off the public radar in the U.S., the report calls for both international and domestic policies to rein in the power of the global technology and food industries. The report and an executive summary are available at the Center for Digital Democracy's (CDD) website, along with other background material.

"Our investigation found that there is a huge amount of marketing for unhealthy foods and beverages all throughout the youth digital media landscape, and it has been allowed to flourish with no government oversight," explained Kathryn C. Montgomery, PhD, the report's lead author, Professor Emerita at American University and CDD's Senior Strategist. "We know from decades of research that marketing of these products contributes to childhood obesity and related illnesses. And we've witnessed how so many children, teens, and young adults suffering from these conditions have been particularly vulnerable to the coronavirus. Both the technology industry and the food and beverage industry need to be held accountable for creating an online environment that undermines young people's health."

The report examines an array of Big Data strategies and AdTech tools used by the food industry, focusing on three major sectors of digital culture that attract large numbers of young people: the so-called "influencer economy," gaming and esports platforms, and the rapidly expanding streaming and online video industry.

Dozens of digital campaigns by major food and beverage companies, many of which have won prestigious ad industry awards, illustrate some of the latest trends and techniques in digital marketing:

The use of influencers is one of the primary ways that marketers reach and engage children and teens. Campaigns are designed to weave branded material "seamlessly into the daily narratives" shared on social media. Children and teens are particularly susceptible to influencer marketing, which taps into their psycho-social development.
Marketing researchers closely study how young people become emotionally attached to celebrities and other influencers through "parasocial" relationships.

McDonald's enlisted rapper Travis Scott to promote the "Travis Scott Meal" to young people, featuring "a medium Sprite, a quarter pounder with bacon, and fries with barbecue sauce." The campaign was so successful that some restaurants in the chain sold out of supplies within days of its launch. This and other celebrity endorsements have helped boost McDonald's stock price, generated a trove of valuable consumer data, and triggered enormous publicity across social media.

Food and beverage brands have flocked to Facebook-owned Instagram, which is considered one of the best ways to reach and engage teens. According to industry research, nearly all influencer campaigns (93%) are conducted on Instagram. Cheetos' Chester Cheetah is now an "Instagram creator," telling his own "stories" along with millions of other users on the platform. One Facebook report, "Quenching Today's Thirsts: How Consumers Find and Choose Drinks," found that "64% of people who drink carbonated beverages use Instagram for drinks-related activities, such as sharing or liking posts and commenting on drinks content," and more than a third of them report following or "liking" soft drink "brands, hashtags, or influencer posts."

The online gaming space generates more revenue than TV, film, or music, and attracts viewers and players, including many young people, who are "highly engaged for a considerable length of time." Multiplayer online battle arena (MOBA) and first-person shooter games are considered among the best marketing environments, offering a wide range of techniques for "monetization," including in-game advertising, sponsorship, product placement, use of influencers, and even "branded games" created by advertisers. Twitch, the leading gaming platform, owned by Amazon, has become an especially important venue for food and beverage marketers. Online gamers and fans are considered prime targets for snack, soft drink, and fast food brands, all products that lend themselves to uninterrupted game play and spectatorship.

PepsiCo's energy drink, MTN DEW Amp Game Fuel, is specifically "designed with gamers in mind." To attract influencers, it was featured on Twitch's "Bounty Board," a one-stop-shopping tool for "streamers," enabling them to accept paid sponsorship (or "bounties") from brands that want to reach the millions of gamers and their followers.

Red Bull recently partnered with Ninja, "the most popular gaming influencer in the world with over 13 million followers on Twitch, over 21 million YouTube subscribers, and another 13 million followers on Instagram."

Dr. Pepper featured the faces of players of the popular Fortnite game on its bottles, with an announcement on Twitter that this campaign resulted in "the most engaged tweet" the soft-drink company had ever experienced.

Wendy's partnered with "five of the biggest Twitch streamers," as well as food delivery app Uber Eats, to launch its "Never Stop Gaming" menu, with the promise of "five days of non-stop gaming, delicious meal combos and exclusive prizes." Branded meals were created for each of the five streamers, who offered their fans the opportunity to order directly through their Twitch channels and have the food delivered to their doors.

One of the newest marketing frontiers is streaming and online video, which have experienced a boost in viewership during the pandemic.
Young people are avid users, accessing video on their mobile devices, gaming consoles, personal computers, and online connections to their TV sets.

Concerned that teens "are drinking less soda," Coca-Cola's Fanta brand developed a comprehensive media campaign to trigger "an ongoing conversation with teen consumers through digital platforms" by creating four videos based on the brand's most popular flavors and targeting youth on YouTube, Hulu, Roku, Crackle, and other online video platforms. "From a convenience store dripping with orange flavor and its own DJ cat, to an 8-bit videogame-ified pizza parlor, the digital films transport fans to parallel universes of their favorite hangout spots, made more extraordinary and fantastic once a Fanta is opened."

New video ad formats allow virtual brand images to be inserted into the content and tailored to specific viewers. "Where one customer sees a Coca-Cola on the table," explained a marketing executive, "the other sees green tea. Where one customer sees a bag of chips, another sees a muesli bar… in the exact same scene."

The major technology platforms are facilitating and profiting from the marketing of unhealthy food and beverage products. Facebook's internal "creative shop" has helped Coca-Cola, PepsiCo, Unilever, Nestle, and hundreds of other brands develop global marketing initiatives to promote their products across its platform. The division specializes in "building data-driven advertising campaigns, branded content, branded entertainment, content creation, brand management, social design," and similar efforts.

Google regularly provides a showcase for companies such as Pepsi, McDonald's, and Mondelez to tout their joint success promoting their respective products throughout the world. For example, Pepsi explained in a "Think with Google" post that it used Google's "Director's Mix" video-ad personalization technology to further what it calls its ability to "understand the consumer's DNA," meaning their "needs, context, and location in the shopping journey." Pepsi could leverage Google's marketing tools to advance its goal of combining "insights with storytelling and drive personalized experiences at scale."

Hershey's has been working closely with Amazon to market its candy products via streaming video, as well as through Amazon's ecommerce marketplace. In a case study published online, Amazon explained that "…as viewing consumption began to fragment, the brand [Hershey's] realized it was no longer able to reach its audience with linear TV alone." Amazon gave Hershey's access to its storehouse of data so the candy company could market its products on Amazon's streaming services, such as IMDbTV.
Amazon allowed Hershey's to use Amazon's data to ensure the candy brands would "be positioned to essentially 'win' search in that category on Amazon and end up as the first result…." Hershey's also made use of "impulse buy" strategies on the Amazon platform, including "cart intercepts," which prompt a customer to "add in snacks as the last step in their online shopping trip, mimicking the way someone might browse for candy during the checkout at a physical store."

Some of the largest food and beverage corporations—including Coca-Cola, McDonald's, and Pepsi—have, in effect, transformed themselves into Big Data businesses. Coca-Cola operates over 40 interconnected social media monitoring facilities worldwide, which use AI to follow customers, analyze their online conversations, and track their behaviors. PepsiCo has developed a "fully addressable consumer database" (called "Consumer DNA") that enables it to "see a full 360 degree view of our consumers." McDonald's made a significant investment in Plexure, a "mobile engagement" company specializing in giving fast food restaurants the ability "to build rich consumer profiles" and leverage the data "to provide deeply personalized offers and content that increase average transaction value" and help generate other revenues. One of its specialties is designing personalized messaging that triggers the release of the brain chemical dopamine.

The report raises particularly strong concerns about the impact of all these practices on youth of color, noting that food and beverage marketers "are appropriating some of the most powerful 'multicultural' icons of youth pop culture and enlisting these celebrities in marketing campaigns for sodas, 'branded' fast-food meals, and caffeine-infused energy drinks." These promotions can "compound health risks for young Blacks and Hispanics," subjecting them to "multiple layers of vulnerability, reinforcing existing patterns of health disparity that many of them experience."

"U.S. companies are infecting the world's young people with invasive, stealth, and incessant digital marketing for junk food," commented Lori Dorfman, DrPH, director of the Berkeley Media Studies Group, one of CDD's partners on the project. "And they are targeting Black and Brown youth because they know kids of color are cultural trendsetters," she explained. "Big Food and Big Tech run away with the profits after trampling the health of children, youth, and families."

The Center for Digital Democracy and its allies are calling for a comprehensive and ambitious set of policies to limit the marketing of unhealthy food and beverages to young people, arguing that U.S. policymakers must work with international health and youth advocacy organizations to develop a coordinated agenda for regulating these two powerful global industries. As the report explains, other governments in the UK, Europe, Canada, and Latin America have already developed policies for limiting or banning the promotion of foods that are high in fat, sugar, and salt, including on digital platforms.
Yet the United States has continued to rely on an outdated self-regulatory model that does not take into account the full spectrum of Big Data and AdTech practices in today's digital marketplace, places too much responsibility on parents, and offers only minimal protections for the youngest children.

"Industry practices have become so sophisticated, widespread, and entangled that only a comprehensive public policy approach will be able to produce a healthier digital environment for young people," explained Katharina Kopp, PhD, CDD's Deputy Director and Director of Research.

The report lays out an eight-point, research-based policy framework:

1. Protections for adolescents as well as young children.
2. Uniform, global, science-based nutritional criteria.
3. Restrictions on brand promotion.
4. Limits on the collection and use of data.
5. Prohibition of manipulative and unfair marketing techniques and design features.
6. Market research protections for children and teens.
7. Elimination of digital racial discrimination.
8. Transparency, accountability, and enforcement.

###

-
Reports
“Big Food” and “Big Data” Online Platforms Fueling Youth Obesity Crisis as Coronavirus Pandemic Rages
New Report Calls for Action to Address Saturation of Social Media, Gaming Platforms, and Streaming Video with Unhealthy Food and Beverage Products
The coronavirus pandemic triggered a dramatic increase in online use. Children and teens whose schools had closed relied on YouTube for educational videos, attended virtual classes on Zoom and Google Classroom, and flocked to TikTok, Snapchat, and Instagram for entertainment and social interaction. This constant immersion in digital culture has exposed them to a steady flow of marketing for fast foods, soft drinks, and other unhealthy products, much of it under the radar of parents and teachers. Food and beverage companies have made digital media ground zero for their youth promotion efforts, employing a growing spectrum of new strategies and high-tech tools to penetrate every aspect of young people's lives.

Our latest report, Big Food, Big Tech, and the Global Childhood Obesity Pandemic, takes an in-depth look at this issue. Below we outline just three of the many tactics the food industry is using to market unhealthy products to children and teens in digital settings.

1. Influencer marketing - Travis Scott & McDonald's

McDonald's enlisted rapper Travis Scott to promote the "Travis Scott Meal" to young people, featuring "a medium Sprite, a quarter pounder with bacon, and fries with barbecue sauce." The campaign was so successful that some restaurants in the chain sold out of supplies within days of its launch. This and other celebrity endorsements have helped boost McDonald's stock price, generated a trove of valuable consumer data, and triggered enormous publicity across social media.

2. Gaming Platforms - MTN DEW Amp Game Fuel - Twitch

PepsiCo's energy drink, MTN DEW Amp Game Fuel, is specifically "designed with gamers in mind." Each 16 oz can of MTN DEW Amp Game Fuel delivers a powerful "vitamin-charged and caffeine-boosted" formula, whose ingredients of high fructose corn syrup, grape juice concentrate, caffeine, and assorted herbs "have been shown to improve accuracy and alertness." The can itself features a "no-slip grip that mirrors the sensory design of accessories and hardware in gaming." It is also "easier to open and allows for more uninterrupted game play." To attract influencers, the product was featured on Twitch's "Bounty Board," a one-stop-shopping tool for "streamers," enabling them to accept paid sponsorship (or "bounties") from brands that want to reach the millions of gamers and their followers.

3. Streaming and Digital Video - "It's a Thing" Campaign - Fanta

Concerned that teens were "drinking less soda," Coca-Cola's Fanta brand developed a comprehensive media campaign to trigger "an ongoing conversation with teen consumers through digital platforms" by creating four videos based on the brand's most popular flavors and targeting youth on YouTube, Hulu, Roku, Crackle, and other online video platforms. "From a convenience store dripping with orange flavor and its own DJ cat, to an 8-bit videogame-ified pizza parlor, the digital films transport fans to parallel universes of their favorite hangout spots, made more extraordinary and fantastic once a Fanta is opened." The campaign, which was aimed at Black and Brown teens, also included the use of Snapchat's augmented-reality technology to create immersive experiences, as well as promotional efforts on Facebook-owned Instagram, which generated more than half a million followers.

-
To watch the full FTC Dark Patterns Workshop online, visit the FTC website.
-
Press Release
Advocates say Google Play continues to disregard children’s privacy law and urge FTC to act
Contact: Jeff Chester, CDD, jeff@democraticmedia.org, 202-494-7100; David Monahan, CCFC, david@commercialfreechildhood.org

Advocates say Google Play continues to disregard children's privacy law and urge FTC to act

BOSTON, MA and WASHINGTON, DC — March 31, 2021 — Today, advocacy groups Campaign for a Commercial-Free Childhood (CCFC) and the Center for Digital Democracy (CDD) called on the Federal Trade Commission (FTC) to investigate Google's promotion of apps that violate the Children's Online Privacy Protection Act (COPPA). In December 2018, CCFC and CDD led a coalition of 22 consumer and health advocacy groups in asking the FTC to investigate these same practices. Since then, Google has made changes to the Play Store, but the advocates say these changes fail to address the core problem: Google is certifying as safe and appropriate for children apps that violate COPPA and put children at risk. Recent studies found that a significant number of apps in Google Play violate COPPA by collecting and sharing children's personal information without obtaining parental consent. For instance, a JAMA Pediatrics study found that 67% of apps used by children aged 5 and under were transmitting personal identifiers to third parties.

Comment of Angela Campbell, Chair of the Board of Directors, Campaign for a Commercial-Free Childhood, and Professor Emeritus, Communications & Technology Law Clinic, Georgetown University Law Center:

"Parents reasonably expect that Google Play Store apps designated as 'Teacher approved' or appropriate for children under age 13 comply with the law protecting children's privacy. But far too often, that is not the case. The FTC failed to act when this problem was brought to its attention over two years ago. Because children today are spending even more time using mobile apps, the FTC must hold Google accountable for violating children's privacy."

Comment of Jeff Chester, Executive Director of the Center for Digital Democracy:

"The Federal Trade Commission must swiftly act to stop Google's ongoing disregard of the privacy and well-being of children. For too long, the Commission has allowed Google's app store, and the data marketing practices that are its foundation, to operate without enforcing the federal law that is designed to protect young people under 13. With children using apps more than ever as a consequence of the pandemic, the FTC should enforce the law and ensure Google engages with kids and families in a responsible manner."

###

-
Blog
Contextual Advertising—Now Driven by AI and Machine Learning—Requires Regulatory Review for Privacy and Marketing Fairness
Contextual Advertising—Now Driven by AI and Machine Learning—Requires Regulatory Review for Privacy and Marketing Fairness

What's known as contextual advertising is receiving a big boost from marketers and some policymakers, who claim that it provides a more privacy-friendly alternative to the dominant global surveillance-based "behavioral" marketing model. Google's plans to eliminate cookies and other third-party trackers used for much of online ad delivery are also spurring greater interest in contextual marketing, which is being touted especially as safe for children.

Until several years ago, contextual ads meant that you would see an ad based on the content of the page you were on—so there might be ads for restaurants on web pages about food, or cars would be pitched if you were reading about road trips. The ad tech involved was basic: keywords found on the page would help trigger an ad.

Today's version, known as "contextual intelligence," "Contextual 2.0," or Google's "Advanced Contextual," is distinct. Contextual marketing now uses artificial intelligence (AI) and machine learning technologies, including computer vision and natural language processing, to provide "targeting precision." AI-based techniques, the industry explains, allow marketers to read "between the lines" of online content. Contextual advertising is now capable of comprehending "the holistic and subtle meaning of all text and imagery," enabling predictions and decisions on ad design and placement by "leveraging deep neural networks" and "proprietary data sets." AI is used to decipher the meaning of visuals "on a massive scale, enabling advertisers to create much more sophisticated links between the content and the advertising." Computer vision technologies identify every visual element, and "natural language processing" minutely classifies all the concepts found on each page. Millions of "rules" are applied in an instant, using software that helps advertisers take advantage of the "multiple meanings" that may be found on a page.

For example, one leading contextual marketing company, GumGum, explains that its "Verity" algorithmic and AI-based service "combines natural language processing with computer vision technology to execute a multi-layered reading process. First, it finds the meat of the article on the page, which means differentiating it from any sidebar and header ads. Next, it parses the body text, headlines, image captions with natural language processing; at the same time, it uses computer vision to parse the main visuals.… [and then] blends its textual and visual analysis into one cohesive report, which it then sends off to an adserver," which determines whether "Verity's report on a given page matches its advertisers' campaign criteria."

Machine learning also enables contextual intelligence services to make predictions about the best ways to structure and place marketing content, taking advantage of real-time events and the ways consumers interact with content. It enables segmentation of audience targets to be fine-tuned. It also incorporates a number of traditional behavioral marketing concepts, gathering a range of data "signals" that ensure more effective targeting.
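The multi-stage page analysis described above can be pictured, in highly simplified form, as a pipeline: extract the main article text, classify its topics, tag the imagery, and match the combined report against an advertiser's campaign criteria. The sketch below is a hypothetical illustration only, not GumGum's Verity or any vendor's actual system; where real services use deep neural networks and computer vision models, this toy version substitutes keyword lookups, and all names and categories in it are invented.

```python
# Hypothetical sketch of a "Contextual 2.0"-style pipeline.
# Real systems use NLP and computer-vision models; this toy version
# substitutes simple keyword lookups to show the overall flow.
from dataclasses import dataclass, field

# Invented topic lexicon standing in for a machine-learning text classifier.
TOPIC_KEYWORDS = {
    "food": {"recipe", "restaurant", "snack", "pizza"},
    "gaming": {"esports", "streamer", "console", "fortnite"},
    "travel": {"road trip", "hotel", "flight"},
}

@dataclass
class PageReport:
    """Combined textual and visual analysis of one page."""
    topics: set[str] = field(default_factory=set)
    image_tags: set[str] = field(default_factory=set)

def analyze_text(article_text: str) -> set[str]:
    """Stand-in for natural language processing of the article body."""
    text = article_text.lower()
    return {topic for topic, words in TOPIC_KEYWORDS.items()
            if any(w in text for w in words)}

def analyze_images(image_labels: list[str]) -> set[str]:
    """Stand-in for computer vision; assumes labels were already detected."""
    return {label.lower() for label in image_labels}

def build_report(article_text: str, image_labels: list[str]) -> PageReport:
    """Blend textual and visual analysis into one report, ready to hand off
    to an ad server (as the quoted vendor description puts it)."""
    return PageReport(topics=analyze_text(article_text),
                      image_tags=analyze_images(image_labels))

def matches_campaign(report: PageReport, wanted_topics: set[str],
                     blocked_tags: set[str]) -> bool:
    """Ad-server side: does the page fit the advertiser's criteria?"""
    return bool(report.topics & wanted_topics) and not (report.image_tags & blocked_tags)

if __name__ == "__main__":
    report = build_report(
        "A streamer reviews the best late-night pizza near esports arenas.",
        image_labels=["Pizza", "Controller"],
    )
    print(matches_campaign(report, wanted_topics={"food"}, blocked_tags={"alcohol"}))
```

Even in this toy form, the design point is visible: the "privacy-friendly" pitch rests on analyzing the page rather than the person, yet the output is still a machine-readable profile of whatever the person is reading and viewing at that moment.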
There are advanced measurement technologies and custom methods to influence what marketers term our "customer journey," with ad-buying structured in much the same way as behavioral, data-driven approaches: "bids" are made to target—and retarget—the most desirable people. And, of course, once a contextual ad "works" and people interact with it, additional personal and other information is gathered.

Contextual advertising, estimated to generate $412 billion in spending by 2025, requires a thorough review by the FTC and data regulators. Regulators, privacy advocates, and others must carefully examine how AI and machine-learning marketing systems operate, including Contextual 2.0. We should not accept marketers' claims that it is innocuous and privacy-appropriate. We need to pull back the digital curtain and carefully examine the data and impact of contextual systems.

-
Blog
The Whole World will Still be Watching You: Google & Digital Marketing Industry “Death-of-the-Cookie” Privacy Initiatives Require Scrutiny from Public Policymakers
The Whole World will Still be Watching You: Google & Digital Marketing Industry "Death-of-the-Cookie" Privacy Initiatives Require Scrutiny from Public Policymakers

Jeff Chester

One would think, listening to the language used by Google, Facebook, and other ad and data companies to discuss the construction and future of privacy protection, that they are playing some kind of word game. We hear terms such as "TURTLEDOVE," "FLEDGE," "SPARROW," and "FLoC." Such claims should be viewed with skepticism, however. Although some reports make it appear that Google and its online marketing compatriots propose to reduce data gathering and tracking, we believe that their primary goal is still to perfect the vast surveillance system they have already established.

A major data marketing industry effort is now underway to eliminate—or diminish—the role of the tracking software known as "third-party" cookies. Cookies were developed in the very earliest days of the commercial World Wide Web, and have served as the foundational digital tether connecting us to a sprawling and sophisticated data-mining complex. Through cookies—and later mobile device IDs and other "persistent" identifiers—Google, Facebook, Amazon, Coca-Cola, and practically everyone else have been able to surveil and target us—and our communities. Tracking cookies have literally helped engineer a "sweet spot" for online marketers, enabling them to embed spies in our web browsers that help them understand our digital behaviors and activities and then take action based on that knowledge.

Some of these trackers—placed and used by a myriad of data marketing companies on various websites—are referred to as "third-party" cookies, to distinguish them from what online marketers claim, with a straight face, are more acceptable forms of tracking software, known as "first-party" cookies. According to the tortured online-advertiser explanation, "first-party" trackers are placed by websites on which you have affirmatively given permission to be tracked while you are on that site. These "we-have-your-permission-to-use" first-party cookies would increasingly become the foundation for advances in digital tracking and targeting. Please raise your hand if you believe you have informed Google or Amazon, to cite the two most egregious examples, that they can surveil what you do via these first-party cookies, including engaging in an analysis of your actions, background, interests, and more.

What the online ad business has developed behind its digital curtain—such as various ways to trigger your response, measure your emotions, knit together information on device use, and employ machine learning to predict your behaviors (to name just a few of the methods currently in use)—has played a fundamental role in personal data gathering. Yet these and other practices—which have an enormous impact on privacy, autonomy, fairness, and so many other aspects of our lives—will not be affected by the "death-of-the-cookie" transition currently underway. On the contrary, we believe a case can be made that the opposite is true. Rather than strengthening data safeguards, we are seeing unaccountable platforms such as Google become even more dominant, as so-called "privacy preserving" systems actually enable enhanced data profiling.
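To make the first-party/third-party distinction discussed above concrete: a cookie is generally treated as "third party" when the domain setting or reading it differs from the site the person is actually visiting. The sketch below is a simplified, hypothetical illustration of that check, not how any particular browser implements it; real browsers consult the Public Suffix List to determine registrable domains, which this toy helper only crudely approximates.

```python
# Simplified illustration of classifying a tracker request as
# first-party or third-party. Hypothetical helper; real browsers use
# the Public Suffix List rather than this naive two-label heuristic.
from urllib.parse import urlsplit

def registrable_domain(url: str) -> str:
    """Approximate the registrable domain (e.g. 'example.com').
    Naive heuristic: keep the last two labels of the hostname."""
    host = urlsplit(url).hostname or ""
    parts = host.split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

def is_third_party(top_level_page: str, request_url: str) -> bool:
    """True when the resource (and any cookie it sets) belongs to a
    different registrable domain than the page the user is visiting."""
    return registrable_domain(top_level_page) != registrable_domain(request_url)

if __name__ == "__main__":
    page = "https://www.example-news.com/article"
    # An embedded ad-tech script served from another domain: third-party context.
    print(is_third_party(page, "https://tracker.example-adtech.com/pixel.gif"))  # True
    # The publisher's own analytics endpoint: first-party context.
    print(is_third_party(page, "https://metrics.example-news.com/beacon"))       # False
```

The point of the sketch is that the distinction is purely structural: a "first-party" tracker run on the publisher's own domain can collect exactly the same behavioral detail, which is why the industry's pivot to first-party data deserves the scrutiny argued for here.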
In a moment, we will briefly discuss some of the leading online marketing industry work underway to redefine privacy. But the motivation for this post is to sound the alarm that we should not—once again—allow powerful commercial interests to determine the evolving structure of our online lives. The digital data industry has no serious track record of protecting the public. Indeed, it was the failure of regulators to rein in this industry over the years that led to the current crisis. In the process, the growth of hate speech, the explosion of disinformation, and the highly concentrated control over online communications and commerce—to name only a few consequences—now pose serious challenges to the fate of democracies worldwide. Google, Facebook, and the others should never be relied on to defer their principal pursuit of monetization out of respect for any democratic ideal—let alone consumer protection and privacy.

One clue to the likely end result of the current industry effort is how they frame it. It isn't about democracy, the end of commercial surveillance, or strengthening human rights. It's about how best to preserve what they call the "Open Internet." Some leading data marketers believe we have all consented to a trade-off: that in exchange for "free" content we've agreed to a pact enabling them to eavesdrop on everything we do—and then make all that information available to anyone who can pay for it—primarily advertisers. Despite its rhetoric about curbing tracking cookies, the online marketing business intends to continue to colonize our devices and monitor our online experiences.

This debate, then, is really about who can decide—and under what terms—the fate of the Internet's architecture, including how it operationalizes privacy—at least in the U.S. It raises questions that deserve a better answer than the "industry-knows-best" approach we have allowed so far. That's why we call on the Biden Administration, the Federal Trade Commission (FTC), and the Congress to investigate these proposed new approaches to data use, and to ensure that the result is truly privacy protective, supports democratic governance, and incorporates mechanisms of oversight and accountability.

Here's a brief review of some of the key developments, which illustrate the digital "tug-of-war" over the several industry proposals involving cookies and tracking. In 2019, Google announced that it would end the role of what's known as "third-party cookies." Google has created a "privacy sandbox" where it has researched various methods it claims will protect privacy, especially for people who rely on its Chrome browser. It is exploring "ways in which a browser can group together people with similar browsing habits, so that ad tech companies can observe the habits of large groups instead of the activity of individuals. Ad targeting could then be partly based on what group the person falls into." This is its "Federated Learning of Cohorts" (FLoC) approach, in which people are placed into "clusters" based on the use of "machine learning algorithms" that analyze the data generated from the sites a person visited and their content. Google says these clusters would "each represent thousands of people," and that the "input features" used to generate the targeting algorithm, such as our "web history," would be stored on our browsers.
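The cohort mechanism described above can be illustrated with a deliberately simplified sketch: the browser reduces the list of visited sites to a short locality-sensitive fingerprint, so people with similar browsing histories tend to share a cohort ID, and only that ID (not the history itself) is exposed for ad targeting. This is a hypothetical toy, not Google's actual FLoC implementation; the hashing scheme, bit length, and optional bit-flipping step below are invented for illustration, with the bit flips merely gesturing at the "noise" and anonymization methods discussed just after the sketch.

```python
# Toy illustration of browser-side cohort assignment in the spirit of
# "grouping people with similar browsing habits." NOT Google's FLoC
# algorithm; the fingerprinting scheme and noise step are invented.
import hashlib
import random

COHORT_BITS = 16  # invented fingerprint length

def _domain_bits(domain: str) -> list[int]:
    """Hash one visited domain into a fixed-length bit pattern."""
    digest = hashlib.sha256(domain.encode()).digest()
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(COHORT_BITS)]

def cohort_id(visited_domains: list[str], flip_probability: float = 0.0) -> int:
    """Derive a cohort ID from browsing history kept on the device.
    Similar histories tend to produce similar IDs (a SimHash-style
    majority vote per bit); optional random bit flips stand in for
    added 'noise' before the ID is exposed to advertisers."""
    votes = [0] * COHORT_BITS
    for domain in visited_domains:
        for i, bit in enumerate(_domain_bits(domain)):
            votes[i] += 1 if bit else -1
    bits = [1 if v > 0 else 0 for v in votes]
    bits = [b ^ 1 if random.random() < flip_probability else b for b in bits]
    return sum(bit << i for i, bit in enumerate(bits))

if __name__ == "__main__":
    history_a = ["recipes.example", "sneakers.example", "news.example"]
    history_b = ["recipes.example", "sneakers.example", "sports.example"]
    # Overlapping histories yield mostly overlapping fingerprint bits.
    print(cohort_id(history_a), cohort_id(history_b))
```

Even in this reduced form, the concern raised in this post is visible: the cohort ID is still a behavioral label derived from what a person reads and does online, and it is handed to the same ad-targeting machinery as before.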
There would be other techniques deployed as well, adding "noise" to the data sets and applying various "anonymization methods" so that the exposure of a person's individual information is limited. Google's TURTLEDOVE initiative is designed to enable more personalized targeting, with web browsers used to help ensure our data is available for the real-time auctions that sell us to advertisers. The theory is that by allowing the data to remain within our devices, and by using clusters of people for targeting, our privacy is protected. But the goal—having sufficient data and effective digital marketing techniques—is still at the heart of the process. Google recently reported that "FLoC can provide an effective replacement signal for third-party cookies. Our tests of FLoC to reach in-market and affinity Google Audiences show that advertisers can expect to see at least 95% of the conversions per dollar spent when compared to cookie-based advertising."

Google's 2019 announcement caused an uproar in the digital marketing business. It was also perceived (correctly, in my view) as a Google power grab. Google operates basically as a "Walled Garden" and has so much data that it doesn't really need third-party cookies to home in on its targets. The potential "death of the cookie" ignited a number of initiatives from the Interactive Advertising Bureau, as well as from competitors and major advertisers, who feared that Google's plan would undermine their lucrative business model. These include groups such as the Partnership for Addressable Media (PRAM), whose 400 members include Mastercard, Comcast/NBCU, P&G, the Association of National Advertisers, the IAB, and other ad and data companies. PRAM issued a request to review proposals that would ensure the data marketing industry continues to thrive while becoming less reliant on third-party cookies. The Trade Desk, a leading online marketing company, is playing a key role here. It submitted its "Unified ID 2.0" plan to PRAM, saying that it "represents an alternative to third party cookies that improves consumer transparency, privacy and control, while preserving the value exchange of relevant advertising across channels and devices." There are also a number of other proposals that claim both to protect privacy and to take advantage of our identity, such as various collaborative data-sharing efforts.

The Internet standards body, the World Wide Web Consortium (W3C), has created a sort of neutral meeting ground where the industry can discuss proposals and potentially seek some sort of unified approach. The stated rationale of the [get ready for this statement] "Improving Web Advertising Business Group" is "to provide monetization opportunities that support the open web while balancing the needs of publishers and the advertisers that fund them, even when their interests do not align, with improvements to protect people from the individual and societal impacts of tracking content consumption over time." Its participants are another "Who's Who" of data-driven marketing, including Google, AT&T, Verizon, the NYT, the IAB, Apple, Group M, Axel Springer, Facebook, Amazon, the Washington Post, and Criteo. DuckDuckGo is also a member (and both Google and Facebook have multiple representatives in this group).
The sole NGO listed as a member is the Center for Democracy and Technology. The W3C ad business group has published a number of documents about the digital marketing business that illustrate why the future of privacy, data collection, and targeting should be a public—and not just a data industry—concern. In an explainer on digital advertising, the group makes the paradigm so many are working to defend very clear: "Marketing's goal can be boiled down to the '5 Rights': Right Message to the Right Person at the Right Time in the Right Channel and for the Right Reason. Achieving this goal in the context of traditional marketing (print, live television, billboards, et al) is impossible. In digital realm, however, not only can marketers achieve this goal, they can prove it happened. This proof is what enables marketing activities to continue, and is important for modern marketers to justify their advertising dollars, which ultimately finance the publishers sponsoring the underlying content being monetized."

Nothing I've read says it better. Through a quarter century of work to perfect the harvesting of our identity for profit, the digital ad industry has created a formidable complex of data clouds, real-time ad auctions, cross-device tracking tools, and advertising techniques that further commodify our lives, shred our privacy, and transform the Internet into a hall of mirrors that can amplify our fears and splinter democratic norms. It's people, of course, who decide how the Internet operates—especially those from companies such as Google, Facebook, and Amazon, and those working for trade groups such as the IAB. We must not let them decide how cookies may or may not be used, or what new data standard should be adopted by the most powerful corporate interests on the planet to profit from our "identity." It's time for action by the FTC and Congress.

Part 1.

(1) For the uninitiated, TURTLEDOVE stands for "Two Uncorrelated Requests, Then Locally-Executed Decision On Victory"; FLEDGE is short for "First Locally-Executed Decision over Groups Experiment"; SPARROW is "Secure Private Advertising Remotely Run On Webserver"; and FLoC is "Federated Learning of Cohorts."

(2) In January 2021, the UK's Competition and Markets Authority (CMA) opened an investigation into Google's Privacy Sandbox and cookie plans.

-
Press Release
Press Statement RE FTC Announcement on New Study into the Data Collection Practices of Nine Major Tech Platforms and Companies
Press Statement, Center for Digital Democracy (CDD) and Campaign for a Commercial-Free Childhood (CCFC), 12-14-20

Today, the Federal Trade Commission announced it will use its 6(b) authority to launch a major new study into the data collection practices of nine major tech platforms and companies: ByteDance (TikTok), Amazon, Discord, Facebook, Reddit, Snap, Twitter, WhatsApp, and YouTube. The Commission's study includes a section on children and teens. In December 2019, the Campaign for a Commercial-Free Childhood (CCFC), the Center for Digital Democracy (CDD), and their attorneys at Georgetown Law's Institute for Public Representation urged the Commission to use its 6(b) authority to better understand how tech companies collect and use data from children. Twenty-seven consumer and child advocacy organizations joined that request. Below are statements from CDD and CCFC on today's announcement.

Josh Golin, Executive Director, CCFC: "We are extremely pleased that the FTC will be taking a hard look at how platforms like TikTok, Snap, and YouTube collect and use young people's data. These 6(b) studies will provide a much-needed window into the opaque data practices that have a profound impact on young people's wellbeing. This much-needed study will not only provide critical public education, but lay the groundwork for evidence-based policies that protect young people's privacy and vulnerabilities when they use online services to connect, learn, and play."

Jeff Chester, Executive Director, CDD: "The FTC is finally holding the social media and online video giants accountable, by requiring leading companies to reveal how they stealthily gather and use information that impacts our privacy and autonomy. It is especially important that the commission is concerned about also protecting teens—who are the targets of a sophisticated and pervasive marketing system designed to influence their behaviors for monetization purposes."

For questions, please contact: jeff@democraticmedia.org

See also: https://www.markey.senate.gov/news/press-releases/senator-markey-stateme...

-
General Comment submission: Children's rights in relation to the digital environment

• Professor Amandine Garde, Law & Non-Communicable Research Unit, School of Law and Social Justice, University of Liverpool
• Dr Mimi Tatlow-Golden, Senior Lecturer, Developmental Psychology and Childhood, The Open University
• Dr Emma Boyland, Senior Lecturer, Psychology, University of Liverpool
• Professor Emerita Kathryn C. Montgomery, School of Communication, American University; Senior Strategist, Center for Digital Democracy
• Jeff Chester, Center for Digital Democracy
• Josh Golin, Campaign for a Commercial Free Childhood
• Kaja Lund-Iversen and Ailo Krogh Ravna, Norwegian Consumer Council
• Pedro Hartung and Marina Reina, Alana Institute
• Dr Marine Friant-Perrot, University of Nantes
• Professor Emerita Wenche Barth Eide, University of Oslo; Coordinator, FoHRC
• Professor Liv Elin Torheim, Oslo Metropolitan University
• Professor Alberto Alemanno, HEC Paris Business School and The Good Lobby
• Marianne Hammer, Norwegian Cancer Society
• Nikolai Pushkarev, European Public Health Alliance

13 November 2020

Dear Members of the Committee on the Rights of the Child,

We very much welcome the Committee's Draft General Comment No. 25 on children's rights in relation to the digital environment (the Draft) and are grateful for the opportunity to comment. We are a group of leading scholars and NGO experts on youth, digital media, child rights, and public health who work to raise awareness and promote regulation of marketing (particularly of harmful goods, services, and brands) to which children are exposed. We argue that such marketing infringes many of the rights enshrined in the UN Convention on the Rights of the Child (CRC) and other international instruments and should be strictly regulated.

Based on our collective expertise, we call on the Committee to recognise more explicitly the fundamentally transformed nature of marketing in new digital environments, the harms stemming therefrom, and the corresponding need to protect children from targeting and exposure. Without such recognition, children will not be able to fully enjoy the many opportunities for learning, civic participation, creativity, and communication that the digital environment offers for their development and the fulfilment of their rights. Facilitating children's participation in this environment should not come at the price of violations of any children's rights.

Before making specific comments, we wish to highlight our support for much of this Draft. In particular, we strongly support the provisions in the following paragraphs of the General Comment: 11, 13, 14, 52, 54, 62, 63, 64, 67, 72, 74, 75, 88, 112, and 119. We also note concerns regarding provisions that will require mandatory age verification (e.g., paragraphs 56, 70, 120, 122). We call on the Committee to consider provisions ensuring that such verification is applied proportionately, as it will certainly have the effect of increasing the processing of children's personal data, which should not happen to the detriment of the best interests of the child.

The rest of this contribution, following the structure of the Draft, proposes specific additions/modifications (underlined, in italics), with brief explanations (in boxes). Numbers refer to original paragraphs in the Draft; XX indicates a new proposed paragraph. Hoping these comments are useful for finalising the General Comment, we remain at your disposal for further information.
Yours faithfully,

Amandine Garde and Mimi Tatlow-Golden

On behalf of those listed above

[See full comments in attached document]