CDD

Blog

  • Meta’s Virtual Reality-based Marketing Apparatus Poses Risks to Teens and Others

Whether it’s called Facebook or Meta, or known by its Instagram, WhatsApp, Messenger or Reels services, the company has always seen children and teens as a key target. The recent announcement opening up the Horizon Worlds metaverse to teens, despite calls to first ensure it will be a safe and healthy experience, is lifted from Facebook’s well-worn political playbook: make whatever promises are necessary to temporarily quell political opposition to its monetization plans. Meta’s priorities are inextricably linked to its quarterly shareholder revenue reports. Selling our “real” and “virtual” selves to marketers is its only real source of revenue, a higher priority than any self-regulatory scheme Meta offers claiming to protect children and teens.

Meta’s focus on creating more immersive, AI/VR, metaverse-connected experiences for advertisers should serve as a wake-up call for regulators. Meta has unleashed a digital environment designed to trigger the “engagement” of young people with marketing, data collection and commercially driven manipulation. Action is required to ensure that young people are treated fairly, and not exposed to data surveillance, threats to their health and other harms.

Here are a few recent developments that should be part of any regulatory review of Meta and young people:

Expansion of “immersive” video and advertising-embedded applications: Meta tells marketers it provides “seamless video experiences that are immersive and fueled by discovery,” including the “exciting opportunity for advertisers” with its short-video “Reels” system. Through virtual reality (VR) and augmented reality (AR) technologies, we are exposed to advertising content designed to have a greater impact by influencing our subconscious and emotional processes.
With AR ads, Meta tells marketers, they can “create immersive experiences, encourage people to virtually try out your products and inspire people to interact with your brand,” including encouraging “people who interact with your ad… [to] take photos or videos to share their experience on Facebook Feed, on Facebook and Instagram Stories or in a message on Instagram.” Meta has also been researching the use of AR and VR to ensure that its ad and marketing messaging becomes even more compelling.

Expanded integration of ads throughout Meta applications: Meta allows advertisers to “turn organic image and video posts into ads in Ads Manager on Facebook Reels,” including adding a “call-to-action” feature. It permits marketers to “boost their Reels within the Instagram app to turn them into ads.” It enables marketers to add a “Send Message” button to their Facebook Reels ads that gives “people an option to start a conversation in WhatsApp right from the ad.” This follows last year’s release of Meta’s “Boosted Reels” product, allowing Instagram Reels to be turned into ads as well.

“Ads Manager” “optimization goals” that are inappropriate when used for targeting young people: These include “impressions, reach, daily unique reach, link clicks and offsite conversions.” “Ad placements” to target teens are available for the “Facebook Marketplace, Facebook Feed, … Facebook Stories, Facebook in-stream video (mobile), Instagram Feed, Instagram Explore, Instagram Stories, Facebook Reels and Instagram Reels.”

The use of metrics for delivering and measuring the impact of augmented reality ads: As Meta explains, it uses:

Instant Experience View Time: The average total time in seconds that people spent viewing an Instant Experience. An Instant Experience can include videos, images, products from a catalog, an augmented reality effect and more.
For an augmented reality ad, this metric counts the average time people spent viewing your augmented reality effect after they tapped your ad.

Instant Experience Clicks to Open: The number of clicks on your ad that open an Instant Experience. For an augmented reality ad, this metric counts the number of times people tapped your ad to open your augmented reality effect.

Instant Experience Outbound Clicks: The number of clicks on links in an Instant Experience that take people off Meta technologies. For an augmented reality ad, this metric counts the number of times people tapped the call-to-action button in your augmented reality effect.

Effect Share: The number of times someone shared an image or video that used an augmented reality effect from your ad. Shares can be to Facebook or Instagram Stories, to Facebook Feed or as a message on Instagram.

These ad effects can be designed and tested through Meta’s “Spark Hub” and ad manager. Such VR and other measurement systems require regulators to analyze their role and impact on youth.

Expanded use of machine learning/AI to promote shopping via Advantage+: Last year, Meta rolled out “Advantage+ shopping campaigns, Meta’s machine-learning capabilities [that] save advertisers time and effort while creating and managing campaigns. For example, advertisers can set up a single Advantage+ shopping campaign, and the machine learning-powered automation automatically combines prospecting and retargeting audiences, selects numerous ad creative and messaging variations, and then optimizes for the best-performing ads.” While Meta says that Advantage+ isn’t used to target teens, it deploys it for “Gen Z” audiences.
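To make concrete what these measurement definitions imply about the data collected on each viewer, here is a minimal sketch of how the four “Instant Experience” metrics could be aggregated from raw interaction events. The event names, field names, and schema are hypothetical illustrations, not Meta’s actual internal API.

```python
# Hypothetical sketch: aggregating the four "Instant Experience" metrics
# described above from a stream of per-user interaction events.
# Event types and fields are assumptions for illustration only.
from collections import defaultdict

def aggregate_metrics(events):
    """Aggregate per-ad metrics from interaction events.

    Each event is a dict such as:
      {"ad_id": "a", "type": "open"}                     # tapped ad, effect opened
      {"ad_id": "a", "type": "view", "seconds": 12.5}    # time spent in the effect
      {"ad_id": "a", "type": "outbound_click"}           # tapped call-to-action
      {"ad_id": "a", "type": "effect_share"}             # shared effect image/video
    """
    raw = defaultdict(lambda: {"opens": 0, "view_seconds": [],
                               "outbound": 0, "shares": 0})
    for e in events:
        m = raw[e["ad_id"]]
        if e["type"] == "open":
            m["opens"] += 1
        elif e["type"] == "view":
            m["view_seconds"].append(e["seconds"])
        elif e["type"] == "outbound_click":
            m["outbound"] += 1
        elif e["type"] == "effect_share":
            m["shares"] += 1

    # "Instant Experience View Time" is reported as an average per view.
    return {
        ad: {
            "clicks_to_open": m["opens"],
            "avg_view_time": (sum(m["view_seconds"]) / len(m["view_seconds"])
                              if m["view_seconds"] else 0.0),
            "outbound_clicks": m["outbound"],
            "effect_shares": m["shares"],
        }
        for ad, m in raw.items()
    }
```

Even this toy version shows that computing the advertised averages and counts presupposes logging every tap, share, and second of attention per ad, which is the surveillance layer the essay argues regulators should examine.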
How Meta uses machine learning/AI to target families should also be on the regulatory agenda.

Immersive advertising will shape the near-term evolution of marketing, where brands will be “world agnostic and transcend the limitations of the current physical and digital space.” The Advertising Research Foundation (ARF) predicts that “in the next decade, AR and VR hardware and software will reach ubiquitous status.” One estimate is that by 2030, the metaverse will “generate up to $5 trillion in value.”

In the meantime, Meta’s playbook in response to calls from regulators and advocates is to promise some safeguards, often focused on encouraging the use of what it calls “safety tools.” But these tools do not ensure that teens aren’t reached and influenced by AI- and VR-driven marketing technologies and applications. Meta also knows that today, ad targeting is less important than so-called “discovery,” where its purposeful melding of video content, AR effects, social interactions and influencer marketing will snare young people into its marketing “conversion” net.

Last week, Mark Zuckerberg told investors his vision of bringing “AI agents to billions of people,” as well as into his “metaverse,” which will be populated by “avatars, objects, worlds, and codes to tie” online and offline together. There will be, as previously reported, an AI-driven “discovery engine” that will “increase the amount of suggested content to users.”

These developments reflect just a few of the AI- and VR-marketing-driven changes to the Meta system. They illustrate why responsible regulators and advocates must be at the forefront of holding this company accountable, especially with regard to its youth-targeting apparatus.

Please also read Fairplay for Kids’ account of Meta’s long history of failing to protect children online.
metateensaivr0523fin.pdf
    Jeff Chester
  • FTC Commercial Surveillance Filing from CDD focuses on how pharma & other health marketers target consumers, patients, prescribers

“Acute Myeloid Lymphoma,” “ADHD,” “Brain Cancer,” “High Cholesterol,” “Lung Cancer,” “Overweight,” “Pregnancy,” “Rheumatoid Arthritis,” “Stroke,” and “Thyroid Cancer.” These are just a handful of the digitally targetable medical condition “audience segments” available to surveillance advertisers.

While health and medical condition marketers—including pharmaceutical companies and drug store chains—may claim that such commercial data-driven marketing is “privacy-compliant,” in truth it reveals how vulnerable U.S. consumers are to having some of their most personal and sensitive data gathered, analyzed, and used for targeted digital advertising. It also shows how the latest tactics leveraging data to track and target the public—including “identity graphs,” artificial intelligence, surveilling connected or smart TV devices, and a focus on so-called permission-based “first-party data”—are now broadly deployed by advertisers, including pharma and medical marketers. Behind the use of these serious medical condition “segments” is a far-reaching commercial surveillance complex including giant platforms, retailers, “adtech” firms, data brokers, marketing and “experience” clouds, device manufacturers (e.g., streaming), neuromarketing and consumer research testing entities, “identity” curation specialists and advertisers...

We submit as representative of today’s commercial surveillance complex the treatment of medical condition and health data. It incorporates many of the features that can answer the questions the commission seeks.
There is widespread data gathering on individuals and communities, across their devices and applications; techniques to solicit information are intrusive, non-transparent, and beyond the meaningful scope of consumer control; and these methods come at a cost to a person’s privacy and pocketbook, with potentially significant consequences for their welfare. There are also societal impacts here, for the country’s public health infrastructure as well as for the expenditures the government must make to cover the costs of prescription drugs and other medical services...

Health and pharma marketers have adopted the latest data-driven surveillance-marketing tactics—including targeting on all of a consumer’s devices (which today also includes streaming video delivered by smart TVs); the integration of actual consumer purchase data for more robust targeting profiles; leveraging programmatic ad platforms; working with a myriad of data marketing partners; using machine learning to generate insights for granular consumer targeting; conducting robust measurement to help refine subsequent re-targeting; and taking advantage of new ways to identify and reach individuals—such as “identity graphs”—across devices.

[complete filing for the FTC's Commercial Surveillance rulemaking attached]

cddsurveillancehealthftc112122.pdf
    Jeff Chester
  • Commercial Surveillance expands via the "Big" Screen in the Home

Televisions now view and analyze us—the programs we watch, the shows we click on to consider or save, and the content reflected on the “glass” of our screens. On “smart” or connected TVs, streaming TV applications have been engineered to fully deliver the forces of commercial surveillance. Operating stealthily inside digital television sets and streaming video devices is an array of sophisticated “adtech” software. These technologies enable programmers, advertisers and even TV set manufacturers to build profiles used to generate data-driven, tailored ads for specific individuals or households. These developments raise important questions for those concerned about the transparency and regulation of political advertising in the United States.

Also known as “OTT” (“over-the-top,” since the video signal is delivered without relying on traditional set-top cable TV boxes), the streaming TV industry incorporates the same online advertising techniques employed by other digital marketers. This includes harvesting a cornucopia of information on viewers through alliances with leading data brokers. More than 80 percent of Americans now use some form of streaming or smart TV-connected video service. Given such penetration, it is no surprise that streaming TV advertising is playing an important role in the upcoming midterm elections. And streaming TV will be an especially critical channel for campaigns vying for voters in 2024.

Unlike political advertising on broadcast television or much of cable TV, which is generally transmitted broadly to a defined geographic market area, “addressable” streaming video ads appear in programs advertisers know you actually watch (using technologies such as dynamic ad insertion). Messaging for these ads can also be fine-tuned as a campaign progresses, to make the message more relevant to the intended viewer.
For example, if you watch a political ad and then sign up to receive campaign literature, the next TV commercial from a candidate or PAC can be crafted to reflect that action. Or, if your data profile says you are concerned about the costs of healthcare, you may see a different pitch than your next-door neighbor who has other interests. Given the abundance of data available on households, including demographic details such as race and ethnicity, there will also be finely tuned pitches aimed at distinct subcultures, produced in multiple languages.

An estimated $1.4 billion will be spent on streaming political ads for the midterms (part of an overall $9 billion in ad expenditures). With more people “cutting the cord” by signing up for cheaper, ad-supported streaming services, advances in TV technologies enabling personalized data-driven ad targeting, and the integration of streaming TV as a key component of the overall online marketing apparatus, it is evident that the TV business has changed. Even what’s considered traditional broadcasting has been transformed by digital ad technologies. That’s why it’s time to enact policy safeguards to ensure integrity, fairness, transparency and privacy for political advertising on streaming TV.

Today, streaming TV political ads already combine information from voter records with online and offline consumer profile data in order to generate highly targeted messages. By harvesting information related to a person’s race and ethnicity, finances, health concerns, behavior, geolocation, and overall digital media use, marketers can deliver ads tied to our needs and interests. In light of this unprecedented marketing power and precision, new regulations are needed to protect consumer privacy and civic discourse alike. In addition to ensuring voter privacy, so personal data can’t be as readily used as it is today, the messaging and construction of streaming political ads must also be accountable.
Merely requiring the disclosure of who is buying these ads is insufficient. The U.S. should enact a set of rules to ensure that the tens of thousands of one-to-one streaming TV ads don’t promote misleading or false claims, or engage in voter suppression and other forms of manipulation. Journalists and campaign watchdogs must have the ability to review and analyze ads, and political campaigns need to identify how they were constructed—including the information provided by data brokers and how a potential voter’s viewing behaviors were analyzed (such as with increasingly sophisticated machine learning and artificial intelligence algorithms). For example, data companies such as Acxiom, Experian, Ninth Decimal, Catalina and LiveRamp help fuel the digital video advertising surveillance apparatus.

Campaign-spending reform advocates should be concerned. Making targeted streaming TV advertising as effective as possible will likely require serious amounts of money—for the data, analytics, marketing and distribution. Increasingly, key gatekeepers control much of the streaming TV landscape, and purchasing rights to target the most “desirable” people could face obstacles. For example, smart TV makers such as LG, Roku, Vizio and Samsung have developed their own exclusive streaming advertising marketplaces. Their smart TVs use what’s called ACR—“automated content recognition”—to collect data that enables them to analyze what appears on our screens, “second by second.” An “exclusive partnership to bring premium OTT inventory to political clients” was recently announced by LG and cable giant Altice’s ad division. This partnership will enable qualifying political campaigns to access 30 million households via smart TVs, as well as the ability to reach millions of other screens in households known to Altice.
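To illustrate what “second by second” ACR monitoring means in practice, here is a deliberately simplified sketch of how such a system matches what is on a screen against a reference library. Real ACR systems use perceptual audio/video fingerprints rather than exact hashes; the hash, the library entries, and the function names below are all illustrative assumptions, not any vendor’s actual implementation.

```python
# Simplified sketch of ACR ("automated content recognition"):
# each second, fingerprint the on-screen frame and look it up in a
# reference library of known content. A plain hash stands in for a
# perceptual fingerprint, purely for illustration.
import hashlib

def fingerprint(frame_bytes):
    """Stand-in fingerprint: a short hash of the raw frame data."""
    return hashlib.sha256(frame_bytes).hexdigest()[:16]

# Hypothetical reference library mapping fingerprints to known content.
reference_library = {
    fingerprint(b"frame-of-news-show"): "Evening News",
    fingerprint(b"frame-of-campaign-ad"): "Candidate X Ad",
}

def recognize(screen_frames):
    """Return a second-by-second log of what the viewer's screen showed."""
    log = []
    for second, frame in enumerate(screen_frames):
        content = reference_library.get(fingerprint(frame), "unknown")
        log.append((second, content))
    return log
```

The point of the sketch is that the output is a continuous viewing log tied to a specific television, which is exactly the kind of household-level behavioral record the essay argues should be subject to privacy rules.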
Connected TVs also provide online marketers with what is increasingly viewed as essential for contemporary digital advertising—access to a person’s actual identity information (called “first-party” data). Streaming TV companies hope to gain permission to use subscriber information in many other ways. This practice illustrates why the Federal Trade Commission’s (FTC) current initiative to regulate commercial surveillance, now in its initial stage, is so important. Many of the critical issues involving streaming political advertising could be addressed through strong rules on privacy and online consumer protection. For example, there is no reason why any marketer should be able to so easily obtain all the information used to target us, such as our ethnicity, income, purchase history, and education, to name only a few of the variables available for sale. Nor should the FTC allow online marketers to engage in unfair and largely stealth tactics when creating digital ads—including the use of neuroscience to test messages to ensure they respond directly to our subconscious. The Federal Communications Commission (FCC), which has largely failed to address 21st-century video issues, should conduct its own inquiry “in the public interest.” There is also a role here for the states, reflecting their laws on campaign advertising as well as ensuring the privacy of streaming TV viewers.

This is precisely the time for policies on streaming video, as the industry becomes much more reliant on advertising and data collection. Dozens of new ad-supported streaming TV networks are emerging—known as FAST channels (Free Ad-Supported TV)—which offer a slate of scheduled shows with commercials. Netflix and Disney+, as well as Amazon, have adopted or will soon adopt ad-supported viewing. There are also coordinated industry-wide efforts, involving advertisers, programmers and device companies, to perfect ways to more efficiently target and track streaming viewers. Without regulation, the U.S.
streaming TV system will be a “rerun” of what we historically experienced with cable TV: dashed expectations for a medium that could have been truly diverse, rather than a monopoly, and could have offered both programmers and viewers greater opportunities for creative expression and public service. Only those with the economic means will be able to afford to “opt out” of the advertising and some of the data surveillance on streaming networks. And political campaigns will be able to reach individual voters without worrying about privacy or the honesty of their messaging. Both the FTC and FCC, and Congress if it can muster the will, have an opportunity to make streaming TV a well-regulated, important channel for democracy. Now is the time for policymakers to tune in.

***

This essay was originally published by Tech Policy Press.

Support for the Center for Digital Democracy’s review of the streaming video market is provided by the Rose Foundation for Communities and the Environment.
    Jeff Chester
  • CDD Comments to FTC for "Stealth" Marketing Inquiry

The Center for Digital Democracy (CDD) urges the FTC to develop and implement a set of policies designed to protect minors under 18 from being subjected to a host of pervasive, sophisticated and data-driven digital marketing practices. Children and teens are targeted by an integrated set of online marketing operations that are manipulative, unfair, invasive and can be especially harmful to their mental and physical health. The commission should make abundantly clear at the forthcoming October workshop that it understands that the many problems generated by contemporary digital marketing to youth transcend narrow categories such as “stealth advertising” and “blurred content.” Nor should it propose “disclosures” as a serious remedy, given the ways advertising is designed using data science, biometrics, social relationships and other tactics. Much of today’s commercially supported online system is purposefully developed to operate as “stealth”—from product development, to deployment, to targeting, tracking and measurement. Age-based models of children’s cognitive capacities to deal with advertising, largely based on pre-digital (especially TV) research, simply don’t correspond to the methods used today to market to young people. CDD calls on the commission to acknowledge that children and teenagers have been swept into a far-reaching commercial surveillance apparatus.

The commission should propose a range of safeguards to protect young people from the current “wild west” of omnichannel marketing directed at them.
These safeguards should address, for example: the role that market research and testing of child- and teen-directed commercial applications and messaging play in the development of advertising; how neuromarketing practices designed to leverage a young person’s emotions and subconscious are used to deliver “implicit persuasion”; the integration by marketers and platforms of “immersive” applications, including augmented and virtual reality, designed to imprint brand and other commercial messages; the array of influencer-based strategies, including the extensive infrastructure used by platforms and advertisers to deliver, track and measure their impact; the integration of online marketing with Internet of Things objects, including product packaging and the role of QR codes (experiential marketing), and digital out-of-home advertising screens; as well as contemporary data marketing operations that use machine learning and artificial intelligence to open up new ways for advertisers to reach young people online. AI services increasingly deliver personalized content online, further automating the advertising process to respond in real time.

It is also long overdue for the FTC to investigate and address how online marketing targets youth of color, who are subjected to a variety of advertising practices little examined by privacy and other regulators.

The FTC should use all its authority and power to stop data-driven surveillance marketing to young people under 18; end the role sponsored influencers play; enact rules designed to protect the online privacy of teens 13-17, who are now subjected to ongoing tracking by marketers; and propose policies to redress the core methods employed by digital advertisers and online platforms to lure both children and teens. For more than 20 years, CDD and its allies have urged the FTC to address the ways digital marketing has undermined consumer protection and privacy, especially for children and adolescents.
Since the earliest years of the commercial internet, online marketers have focused on young people, both for the revenues they deliver and to secure loyalty from what the commercial marketing industry referred to as “native” users. The threat to their privacy, as well as to their security and well-being, led to the complaint our predecessor organization filed in 1996, which spurred the passage of the Children’s Online Privacy Protection Act (COPPA) in 1998. COPPA has played a modest role in protecting some younger children from experiencing the totality of the commercial surveillance marketing system. However, persistent failures of the commission to enforce COPPA; the lack of protections for adolescents (despite decades-long calls by advocates for the agency to act on this issue); and a risk-averse approach to addressing the methods employed by the digital advertising industry, even when applied to young people, have created ongoing threats to privacy, consumer protection and public health. In this regard, we urge the commission to closely review the comments submitted in this proceeding by our colleague Fairplay and allies. We are pleased Fairplay supports these comments.

If the FTC is to confront how the forces of commercial digital surveillance impact the general public, the building blocks to help do so can be found in this proceeding. Young people are exposed to the same unaccountable forces that are everywhere online: a largely invisible, ubiquitous, and machine-intelligence-driven system that tracks and assesses our every move, using an array of direct and indirect techniques to influence behaviors. If done correctly, this proceeding can help inform a larger policy blueprint for the safeguards needed—for young people and for everyone else.

The commission should start by reviewing how digital marketing and data-gathering advertising applications are “baked in” at the earliest stages of online content and device development.
These design and testing practices have a direct impact on young people. Interactive advertising standards groups assess and certify a host of approved ad formats, including for gaming, mobile, native advertising, and streaming video. Data practices for digital advertising, including the ways ads are delivered through behavioral/programmatic surveillance engines, as well as their measurement, are developed through collaborative work involving trade organizations and leading companies. Platforms such as Meta, as well as ad agencies, adtech companies, and brands, also have their own variations of these widely adopted formats and approaches. The industry-operated standards process for identifying new methods of digital advertising, including the real-world deployment of applications such as “playable” ads or the ways advertisers can change their personalized messaging in real time, has never been seriously investigated by the commission. A review of the companies involved shows that many are engaged in digital marketing to young people.

Another critical building block of contemporary digital marketing to address when dealing with youth-directed advertising is the role of “engagement.” As far back as 2006, the Interactive Advertising Bureau (IAB) recognized that to effectively secure the involvement of individuals with marketing communications, at both the subconscious and conscious levels, it was necessary to define and measure the concept of engagement.
The IAB initially defined engagement as “turning on a prospect to a brand idea enhanced by the surrounding context.” By 2012, there were more elaborate definitions identifying “three major forms of engagement… cognitive, physical and emotional.” A set of corresponding metrics, or measurement tools, was used, including those tracking “attention” (“awareness, interest, intention”); emotional and motor functioning identified through biometrics (“heart palpitations, pupil dilation, eye tracking”); and omnipresent tracking of online behaviors (“viewability and dwell time, user initiated interaction, clicks, conversions, video play rate, game play”). Today, research and corresponding implementation strategies for engagement are an ongoing feature of the surveillance-marketing economy. This includes conducting research and implementing data-driven and other ad strategies targeting children—known as “Generation Alpha,” children 11 and younger—and teens—“Generation Z.”

We will briefly highlight some crucial areas this proceeding should address:

Marketing and product research on children and adolescents: An extensive system designed to ensure that commercial online content, including advertising and marketing, effectively solicits the interest and participation of young people is a core feature of the surveillance economy. A host of companies are engaged in multi-dimensional market research—including panels, labs, platforms, streaming media companies, studios and networks—that has a direct impact on the methods used to advertise and market to youth. CDD believes that such product testing, which can rely on a range of measures designed to promote “implicit persuasion,” should be considered an unfair practice generally. Since CDD and U.S.
PIRG first urged the commission to investigate neuromarketing more than a decade ago, this practice has evolved in ways that enable it to play a greater role in influencing how content and advertising are delivered to young people.

For example, MediaScience (which began as the Disney Media and Advertising Lab) serves major clients including Disney, Google, Warner Media, TikTok, Paramount, Fox and Mars. It conducts research for platforms and brands using such tools as “neurometrics (skin conductivity and heart rate), eye tracking, facial coding, and EEGs,” among others, which assess a person’s responses across devices. Research is also conducted outside of the lab setting, such as directly through a subject’s “actual Facebook feed.” It has a panel of 80,000 households in the U.S., where it can deliver digital testing applications using a “variety of experimental designs… facilitated in the comfort of people’s homes.” The company operates a “Kids” and “Teens” media research panel. Emblematic of the far-reaching research conducted by platforms, agencies and brands, in 2021 TikTok’s “Marketing Science team” commissioned MediaScience to use neuromarketing research to test “strong brand recall and positive sentiment across various view durations.” The findings indicated that “ads on TikTok see strong brand recall regardless of view duration…. Regardless of how long an ad stays on screen, TikTok draws early attention and physiological engagement in the first few seconds.”

NBCUniversal is one of the companies leveraging the growing field of “emotional analytics” to help advance advertising for streaming and other video outlets.
Comcast’s NBCU is using “facial coding and eye-tracking AI to learn an audience’s emotional response to a specific ad.” Candy company Mars just won a “Best Use of Artificial Intelligence” award for its “Agile Creative Expertise” (ACE) tool, which “tracks attentional and emotional response to digital video ads.” Mars is partnering with neuromarketer Realeyes to “measure how audience’s attention levels respond as they view Mars' ads. Knowing what captures and retains attention or even what causes distraction, generated intelligence that enabled Mars to optimize the creative itself or the selection of the best performing ads across platforms including TikTok, Facebook, Instagram and YouTube.” TikTok, Meta/Facebook, and Google have all used a variety of neuromarketing measures. The Neuromarketing Science and Business Association (NMSBA) includes many of the leading companies in this field as members. There is also an “Attention Council” within the digital marketing industry to help advance these practices, involving Microsoft, Mars, Coca-Cola, AB InBev, and others.

A commercial research infrastructure provides a steady drumbeat of insights so that marketers can better target young people on digital devices. Children’s streaming video company WildBrain, for example, partnered with Ipsos for its 2021 research report, “The Streaming Generation,” which explained that “Generation Alpha [is] the most influential digital generation yet…. They have never known a world without digital devices at their fingertips, and for Generation Alpha (Gen A), these tech-first habits are now a defining aspect of their daily lives.” More than 2,000 U.S. parents and guardians of children 2-12 were interviewed for the study, which found that “digital advertising to Gen A influences the purchasing decisions of their parents….
Their purchasing choices, for everything from toys to the family car, are heavily influenced by the content kids are watching and the ads they see.” The report explains that among the “most popular requests” are toys, digital games, clothing, tech products and “in-game currencies” for Roblox and Fortnite.

Determining the levels of “brand love” felt by children and teens, such as through the use of “Kidfinity” and “Teenfinity” scores—“proprietary measures of brand awareness, popularity and love”—is a service regularly provided to advertisers. Other market researchers, such as Beano Studios, offer a “COPPA-compliant” “Beano Brain Omnibus” website that, through “games, quizzes, and bespoke questions” for children and teens, “allows brands to access answers to their burning questions.” These tools help marketers better identify, for example, the sites—such as TikTok—where young people spend time. Among the other services Beano provides, which reflect many other market-research companies’ capabilities, are “Real-time UX/UI and content testing—in the moment, digital experience exploration and evaluation of brands websites and apps with kids and teens in strawman, beta or live stages,” and “Beano at home—observing and speaking to kids in their own homes. Learning how and what content they watch.”

Adtech and other data marketing applications: In order to conduct any “stealth” advertising inquiry, the FTC should review the operations of contemporary “Big Data”-driven ad systems that can impact young people. For example, Disney has an extensive and cutting-edge programmatic apparatus called DRAX (Disney Real-Time Ad Exchange) that is delivering thousands of video-based campaigns. DRAX supports “Disney Select,” a “suite of ad tech solutions, providing access to an extensive library of first-party segments that span the Disney portfolio, including streaming, entertainment and sports properties…. Continuously refined and enhanced based on the countless ways Disney connects with consumers daily.
Millions of data inputs validated through data science…. Advertisers can reach their intended audiences by tapping into Disney’s proprietary Audience Graph, which unifies Disney’s first party data and audience modeling capabilities….” As of March 2022, Disney Select contained more than 1,800 “audience segments built from more than 100,000 audience attributes that fuel Disney’s audience graph.” According to Disney Advertising, its “Audience Graph” includes 100 million households, 160 million connected TV devices and 190 million device IDs, which enables modeling to target households and families. Children and teens are a core audience for Disney, and millions of their households receive its digital advertising. Many other leading youth-directed brands have developed extensive internal adtech applications designed to deliver ongoing and personalized campaigns. For example, Pepsi, Coca-Cola, McDonald’s, and Mondelez have in-house capabilities and extensive partnerships that create targeted marketing to youth and others. The ways that “Big Data” analytics affect marketing, especially how insights can be used to target youth, should be reviewed. Marketers will tell the FTC that they target only those 18 and over, but examining their actual targets, and requesting the child-related brand-safety data they collect, should give the agency a robust response to such claims. New methods that leverage a person’s informational details to target them, especially without “cookies,” require the FTC to address how these techniques are being used to market to children and teens. This review should also be extended to “contextual” advertising, since that method has been transformed through the use of machine learning and other advanced tactics—called “Contextual 2.0.” Targeting youth of color: Black, Hispanic, Asian-American and other “multicultural” youth, as the ad industry has termed it, are key targets for digital advertising. 
An array of research, techniques, and services is focused on these young people, whose behaviors online are closely monitored by advertisers. A recent case study to consider is the McDonald’s U.S. advertising campaign designed to reverse its “decline with multicultural youth.” The goal of its campaign involving musician Travis Scott was to “drive penetration by bringing younger, multicultural customers to the brands… and drive immediate behavior too.” As a case study explains, “To attract multicultural youth, a brand… must have cultural cachet. Traditional marketing doesn’t work with them. They don’t watch cable TV; they live online and on social media, and if you are not present there you’re out of sight, out of mind.” It’s valuable to identify some of the elements involved in this case, which are emblematic of the integrated set of marketing and advertising practices that accompany so many campaigns aimed at young people. These included working with a celebrity/influencer able to “galvanize youth and activate pop culture”; offering “coveted content—keepsakes and experiences to fuel the star’s fanbase, driving participation and sales”; employing digital strategies through a proprietary (and data-collecting) “app to bring fans something extra and drive digital adoption”; and focusing on “affordability”—to ensure “youth with smaller wallets” would participate. To illustrate how expenditures for paid advertising matter much less in digital marketing, McDonald’s explains that “Before a single dollar had been spent on paid media, purely on the strength of a few social posts by McDonald’s and Travis Scott, and reporting in the press, youth were turning up at restaurants across the country, asking for the Travis Scott meal.” This campaign was a significant financial success for McDonald’s. 
Its partnership with this influencer was effective as well in terms of “cultural response: hundreds of thousands of social media mentions and posts, fan-art and memes, unboxing videos of the meal…, fans selling food and stolen POS posters on eBay…, the multi merch drops that sold out in seconds, the framed receipts.” Online ads targeted to America’s diverse communities of young people, who may also be members of at-risk groups (due to finances, health, and the like), have long required an FTC investigation. The commission should examine the data-privacy and marketing practices on these sites, including those that communicate in languages other than English. Video and Video Games: Each of these applications has developed an array of targeted advertising strategies to reach young people. Streaming video is now part of the integrated surveillance-marketing system, creating a pivotal new place to reach young people, as well as to generate data for further targeting. Children and teens view video content on Smart TVs, other streaming devices, mobile phones, tablets and computers. Data on the households where young people reside, amplified through a growing number of “identity” tools that permit cross-device tracking, enables an array of marketing practices to flourish. The commission should review the data-gathering, ad-formatting, and other business practices that have been identified for these “OTT” services and how they impact children and teens. There are industry-approved ad-format guidelines for digital video and Connected TV. Digital video ads can use “dynamic overlays,” “shoppable and actionable video,” “voice-integrated video ads,” “sequential CTV creative,” and “creative extensions,” for example. 
Such ad formats and preferred practices are generally not vetted in terms of how they impact the interests of young people. Advertisers have strategically embedded themselves within the video game system, recognizing that it’s a key vantage point from which to surveil and entice young people. One leading quick-service restaurant chain that used video games to “reach the next generation of fast-food fans” explained that “gaming has become the primary source of entertainment for the younger generation. Whether playing video games or watching others play games on social platforms, the gaming industry has become bigger than the sports and music industries combined. And lockdowns during the global pandemic accelerated the trend. Gaming is a vital part of youth culture.” Illustrating that marketers understand traditional paid advertising strategies aren’t the most effective way to reach young people, the fast-food company decided to “approach gaming less like an advertising channel and more like an earned social and PR platform…. [V]ideo games are designed as social experiences.” As Insider Intelligence/eMarketer reported in June 2022, “there’s an ad format for every brand” in gaming today, including interstitial ads, rewarded ads, offerwalls, programmatic in-game ads, product placement, advergames, and “loot boxes.” There is also an “in-game advertising measurement” framework, recently released for public comment by the IAB and the Media Ratings Council. This is another example where leading advertisers, including Google, Microsoft, PepsiCo and Publicis, are determining how “ads that appear within gameplay” operate. These guidelines will impact youth, as they will help determine the operations of such ad formats as “Dynamic In-Game Advertising (DIGA)—Appear inside a 3D game environment, on virtual objects such as billboards, posters, etc. 
and combine the customization of web banners where ads rotate throughout the play session”; and “Hardcoded In-Game Ad Objects: Ads that have not been served by an ad server and can include custom 3D objects or static banners. These ads are planned and integrated into a video game during its design and development stage.” Leading advertising platforms such as Amazon sell video ads reaching both streaming TV and gaming audiences as a package. The role of gaming and streaming should be a major focus in October, as well as in any commission follow-up report. Influencers: What was once largely celebrity-based or word-of-mouth-style endorsement has evolved into a complex system including nano-influencers (between 1,000 and 10,000 followers); micro-influencers (between 10,000 and 100,000); macro-influencers (between 100,000 and a million); and mega or celebrity influencers (1 million-plus followers). According to a recent report in the Journal of Advertising Research, “75 percent of marketers are now including social-media influencers in their marketing plans, with a worldwide market size of $2.3 billion in 2020.” Influencer marketing is also connected to social media marketing generally, where advertisers and others have long relied on a host of surveillance-related systems to “listen,” analyze and respond to people’s social online communications. Today, a generation of “content creators” (aka influencers) is lured into becoming part of the integrated digital sales force that sells to young people and others. From “unboxing videos” and “virtual product placement” in popular content, to “kidfluencers” like Ryan’s World and “brand ambassadors” lurking in video games, to favorite TikTok creators pushing fast food, this form of digital “payola” is endemic online. Take Ryan’s World. Leveraging “more than one billion views” on YouTube, as well as a Nickelodeon show, “catapulted him... to a global multi-category force,” notes his production and licensing firm. 
The deals include a “preschool product line” in multiple categories, “best in class” partnerships, and a “Tag with Ryan” app that garnered 16 million downloads. Brands seeking help selling products, says Ryan’s media agency, “can connect with its kid fanbase of millions that leverages our world-class portfolio of kid-star partners to authentically and seamlessly connect your brand with Generation Alpha across YouTube, social media, mobile games, and OTT channels—everywhere kids tune in!... a Generation Alpha focused agency that delivers more than 8 BILLION views and 100 MILLION unique viewers every month!” (its emphasis). Also available is a “custom content and integrations” feature that can “create unique brand experiences with top-tier kid stars.” Ryan’s success is not unique, as more and more marketers create platforms and content, as well as merge companies, to deliver ads and marketing to children and teens. An array of influencer marketing platforms offering “one-stop” shopping for brands to employ influencers—including through programmatic-style data practices (to hire people to place endorsements, for example)—is a core feature of the influencer economy. There are also software programs that let brands and marketers automate their social influencer operations, as well as social media “dashboards” that help track and analyze social online conversations, brand mentions and other communications. The impact of influencers is being measured through a variety of services, including neuromarketing. Influencers are playing a key role in “social commerce,” where they promote the real-time sales of products and services on “shoppable media.” U.S. social commerce sales are predicted to grow to almost $80 billion in 2025 from a 2022 estimated total of $45.74 billion. Google, Meta, TikTok, Amazon/Twitch and Snapchat all have significant influencer marketing operations. 
As Meta/Facebook recently documented, there is also a growing role for “virtual” influencers that are unleashed to promote products and services. While there may be claims that many promotions and endorsements should be classified as “user generated content” (UGC), we believe the commission will find that the myriad influencer marketing techniques often play a role in spurring such product promotion. The “Metaverse”: The same forces of digital marketing that have shaped today’s online experience for young people are already at work organizing the structure of the “metaverse.” There are virtual brand placements, advertisements, and industry initiatives on ad formats and marketing experiences. Building on work done for gaming and esports, this rapidly emerging marketing environment poses additional threats to young people and requires timely commission intervention. Global Standards: Young people in the U.S. have fewer protections than they do in other countries and regions, including the European Union and the United Kingdom. In the EU, for example, protections are required for young people until they are 18 years of age. The impact of the GDPR, the UK’s Design Code, the forthcoming Digital Services Act (and even some self-regulatory EU initiatives by companies such as Google) should be assessed. In what ways do U.S.-based platforms and companies provide higher or more thorough safeguards for children when they are required to do so outside of this country? The FTC has a unique role to ensure that U.S. companies operating online are in the forefront—not in the rear—of protecting the privacy and interests of children. The October Workshop: Our review of the youth marketing landscape is just a partial snapshot of the marketplace. We have not discussed “apps” and mobile devices, which pose many concerns, including those related to location. 
But CDD hopes this comment will help inform the commission about the operations of contemporary marketing and its relationship to young people. We call on the FTC to ensure that this October, we are presented with an informed and candid discussion of the nature and impact of today’s marketing system on America’s youth.
ftcyouthmarketing071822.pdf
    Jeff Chester
  • Considering Privacy Legislation in the context of contemporary digital data marketing practices Last week, the leading global advertisers, online platforms and data marketers gathered for the most important awards given by the ad industry—the “Cannes Lions.” Reviewing the winners and the “shortlist” of runners-up—competing in categories such as “Creative Data,” “Social and Influencer,” “Brand Experience & Activation,” “Creative Commerce” and “Mobile”—is essential to learn where the data-driven marketing business—and ultimately much of our digital experience—is headed. An analysis of the entries reveals a growing role for machine learning and artificial intelligence in the creation of online marketing, along with geolocation tracking, immersive content and other “engagement” technologies. One takeaway, not surprisingly, is that the online ad industry continues to perfect techniques to secure our interest in its content so it can gather more data from us. A U.S.-based company that also generated news during Cannes was The Trade Desk, a relatively unknown data marketing service that is playing a major role in helping advertisers and content providers overcome any new privacy challenges posed by emerging or future legislation. The Trade Desk announced last week a further integration of its data and ad-targeting service with Amazon’s cloud AWS division, as well as a key role assisting grocer Albertsons’ new digital ad division. The Trade Desk has brokered a series of alliances and partnerships with Walmart, the Washington Post, Los Angeles Times, Gannett, NBC Universal, and Disney—to name only a few. There are several reasons these marketers and content publishing companies are aligning themselves with The Trade Desk. One of the most important is the company’s leadership in developing a method to collect and monetize a person’s identity for ongoing online marketing. 
“Unified ID 2.0” is touted as a privacy-focused method that nonetheless enables surveillance and effective ad targeting. The marketing industry refers to these identity approaches as “currencies” that enable the buying and selling of individuals for advertising. There are now dozens of identity “graph” or “identity spine” services, in addition to UDID, which reflect far-reaching partnerships among data brokers, publishers, adtech specialists, advertisers and marketing agencies. Many of these approaches are interoperable, such as the one involving Acxiom spin-off LiveRamp and The Trade Desk. A key goal, judging by what these identity brokers say, is to establish a universal identifier for each of us, to directly capture our attention, reap our data, and monetize our behavior. For the last several years, as a result of the enactment of the GDPR in the EU, the passage of privacy legislation in California, and the potential of federal privacy legislation, Google, Apple, Mozilla and others have made changes or announced plans related to their online data practices. So-called “third-party cookies,” which have long enabled commercial surveillance, are being abandoned—especially since their role has repeatedly raised concerns from data-protection regulators. Taking their place are what the surveillance marketing business believes are privacy-regulation-proof strategies. There are basically two major, but related, efforts underway—here in the U.S. and globally. The first tactic is for a platform or online publisher to secure the use of our information through an affirmative consent process—called a “first-party” data relationship in the industry. The reasoning is that an individual wants an ongoing interaction with the site—for news, videos, groceries, drugs and other services. Under this rationale, we are said to understand and approve how platforms and publishers will use our information as part of the value exchange. 
First-party data is becoming the most valuable asset in the global digital marketing business, enabling ongoing collection, generating insights, and helping maintain the surveillance model. It is considered to present few privacy problems. All the major platforms that raise so many troubling issues—including Google, Amazon, Meta/Facebook—operate through extensive first-party data relationships. It’s informative to see how the lead digital marketing trade group—the Interactive Advertising Bureau (IAB)—explains it: “first party data is your data…presents the least privacy concerns because you have full control over its collection, ownership and use.” The second tactic is a variation on the first, but also relies on various forms of identity-resolution strategies. It’s a response in part to the challenges posed by the dominance of the “walled garden” digital behemoths (Google, etc.), as well as the need to overcome the impact of privacy regulation. These identity services are the replacement for cookies. Some form of first-party data is captured (and streaming video services are seen as a gold mine here to secure consent), along with additional information derived by using machine learning to crunch data from public sources and other “signals.” Multimillion-member panels of consumers who provide ongoing feedback to marketers, including information about their online behaviors, also help determine how to fashion the digital targeting elements effectively. The Trade Desk-led UDID is one such identity framework. 
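Identity frameworks of this kind typically derive a stable, pseudonymous token from a logged-in identifier such as an email address, so the same person can be recognized across sites and devices. Here is a minimal, hypothetical Python sketch of that general approach; real frameworks such as Unified ID 2.0 are far more elaborate, layering salting, encryption and token rotation on top, and all function names here are illustrative:

```python
import base64
import hashlib

def normalize_email(email: str) -> str:
    """Lowercase and trim an email address before hashing.
    (Real identity frameworks apply additional normalization
    rules; this sketch skips those.)"""
    return email.strip().lower()

def pseudonymous_id(email: str) -> str:
    """Derive a stable, shareable token from an email address:
    SHA-256 of the normalized address, base64-encoded."""
    digest = hashlib.sha256(normalize_email(email).encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")

# The same person yields the same token wherever they log in.
print(pseudonymous_id("Jane.Doe@example.com") ==
      pseudonymous_id(" jane.doe@example.com "))  # → True
```

Because every participating site computes the same token for the same normalized email, the token can serve as the cross-site “currency” the industry describes, even though no raw email address changes hands.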
Another is TransUnion’s “Fabrick,” which “provides marketers with a sustainable, privacy-first foundation for all their data management, marketing and measurement needs.” Such rhetoric is typical of how the adtech/data broker/digital marketing sectors are trying to reframe how they conduct surveillance. Another related development, as part of the restructuring of the commercial surveillance economy, is the role of “data clean rooms.” Clean rooms enable data to be processed under specific rules set up by a marketer. As Advertising Age recently explained, clean rooms enable first-party and other marketers to provide “access to their troves of data.” For Comcast’s NBCU division and Disney, this treasure chest of information comes from “set-top boxes, streaming platforms, theme parks and movie studios.” Various privacy rules are supposed to be applied; in some cases where they have consent, two or more parties will exchange their first-party data. In other cases, where they may not have such open permission, they will be able to “create really interesting ad products; whether it's a certain audience slice, or audience taxonomy, or different types of ad units….” As an NBCU executive explained about its clean room activity, “we match the data, we build custom audiences…we plan, activate and we measure. The clean room is now the safe neutral sandbox where all the parties can feel good sharing first party data without concerns of data leakage.” We currently have at least one major privacy bill in Congress that includes important protections for civil rights and restricts data targeting of children and teens, among other key provisions. It’s also important when examining these proposals to see how effective they will be in dealing with the surveillance marketing industry’s current tactics. 
If they don’t effectively curtail what is continuous and profound surveillance and manipulation by the major digital marketers, and also fail to rein in the power of the most dominant platforms, will such a federal privacy promise really deliver? We owe it to the public to determine whether such bills will really “clean up” the surveillance system at the core of our online lives.
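To make concrete what the clean-room “matching” described above involves, here is a toy, hypothetical Python sketch: two parties match their first-party datasets on a hashed key, and only aggregate results leave the “sandbox.” All names and figures are invented:

```python
import hashlib

def hashed(email: str) -> str:
    """Hashed email used as the neutral join key inside the clean room."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Party A: a streaming service's first-party viewing data (invented)
viewers = {hashed(e): genre for e, genre in [
    ("ann@example.com", "sports"),
    ("bob@example.com", "kids"),
    ("cat@example.com", "kids"),
]}

# Party B: an advertiser's first-party purchase data (invented)
buyers = {hashed(e): spend for e, spend in [
    ("bob@example.com", 120.0),
    ("cat@example.com", 80.0),
    ("dan@example.com", 55.0),
]}

def clean_room_overlap(a: dict, b: dict) -> dict:
    """Match the two datasets on the hashed key and return only
    aggregates -- neither side sees the other's raw rows."""
    matched = a.keys() & b.keys()
    total_spend = sum(b[k] for k in matched)
    return {"matched_users": len(matched), "matched_spend": total_spend}

print(clean_room_overlap(viewers, buyers))
# → {'matched_users': 2, 'matched_spend': 200.0}
```

Even in this crude form, the sketch shows why clean rooms matter for privacy debates: the output is an “audience slice” built by combining two surveillance datasets, whatever the row-level safeguards.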
    Jeff Chester
  • Deal reflects Big Tech move to grab more data for omnipresent tracking & targeting Microsoft is rapidly expanding its surveillance advertising complex—first acquiring AT&T’s powerful Xandr targeting system last December, then adding, a few weeks later, the online gaming and eSports giant Activision Blizzard. The combination of Microsoft, AT&T and Activision assets raises a set of concerns regarding competition in the gaming and eSports marketplaces; privacy/surveillance protections, given the pervasive data gathering on users; and consumer protection, such as the methods that Microsoft and Activision (and other gaming services) implement to monetize players (including youth) through in-stream advertising and other marketing efforts. It also has implications for the ways we protect privacy in streaming media as well as in the evolving “metaverse.” The FTC must review this proposed deal, with the agency’s privacy and consumer-protection roles at the fore. This proposed Microsoft/Activision combination is emblematic of the ongoing transformation of how Big Tech companies track and target people across all their devices and applications. In order to continue its surveillance-advertising-based model, the online industry is undergoing a massive shift in tactics. It is pivoting to what’s called a “First-Party” data use strategy, claiming that it is obtaining our permission to continue to follow us online and deliver personalized ads and marketing. Getting our consent is the Big Tech plan to undermine any privacy legislation in the U.S. and elsewhere. For example, if this merger goes through, users of Activision games will likely be asked to consent to data collection and tracking on all of Microsoft’s services—such as Bing and LinkedIn. Given that Microsoft and Activision have already baked relationships with Google and Meta/Facebook into their ad services, this acquisition also illustrates how the deals of these digital giants are intertwined. 
Owning Xandr will bring a host of additional surveillance advertising resources to Microsoft’s already robust consumer-profiling and marketing infrastructure (including information contributed by AT&T’s own data practices). As explained in the data marketing newsletter AdExchanger, the Xandr and Activision acquisitions, if approved, will enable Microsoft to leverage its already “strong first-party data set and monetize inventory across its wide portfolio of platforms, including its video game business, LinkedIn, Bing, Edge, Office 365, Skype and more.” Microsoft was already working with AT&T’s Xandr surveillance ad targeting apparatus, including for its gaming division. For example, Xandr explains that it enables marketers to “Access real users in immersive and engaging environments” via its current ability to target people through the Microsoft Advertising Exchange. Microsoft’s data targeting currently involves its “Microsoft Search Network,” which “sees 14.6 billion monthly searches globally across nearly 700 million users.” Its Audience Network engages in an array of targeting tactics, including leveraging a person’s identity, location, and use of LinkedIn or other sites, along with a variety of “custom” approaches. Advertisers are able to “target audiences…across more than 1 billion Windows devices.” Microsoft also offers its “Dynamics 365 Customer Insights” data platform to help marketers package their own data to use on its ad platform. Activision engages in an array of ad practices that raise concerns about unfairness and privacy, from in-stream ads to “rewarded videos” to product placement. As it explains, “Activision Blizzard Media connects brands and players with fan-first integrated advertising experiences across gaming and esports…. We create user-initiated in-game advertising experiences that allow brands to reward 245M+ players at key moments of gameplay to drive reach, frequency and engagement…. 
In-game User-initiated video ads allow brands to reward players at key moments of gameplay.” In this context, the FTC needs to review all the third-party tracking companies serving ads to Activision Blizzard Esports’ YouTube content, including Google Campaign Manager 360, Flashtalking, Adform, Innovid and Extreme Reach. For example, Flashtalking explains that it helps gaming services “drive customer lifetime value…, understand who bought your games, how they interact with your brand, and which touch points drove engagement.” Innovid helps marketers create “accurate, persistent identity across devices.” For measurement, which is also a privacy issue long overlooked by previous FTC commissions, Activision’s partners include Oracle’s MOAT, Kantar, Google Campaign Manager, and others. Everything from potato chips and candy to toilet paper is pushed via its gaming services. Activision uses neuromarketing and other research-related online ad industry tactics to figure out how best to deliver marketing to its users (including teens)—all of which have privacy and consumer protection implications. For decades, the Federal Trade Commission has approved Big Tech mergers without examining their impact on consumer protection and privacy (and also on competition—think of all the Google and Facebook takeovers the commission has okayed). This is unacceptable. Gaming is a hugely important market, with a set of data-gathering tactics that impact both consumers and competition. We expect this FTC to do much better than what we have witnessed for the past several decades. 
    Jeff Chester
  • Time for the FTC to intervene as marketers create new ways to leverage our “identity” data as cookies “crumble” For decades, the U.S. has allowed private actors to essentially create the rules regarding how our data is gathered and used online. A key reason we do not have any real privacy for digital media is precisely because it has principally been online marketing interests that have shaped how the devices, platforms and applications we use ensnare us in the commercial surveillance complex. The Interactive Advertising Bureau (IAB) has long played this role through an array of standards committees that address everything from mobile devices to big data-driven targeting to ads harnessing virtual reality. As this blog has previously covered, U.S. commercial online advertising, spearheaded by Google, The Trade Desk and others, is engaged in a major transformation of how it processes and characterizes data used for targeted marketing. For various reasons, the traditional ways we are profiled and tracked through the use of “cookies” are being replaced by a variety of schemes that enable advertisers to know and take advantage of our identities, but which they believe will (somehow!) pass muster with any privacy regulations now in force or potentially enacted. What’s important is that, regardless of industry rhetoric about empowering a person’s privacy, at the end of the day these approaches are designed to ensure that the comprehensive tracking and targeting system remains firmly in place. As an industry trade organization, the IAB serves as a place to generate consensus, or agreed-upon formats, for digital advertising practices. 
To help the industry maintain its surveillance business model, the IAB has created what’s called “Project Rearc” to “re-architect digital marketing.” The IAB explains that Project Rearc “is a global call-to-action for stakeholders across the digital supply chain to re-think and re-architect digital marketing to support core industry use cases, while balancing consumer privacy and personalization.” It has set up a number of industry-run working groups to advance various components of this “re-architecting,” including what’s called an “Accountability Working Group.” Its members include Experian, Facebook, Google, Axel Springer, Nielsen, Pandora, TikTok, Publicis, Group M, Amazon, IABs from the EU, Australia, and Canada, Disney, Microsoft, Adobe, News Corp., Roku and many more (including specialist companies with their own “identity” approaches for digital marketing, such as Neustar and LiveRamp). The IAB Rearc effort has put out for “public comment” a number of proposed approaches for addressing elements of the new ways to target us via identifiers, cloud processing, and machine learning. Earlier this year, for example, it released for comment proposed standards on a “Global Privacy Platform,” an “Accountability Platform,” “Best Practices for User-Enabled Identity Tokens,” and a “Taxonomy and Data Transparency Standards to Support seller-defined Audience and Context Signaling.” Now it has released for public comment (due by November 12, 2021) a proposed method to “Increase Transparency Across Entire Advertising Supply Chain for New ID usage.” This proposal involves critical elements on the data collected about us and how it can be used. 
It is designed to “provide a standard way for companies to declare which user identity sources they use” and “ease ad campaign execution between advertisers, publishers, and their chosen technology providers….” This helps online advertisers use “different identity solutions that will replace the role of the third-party cookie,” explains the IAB. While developed in part for a “transparent supply chain” and to help build “auditable data structures to ensure consumer privacy,” its ultimate function is to enable marketers to “activate addressable audiences.” In other words, it’s all about ensuring that digital marketers can continue to build and leverage numerous individual and group identifiers to power their advertising activities, and withstand potential regulatory challenges over privacy violations. The IAB’s so-called public comment system is primarily designed for the special interests whose business model is the mass monetization of all our data and behaviors. We should not allow these actors to define how our everyday experiences with data operate, especially when privacy is involved. The longstanding arrangement in which the IAB and online marketers set many of the standards for our online lives should be challenged—by the FTC, Congress, state AGs and everyone else working on these issues. We—the public—should be determining our “digital destiny”—not the same people that gave us surveillance marketing in the first place.
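As a rough illustration of what “declaring which user identity sources they use” can look like in practice, the sketch below builds a fragment of a programmatic bid request carrying multiple identity tokens, loosely modeled on the extended-identifier (“eids”) structure used in the IAB Tech Lab’s OpenRTB specifications. The sources named and the token values are invented placeholders:

```python
import json

# Hypothetical fragment of a bid request declaring the identity
# sources attached to this user -- loosely modeled on the OpenRTB
# "user.ext.eids" extended-identifier structure. Tokens are invented.
bid_request_user = {
    "ext": {
        "eids": [
            {
                "source": "uidapi.com",  # e.g., a Unified ID 2.0-style token
                "uids": [{"id": "EXAMPLE-TOKEN-1", "atype": 3}],
            },
            {
                "source": "liveramp.com",  # e.g., a RampID-style token
                "uids": [{"id": "EXAMPLE-TOKEN-2", "atype": 3}],
            },
        ]
    }
}

# Each entry tells downstream ad-tech companies which identity
# "currency" they may match against their own identity graphs.
print(json.dumps(bid_request_user, indent=2))
```

A transparency standard of the kind the IAB proposes would let every party in the supply chain see, and audit, which of these identifier sources was used for a given ad impression, which is precisely what makes audiences “addressable.”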
    Jeff Chester
  • The Big Data Merger Gold Rush to Control Your “Identity” Information

    Will the DoJ ensure that both competition and consumer protection in data markets are addressed?

There is a digital data “gold rush” fever sweeping the data and marketing industry, as the quest to find ways to use data to determine a person’s “identity” for online marketing becomes paramount. This is triggered, in part, by the moves made by Google and others to replace “cookies” and other online identifiers with new, allegedly pro-privacy data-profiling methods that get the same results. We’ve addressed this privacy charade in other posts. To better position themselves in a world where knowing who we are and what we do is a highly valuable global currency, companies in the digital marketing and advertising sector are pursuing an increasing number of mergers and acquisitions. For example, last week data-broker giant TransUnion announced it is buying identity data company Neustar for $3.1 billion, to further expand its “powerful digital identity capabilities.” This is the latest in TransUnion’s buying spree to acquire data services companies that give it even more information on the U.S. public, including what we do on streaming media, via its 2020 takeovers of connected and streaming video data company Tru Optik and the data-management-focused Signal. In reviewing some of the business practices touted by TransUnion and Neustar, it’s striking how little has changed in the decades CDD has been sounding the alarm about the impacts data-driven online marketing services have on society. These include the ever-growing privacy threats, as well as the machine-driven sorting of people and the manipulation of our behaviors. So far, nothing has derailed the commercial Big Data marketing system. With this deal, TransUnion is obtaining a treasure trove of data assets and capabilities. 
For Neustar, “identity is an actionable understanding of who or what is on the other end of every interaction and transaction.” Neustar’s “OneID system provides a single lens on the consumer across their dynamic omnichannel journey.” This involves: data management services featuring the collection, identification, tagging, tracking, analyzing, verification, correcting and sorting of business data pertaining to the identities, locations and personal information of and about consumers, including individuals, households, places, businesses, business entities, organizations, enterprises, schools, governments, points of interest, business practice characteristics, movements and behaviors of and about consumers via media devices, computers, mobile phones, tablets and internet connected devices.

Neustar keeps close track of people, saying that it knows that “the average person has approximately 15 distinct identifiers with an average of 8 connected devices” (and noting that an average household has more than 45 such distinct identifiers). Neustar has an especially close business partnership with Facebook, which enables marketers to better analyze how their ads translate into sales made on and spurred by that platform. Its “Customer Scoring and Segmentation” system enables advertisers to identify and classify targets so they can “reach the right customer with the right message in the right markets.” Neustar also has a robust data-driven ad-targeting system called AdAdvisor, which reaches 220 million adults in “virtually every household in the U.S.” AdAdvisor “uses past behavior to predict likelihood of future behavior” and involves “thousands of data points available for online targeting” (including the use of “2 billion records a month from authoritative offline sources”).
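The identifier consolidation Neustar describes, where roughly 15 identifiers per person (and 45 per household) are matched to a single profile, is the core mechanic of an "identity graph." A minimal sketch of how such matching can work, using union-find to merge identifiers observed together; all identifiers and field names here are hypothetical illustrations, not Neustar's actual system:

```python
# Toy "identity resolution" sketch: identifiers observed together are merged
# into one profile via union-find. Purely illustrative; real identity graphs
# use probabilistic matching across far more signal types.

class IdentityGraph:
    def __init__(self):
        self.parent = {}

    def _find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def link(self, *identifiers):
        """Record that these identifiers were observed together."""
        first = identifiers[0]
        for other in identifiers[1:]:
            self.parent[self._find(other)] = self._find(first)

    def profile_of(self, identifier):
        """Return every identifier resolved to the same profile."""
        root = self._find(identifier)
        return {i for i in self.parent if self._find(i) == root}

g = IdentityGraph()
g.link("cookie:abc123", "email_hash:9f8e")   # seen together in one session
g.link("email_hash:9f8e", "device:ios-55")   # login observed on a phone
g.link("device:ios-55", "ip:198.51.100.7")   # shared household IP
print(g.profile_of("cookie:abc123"))
# all four identifiers now resolve to a single profile
```

Once identifiers collapse into one profile, any data attached to any of them (purchases, location pings, viewing habits) attaches to the whole person or household.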
Its “Propensity Audiences” service helps marketers predict the behaviors of people, incorporating such information as “customer-level purchase data for more than 230 million US consumers; weekly in-store transaction data from over 4,500 retailers; actual catalog purchases by more than 18 million households”; and “credit information and household-level demographics, used to build profiles of the buying power, disposable income and access to credit a given household has available.” Neustar offers its customers the ability to reach “propensity audiences” in product categories such as alcohol, automotive, education, entertainment, grocery, life events, personal finance, and more. For example, companies can target people by how they have used their debit or credit cards, by the amount of insurance they have on their homes or cars, or by their “level of investable assets,” including whether they have a pension or other retirement funds. One also can discover people who buy a certain kitty litter or candy bar—the list of AdAdvisor possibilities is far-reaching.

Another AdAdvisor application, “ElementOne,” comprises 172 segments that can be “leveraged in real time for both online and offline audience targeting.” The targeting categories should be familiar to anyone who is concerned about how groups of people are characterized by data-brokers and others.
For example, one can select “Segment 058” (high-income rural younger renters with and without children), “Segment 115” (middle-income city older homeowners without children), or any segment from 151-172 to reach “low income” Americans who are renters, homeowners, have or don’t have kids, live in rural or urban areas, and the like.

Marketers can also use AdAdvisor to determine the geolocation behaviors of their targets, through partnerships that provide Neustar with “10 billion daily location signals from 250+ million opted-in consumers.” In other words, Neustar knows whether you walked into that liquor store, grocery chain, hotel, entertainment venue, or shop. It also has data on what you view on TV, streaming video, and gaming. And it’s not just consumers whom Neustar tracks and targets. Companies can access its “HealthLink Dimensions Doctor Data” to target 1.7 million healthcare professionals who work in more than 400 specialties, including acute care, family practice, pediatrics, and cardiovascular surgery.

TransUnion is already a global data and digital marketing powerhouse, with operations in 30 countries and 8,000 clients that include 60 of the Fortune 100. What it calls its “TruAudience Marketing Solutions” is built on a foundation of “insight into 98% of U.S.
adults and more than 127 million homes, including 80 million connected homes.” Its “TruAudience Identity” product provides “a three-dimensional, omnichannel view of individuals, devices and households… [enabling] precise, scalable identity across offline, digital and streaming environments.” It offers marketers and others a method to secure what it terms an “identity resolution,” which is defined as “the process of matching identifiers across devices and touchpoints to a single profile [that] helps build a cohesive, omnichannel view of a consumer….”

TransUnion, known historically as one of the Big Three credit bureaus, has pivoted to become a key source of data and applications for digital marketing. It isn’t the only company expanding what is called an “ID graph”—the ways all our data are gathered for profiling. However, given its already vast storehouse of information on Americans, it should not be allowed to devour another major data-focused marketing enterprise.

Since this merger is before the U.S. Department of Justice—as opposed to the Federal Trade Commission—it is unlikely that the review will look beyond the competitive implications of the deal to what it really means for people: further loss of privacy and autonomy, and greater vulnerability to manipulative and stealthy marketing applications that classify and segment us in a myriad of invisible ways. Additionally, the use of such data systems to identify communities of color and other groups that confront historic and current obstacles to their well-being should be analyzed by any competition regulator.

In July, the Biden Administration issued an Executive Order on competition that called for a more robust regime to deal with mergers such as TransUnion and Neustar.
According to that order, “It is also the policy of my Administration to enforce the antitrust laws to meet the challenges posed by new industries and technologies, including the rise of the dominant Internet platforms, especially as they stem from serial mergers, the acquisition of nascent competitors, the aggregation of data, unfair competition in attention markets, the surveillance of users, and the presence of network effects.”

We hope the DOJ will live up to this call when it reviews mergers such as this one, and the other data-driven deals that now happen with regularity. There should also be a way for the FTC—especially under the leadership of Chair Lina Khan—to play an important role in evaluating this and similar transactions. There’s more at stake than competition in the data-broker or digital advertising markets. Who controls our information and how that information is used are the fundamental questions that will determine our freedom and our economic opportunities. As the Big Data marketplace undergoes a key transition, developing effective policies that protect both privacy and competition is precisely why this moment is so vitally important.
    Jeff Chester
  • Online grocery shopping became a pandemic response necessity for those who could afford it, with revenues expected to exceed $100 billion in 2021. Leading supermarket chains such as Kroger, big-box stores like Walmart, online specialists such as Instacart, and the ubiquitous Amazon have all experienced greater demand from the public to have groceries ordered and then quickly delivered or available for pick-up. The pandemic has spurred “record” downloads of grocery shopping apps from Instacart, Walmart Grocery and Target, among others. Consequently, this marketplace is now rapidly expanding its data collection and digital marketing operations to generate significant revenues from advertisers and food and beverage brand sponsors.

    So it’s not a surprise that Instacart’s new CEO comes from Facebook, and that the company has also just hired that social network’s former head of advertising. Walmart, Kroger, Amazon and others are also adding adtech and data marketing experts. There has been a spate of announcements involving new grocery-focused alliances designed to expand the role digital data play in marketing and sales, including by Albertson’s (involving Google) and Hy-Vee (also with Google). Albertson’s (which includes the Safeway, Vons and Jewel-Osco divisions) deal with Google is designed to include “shoppable digital maps to make it easier for consumers to find and purchase products online; AI-powered conversation commerce” technologies for shopping, and “predictive grocery list building….” Similarly, Hy-Vee’s work with the Google Cloud will help enable “predictive shopping carts,” among other services.
(Hy-Vee is also one of the supermarket chains participating in the USDA’s online SNAP pilot project, raising questions about how its alliance with Google will affect the privacy and well-being of people enrolled in SNAP.)

All the data that is flowing into these companies, how it is being analyzed, its use by advertisers and product sponsors, and how it impacts the products we see and purchase should all be subject to scrutiny from consumer protection and privacy regulators.

A good example is Instacart. Its Instacart Advertising service allows brands to pay to become “featured products” and more. Featured Product ads are a form of paid search advertising. As Instacart tells its clients, if a consumer searches for “chocolate ice cream” or just “ice cream,” and you have bought such ads, “your product can appear as one of the first products in the search result.” And even after “consumers place an order, we’ll make some suggestions for last-minute additions to the order that the consumer might be interested in. Among these suggestions, the system can include Featured Product ads.”

But it’s all the data and connections to their customers that are the real “secret sauce” for Instacart’s ad-targeting and influence operations.
The company knows what’s in and out of everyone’s shopping carts, explaining that “Instacart tracks the source or ‘path to cart’ for all items purchased through Instacart marketplace, differentiating between three main groups—items bought from search results, browsing departments, aisles, and other discovery areas of Instacart, or from a list of previous purchases, which we call ‘buy it again.’” As it explains, the “Instacart Ads solution” offers “a full suite of advertising products that animate the entire customer journey—from search through purchase.” These include opportunities for marketers to become part of “the ‘buy it again’ lists where consumers are shown a list of products that they have bought on previous orders. These can act as reminders of meals or recipes they’ve made before and items they tend to stock up on. Our brand partners can leverage Instacart Ads products to appear on ‘buy it again’ lists so they stay top of mind with their customers and aid in retaining valuable customers.”

The company’s advertising blog discusses its use of what is increasingly the most valuable information a company can have—known as “first-party” data, where (allegedly) a consumer has given their consent to have all their information used. Instacart explains this data “encompasses intent and purchase signals… from users signed up to an online grocery app,” and that it can be leveraged by its brand and advertising clients. The online ordering company explains that its “rich and diverse data sources” include “access to millions of orders over time from over 600 retail partners, across 45,000 stores, involving millions of households….
[A] tremendously valuable data set…this data is updated every night….” Instacart analyzes this trove of data, including point-of-sale, transaction log and out-of-stock information, to help it zero in on a key goal—to enable its advertisers to better understand and take advantage of what it terms “basket affinities.” In a webinar, Instacart defined that concept:

Basket affinities helps Instacart, and its brand partners, to create consumer ‘types’, discover product interactions, understand mission-dominant baskets, and identify trigger products — ultimately building a picture of how brands are bought online that we then share with our partners to build a better plan to acquire and retain valuable consumers. The term ‘basket affinity’ covers examining the one-to-one, group-to-group, or many-to-many relationships between products, as well as identifying consumer shopping missions and consumer profiles.

[We note that Instacart says it “aggregates and anonymizes” this information. However, given its ability to target individuals, we will rely on regulators to determine how well such personal information is actually handled.]

Instacart also touts its ability to track the actual impact of advertising on its site, noting that it is a “closed loop” service “where ads are served to consumers in the same ‘space’ any resulting sales occur.” In a blog post on how it empowers its advertiser partners, the company notes that its advertising products “provide options that can put their products in front of consumers on every ‘discovery surface’ on the platform to catch their eye when they are in this mode.”
It adds that, according to its data, the majority of add-to-carts from these discovery surfaces are the first time the consumer has added that item to their cart, “a great place for brands to acquire new customers.”

As with other major data-driven digital marketers, Instacart has many grocery-tech and ecommerce specialist partners that provide brands and advertisers with myriad ways to promote, sell and otherwise “optimize” their products on its platform (such as Perpetua, Tinuiti, Skai and Commerce IQ, to name only a few).

The dramatic recent growth of what’s called grocery tech is shaping the way consumers buy products and the prices they may pay. With companies such as Amazon, Kroger, Walmart, Albertson’s and Instacart now, in essence, Big Data-driven digital advertising companies, the public is being subjected to practices that warrant regulatory scrutiny, oversight and public policy. We call on the Federal Trade Commission and state regulators to act. Among the key questions: How, if at all, are racial, ethnic and income data being used to target consumers? Are health data, including purchases of drugs and over-the-counter medications, being leveraged? And what measurement and performance information is being made available to partners and advertisers? We don’t want to have to “drop” our privacy and autonomy when we shop in the 21st century.
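The "basket affinity" analysis Instacart describes, finding products bought together more often than chance, can be illustrated with a toy co-occurrence ("lift") computation. All baskets below are invented for illustration; real systems work over millions of orders:

```python
# Toy "basket affinity": how often two products are bought together,
# relative to chance (lift > 1 means more often than chance).
from itertools import combinations
from collections import Counter

baskets = [
    {"ice cream", "fudge sauce", "bananas"},
    {"ice cream", "fudge sauce"},
    {"bananas", "cereal", "milk"},
    {"ice cream", "cereal", "milk"},
    {"milk", "cereal"},
]

n = len(baskets)
item_count = Counter(i for b in baskets for i in b)
pair_count = Counter(frozenset(p) for b in baskets
                     for p in combinations(sorted(b), 2))

def lift(a, b):
    """P(a and b) / (P(a) * P(b))."""
    joint = pair_count[frozenset((a, b))] / n
    return joint / ((item_count[a] / n) * (item_count[b] / n))

print(round(lift("ice cream", "fudge sauce"), 2))  # → 1.67
print(round(lift("milk", "cereal"), 2))            # → 1.67
```

A "trigger product" in this framing would be one whose presence lifts many other products; pairs with lift above 1 are the affinities sold back to brand partners as targeting insight.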
    Jeff Chester
  • Blog

    Surveillance Marketing Industry Claims Future of an “Open Internet” Requires Massive Data Gathering

    New ways to take advantage of your “identity” raise privacy, consumer-protection and competition issues

    The Trade Desk is a leading AdTech company, providing data-driven digital advertising services to major brands and agencies. It is also playing an outsized role in responding to the initiative led by Google to create new, allegedly “privacy-friendly” approaches to ad targeting, which include ending the use of what are called “third-party” cookies. These cookies enable the identification and tracking of individuals, and have been an essential building block for surveillance advertising since the dawn of the commercial Internet.

    As we explained in a previous post about the so-called race to “end” the use of cookies, the online marketing industry is engaged in a full-throated effort to redefine how our privacy is conceptualized and privately governed. Pressure from regulators (such as the EU’s GDPR) and growing concerns about privacy from consumers are among the reasons why this is happening now. But the real motivation, in my view, is that the most powerful online ad companies and global brands (such as Google, Amazon and The Trade Desk) don’t need these antiquated cookies anymore. They have so much of our information that they collect directly, and also available from countless partners (such as global brands). Additionally, they now have many new ways to determine who we are—our “identity”—including through the use of AI, machine learning and data clouds.

    “Unified ID 2.0” is what The Trade Desk calls its approach to harvesting our identity information for advertising. Like Google, it claims to be respectful of data protection principles. Some of the most powerful companies in the U.S. support the Unified ID standard, including Walmart, the Washington Post, P&G, Comcast, CBS, Home Depot, Oracle and Nielsen.
But more than our privacy is at stake as data marketing giants fight over how best to reap the financial rewards of what is predicted eventually to become a trillion-dollar global ad marketplace. This debate is increasingly focused on the very future of the Internet itself, including how it is structured and governed. Only by ensuring that advertisers can continue to operate powerful data-gathering and ad-targeting systems, argues Trade Desk CEO Jeff Green, can the “Open Internet” be preserved.

His argument, of course, is a digital déjà vu version of what media moguls have said in the U.S. dating back to commercial radio in the 1930s. Only with a full-blown, ad-supported (and regulation-free) electronic media system, whether it was broadcast radio, broadcast TV, or cable TV, could the U.S. be assured it would enjoy a democratic and robust communications environment. (I was in the room at the Department of Commerce back in the mid-1990s when advertisers were actually worried that the Internet would be largely ad-free; the representative from P&G leaned over to tell me that they would never let that happen—and he was right.) Internet operations are heavily shaped by the needs of advertisers, who have reworked its architecture to ensure we are all commercially surveilled. For decades, the online ad industry has continually expanded its ways to monetize our behaviors, emotions, location and much more.

Last week, The Trade Desk unveiled its latest iteration of Unified ID 2.0—called Solimar (see video here). Solimar uses “an artificial intelligence tool called Koa, which makes suggestions” to help ensure effective marketing campaigns.
Reflecting the serial partnerships that provide marketers with a gold mine of information on any individual, The Trade Desk has a “Koa Identity Alliance,” a “cross-device graph that incorporates leading and emerging ID solutions such as LiveRamp Identity Link, Oracle Cross Device, Tapad Device Graph, and Adbrain Device Graph.” This system, the company says, creates an effective way for marketers to develop a data portrait of individual consumers.

It’s useful to hear what companies such as The Trade Desk say as we evaluate claims that “big data” consumer-surveillance operations are essential for a democratically structured Internet. In its most recent Annual Report, the company explains that “Through our self-service, cloud-based platform, ad buyers can create, manage, and optimize more expressive data-driven digital advertising campaigns across ad formats and channels, including display, video, audio, in-app, native and social, on a multitude of devices, such as computers, mobile devices, and connected TV (‘CTV’)…. We use the massive data captured by our platform to build predictive models around user characteristics, such as demographic, purchase intent or interest data. Data from our platform is continually fed back into these models, which enables them to improve over time as the use of our platform increases.”

And here’s how Koa’s process is described in the trade publication Campaign Asia:

…clients can specify their target customer in the form of first-party or third-party data, which will serve as a seed audience that Koa will model from to provide recommendations. A data section provides multiple options for brands to upload first-party data including pixels, app data, and IP addresses directly into the platform, or import data from a third-party DMP or CDP.
If a client chooses to onboard CRM data in the form of email addresses, these will automatically be converted into UID2s. Once converted, the platform will scan the UID2s to evaluate how many are ‘active UID2s’, which refers to how many of these users have been active across the programmatic universe in the past week. If the client chooses to act on those UID2s, they will be passed into the programmatic ecosystem to match with the publisher side, building the UID2 ecosystem in tandem. For advertisers that don’t have first-party data… an audiences tab allows advertisers to tap into a marketplace of second- and third-party data so they can still use interest segments, purchase intent segments and demographics.

In other words, these systems have a ton of information about you. They can easily get even more data and engage in the kinds of surveillance advertising that regulators and consumer advocates around the world are demanding be stopped. There are now dozens of competing “identity solutions”—including those from Google, Amazon, data brokers, telephone companies, and others (see the visual at the bottom of the page here).

The stakes are significant: How will the Internet evolve in terms of privacy, and will its core “DNA” be ever-growing forms of surveillance and manipulation? How do we decide the most privacy-protective ways to ensure meaningful monetization of online content—and must funding for such programming be advertising-based? In what ways are some of these identity proposals a way for powerful platforms such as Google to further expand their monopolistic control of the ad market? These and other questions require a thoughtful regulator in the U.S. to help sort this out and to make recommendations that ensure the public truly benefits. That’s why it’s time for the U.S. Federal Trade Commission to step in.
The FTC should analyze these advertising-focused identity efforts; assess their risks and benefits; and address how to govern the collection and use of data where a person has supposedly given permission to a brand or store to use it (known as “first-party” data). A key question, given today’s technologies, is whether meaningful personal consent to data collection is even possible in a world driven by sophisticated, real-time AI systems that personalize content and ads. The commission should also investigate the role of data-mining clouds and other so-called “clean rooms,” where privacy is said to prevail despite their compilation of personal information for targeted advertising. The time for private, special-interest (and conflicted) actors to determine the future of the Internet, and how our privacy is to be treated, is over.
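For readers wondering how an email address "automatically converts" into a UID2, the Unified ID 2.0 approach publicly described by its backers starts by normalizing the address and hashing it with SHA-256. The sketch below follows that outline in simplified form; the real pipeline adds salting, encryption and token rotation through operator services, which are omitted here, and the normalization rules shown are an illustrative approximation:

```python
# Simplified sketch of a deterministic email-to-token step of the UID2
# variety: normalize, SHA-256 hash, base64-encode. Illustrative only; the
# production system layers salting/rotation on top of this.
import base64
import hashlib

def normalize(email: str) -> str:
    email = email.strip().lower()
    local, _, domain = email.partition("@")
    if domain == "gmail.com":  # gmail ignores dots and "+tags" in addresses
        local = local.split("+")[0].replace(".", "")
    return f"{local}@{domain}"

def email_to_token(email: str) -> str:
    digest = hashlib.sha256(normalize(email).encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")

# The same person always maps to the same token, which is what lets
# advertisers and publishers match "their" users without sharing raw emails.
print(email_to_token("Jane.Doe+ads@gmail.com") == email_to_token("janedoe@gmail.com"))
# → True
```

The determinism is the point: every party that holds the same email derives the same identifier, which is why a hashed email can function as a cross-site tracking key even though no raw address changes hands.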
    Jeff Chester
  • To watch the full FTC Dark Patterns Workshop online, visit the FTC website here.
  • Contextual Advertising—Now Driven by AI and Machine Learning—Requires Regulatory Review for Privacy and Marketing Fairness

    What’s known as contextual advertising is receiving a big boost from marketers and some policymakers, who claim that it provides a more privacy-friendly alternative to the dominant global surveillance-based “behavioral” marketing model. Google’s plans to eliminate cookies and other third-party trackers used for much of online ad delivery are also spurring greater interest in contextual marketing, which is being touted especially as safe for children.

    Until several years ago, contextual ads meant that you would see an ad based on the content of the page you were on—so there might be ads for restaurants on web pages about food, or cars would be pitched if you were reading about road trips. The ad tech involved was basic: keywords found on the page would help trigger an ad.

    Today’s version of what’s called “contextual intelligence,” “Contextual 2.0,” or Google’s “Advanced Contextual” is distinct. Contextual marketing now uses artificial intelligence (AI) and machine-learning technologies, including computer vision and natural language processing, to provide “targeting precision.” AI-based techniques, the industry explains, allow marketers to read “between the lines” of online content. Contextual advertising is now capable of comprehending “the holistic and subtle meaning of all text and imagery,” enabling predictions and decisions on ad design and placement by “leveraging deep neural networks” and “proprietary data sets.” AI is used to decipher the meaning of visuals “on a massive scale, enabling advertisers to create much more sophisticated links between the content and the advertising.” Computer vision technologies identify every visual element, and natural language processing minutely classifies all the concepts found on each page.
Millions of “rules” are applied in an instant, using software that helps advertisers take advantage of the “multiple meanings” that may be found on a page.

For example, one leading contextual marketing company, GumGum, explains that its “Verity” algorithmic and AI-based service “combines natural language processing with computer vision technology to execute a multi-layered reading process. First, it finds the meat of the article on the page, which means differentiating it from any sidebar and header ads. Next, it parses the body text, headlines, image captions with natural language processing; at the same time, it uses computer vision to parse the main visuals… [and then] blends its textual and visual analysis into one cohesive report, which it then sends off to an ad server,” which determines whether “Verity’s report on a given page matches its advertiser’s campaign criteria.”

Machine learning also enables contextual intelligence services to make predictions about the best ways to structure and place marketing content, taking advantage of real-time events and the ways consumers interact with content. It enables segmentation of audience targets to be fine-tuned. It also incorporates a number of traditional behavioral marketing concepts, gathering a range of data “signals” that ensure more effective targeting. There are advanced measurement technologies, and custom methods to influence what marketers term our “customer journey,” structuring ad-buying in ways similar to behavioral, data-driven approaches, as “bids” are made to target—and retarget—the most desirable people. And, of course, once the contextual ad “works” and people interact with it, additional personal and other information is gathered.

Contextual advertising, estimated to generate $412 billion in spending by 2025, requires a thorough review by the FTC and data regulators.
Regulators, privacy advocates and others must carefully examine how the AI and machine-learning marketing systems operate, including for Contextual 2.0. We should not accept marketers’ claims that it is innocuous and privacy-appropriate. We need to pull back the digital curtain and carefully examine the data and impact of contextual systems.
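At its core, even "Contextual 2.0" is a page-to-campaign matching problem. A minimal sketch of the matching step, using bag-of-words cosine similarity in place of the deep neural networks and computer vision the industry describes; all page and campaign text below is invented:

```python
# Toy contextual matcher: score a page against campaign keyword criteria by
# cosine similarity over bag-of-words vectors. Real systems use learned
# embeddings over text and imagery; this only illustrates the matching idea.
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

page = "Five scenic road trips for summer, with the best roadside diners"
campaigns = {
    "auto-insurance": "car road driving insurance trips",
    "kitchenware": "recipes cooking kitchen pans",
}
best = max(campaigns, key=lambda c: cosine(vectorize(page), vectorize(campaigns[c])))
print(best)
# → auto-insurance
```

The privacy question regulators need to probe is what happens around this matching step: which user-level "signals" are folded into the score, and what is collected once someone clicks.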
    Jeff Chester
  • The Whole World Will Still Be Watching You: Google & Digital Marketing Industry “Death-of-the-Cookie” Privacy Initiatives Require Scrutiny from Public Policymakers

    Jeff Chester

    One would think, in listening to the language used by Google, Facebook, and other ad and data companies to discuss the construction and future of privacy protection, that they are playing some kind of word game. We hear terms such as “TURTLEDOVE,” “FLEDGE,” “SPARROW” and “FLoC.” Such privacy-branded proposals should be viewed with skepticism, however. Although some reports make it appear that Google and its online marketing compatriots propose to reduce data gathering and tracking, we believe that their primary goal is still to perfect the vast surveillance system they’ve well established.

    A major data marketing industry effort is now underway to eliminate—or diminish—the role of the tracking software known as “third-party” cookies. Cookies were developed in the very earliest days of the commercial “World Wide Web,” and have served as the foundational digital tether connecting us to a sprawling and sophisticated data-mining complex. Through cookies—and later mobile device IDs and other “persistent” identifiers—Google, Facebook, Amazon, Coca-Cola and practically everyone else have been able to surveil and target us—and our communities. Tracking cookies have literally helped engineer a “sweet spot” for online marketers, enabling them to embed spies in our web browsers that help them understand our digital behaviors and activities and then take action based on that knowledge. Some of these trackers—placed and used by a myriad of data marketing companies on various websites—are referred to as “third-party” cookies, to distinguish them from what online marketers claim, with a straight face, are more acceptable forms of tracking software—known as “first-party” cookies.
According to the tortured online-advertiser explanation, “first-party” trackers are placed by websites on which you have affirmatively given permission to be tracked while you are on that site. These “we-have-your-permission-to-use” first-party cookies would increasingly become the foundation for advances in digital tracking and targeting. Please raise your hand if you believe you have informed Google or Amazon, to cite the two most egregious examples, that they can surveil what you do via these first-party cookies, including engaging in an analysis of your actions, background, interests and more.

What the online ad business has developed behind its digital curtain—such as various ways to trigger your response, measure your emotions, knit together information on device use, and employ machine learning to predict your behaviors (just to name a few of the methods currently in use)—has played a fundamental role in personal data gathering. Yet these and other practices—which have an enormous impact on privacy, autonomy, fairness, and so many other aspects of our lives—will not be affected by the “death-of-the-cookie” transition currently underway. On the contrary, we believe a case can be made that the opposite is true. Rather than strengthening data safeguards, we are seeing unaccountable platforms such as Google become even more dominant, as so-called “privacy preserving” systems actually enable enhanced data profiling.

In a moment, we will briefly discuss some of the leading online marketing industry work underway to redefine privacy. But the motivation for this post is to sound the alarm that we should not—once again—allow powerful commercial interests to determine the evolving structure of our online lives. The digital data industry has no serious track record of protecting the public.
Indeed, it was the failure of regulators to rein in this industry over the years that led to the current crisis. In the process, the growth of hate speech, the explosion of disinformation, and the highly concentrated control over online communications and commerce—to name only a few—now pose serious challenges to the fate of democracies worldwide. Google, Facebook and the others should never be relied on to defer their principal pursuit of monetization out of respect for any democratic ideal—let alone consumer protection and privacy. One clue to the likely end result of the current industry effort is to see how they frame it. It isn’t about democracy, the end of commercial surveillance, or strengthening human rights. It’s about how best to preserve what they call the “Open Internet.” (link is external) Some leading data marketers believe we have all consented to a trade-off, that in exchange for “free” content we’ve agreed to a pact enabling them to eavesdrop on everything we do—and then make all that information available to anyone who can pay for it—primarily advertisers. Despite its rhetoric about curbing tracking cookies, the online marketing business intends to continue to colonize our devices and monitor our online experiences. This debate, then, is really about who can decide—and under what terms—the fate of the Internet’s architecture, including how it operationalizes privacy—at least in the U.S. It raises questions that deserve a better answer than the “industry-knows-best” approach we have allowed thus far. That’s why we call on the Biden Administration, the Federal Trade Commission (FTC) and the Congress to investigate these proposed new approaches for data use, and ensure that the result is truly privacy protective, supporting democratic governance and incorporating mechanisms of oversight and accountability.
Here’s a brief review (link is external) of some of the key developments, which illustrate the digital “tug-of-war” ensuing over the several industry proposals involving cookies and tracking. In 2019, Google announced (link is external) that it would end the role of what’s known as “third-party cookies.” Google has created a “privacy sandbox (link is external)” where it has researched various methods it claims will protect privacy, especially for people who rely on its Chrome browser. It is exploring “ways in which a browser can group together people with similar browsing habits, so that ad tech companies can observe the habits of large groups instead of the activity of individuals. Ad targeting could then be partly based on what group the person falls into.” This is its “Federated Learning of Cohorts” (FLoC) approach, where people are placed into “clusters” based on the use of “machine learning algorithms” that analyze the data generated from the sites a person visited and their content. Google says these clusters would “each represent thousands of people,” and that the “input features” used to generate the targeting algorithm, such as our “web history,” would be stored on our browsers. Other techniques would be deployed as well, adding “noise” to the data sets and applying various “anonymization methods” so that the exposure of a person’s individual information is limited. Its TURTLEDOVE initiative is designed to enable more personalized targeting, where web browsers will be used to help ensure our data is available for the real-time auctions that sell us to advertisers. The theory is that by allowing the data to remain within our devices, as well as by using clusters of people for targeting, our privacy is protected. But the goal—gathering sufficient data to power effective digital marketing—remains at the heart of the process.
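Google’s published FLoC prototype derived cohort IDs in the browser with a locality-sensitive hash (SimHash) of browsing history. The sketch below is a conceptual approximation under stated assumptions—a made-up domain-to-vector encoding and an 8-bit cohort—not Google’s actual algorithm:

```python
import hashlib

def _domain_vector(domain: str, dims: int = 8) -> list:
    # Map a domain name to a deterministic pseudorandom +/-1 vector
    # (an assumed stand-in for FLoC's real feature encoding).
    digest = hashlib.sha256(domain.encode()).digest()
    return [1.0 if digest[i] & (1 << (i % 8)) else -1.0 for i in range(dims)]

def simhash_cohort(browsing_history: list, bits: int = 8) -> str:
    # Sum the per-domain vectors and keep only the sign of each
    # coordinate: browsers with similar histories tend to collide
    # into the same cohort ID, which is all advertisers would see.
    totals = [0.0] * bits
    for domain in browsing_history:
        for i, v in enumerate(_domain_vector(domain, bits)):
            totals[i] += v
    return "".join("1" if t > 0 else "0" for t in totals)

cohort = simhash_cohort(["news.example", "recipes.example", "shop.example"])
```

Because the hash is computed locally and only the short cohort label leaves the browser, individual exposure is limited in theory. Yet the label still encodes interest signals distilled from one’s full browsing history, which is precisely the targeting value—and the concern—discussed above.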
Google recently (link is external) reported that “FLoC can provide an effective replacement signal for third-party cookies. Our tests of FLoC to reach in-market and affinity Google Audiences show that advertisers can expect to see at least 95% of the conversions per dollar spent when compared to cookie-based advertising.” Google’s 2019 announcement caused an uproar in the digital marketing business. It was also perceived (correctly, in my view) as a Google power grab. Google operates basically as a “Walled Garden (link is external)” and has so much data that it doesn’t really need third-party cookies to home in on its targets. The potential “death of the cookie” ignited a number of initiatives from the Interactive (link is external) Advertising Bureau, as well as competitors (link is external) and major advertisers, who feared that Google’s plan would undermine their lucrative business model. They include such groups as the Partnership for Responsible Addressable Media (PRAM), (link is external) whose 400 members include Mastercard, Comcast/NBCU, P&G, the Association of National Advertisers, IAB and other ad and data companies. PRAM issued a request (link is external) to review proposals (link is external) that would ensure the data marketing industry continues to thrive, but could be less reliant on third-party cookies. The Trade Desk, a leading online marketing company, is playing a key role here. It submitted (link is external) its “Unified ID 2.0 (link is external)” plan to PRAM, saying that it “represents an alternative to third party cookies that improves consumer transparency, privacy and control, while preserving the value exchange of relevant advertising across channels and devices.” There are also a number of other ways now being offered that claim both to protect privacy yet take advantage of our identity (link is external), such as various collaborative (link is external) data-sharing efforts.
The Internet standards group, the World Wide Web Consortium (W3C), has created (link is external) a sort of neutral meeting ground where the industry can discuss proposals and potentially seek some sort of unified approach. The rationale for the Improving Web Advertising Business Group is [get ready for this statement] to “provide monetization opportunities that support the open web while balancing the needs of publishers and the advertisers that fund them, even when their interests do not align, with improvements to protect people from the individual and societal impacts of tracking content consumption over time.” Its participants (link is external) are another “Who’s Who” in data-driven marketing, including Google, AT&T, Verizon, NYT, IAB, Apple, Group M, Axel Springer, Facebook, Amazon, Washington Post, and Criteo. DuckDuckGo is also a member (and both Google and Facebook have multiple representatives in this group). The sole NGO listed as a member is the Center for Democracy and Technology. W3C’s ad business group has a number of documents (link is external) about the digital marketing business that illustrate why the future of privacy, data collection and targeting should be a public—and not just data industry—concern. In an explainer (link is external) on digital advertising, they make the paradigm so many are working to defend very clear: “Marketing’s goal can be boiled down to the ‘5 Rights’: Right Message to the Right Person at the Right Time in the Right Channel and for the Right Reason. Achieving this goal in the context of traditional marketing (print, live television, billboards, et al) is impossible. In the digital realm, however, not only can marketers achieve this goal, they can prove it happened.
This proof is what enables marketing activities to continue, and is important for modern marketers to justify their advertising dollars, which ultimately finance the publishers sponsoring the underlying content being monetized.” Nothing I’ve read says it better. Through a quarter century of work to perfect harvesting our identity for profit, the digital ad industry has created a formidable complex of data clouds (link is external), real-time ad auctions, cross-device tracking tools and advertising techniques (link is external) that further commodify our lives, shred our privacy, and transform the Internet into a hall of mirrors that can amplify our fears and splinter democratic norms. It’s people, of course, who decide how the Internet operates—especially those from companies such as Google, Facebook, Amazon, and those working for trade groups such as the IAB. We must not let them decide how cookies may or may not be used or what new data standard should be adopted by the most powerful corporate interests on the planet to profit from our “identity.” It’s time for action by the FTC and Congress. Part 1.
(1) For the uninitiated, TURTLEDOVE stands for “Two Uncorrelated Requests, Then Locally-Executed Decision On Victory”; FLEDGE is short for “First Locally-Executed Decision over Groups Experiment”; SPARROW is “Secure Private Advertising Remotely Run On Webserver”; and FLoC is “Federated Learning of Cohorts.”
(2) In January 2021, the UK’s Competition and Markets Authority (CMA) opened an investigation (link is external) into Google’s privacy sandbox and cookie plans.
    Jeff Chester
  • The Center for Digital Democracy (CDD) announced today its opposition to the California Privacy Rights Act (CPRA), also known as Proposition 24 (link is external), which will appear on the November 2020 California general election ballot. CDD has concluded that Prop 24 does not sufficiently strengthen Californians’ privacy and may, in fact, set a new, low, and thus dangerous standard for privacy protection in the U.S. We need strong and bold privacy legislation, not weaker standards and tinkering at the margins. We need digital privacy safeguards that address the fundamental drivers of our eroding privacy and autonomy, and that redress the growing levels of racial and social inequity. We need rules that go to the heart of the data-driven business model and curtail the market incentives that have created the deplorable state of affairs we currently face. What we need are protections that significantly limit data uses that undermine our privacy, increase corporate manipulation and exploitation, and exacerbate racial and economic inequality. We need default privacy settings that limit the sharing and selling of personal information, and the use of data for targeted advertising, personalized content, and other manipulative practices. We need to ensure privacy for all and limit any pay-for-privacy schemes that entice the most vulnerable to give up their privacy. In other words, we need to limit harmful data-use practices by default, and place the interests of consumers above market imperatives by allowing only those data practices that are not harmful to individuals, groups, and society at large. Prop 24 does none of that. Specifically, Prop 24 continues on the path of a failed notice-and-choice regime, allowing the much more powerful companies to set unfair terms. Instead, privacy legislation should focus on strong default settings, define which data-use practices are allowable (“permissible uses”), and prohibit all others.
These safeguards should be in place by default, rather than forcing consumers to opt out of invasive advertising. Prop 24, in contrast, does not provide effective data-use limitations; instead it continues to limit data sharing and selling via an opt-out, rather than declaring them to be impermissible uses, or at minimum requiring an opt-in for such practices. Even “sensitive data” under Prop 24 is protected only via a consumer-initiated opt-out, rather than prohibiting the use of sensitive personal data altogether. Equally concerning, Prop 24 would expand rather than limit pay-for-privacy schemes. Under the terms of Prop 24, corporations are still allowed to charge a premium (or eliminate a discount) in exchange for privacy. Consumers shouldn’t be charged higher prices or be discriminated against simply for exercising their privacy rights. This provision of Prop 24 is particularly objectionable, as it tends to harm vulnerable populations, people of color, and the elderly by creating privacy “haves” and “have-nots,” further entrenching other, existing inequities as companies would be able to use personal data to profile, segment, and discriminate in a variety of areas. There are many other reasons that CDD objects to Prop 24, chief among them that this flawed measure:
- employs an outdated concept of “sensitive data” instead of focusing on sensitive data uses;
- fails to rein in the growing power of data brokers that collect and analyze personal data from a variety of sources, including public data sets, for sale to marketers;
- does not employ strong enough data minimization provisions to limit data collection, use and disclosure to only what is necessary to provide the service requested by the consumer;
- undermines consumer efforts to seek enforcement of privacy rights by neglecting to provide full private right-of-action provisions; and
- unnecessarily delays its protection of employee privacy.
    Katharina Kopp
  • The COVID-19 pandemic is a global public health emergency that requires a coordinated and large-scale response by governments worldwide. However, States’ efforts to contain the virus must not be used as a cover to usher in a new era of greatly expanded systems of invasive digital surveillance. We, the undersigned organizations, urge governments to show leadership in tackling the pandemic in a way that ensures that the use of digital technologies to track and monitor individuals and populations is carried out strictly in line with human rights. Technology can and should play an important role during this effort to save lives, such as to spread public health messages and increase access to health care. However, an increase in state digital surveillance powers, such as obtaining access to mobile phone location data, threatens privacy, freedom of expression and freedom of association, in ways that could violate rights and degrade trust in public authorities – undermining the effectiveness of any public health response. Such measures also pose a risk of discrimination and may disproportionately harm already marginalized communities. These are extraordinary times, but human rights law still applies. Indeed, the human rights framework is designed to ensure that different rights can be carefully balanced to protect individuals and wider societies. States cannot simply disregard rights such as privacy and freedom of expression in the name of tackling a public health crisis. On the contrary, protecting human rights also promotes public health. Now more than ever, governments must rigorously ensure that any restrictions to these rights are in line with long-established human rights safeguards. This crisis offers an opportunity to demonstrate our shared humanity. We can make extraordinary efforts to fight this pandemic that are consistent with human rights standards and the rule of law.
The decisions that governments make now to confront the pandemic will shape what the world looks like in the future. We call on all governments not to respond to the COVID-19 pandemic with increased digital surveillance unless the following conditions are met:
Surveillance measures adopted to address the pandemic must be lawful, necessary and proportionate. They must be provided for by law and must be justified by legitimate public health objectives, as determined by the appropriate public health authorities, and be proportionate to those needs. Governments must be transparent about the measures they are taking so that they can be scrutinized and if appropriate later modified, retracted, or overturned. We cannot allow the COVID-19 pandemic to serve as an excuse for indiscriminate mass surveillance.
If governments expand monitoring and surveillance powers, then such powers must be time-bound, and only continue for as long as necessary to address the current pandemic. We cannot allow the COVID-19 pandemic to serve as an excuse for indefinite surveillance.
States must ensure that increased collection, retention, and aggregation of personal data, including health data, is only used for the purposes of responding to the COVID-19 pandemic. Data collected, retained, and aggregated to respond to the pandemic must be limited in scope, time-bound in relation to the pandemic and must not be used for commercial or any other purposes. We cannot allow the COVID-19 pandemic to serve as an excuse to gut individuals’ right to privacy.
Governments must make every effort to protect people’s data, including ensuring sufficient security of any personal data collected and of any devices, applications, networks, or services involved in collection, transmission, processing, and storage. Any claims that data is anonymous must be based on evidence and supported with sufficient information regarding how it has been anonymized.
We cannot allow attempts to respond to this pandemic to be used as justification for compromising people’s digital safety.
Any use of digital surveillance technologies in responding to COVID-19, including big data and artificial intelligence systems, must address the risk that these tools will facilitate discrimination and other rights abuses against racial minorities, people living in poverty, and other marginalized populations, whose needs and lived realities may be obscured or misrepresented in large datasets. We cannot allow the COVID-19 pandemic to further increase the gap in the enjoyment of human rights between different groups in society.
If governments enter into data sharing agreements with other public or private sector entities, they must be based on law, and the existence of these agreements and information necessary to assess their impact on privacy and human rights must be publicly disclosed – in writing, with sunset clauses, public oversight and other safeguards by default. Businesses involved in efforts by governments to tackle COVID-19 must undertake due diligence to ensure they respect human rights, and ensure any intervention is firewalled from other business and commercial interests. We cannot allow the COVID-19 pandemic to serve as an excuse for keeping people in the dark about what information their governments are gathering and sharing with third parties.
Any response must incorporate accountability protections and safeguards against abuse. Increased surveillance efforts related to COVID-19 should not fall under the domain of security or intelligence agencies and must be subject to effective oversight by appropriate independent bodies. Further, individuals must be given the opportunity to know about and challenge any COVID-19 related measures to collect, aggregate, retain, and use data.
Individuals who have been subjected to surveillance must have access to effective remedies. COVID-19-related responses that include data collection efforts should include means for free, active, and meaningful participation of relevant stakeholders, in particular experts in the public health sector and the most marginalized population groups.
Signatories:
7amleh – Arab Center for Social Media Advancement
Access Now
African Declaration on Internet Rights and Freedoms Coalition
AI Now
Algorithm Watch
Alternatif Bilisim
Amnesty International
ApTI
ARTICLE 19
Asociación para una Ciudadanía Participativa, ACI Participa
Association for Progressive Communications (APC)
ASUTIC, Senegal
Athan - Freedom of Expression Activist Organization
Australian Privacy Foundation
Barracón Digital
Big Brother Watch
Bits of Freedom
Center for Advancement of Rights and Democracy (CARD)
Center for Digital Democracy
Center for Economic Justice
Centro De Estudios Constitucionales y de Derechos Humanos de Rosario
Chaos Computer Club - CCC
Citizen D / Državljan D
CIVICUS
Civil Liberties Union for Europe
CódigoSur
Coding Rights
Coletivo Brasil de Comunicação Social
Collaboration on International ICT Policy for East and Southern Africa (CIPESA)
Comité por la Libre Expresión (C-Libre)
Committee to Protect Journalists
Consumer Action
Consumer Federation of America
Cooperativa Tierra Común
Creative Commons Uruguay
D3 - Defesa dos Direitos Digitais
Data Privacy Brasil
Democratic Transition and Human Rights Support Center "DAAM"
Derechos Digitales
Digital Rights Lawyers Initiative (DRLI)
Digital Rights Watch
Digital Security Lab Ukraine
Digitalcourage
EPIC
epicenter.works
European Digital Rights - EDRi
Fitug
Foundation for Information Policy Research
Foundation for Media Alternatives
Fundación Acceso (Centroamérica)
Fundación Ciudadanía y Desarrollo, Ecuador
Fundación Datos Protegidos
Fundación Internet Bolivia
Fundación Taigüey, República Dominicana
Fundación Vía Libre
Hermes Center
Hiperderecho
Homo Digitalis
Human Rights Watch
Hungarian Civil Liberties Union
ImpACT International for Human Rights Policies
Index on Censorship
Initiative für Netzfreiheit
Innovation for Change - Middle East and North Africa
International Commission of Jurists
International Service for Human Rights (ISHR)
Intervozes - Coletivo Brasil de Comunicação Social
Ipandetec
IPPF
Irish Council for Civil Liberties (ICCL)
IT-Political Association of Denmark
Iuridicum Remedium z.s. (IURE)
Karisma
La Quadrature du Net
Liberia Information Technology Student Union
Liberty
Luchadoras
Majal.org
Masaar "Community for Technology and Law"
Media Rights Agenda (Nigeria)
MENA Rights Group
Metamorphosis Foundation
New America's Open Technology Institute
Observacom
Open Data Institute
Open Rights Group
OpenMedia
OutRight Action International
Pangea
Panoptykon Foundation
Paradigm Initiative (PIN)
PEN International
Privacy International
Public Citizen
Public Knowledge
R3D: Red en Defensa de los Derechos Digitales
RedesAyuda
SHARE Foundation
Skyline International for Human Rights
Sursiendo
Swedish Consumers’ Association
Tahrir Institute for Middle East Policy (TIMEP)
Tech Inquiry
TechHerNG
TEDIC
The Bachchao Project
Unwanted Witness, Uganda
Usuarios Digitales
WITNESS
World Wide Web Foundation
  • By Jeffrey Chester The COVID-19 pandemic is a profound global public health crisis that requires our utmost attention: to stem its deadly tide and rebuild the global health system so we do not experience such a dire situation in the future. It also demands that we ensure the U.S. has a digital media system that is democratic, accountable, and one that both provides public services and protects privacy. The virus is profoundly accelerating our reliance on digital media worldwide, ushering (link is external) in “a new landscape in terms of how shoppers are buying and how they are behaving online and offline.” Leading platforms—Amazon, Facebook and Google—as well as many major ecommerce and social media sites, video streaming services, gaming apps, and the like—are witnessing a flood of people attempting to research health concerns, order groceries and supplies, view entertainment and engage in communication with friends and family. According to a marketing industry report (link is external), “nearly 90% of consumers have changed their behavior because of COVID-19.” More data (link is external) about our health concerns, kids, financial status, products we buy and more are flowing into the databases of the leading digital media companies. The pandemic will further strengthen their power as they leverage all the additional personal information they are capturing during this crisis. This also poses a further threat to the privacy of Americans, who now depend on online services simply to get by. The pandemic is accelerating societal changes (link is external) in our relationship to the Internet. For example, marketers predict that we are witnessing the emergence of an experience they call the “fortress home”—as “consumer psychology shifts into an extreme form of cocooning.” The move to online buying via ecommerce—versus going to a physical store—will become an even more dominant consumer behavior.
So, too, will in-home media consumption increase, especially the reliance on streaming (“OTT”) video. Marketers are closely examining all these pandemic-related developments using a global lens—since the digital behaviors of all consumers—from China to the U.S.—have so many commonalities. For example, Nielsen has identified six (link is external) “consumer behavior thresholds” that reveal virus-influenced consumer buying behaviors, such as “quarantined living preparation” and “restricted living.” A host of sites are now regularly reporting how the pandemic impacts the public, and what it means for marketing and major brands. See, for example, Ipsos (link is external), Comscore (link is external), Nielsen (link is external), Kantar (link is external), and the Advertising Research Foundation (ARF (link is external)). In addition to the expanded market power of the giants, there are also growing threats to our privacy from surveillance by both government (link is external) and the commercial sector. Marketers are touting how all the real-time geolocation data that is continuously mined from our mobile devices, wearables (link is external) and “apps” can help public health experts better respond to the virus and similar threats. At a recent (link is external) Advertising Research Foundation townhall on the virus it was noted that “the location-based data that brand stewards have found useful in recent years to deliver right-time/right-place messages has ‘gone from being useful that helps businesses sell a little bit more’ to truly being a community and public-health tool.” Marketers will claim that they have to track all our moves because it’s in the national interest—a justification designed to sanction the rapid expansion of geo-surveillance (link is external) into all areas of our lives.
They are positioning themselves to be politically rewarded for their work on the pandemic, hoping it will immunize them from the growing criticism about their monopolistic and anti-consumer privacy behaviors. Amazon, Facebook, Google, Snapchat and various “Big Data” digital marketing companies announced (link is external), for example, a COVID-19 initiative with the White House and CDC. Brokered by the Ad Council, it will unleash various data-profiling technologies, influencer marketing, and powerful consumer targeting engines to ensure Americans receive information about the virus. (At the same time, brands are worried about having their content appear alongside information about the coronavirus, adopting new (link is external) “brand safety” tools that can “blacklist” news and other online sites. This means that the funding for journalism and public safety information becomes threatened (link is external) because advertisers wish to place their own interests first.) But the tactics (link is external) now sanctioned by the White House are the exact same ones that must be addressed in any legislation that effectively protects our privacy online. We believe that the leading online companies should not be permitted to excessively enrich themselves during this moment by gathering even more information on the public. They will mine this information for insights that enable them to better understand our private health needs and financial status. They will know more about the online behaviors of our children, grandparents and many others. Congress should enact protections that ensure that the data gathered during this unprecedented public health emergency are limited in how they can be used. It should also examine how the pandemic is furthering the market power of a handful of platforms and ecommerce companies, to ensure there is a fair marketplace accessible to the public. It’s also evident there must be free or inexpensively priced broadband for all. 
How well we address the role of the large online companies during this period will help determine our ability to respond to future crises, as well as the impact of these companies on our democracy.
  • Google’s (i.e., Alphabet, Inc.) proposed acquisition of Fitbit, a leading health wearable device company, is just one more piece illustrating how the company is actively engaged in shaping the future of public health. It has assembled a sweeping array of assets in the health field, positioning its advertising system to better take advantage of health information, and is playing a proactive role lobbying to promote significant public policy changes for medical data at the federal level that will have major implications (link is external) for Americans and their health. Google understands that there are tremendous revenues to be made gathering data—from patients, hospitals, medical professionals and consumers interested in “wellness”—through the various services that the company offers. It sees a lucrative future as a powerful presence in our health system able to bill Medicare and other government programs. In reviewing the proposed takeover, regulators should recognize that given today’s “connected” economy, and with Google’s capability and intention to generate monetizable insights from individuals across product categories (health, shopping, financial services, etc.), the deal should not be examined solely within a narrow framework. While the acquisition directly bolsters Google’s growing clout in what is called the “connected-health” marketplace, the company understands that the move is also designed to maintain its dominance in search, video and other digital marketing applications. It’s also a deal that raises privacy concerns, questions about the future direction of the U.S. health system, and what kinds of safeguards—if any at all—will be in place to protect health consumers and patients.
As health venture capital fund Rock Health explained in a recent report, “Google acquired Fitbit in a deal that gives the tech giant access to troves of personal health data and healthcare partnerships, in addition to health tracking software.” Fitbit reports that “28 million active users” worldwide use its wearable device products. For Google, Fitbit brings (link is external) a rich layer of personal data, expertise in fitness (link is external) tracking software, heart-rate sensors, as well as relationships with health-service and employee-benefit providers. Wearable devices can provide a stream (link is external) of ongoing data on our activities, physical condition, geolocation and more. In a presentation to investors made in 2018, Fitbit claimed to be the “number one health and fitness” app in the U.S. for both the Android and Apple app stores, considered itself the “number one wearable brand globally,” available in 47,000 stores, and had “direct applications for health and wellness categories such as diabetes, heart health, and sleep apnea.” “Driving behavior change” is cited as one of the company’s fundamental capabilities, such as its “use of data…to provide insights and guidance.” Fitbit developed a “platform for innovative data collection” for clinical researchers, designed to help advance (link is external) “the use of wearable devices in research and clinical applications.” Fitbit also has relationships with pharmacies, including those that serve people with “complex health conditions.” Fitbit has also “made a number of moves to expand its Health Services division,” such as its 2018 acquisition of Twine Health, a “chronic disease management platform.” In 2018, it also unveiled a “connected health platform that enables payers and health systems to deliver personalized coaching” to individuals.
The company’s Fitbit Health Solutions division is working with more than 100 insurance companies in the U.S., and “both government sponsored and private plans” work with the company. Fitbit Premium, launched last year, “mines consumer data to provide personalized health insights” for health care delivery. According to Business Insider Intelligence, “Fitbit plans to use the Premium service to get into the management of costly chronic conditions like diabetes, sleep apnea, and hypertension.” The company has dozens of leading “enterprises” and “Fortune 500” companies as customers. It also works with thousands of app developers and other third parties (think Google’s dominance in the app marketplace, such as its Play store). Fitbit has conducted research to understand “the relationship between activity and mood” of people, which offers an array of insights with applications for health and numerous other “vertical” markets. Even prior to the formal takeover by Google, Fitbit had developed strong ties to the digital data marketing giant. It has been a Google Cloud client since 2018, using its machine learning prowess to insert Fitbit data into a person’s electronic health record (EHR). In 2018, Fitbit said that it was going to transfer its “data infrastructure” to the Google Cloud platform. It planned to “leverage Google’s healthcare API” to generate “more meaningful insights” on consumers, and “collaborate on the future of wearables.” Fitbit’s data might also assist Google in forging additional “ties with researchers who want to unlock the constant stream of data” its devices collect.
When considering how regulators and others should view this—yet again—significant expansion by Google in the digital marketplace, the following issues must be addressed:

Google Cloud and its use of artificial intelligence and machine learning in a new data pipeline for health services, including marketing

Google’s Cloud service offers “solutions” for the healthcare and life sciences industry, helping to “personalize patient experiences,” “drive data interoperability,” and improve “commercialization and operations”—including for “pharma insights and analytics.” Google Cloud has developed a specific API (application programming interface) that enables health-related companies to process and analyze their data, using machine-learning technologies, for example. The Cloud Healthcare API also provides a range of other data functionalities for clinical and other uses. Google is now working to help create a “new data infrastructure layer via 3 key efforts,” according to a recent report on the market: it is creating “new data pipes for health giants,” pushing the Google Cloud, and building “Google’s own healthcare datasets for third parties.” (See, for example, its “G Suite for Healthcare Businesses” products as well as its “Apigee API Platform,” which works with the Cleveland Clinic, Walgreens, and others.) Illustrating the direct connection between Google Cloud and Google’s digital marketing apparatus is its case study of WPP, the leading global ad conglomerate. “Our strong partnership with Google Cloud is key,” said WPP’s CEO, who explained that “their vast experience in advertising and marketing combined with their strength in analytics and AI helps us to deliver powerful and innovative solutions for our clients” (which include “369 of the Fortune Global 500, all 30 of the Dow Jones 30 and 71 of the NASDAQ 100”). 
WPP links the insights and other resources it generates from the Google Cloud to Google’s “Marketing Platform” so its clients can “deliver better experiences for their audiences across media and marketing.” Google has made a significant push to incorporate machine learning into marketing across product categories, including search and YouTube. It is using machine learning to “anticipate needs” of individuals to further its advertising business. Fitbit will bring in a significant amount of additional data for Google to leverage in its Cloud services, which affect a number of consumer and commercial markets beyond health care. The Fitbit deal also involves Google’s ambitions to become an important force providing healthcare providers access to patient, diagnostic and other information. Currently that market is dominated by others, but Google has plans for it. For example, it has developed a “potential EHR tool that would empower doctors with the same kind of intuitive and snappy search functionality they've come to expect from Google.” According to Business Insider Intelligence, Google could “bundle such applications along with Google Cloud and data analytics support that would help hospitals more easily navigate the move to data-heavy, value-based care (VBC) reimbursement models.”

Google Health already incorporates a wide range of health-related services and investments

“Google is already a health company,” according to Dr. David Feinberg, the company’s vice president at Google Health. Feinberg explains that Google is making strides in organizing health data and making it more useful, thanks to work being done by its Cloud and AI teams: “And looking across the rest of Google’s portfolio of helpful products, we’re already addressing aspects of people’s health. Search helps people answer everyday health questions, Maps helps get people to the nearest hospital, and other tools and products are addressing issues tangential to health—for instance, literacy, safer driving, and air pollution…. and in response, Google and Alphabet have invested in efforts that complement their strengths and put users, patients, and care providers first. Look no further than the promising AI research and mobile applications coming from Google and DeepMind Health, or Verily’s Project Baseline that is pushing the boundaries of what we think we know about human health.” Among Google Health’s initiatives are “studying the use of artificial intelligence to assist in diagnosing cancer, predicting patient outcomes, preventing blindness…, exploring ways to improve patient care, including tools that are already being used by clinicians…, [and] partnering with doctors, nurses, and other healthcare professionals to help improve the care patients receive.” Through its AI work, Google is developing “deep learning” applications for electronic health records. Google Health is expanding its team, including specifically to take advantage of the wearables market (and has also hired a former FDA commissioner to “lead health strategy”).

Google is the leading source of search information on health issues, and health-related ad applications are integrated into its core marketing apparatus

A billion health-related questions are asked on Google’s search engine every day, some 70,000 every minute (“around 7 percent of Google’s daily searches”). “Dr. Google,” as the company has been called, is asked about conditions, medications, symptoms, insurance questions and more, say company leaders. Google’s ad teams in the U.S. promote how health marketers can effectively use its ad products, including YouTube, as well as how to take advantage of what Google has called “the path to purchase.” In a presentation on “The Role of Digital Marketing in the Healthcare Industry,” Google representatives reported that, after conducting various studies and surveys, Google had concluded that consumers consult 12.4 resources prior to a hospital visit. When consumers are battling a specific disease or condition, they want to know everything about it: whether it is contagious, how it started, the side effects, the experiences of others who have had the same condition, and so on. When doing this research, they will consult YouTube videos, read patient reviews of specific doctors, read blog articles on healthcare websites, and look up reviews, side effects, and uses of particular medicines. They want to know everything! When consuming this information, they will choose the business that has established its online presence, has positive reviews, and provides a great customer experience, both online and offline. Among the data shared with marketers was information that “88% of patients use search to find a treatment center,” “60% of patients use a mobile device,” “60% of patients like to compare and validate information from doctors with their own online research,” “56% of patients search for health-related concerns on YouTube,” “5+ videos are watched when researching hospitals or treatment centers,” and that “2 billion health-related videos are on YouTube.” The “Internet is a Patient/Caregiver’s #1 confidant,” they noted. 
They also discussed how mobile technologies have triggered “non-linear paths to purchase,” and that mobile devices are “now the main device used for health searches.” “Search and video are vital to the patient journey,” and “healthcare videos represent one of the largest, fastest growing content segments on YouTube today.” Their presentation demonstrated how health marketers can take advantage of Google’s ability to know a person’s location, as well as how other information related to their behaviors and interests can help them “target the right users in the right context.” To understand the impact of all of Google’s marketing capabilities, one should also review the company’s restructured (and ever-evolving) “Marketing Platform.”

Google’s Maps product will be able to leverage Fitbit data

Google is using health-related data gathered by Google Maps, such as when we search for needed care services (think ERs, hospitals, pharmacies, etc.). “The most popular mapping app in the U.S…. presents a massive opportunity to connect its huge user base with healthcare services,” explains Business Insider Intelligence. Google has laid the groundwork with its project addressing the country’s opioid epidemic, linking “Google Maps users with recovery treatment centers,” as well as identifying where Naloxone (the reversal drug for opioid overdoses) is available. Last year, Google Maps launched a partnership with CVS “to help consumers more easily find places to drop off expired drugs.” Through its Waze subsidiary, which provides navigation information for drivers, Google sells ads to urgent care centers, which find new patients as a result of map-based, locally tailored advertisements.

Google’s impact on the wearable marketplace, including health, wellness and other apps

The acquisition of Fitbit will bolster Google’s position in the wearables market, as well as its direct and indirect role providing access to its own and third-party apps. 
Google Fit, which “enables Android users to pair health-tracking devices with their phone to monitor activity,” already has partnerships with a number of health and fitness companies, such as Nike, Adidas and Noom. Business Insider Intelligence noted in January 2020 that Google Fit was “created to ensure Android devices have a platform to house user-generated health data” (making it more competitive with Apple products). In 2019, Google acquired smartwatch technology from Fossil. Fitbit will play a role in Google’s plans for its Fit service, such as providing additional data that can be accessed via third parties and made available to medical providers through patients’ electronic health records. The transaction, said one analyst, “is partly a data play,” and is also intended to keep customers from migrating from Google’s Android platform to Apple’s. It is designed, they suggest, to ensure that Google can benefit from the sales of health-related services during consumers’ peak earning years. The Google Play app store offers access to an array of health and wellness apps that will be affected by this deal. Antitrust authorities in the EU have already sanctioned Google for leveraging its Android platform for anti-competitive behavior.

Google’s health-related investments, including its use of artificial intelligence, and the role of Fitbit data

Verily is “where Alphabet is doing the bulk of its healthcare work,” according to a recent report on the role AI plays in Google’s plans to “reinvent the $3 Trillion U.S. healthcare industry.” Verily is “focused on using data to improve healthcare via analytics tools, interventions, research” and other activities, partnering with “existing healthcare institutions to find areas to apply AI.” One of these projects is the “Study Watch, a wearable device that captures biometric data.” Verily has also made significant investments globally as it seeks to expand. 
DeepMind works on AI research, including its applications to healthcare; notably, DeepMind is working with the UK’s National Health Service. Another subsidiary, Calico, uses AI as part of its focus on aging and age-related illnesses. Additionally, “GV” (Google Ventures) makes health-related investments. According to the CB Insights report, “Google’s strategy involves an end-to-end approach to healthcare,” including: data generation — digitizing and ingesting data produced by wearables, imaging, and MRIs, among other methods, a data stream critical to AI-driven anomaly detection; disease detection — using AI to detect anomalies in a given dataset that might signal the presence of some disease; and disease/lifestyle management — tools that help people who have been diagnosed with a disease, or are at risk of developing one, go about their day-to-day lives and/or make positive lifestyle modifications. Google has also acquired companies that directly further its health business capabilities, such as Apigee, Senosis Health and others. Google’s continuous quest to gather more health data, such as “Project Nightingale,” has already raised concerns. There are now also investigations of Google by the Department of Justice and state attorneys general. The Department of Justice, which is currently reviewing the Google/Fitbit deal, should not approve it without first conducting a thorough review of the company’s health-related business operations, including the impact (notably on privacy) that Fitbit data will have on the marketplace. This should be made part of the ongoing antitrust investigation into Google by both federal and state regulators. Congress should also call on the DoJ, as well as the FTC, to review this proposed acquisition in light of the changes that digital applications are bringing to health services in the U.S. 
This deal accompanies lobbying from Google and others that is poised to open the floodgates of health data that can be accessed by patients and an array of commercial and other entities. The Department of Health and Human Services has proposed a rule on data “interoperability” that, while ostensibly designed to empower health services users to access their own data, is also a “Trojan Horse” designed to enable app developers and other commercial entities to harvest that data as an important new profit center. “The Trump Administration has made the unfettered sharing of health data a health IT priority,” explained one recent news report. Are regulators really ready to stop further digital consolidation? The diagnosis is still out! For a complete annotated version, please see the attached PDF.
  • A new report on how political marketing insiders and platforms such as Facebook view the “ethical” issues raised by the role of digital marketing in elections illustrates why advocates and others concerned about election integrity should make this issue a public-policy priority. We cannot afford to leave it in the hands of “Politech” firms and political campaign professionals, who appear unable to acknowledge the consequences to democracy of their unfettered use of powerful data-driven online-marketing applications. “Digital Political Ethics: Aligning Principles with Practice” reports on a series of conversations and a two-day meeting last October that included representatives of firms that work either for Democrats or Republicans (such as Blue State, Targeted Victory, WPA Intelligence, and Revolution Messaging), as well as officials from both Facebook and Twitter. The goal of the project was to “identify areas of agreement among key stakeholders concerning ethical principles and best practices in the conduct of digital campaigning in the U.S.” Perhaps it should not be a surprise that this group appears incapable of critically examining (or even candidly assessing) all of the problems connected with the role of digital marketing in political campaigns. Missing from the report is any real concern about how today’s electoral process takes advantage of the absence of any meaningful privacy safeguards in the U.S. A vast commercial surveillance apparatus, with no bounds, has been established. This same system that is used to market goods and services, and which is driven by data brokers, marketing clouds, real-time ad-decision engines, geolocation identification and other AI-based technologies—along with the clout of leading platforms and publishers—is now also used for political purposes. 
All of us are tracked and profiled 24/7, including where we go and what we do—with little location privacy anymore. Political insiders and data-driven ad companies such as Facebook, however, are unwilling to confront this loss of privacy, given how valuable all that personal data is to their business models and political goals. Another concern is that these insiders now view digital marketing as a normative, business-as-usual process—and nothing out of the ordinary. But anyone who knows how the system operates should be deeply concerned about the nontransparent and often far-reaching ways digital marketing is constructed to influence our decision-making and behaviors, including at emotional and subconscious levels. The report demonstrates that campaign officials have largely accepted as reasonable the invasive and manipulative technologies and techniques that the ad-tech industry has developed over the past decade. Perhaps these officials are simply being pragmatic. But society cannot afford such a cynical position. Today’s political advertising is not yesterday’s TV commercial—nor is it purely an effort to “microtarget” sympathetic market segments. Today’s digital marketing apparatus follows all of us continuously, Democrats, Republicans, and independents alike. The marketing ecosystem is finely tuned to learn how we react, transforming itself depending on those reactions and making decisions about us in milliseconds in order to use—and refine—various tactics to influence us, including new ad formats, each tested and measured to make us think and behave one way or another. And this process is largely invisible to voters, regulators and the news media. But for the insiders, microtargeting helps get out the vote and encourages participation. Nothing much is said about what happened in the 2016 U.S. election, when some political marketers sought to suppress the vote among communities of color, while others engaged in disinformation. Some of these officials now propose that political campaigns should be awarded a digital “right of way” that would guarantee them unfettered access to Facebook, Google and other sites, as well as ensure favorable terms and support. This is partly in response to the recent and much-needed reforms adopted by Twitter and Google that either eliminate or restrict how political campaigns can use their platforms, reforms that many in the politech industry dislike. Some campaign officials see FCC rules regulating political ads on television as an appropriate model for digital campaigning policies. That notion should be alarming to those who care about the role that money plays in politics, let alone the nature of today’s politics (as well as those who know the myriad failures of the FCC over the decades). The U.S. needs to develop a public policy for digital data and advertising that places the interests of the voter and democracy before those of political campaigns. Such a policy should include protecting the personal information of voters; limiting deceptive and manipulative ad practices (such as “lookalike” modeling); and prohibiting those contemporary ad-tech practices (e.g., algorithmically driven real-time programmatic ad systems) that can unfairly influence election outcomes. Also missing from the discussion is the impact of the never-ending expansion of “deep-personalization” digital marketing applications designed to influence and shift consumer behavior more effectively. 
The use of biodata, emotion recognition, and other forms of what’s being called “precision data”—combined with a vast expansion of always-on sensors operating in an Internet of Things world—will provide political groups with even more ways to transform electoral outcomes. If civil society doesn’t take the lead in reforming this system, powerful insiders with their own conflicts of interest will be able to shape the future of democratic decision-making in the U.S. We cannot afford to leave it to the insiders to decide what is best for our democracy.
  • In the aftermath of Google’s settlement with the FTC over its COPPA violations, some independent content producers on YouTube have expressed unhappiness with the decision. They are unclear how to comply with COPPA and believe their revenue will diminish considerably. Some also worry that Google’s recently announced system to meet the FTC settlement—under which producers must identify whether their content is child-directed—will affect their overall ability to “monetize” their productions even if they aren’t aiming primarily to serve a child audience. These YouTubers have focused their frustration on the FTC and have mobilized to file comments in the current COPPA proceedings. As Google has rolled out its new requirements, it has abetted a misdirected focus on the FTC and created much confusion and panic among YouTube content producers. Ultimately, their campaign, designed to weaken the lone federal law protecting children’s privacy online, could create even more violations of children’s privacy. While we sympathize with many of the YouTubers’ concerns, we believe their anger and sole focus on the FTC is misplaced. It is Google that is at fault here, and it needs finally to own up and step up. The truth is, Google’s YouTube has violated the 2013 COPPA rule pretty much since that rule took effect. The updated rule made it illegal to collect persistent identifiers from children under 13 without parental consent. Google did so while purposefully developing YouTube as the leading site for children. It encouraged content creators to go all in and to be complicit in the fiction that YouTube is only for those aged 13 and above. Even though Google knew that this new business model violated the law, it benefitted financially by serving personalized ads to children (and especially by creating the leading online destination for children in the U.S. and worldwide). 
All the while, small independent YouTube content creators built their livelihoods on this illegitimate revenue stream. The corporate content brand channels of Hasbro, Mattel and the like, which do not rely on YT revenue, as well as corporate advertisers, also benefitted handsomely from this arrangement, which allowed them to market to children unencumbered by COPPA regulations. But let’s review further how Google is handling the post-settlement world. Google chose to structure the solution to its own COPPA violation in a way that continues to place the burden and consequences of COPPA compliance on independent content creators. Rather than acknowledging wrongdoing and culpability in the plight of content creators who built their livelihoods on the sham that Google had created, Google produced an instructional video for content creators that emphasizes the consequences of non-compliance and the potential negative impact on the creators’ ability to monetize. It also appears to have scared those who do not create “for kids” content. Google requires content creators to self-identify their content as “for kids,” and it will use automated algorithms to detect and flag “for kids” content. Google appears to have provided little useful information to content providers on how to comply, and confusion now seems rampant. Some YouTubers also fear that the automated flagging of content is a blunt instrument “based on oblique overly broad criteria.” Google has also declared that content designated as “for kids” will no longer serve personalized ads. The settlement and Google’s implementation are designed to assume the least risk for Google while maximizing its monetary benefits. Google will start limiting the data it collects on “made for kids” content – something it should have done a long time ago, obviously. As a result, Google said it will no longer show personalized ads on that content. 
However, the incentives for content creators to self-identify as “for kids” are not great, given that disabling behavioral ads “may significantly reduce your channel’s revenue.” Although Google declares that it is “committed to help you with this transition,” it has shown no willingness to reduce its own significant cut of the ad revenue when it comes to children’s content. While the incentives for child-directed content creators to mislabel their content are high, and equally high for Google to encourage them in this subterfuge, the consequences for non-compliance now rest squarely with content creators alone. Let’s be clear here. Google should comply with COPPA as soon as possible where content is clearly child-directed. Google has already developed a robust set of safeguards and policies on YouTube Kids to protect children from advertising for harmful products and from exploitative influencer marketing. It should apply the same protections to all child-directed content, regardless of which YouTube platform kids are using. When CCFC and CDD filed our COPPA complaint in 2018, we focused on how Google was shirking its responsibilities under the law by denying that portions of YouTube were child-directed (and thus governed by COPPA). The channels we cited in our complaint were not gray-area channels that might be child-attractive but also draw lots of teen and adult viewers. Our complaint discussed such channels as Little Baby Bum, ChuChu TV Nursery Rhymes and Kids Songs, and Ryan’s Toy Reviews. We did not ask the FTC to investigate or sanction any channel owners, because Google makes the rules on YouTube, particularly with regard to personal data collection and use, and therefore it was the party that chose to violate COPPA. (Many independent content creators concur indirectly when they say that they should not be held accountable under COPPA. 
They maintain that they actually don’t have access to detailed audience data and do not know whether their YouTube audience is under 13 at all. Google structures what data they have access to.) For other content in the so-called “gray zone,” such as content for general audiences that children under 13 also watch, or content that cannot be easily classified, we need more information about Google’s internal data practices. Do content creators have detailed access to demographic audience data, and are they thus accountable, or does Google hold on to that data? Should accountability for COPPA compliance be shifted more appropriately to Google? Can advertising restrictions be applied at the user level, once a user is identified as likely to be under 13, regardless of what content they watch? We need Google to open up its internal processes, and we are asking the FTC to develop rules that share accountability appropriately between Google and its content creators. The Google settlement has been a significant victory for children and their parents. For the first time, Google has been forced to take COPPA seriously, a U.S. law passed by Congress to express the will of the majority of the electorate. Of course, the FTC is also complicit in this problem, as it waited six years to enforce the updated law. It watched Google’s COPPA violations increase over time, allowing a monster to grow. What’s worse, the quality of the kids’ content on YouTube was, to most observers and particularly to parents, more than questionable, and at times it even placed children seriously at risk. What parents saw in the offering for their children was quantity rather than quality. Now, however, after six years, the FTC is finally requiring Google and creators to abide by the law. Just like that. Still, this change should not come as a complete surprise to content creators. 
We sympathize with the independent YT creators and understand their frustration, but they have been complicit in this arrangement as well. The children’s digital entertainment industry has discussed compliance with COPPA for years behind closed doors, and many knew that YouTube was not in compliance with COPPA. The FTC has failed to address the misinformation that Google is propagating among content creators, its recent guidance notwithstanding. Moreover, the FTC has allowed Google to settle its COPPA violation by developing a solution that lets Google abdicate responsibility for COPPA compliance while continuing to maximize revenue. It’s time for the FTC to study Google’s data practices and capabilities more closely, and to put the onus squarely on Google to comply with COPPA. As a result of the current COPPA proceedings, rules must be put in place to hold platforms like YouTube accountable.
  • Big Tech companies are lobbying to undermine the only federal online privacy law in the U.S.—one that protects children—and we need your help to stop them. Along with the Campaign for a Commercial-Free Childhood (CCFC), we ask for your help in urging the Federal Trade Commission to strengthen—not weaken—the Children’s Online Privacy Protection Act (COPPA). Please sign this petition, because your voice is essential to a future where children’s privacy is protected from marketers and others. Take action to protect the privacy of children now and in the future! Commercialfreechildhood.org/coppa