Fun with AI: Health Targeting Edition
An argument against viewing AI as the new wild west for audience targeting
I’m Alan Chapell. I’ve been working at the intersection of privacy, competition, advertising and music for decades, and I’m now a regulatory analyst for The Monopoly Report.
Our latest Monopoly Report podcast is out, featuring Kamyl Bazbaz and Joseph Jerome from DuckDuckGo. We discuss the role of the browser, the intersection between privacy and competition, and how DuckDuckGo hopes to differentiate itself within the nuances of the larger ads space.

Health targeting with AI can feel ham-fisted, as if it comes from a previous generation of data brokers
Health Targeting in the Age of AI
In my last post, I noted that some within the ads space might be viewing AI audience targeting as the next wild west. Some might even be hoping that AI will muddy the privacy rules enough to preserve some flavor of the status quo.
This week, I share a potential example of that new “wild west” sentiment as it applies to health targeting. Before we get to the meat, here's some background:
Health Targeting Pre-2020
Up until about 2020, health targeting rules in the ads space were addressed by the industry self-regulatory bodies: the NAI and the DAA. I’m not saying the rules were perfect, but they encouraged transparency - and created a baseline of what was viewed as sensitive.
The FTC and State Privacy Laws
For better or worse, the Lina Khan FTC forced a recalibration of the way the ads space viewed health segments. For example, the Khan FTC took the position that an “interested in Vitamin B” segment is sensitive. This new approach raised the question: if Vitamin B is sensitive, what is NOT sensitive?
At the same time, several U.S. states started codifying health/medical conditions and medical records as sensitive (i.e., requiring consent). And while the Andrew Ferguson FTC has seemingly backed off from the Khan FTC’s approach to health sensitivity, many states have ramped up their efforts. For example, the definition of consumer health data in Washington state’s law is incredibly broad, and several states view just about any inference related to health as subject to additional scrutiny.
Note: I haven’t discussed HIPAA here. The back story regarding the guidance that came from HHS would have decimated health targeting for HIPAA-covered entities had it not been struck down by a Texas court. (Well, that probably merits its own article.)
My larger point today is that it’s becoming increasingly challenging to target ads in the health vertical.
Demographic Targeting to the Rescue
Demographic targeting in a health context has been in use (to varying degrees) for well over a decade. But with all the uncertainty created by the Khan FTC-imposed rules, as well as state privacy laws, the ads marketplace needed guidance on how to use demographic data in a way that was privacy-safe (relatively speaking).
In response, the NAI crafted guidance designed to enable privacy-safe use of demographic data to create health profiles. The idea was that an advertiser can strategically narrow their ad campaign audience and reduce impression waste by, for example, targeting a prostate cancer drug only to men. (Disclosure: I’m currently the NAI board chair.)
I’m a fan of this use case - as it seems a reasonable attempt to increase the efficiency of ad spend. That said, I’m not a fan of health targeting companies putting this use case on steroids and then trying to hold it out as privacy-safe.
Bigger Is Better?
I’ve recently looked at marketing collateral for a health targeting company that uses “tens of thousands” of demographic segments in conjunction with AI to infer interest in certain health conditions. I’m not going to name/shame the company here.
If I’m understanding the process correctly, this company starts with de-identified medical records and then uses AI to infer “tens of thousands” of individual demographic segments from that data set. It’s difficult to believe that ethnicity, race, and religion are not somehow included in a demo model this large, but it’s hard to tell.
But with this new AI-infused demographic model, we’re no longer looking at serving a prostate cancer drug to an audience of men. Rather, we’re looking at targeting a prostate cancer drug to men who potentially (and I’m just making this list up for effect):
Speak Spanish,
Have a household income between $20-35K,
Lived near a toxic Superfund site,
Are between 40-42 years old,
Have a high school diploma,
Work as a sales clerk,
Are Latvian Orthodox,
Have two daughters and a son,
Don’t have a driver’s license,
Are not currently married.
That’s ONLY 10 demographic segments – and you’re already really close to being able to identify an actual person from the data set (if you’re not there already).
With that in mind, am I to believe that building a segment using AI to infer TEN THOUSAND demographic segments is going to be privacy-safe? Umm… how are we defining privacy-safe?
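To make the re-identification math concrete, here’s a rough back-of-envelope sketch. The population size and attribute prevalences below are invented for illustration (they are not drawn from any vendor’s model), and the calculation assumes the attributes are independent, which real demographics are not. Even so, it shows how quickly a handful of ordinary-sounding attributes narrows a metro-area population down to a single person, or fewer.

```python
# Back-of-envelope sketch: how stacking "ordinary" demographic attributes
# shrinks the pool of matching people. All numbers are invented for
# illustration, and independence between attributes is assumed, which is
# unrealistic but good enough to show the trajectory.

population = 1_000_000  # hypothetical metro-area population

# (attribute, assumed share of the population with that attribute)
attributes = [
    ("speaks Spanish", 0.13),
    ("household income $20-35K", 0.12),
    ("lived near a Superfund site", 0.05),
    ("age 40-42", 0.04),
    ("highest education: HS diploma", 0.25),
    ("works as a sales clerk", 0.03),
    ("Latvian Orthodox", 0.001),
    ("two daughters and a son", 0.02),
    ("no driver's license", 0.08),
    ("not currently married", 0.45),
]

expected_matches = float(population)
for name, share in attributes:
    expected_matches *= share
    print(f"+ {name:<30} ~{expected_matches:.3g} people still match")
```

Under these made-up assumptions, the expected number of matches drops below one by the sixth attribute; the combination is effectively a fingerprint. A model juggling thousands of such attributes is operating far past the point where “de-identified” offers much comfort.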
It’s not just about re-identification risk. It’s also about transparency, data minimization, proportionality and… well… using AI in a way that is kind of creepy.
Here’s an Example to Drive Home My Point
Below are descriptions of two different types of health segments. The first is created with a pixel placed on a web page. The second is created via AI-enhanced demographics. Which type of segment comes off as creepier?
Brain cancer inference based on a visit to HealthSite.com/BrainCancer (i.e., the classic behavioral targeting profile).
OR
Brain cancer inference based on 10K different demographic attributes that you share, all run through an AI model that infers that a particular cluster of demographic segments is predisposed to condition X.
One might argue that it’s a tie.
The first type of segment would almost certainly be deemed “sensitive” under state privacy law and/or an FTC analysis. But is the second segment really any better? It’s hard to argue that it’s any less creepy. And given that states have expanded their definitions of personal data to include inferences drawn from a data set, it’s hard to see how the second approach isn’t sensitive under state privacy law as well.
I’m not trying to tell you where to set the bar when it comes to health targeting data. Rather, if you’re looking at health advertising and profiling, remember the phrase caveat emptor.
I’m also warning us all not to use “de-identification,” thousands of demographic segments, and AI as ad tech’s next version of three-card monte. Bigger ain’t always better. And health is one of those places where it’s easy to draw a straight line between an inference and a tangible consumer harm.
If you’re in the health targeting space and feel like I’m missing or misrepresenting your models, please reach out.
__________________________________________________________________________
If there’s an area that you want to see covered on these pages, if you agree / disagree with something I’ve written, if you want to tell me you dig my music, or if you just want to yell at me, please reach out to me on LinkedIn or in the comments below.