The Case Against AI-Generated Users

And alternative suggestions for design research.
Words: Dan Perkel, Angela Kochoska, Will Notini
Visuals: Beth Holzer
Read time: 9 minutes
Published: January 2025

Design research is at an inflection point. Practitioners and budget-setters are going back and forth over whether it’s worth the time and money: What really brings value—and for what purpose—particularly in an era when budgets are tight? Similarly, folks focused on UX research argue that we are in the middle of a “reckoning,” questioning its ROI in certain contexts. Others disagree.

As this tension builds, the world of generative AI offers a new proposition: AI-generated—aka synthetic—users. Here’s the pitch: Are you sick and tired of going through the cost, time, and difficulty of crafting a research plan, finding participants, and creating thoughtful questions? Are moderated exploratory sessions and usability tests taking too long to conduct and synthesize into something actionable? There is a solution: generative AI tools that will simulate what real people might say and then feed it back to you instantaneously. Perhaps even synthesize it automatically!

The synthetic users proposition is meant to be a silver bullet. Already, folks are using it, skipping the time, cost, and hassle of doing research with real people. As leaders in this space, we felt obligated to take it seriously enough to try it out (with a healthy mix of skepticism and optimism).

As one of our teams kicked off a project to improve rural healthcare, they decided to enlist ChatGPT for synthetic patient and practitioner insights, using several generative AI tools to create artificial users to interview. It turned out that, after three weeks of swimming in a lukewarm sea of AI-generated information, all the team needed was one hour with a real flesh-and-blood patient and a human physician who works with marginalized rural communities to show just how deep and complex real human experiences are, and how little of that the artificial users could convey.

[Illustration: Pedestrians walking on a bright pink crosswalk, with some figures replaced by pink checkerboard silhouettes.]

What design research accomplishes vs. what LLMs can deliver

Relying on synthetic users (really just a lot of data run through an LLM) isn’t just bad for practitioners; it’s bad for business outcomes and for the people we’re designing for and with. AI is unable to provide the key ingredients of research that are critical to our work:

Important tangents

In open-ended interviews, we follow the lead of participants, inviting them to take us on their journey, even when—or especially when—it seems to deviate from what we thought we were going to talk about.

Unexpected insights

Looking at something from research participants’ points of view leads to things we never would have learned otherwise, and it creates sparks of insight that help us shift priorities or radically reimagine both the problem we’re trying to solve and the potential business outcomes.

Context clues

When you spend time in someone’s home or place of work, their artifacts, photos, prized possessions, accidental intrusions from family members—and a host of other things—can dramatically alter the course of a research experience and lead to new insights.

Contemporary generative AI products enshrine design principles and methods of working that run counter to what we know about good research practice.

People pleasers

They are designed and trained to please you and be helpful. Sometimes participants try too hard to be helpful as well, but researchers are trained to sense this and dig further. There’s nowhere to dig further with a synthetic user.

Emotionless entities

Contemporary LLMs lack authentic emotion. They don’t leave the markers and cues that participants often drop to signal there might be more to learn. And if you want a more emotional response, then it’s up to you as a researcher to prompt it—which defeats the whole purpose of doing research in the first place.

A shallow context pool

LLMs have to be fed the right context, and they can’t say unexpected things beyond what you’re already asking about. In real research, though, what counts as relevant only becomes clear as the process unfolds.

Forced assumptions

Actually writing a prompt that gets you meaningful answers requires you to make a ton of assumptions about what you might learn.

A disembodied “person” composed entirely of data, which has been synthesized by an LLM looking at vast collections of other disembodied artifacts or words, simply cannot provide the insights and discoveries that research is designed to produce.

Others agree. Even ChatGPT. When we asked it, “Is it ethical to use synthetic users instead of real ones?” ChatGPT told us, “Users provide valuable qualitative feedback, emotions, and insights that synthetic users cannot replicate. Overuse of synthetic users might result in missing these important human factors in design.”

Or, as researcher Christopher Roosen puts it, “Using synthetic humans for direct research is like navel gazing, but not at our own navels, but into the great messy, cluttered, and often-disgusting navel of the Internet-with-a-capital-I. The solution isn’t to make up fake people. It’s getting better access to real people.”

[Illustration: Two individuals walking across a blue pavement, their figures replaced by pink checkerboard silhouettes.]

Scrappy ways to accomplish the same goals

Design research can feel like it takes too long or is too speculative or unrewarding. Sometimes, it can lead to a lot of swirl, creating an anxiety gap between insights and the creation of new solutions. There’s also the real problem of resources. Or maybe your stakeholders are looking for immediate results. It’s reasonable for teams to wonder how they can take a more direct path.

But that doesn’t mean you have to discard human-centered research and design altogether. We, too, often work on tight timelines and have to find scrappy ways to get real data from real people. Here are some practical strategies for even the most shoestring budget.

Friends and family interviews

You should strive to recruit a balanced sample of research participants and work to eliminate any bias baked into the sample design. But doing design research—especially in the early stages of the process—is different. At this stage, it’s actually beneficial to embrace the bias of the sample and use it as a tool to learn. Talking to friends and family to get feedback on early concepts is a viable way to recruit research participants—so long as those people care deeply about the subject matter and you understand the limitations of their perspectives.

Diary studies

When you want to quickly get data from people’s daily lives, often over a specific period of time, diary studies—having people log daily activity or respond to specific prompts throughout their day—come in handy. The size and scope of the data can be tailored to your immediate needs: You can ask participants to log their habits and thoughts throughout the day in a text or audio journal, take photos and videos, and reflect on your product or similar ones. There are even platforms like dscout that streamline the recruitment, data collection, and analysis process. In one diary study, we asked a couple dozen participants to show us the moments that mattered in their week—which completely changed how we design for major TV moments in the streaming era.

Intercepts

An intercept is a quick conversation with someone in the physical context you’re designing for. Where better to discuss a hospital’s waiting room experience than in that same waiting room? While the insights from intercepts are often targeted and specific, they’re ideal for quick, actionable feedback—and relatively inexpensive, too. On one project, we showed up at a major home appliance retailer to talk to customers and associates about the problems they were trying to solve together. We discovered that people often came in for a face-to-face interaction that would help them decide whether to repair or replace their home appliance. But while associates were eager to solve the problem, they felt incredibly ill-equipped to do so and could only offer anecdotes based on personal experience.

Expert interviews

Experts bring not only knowledge but also intuition acquired through lived experience, critical perspectives, and sometimes information that isn’t widely accessible. When our colleagues worked on that project to improve access to healthcare for rural populations, talking to one expert in the field gave them the broadest view of the challenges that population faces and helped them design with greater empathy. Instead of trying to create something purely digital, the team pivoted to solutions that foster trust and understanding between doctor and patient.

Expert interviews are typically more expensive (they can range from $250 to $1,000 per hour, depending on the expert), but luckily you don’t need many of them. You can also think broadly about what qualifies someone as an expert. Communities tied together by a common interest (e.g., on Reddit) may offer a different kind of expertise than the capital-E experts available for hire.

Social signals

If all else fails, sometimes it’s more impactful to hear people’s opinions directly through the data they leave behind online in the form of posts, comments, and reviews. Subreddits organized around particular interests, trending topics on social media, and product reviews can uncover deeper insights into people’s true feelings and experiences, both good and bad. When we worked with a music gear manufacturer dealing with increased returns and customer dissatisfaction, a simple analysis of their own review data (a rough sketch of that kind of analysis appears below) helped us identify specific problems, along with the solutions customers themselves suggested.
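To make “simple analysis” concrete, a first pass can be as modest as counting how often complaint-related terms show up in low-rated reviews, then reading the reviews behind the biggest counts. The Python sketch below assumes a hypothetical reviews.csv export with rating and text columns; the file name, column names, and term list are our own illustrative choices, not the client’s data or tooling.

```python
# A minimal sketch of a first-pass review analysis, not a full pipeline.
# The file name, column names, and complaint terms below are hypothetical.
import csv
import re
from collections import Counter

COMPLAINT_TERMS = ["return", "broken", "defective", "noise", "firmware", "manual"]


def complaint_counts(path: str) -> Counter:
    """Count how often each complaint term appears in 1- and 2-star reviews."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if int(row["rating"]) <= 2:  # focus on unhappy customers
                words = set(re.findall(r"[a-z']+", row["text"].lower()))
                counts.update(term for term in COMPLAINT_TERMS if term in words)
    return counts


if __name__ == "__main__":
    for term, count in complaint_counts("reviews.csv").most_common():
        print(f"{term}: {count}")
```

Even a tally this crude only tells you where to look; the insight still comes from reading what real customers wrote, not from the counts themselves.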

Looking ahead

Generative AI and LLMs are very useful tools that we’ve embraced in many of our distinct design crafts. And we still believe that there is an opportunity to use LLMs in design research. They can help us think about problems in new ways, help shape a research plan, and guide interviews. After all, LLMs do have access to a wide variety of material that could be quite inspirational in planning phases—so long as we realize they are not providing insights from real people. At least, for now. (We remain open to what the future might bring.) Because today, unless you’re designing for LLMs, not accounting for real human input is a shortcut to nowhere.

Dan Perkel
Partner, San Francisco
As a leader of IDEO’s Media, Entertainment, and Technology practice in North America, I work with some of the world's leading companies to imagine new digital platforms and experiences, build new creative capabilities, and write their next chapters of sustainable growth.
Angela Kochoska
Senior Designer
Angela Kochoska is a data scientist who leverages the power of data-driven design research and emerging technologies to fuel human-centered design. She works at the intersection of design and technology through projects where she often experiments and prototypes with Generative AI, natural language processing, and mixed methods analysis.
Will Notini
Senior Design Research Lead
Will draws on his training in social science research and strategy to execute design and innovation work for clients in a range of industries. He manages multi-disciplinary teams through iterative design projects including ideation, prototyping, testing, and roadmapping phases.
Beth Holzer
Design Lead, Global Marketing
Beth brings ideas to life with visual design, using craft to add context and texture. She specializes in translating complex ideas into imagery that tells compelling stories.