I was recently invited to a fireside chat with the latest awesome cohort and faculty from UC Irvine’s Master of Design & Research program.
There were lots of great questions — from job-seeking and career-path considerations to when (and when not) to introduce friction into user flows. We even discussed some of the tricky ethical issues designers face.
But of course, the conversation inevitably gravitated to AI: its pervasive role in creating products and the importance of integrating it into day-to-day workflows.
(Since I recently wrote about some of the newest Figma AI features announced at CONFIG to make designers’ daily workflows easier, I won’t tackle them here.)
The group’s questions got me thinking about how quickly AI has impacted the way I work across a few dimensions of design and research strategy.
So here’s how I’m thinking about my use of AI at the moment:
As a thought and sparring partner for initial problem framing and sensemaking
As a generator of artifacts — where those artifacts are quickly shifting from design to code
As an almost-meh proxy for users (with lots of caveats)
AS A THOUGHT & SPARRING PARTNER 🥊
I primarily use ChatGPT (but also Claude, sometimes Gemini and Llama, and a few other small models on occasion) as a thought starter, thought partner, and, ultimately, a sparring partner.
When I’m working with an existing product or feature, I aggregate user feedback and research. With a mobile app, for example, I’ll include App Store reviews highlighting the good, the bad, and the ugly.
I also use AI for initial insights into competitors’ products and services. These are typically wide-ranging conversations early on, tackling everything from technical issues and “UX/UI” problems to the perceived value of the service. Generally, it’s a broad funnel of considerations that lets me home in on what matters.
Lastly, I like to debate and spar with it. It’s well known that most models can be pretty sycophantic, so I push back on their output — especially when it praises my own ideas! — as I seek out contrary opinions and evidence.
“Don’t tell me what you think I want to hear” is one of my most common prompts. (I sometimes even preface it with please.)
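To make that concrete, here’s a rough sketch of what baking that instruction into a reusable sparring setup could look like, using OpenAI’s Python SDK. The model name and the exact prompt wording are placeholders, not a prescription:

```python
# A minimal sketch (not my actual setup) of a standing
# anti-sycophancy system prompt for a sparring session.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SPARRING_PROMPT = (
    "You are a critical design sparring partner. "
    "Don't tell me what you think I want to hear: "
    "challenge my assumptions, cite contrary evidence, "
    "and argue the strongest opposing case before agreeing with anything."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; swap in whatever you use
    messages=[
        {"role": "system", "content": SPARRING_PROMPT},
        {"role": "user", "content": "Here's my framing of the problem: ..."},
    ],
)
print(response.choices[0].message.content)
```

The point isn’t the code itself; it’s that the push-back instruction lives in the system prompt, so every turn of the conversation starts from a contrarian footing instead of you having to re-ask each time.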
AS AN ARTIFACT GENERATOR 🖼️ 👉 0️⃣ 1️⃣
As mentioned earlier, there have been some nice quality-of-life enhancements in design tools like Figma.
Right now, the consensus is that developers use AI more often than designers, as the latter are still working out the best ways to incorporate it. Probably no surprise, given that writing and debugging code was one of the earliest and stickiest LLM use cases across tech.
However, determining exactly what to build is generally the harder part of creating successful products. Feasibility is important, of course, but viability, value, and usability are even more so.
If a designer has good product sense, a deep understanding of the problem space, and insights about the customer, that’s a pretty darn good start.
I’ve generated concepts like wireframes and sketches that are fine, but they typically require multiple revisions and refinements — and, of course, time to regenerate.
With the latest advances in vibe coding, the tools now almost let you leapfrog static design artifacts and get to something interactive much, much faster.
Being able to use natural language to get to working code — so you can communicate product intent and gather feedback — is actually pretty magical.
And when you factor in AI’s nascent ability to ingest design systems and then regenerate designs that adhere to brand, accessibility, and interaction pattern guidelines — well, now we’re talking superpowers!
AS AN ALMOST-MEH PROXY FOR USERS 🤖
In user research, the use of synthetic user data is one of the more controversial topics.
To be clear, AI is genuinely useful for parts of the research workflow. For example, with proper prompting, it’s pretty darn good at desk research and initial research-plan generation. (Be sure to ask it to cite sources, though!)
It’s also excellent at analyzing data and summarizing themes. For example, researchers on my team have dumped transcripts, notes, and recordings into an LLM; the model quickly synthesized and summarized the findings.
The researcher-in-the-loop still needs to make sense of it all, though: prioritizing key findings and providing proper oversight.
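For the curious, here’s roughly what that transcript-dumping workflow might look like in code. The file paths, model, and prompt are all illustrative:

```python
# A rough sketch of feeding raw transcripts to an LLM for theme
# synthesis. Paths, model name, and prompt wording are hypothetical.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Gather raw interview transcripts (assumed to live in ./transcripts).
transcripts = "\n\n---\n\n".join(
    p.read_text() for p in sorted(Path("transcripts").glob("*.txt"))
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model
    messages=[
        {"role": "system", "content": (
            "You are a UX research assistant. Identify recurring themes "
            "across these interview transcripts, with supporting quotes. "
            "Flag anything ambiguous rather than guessing."
        )},
        {"role": "user", "content": transcripts},
    ],
)
# The researcher still reviews, prioritizes, and sanity-checks this output.
print(response.choices[0].message.content)
```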
However, with synthetic user data — that is, data from fake users generated by AI — all bets are off.
Sort of.
There is, of course, no substitute for talking to real users and customers to understand their needs, pain points, and workflows.
Still, synthetic data presents interesting possibilities.
If you’re not just prompting a model cold — if, instead, you feed it your actual user research, raw data included, and have it ground its responses in that corpus — then you may have a compelling use case.
Otherwise, the model is just making things up based on existing patterns out in the wild on the internet. That’s too generic (and dangerous!) to be useful. Besides, it’s the nuance of talking to and observing customers that produces true insights and breakthroughs.
Think of that grounded setup as an active user persona/repository that you, your team, and even stakeholders can prompt.
As everyone knows, research reports and repositories can die on the vine if they aren’t readily accessible. Continually feeding new information into a persona model keeps your understanding of customers relevant.
Not to mention that when the model has no information to offer in response to a stakeholder’s prompt, that gap exposes an opportunity for researchers to dig into.
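If you want to picture the mechanics, here’s a toy sketch of that grounded persona idea: embed your research notes, retrieve the most relevant ones for each question, and have the model answer only from those excerpts. Everything here — the notes directory, the model names, the ask_persona helper — is an assumption for illustration, not a finished tool:

```python
# A toy sketch of an "active persona": ground answers in your own
# research corpus instead of letting the model free-associate.
import numpy as np
from pathlib import Path
from openai import OpenAI

client = OpenAI()
EMBED_MODEL = "text-embedding-3-small"  # assumed embedding model

# 1. Embed each research note once (in practice, use a vector store).
notes = [p.read_text() for p in sorted(Path("research_notes").glob("*.txt"))]
note_vecs = np.array([
    e.embedding
    for e in client.embeddings.create(model=EMBED_MODEL, input=notes).data
])

def ask_persona(question: str, k: int = 3) -> str:
    """Answer a stakeholder question using only the k most relevant notes."""
    q_vec = np.array(
        client.embeddings.create(model=EMBED_MODEL, input=[question])
        .data[0].embedding
    )
    # Cosine similarity of the question against every note; keep the top k.
    sims = note_vecs @ q_vec / (
        np.linalg.norm(note_vecs, axis=1) * np.linalg.norm(q_vec)
    )
    context = "\n\n".join(notes[i] for i in np.argsort(sims)[-k:])
    reply = client.chat.completions.create(
        model="gpt-4o",  # assumed model
        messages=[
            {"role": "system", "content": (
                "Answer only from the research excerpts provided. "
                "If they don't cover the question, say so. That gap is a "
                "research opportunity, not a cue to invent an answer."
            )},
            {"role": "user", "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return reply.choices[0].message.content

print(ask_persona("How do users feel about the onboarding flow?"))
```

Note the system prompt’s last instruction: telling the model to surface gaps rather than paper over them is what turns a missing answer into a research lead instead of a hallucination.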
So while synthetic user data still isn’t ready for prime time, it can be useful for generating hypotheses — and uncovering what you might not yet understand about your customers.
I’d say that’s a promising start.
Marc