I was recently invited to a fireside chat with the latest awesome cohort and faculty from UC Irvine’s Master of Design & Research program.
There were lots of great questions — from job-seeking and career-path considerations, to when (and when not) to introduce friction into user flows. We even discussed some of the tricky ethical issues designers face.
But of course, the conversation inevitably gravitated to AI.
(Since I wrote about some of the newest Figma AI features announced at CONFIG to make designers’ daily workflows easier earlier this summer, I won’t tackle them here.)
The group’s questions got me thinking about how pervasive AI has become in the way I work. Here’s how I’m thinking about my current usage, across a few dimensions:
As a thought and sparring partner for initial problem framing and sensemaking
As a generator of artifacts — where those artifacts are quickly shifting from design to code
As an almost-meh proxy for users (with lots of caveats, but also potential)
AS A THOUGHT & SPARRING PARTNER 🥊
I primarily use ChatGPT (but also Claude, sometimes Gemini and Llama, and a few other smaller models on occasion) as a thought starter, thought partner, and, ultimately, thought sparring partner.
When I’m working with an existing product or feature, I aggregate user feedback and research. With a mobile app, for example, I’ll include App Store reviews highlighting the good, the bad, and the ugly.
I also use AI for initial insights into competitors’ products and services. These are typically wide-ranging conversations early on, tackling everything from technical issues and “UX/UI” problems to the perceived value of the service. Generally, it’s a broad funnel of considerations that then lets me home in on what matters.
Lastly, I like to debate and spar with it. It’s well known that most models can be pretty sycophantic, so I push back on their ideas — especially when they pertain to mine! — as I seek out contrary opinions and evidence.
“Don’t tell me what you think I want to hear” is one of my most common prompts. (I sometimes even preface it with please.)
AS AN ARTIFACT GENERATOR 🖼️ 👉 0️⃣ 1️⃣
As mentioned earlier, there have been some nice quality-of-life enhancements in design tools like Figma.
The current consensus is that developers use AI more often than designers, as the latter are still working out the best ways to incorporate it. That’s probably no surprise, given that writing and debugging code was an early adoption use case for LLMs across tech.
However, determining exactly what to build is generally the hardest part of building. Feasibility is important, of course, but viability, value, and usability are even more so.
If a designer has good product sense, a deep understanding of the problem space, and insights about the customer, that’s a pretty darn good start.
I’ve generated concepts like wireframes and sketches that are fine, but they typically require multiple revisions and refinements — and, of course, time to regenerate.
With the latest advances in vibe coding, the tools now almost let you leapfrog static design artifacts and get to something interactive much, much faster.
Being able to use natural language to get to working code — so you can communicate product intent and gather feedback — is actually pretty magical.
And when you factor in AI’s nascent ability to ingest design systems and then generate designs that adhere to brand, accessibility, and interaction pattern guidelines — well, now we’re talking superpowers!
AS AN ALMOST-MEH PROXY FOR USERS 🤖
In user research, the use of synthetic user data — that is, data from fake users generated by AI — is currently one of the more controversial topics.
AI is certainly good at parts of the research workflow. For example, with proper prompting, desk research and initial research-plan generation are generally sound. (Be sure to ask it to cite sources, though!)
It’s also excellent at analyzing data and summarizing themes. For example, researchers on my team have dumped transcripts, notes, and recordings into an LLM and the model quickly synthesized and summarized the findings.
A researcher-in-the-loop still needs to make sense of it all, though, prioritizing key findings and providing proper oversight.
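To make that workflow concrete, here’s a minimal sketch of the kind of prompt assembly involved, before anything is sent to a model. Everything here (the function name, the instruction wording, the session labels) is illustrative, not from any specific tool; the actual model call is omitted.

```python
# Hypothetical sketch: bundling raw session transcripts into one synthesis
# prompt. Labeling each session keeps findings traceable back to a source,
# and the instructions explicitly forbid invented findings.

def build_synthesis_prompt(transcripts: list[str], max_chars: int = 8000) -> str:
    """Concatenate session transcripts into a single themed-synthesis prompt."""
    sections = []
    for i, text in enumerate(transcripts, start=1):
        # Label each session so themes can cite their supporting sessions.
        sections.append(f"--- Session {i} ---\n{text.strip()}")
    corpus = "\n\n".join(sections)[:max_chars]  # crude guard against overlong prompts
    return (
        "You are helping a UX researcher synthesize interview data.\n"
        "Identify recurring themes, note which sessions support each theme,\n"
        "and flag contradictions. Do not invent findings.\n\n"
        f"{corpus}"
    )

prompt = build_synthesis_prompt([
    "User struggled to find the export button; gave up after two minutes.",
    "User praised onboarding but also couldn't locate export.",
])
```

The resulting string can then be sent to whichever chat model you prefer; the point is that the researcher controls the framing and the traceability, not the model.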
However, with synthetic user data all bets are off.
Sort of.
There is, of course, no substitute for talking to real people to understand their goals, tasks, and pain points.
Still, synthetic data does present some interesting possibilities.
If you’re not just randomly prompting a model, but instead feed it your actual user research, including raw data, and have it base responses on that corpus, then you may have a compelling use case. Think of it as an active user persona/repository that you, your team, and even stakeholders can engage with.
Otherwise, the model is just making things up based on generic patterns from across the internet. That’s too generic (and dangerous!) to be useful.
As most researchers know, data, reports, and repositories can die on the vine if they aren’t readily accessible. Continually feeding new information into a so-called ‘persona model’ keeps the understanding of customers relevant.
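A rough sketch of what such a grounded persona model could look like, under assumptions of my own: retrieval here is a naive keyword-overlap ranking standing in for proper embedding search, and the function names and prompt wording are hypothetical, not from any real product.

```python
# Hypothetical "persona model" sketch: before asking an LLM anything, pull
# only passages from your real research corpus and build a prompt that
# forbids answers outside that evidence.

def retrieve(corpus: list[str], question: str, k: int = 2) -> list[str]:
    """Rank research snippets by word overlap with the question (naive stand-in
    for embedding-based retrieval)."""
    q_words = set(question.lower().split())
    scored = sorted(
        corpus,
        key=lambda s: len(q_words & set(s.lower().split())),
        reverse=True,
    )
    return scored[:k]

def persona_prompt(corpus: list[str], question: str) -> str:
    """Build a prompt grounded in retrieved evidence, with an explicit
    refusal instruction when the evidence is silent."""
    evidence = "\n".join(f"- {snippet}" for snippet in retrieve(corpus, question))
    return (
        "Answer as a composite of our actual users, using ONLY the evidence below.\n"
        "If the evidence doesn't cover the question, say so instead of guessing.\n\n"
        f"Evidence:\n{evidence}\n\nQuestion: {question}"
    )

research = [
    "P3 said the checkout flow felt slow on mobile.",
    "P7 abandoned signup when asked for a phone number.",
    "P1 loved the dark mode toggle.",
]
prompt = persona_prompt(research, "How do users feel about the checkout flow?")
```

Feeding new interview snippets into `research` over time is what keeps the persona current; the “ONLY the evidence” constraint is what keeps it from drifting into the generic internet-pattern answers described above.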
So while synthetic user data isn’t yet ready to base business decisions on, it can be useful for generating hypotheses, and for uncovering what you might not yet understand about your customers.
I’d say that’s a promising start.
Marc