Date: TBC
Duration: 40 min
Location: TBC
Note: This session is included in the conference ticket, but the specific program has not yet been finalised.
AI can now generate entire interfaces’ worth of text in seconds. But how do we know if the words are actually good? This talk is for designers, product managers, and anyone else who already uses AI to write UX content and needs a clearer way to judge what it produces. It offers a simple, practical framework for evaluating AI-generated UX content based on five key principles: clarity, next-step guidance, reassurance, user language, and value communication.
Through real product examples, Michal will show how to spot unclear or confusing copy, refine tone and intent, and build teams' confidence in integrating AI tools without losing the human touch. She will then explain how to decide which parts of the experience deserve extra human attention, and how to use lightweight or more in-depth testing methods to check whether the words actually work. Attendees will leave with a realistic way to combine AI speed with human judgment in their UX content workflows.
Michal Kessel Shitrit

Head of English UX Writing
Draft
As Head of English UX Writing at Draft, Michal spends her days juggling Figma files and crafting content for a diverse set of products. She has worked across fintech, e-commerce, and enterprise platforms, helping brands ship clear, intuitive user experiences. With a strong background in localisation, she has helped teams adapt content for global audiences, ensuring products resonate across different languages and markets.