From GenAI to GenUI: Codify your UI on the fly

At WebExpo 2025, international speaker and developer advocate Tejas Kumar delivered an engaging live-coding session exploring how generative AI is transforming user interface development. His talk went beyond the buzz of chatbots to introduce the concept of Generative UI – an approach that allows AI to build and update interfaces in real time, creating smoother, more native user experiences.

Historical context: The evolution of AI and UI

Tejas set the stage by highlighting a significant moment in the AI landscape: the release of ChatGPT in November 2022. He noted that “immediately, the influencers, the hype people, they’re like, oh my god, this is huge,” exemplifying how quickly the tech community mobilised towards building chatbots. Prior to this innovation, chatbots were often clunky, uninspired interactions that required users to engage with a cumbersome interface.

The LLM and UX landscape in 2022 was marked by performance that felt slow compared to today’s standards. Tejas joked, “This is the UX of 2022 when everybody was starting to build chatbots. Good luck. Just wait and wait.” At the time, response times could stretch up to 15 seconds, serving as a clear indicator that change was necessary.

Real-time streaming for better UX

A more interactive and responsive user experience became achievable thanks to advancements in real-time streaming. In 2023, developers began integrating these capabilities into their applications, shifting from static replies to incremental content delivery. Tejas demonstrated this evolution through code examples, showing how to stream responses and make use of partial outputs from language models to enrich user interactions.
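The shift from waiting on a full response to rendering partial output can be sketched as follows. This is a minimal illustration, not the code from the talk: `streamCompletion` is a hypothetical stand-in for a real model API, simulated here with an async generator so the example is self-contained.

```javascript
// Stand-in for a streaming model API: a real client would yield
// chunks arriving over the network instead of a fixed token list.
async function* streamCompletion(prompt) {
  const tokens = ["Generative ", "UI ", "streams ", "content ", "as ", "it ", "arrives."];
  for (const token of tokens) {
    yield token;
  }
}

// Render partial output as soon as each chunk lands, instead of
// blocking until the full response is ready.
async function renderStream(prompt, onChunk) {
  let text = "";
  for await (const chunk of streamCompletion(prompt)) {
    text += chunk;
    onChunk(text); // e.g. update a DOM node's textContent
  }
  return text;
}
```

The key UX difference is that `onChunk` fires on every partial result, so the user sees words appear immediately rather than staring at a spinner for many seconds.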

Credit: Tejas Kumar

He also shared practical techniques for building applications that deliver quick, iterative feedback. This is where the concept of “partial JSON” comes into play. “How do you deal with incremental JSON?” Tejas asked, highlighting the usefulness of NDJSON (Newline Delimited JSON) for handling streamed data. By using this format, developers can parse and present information more efficiently as it arrives.
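The NDJSON idea can be shown with a small parser sketch (an illustration under the assumption that each record occupies one line, per the NDJSON convention; this is not the talk's exact code). Because every newline terminates a complete JSON object, records can be parsed the moment their line arrives, even when network chunks split a record mid-way.

```javascript
// Incremental NDJSON parser: feed() accepts arbitrary chunks and
// emits each complete JSON record exactly once.
function createNdjsonParser(onRecord) {
  let buffer = "";
  return function feed(chunk) {
    buffer += chunk;
    const lines = buffer.split("\n");
    buffer = lines.pop(); // keep the trailing partial line for the next chunk
    for (const line of lines) {
      if (line.trim()) onRecord(JSON.parse(line));
    }
  };
}

// Chunk boundaries can fall anywhere, including inside a record.
const records = [];
const feed = createNdjsonParser((r) => records.push(r));
feed('{"title":"Alien"}\n{"title":"Ar');
feed('rival"}\n');
// records → [{ title: "Alien" }, { title: "Arrival" }]
```

The buffer holds only the unfinished tail of the stream, so memory stays small no matter how long the response runs.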

Generative UI: A new frontier in application design

Perhaps the most striking revelation from Tejas’ presentation centred around the potential of Generative UI. This approach leverages AI’s ability to produce dynamic user interface elements or components on the fly. Tejas pointed out that, “if you can now stream JSON objects across the network, you can stream portions of the DOM,” effectively blending data generation and user interface design.
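One way to picture “streaming portions of the DOM” is to have the model emit JSON component descriptors that a renderer turns into markup as each one arrives. The `{ type, props, children }` schema below is an assumption for illustration, not the schema from the talk.

```javascript
// Turn a streamed JSON component descriptor into markup.
// Descriptor schema ({ type, props, children }) is hypothetical.
function renderComponent(node) {
  if (typeof node === "string") return node; // text node
  const { type, props = {}, children = [] } = node;
  const attrs = Object.entries(props)
    .map(([k, v]) => ` ${k}="${String(v)}"`)
    .join("");
  const inner = children.map(renderComponent).join("");
  return `<${type}${attrs}>${inner}</${type}>`;
}

// As each descriptor arrives over the stream, it can be rendered
// and appended to the page immediately.
const html = renderComponent({
  type: "li",
  props: { class: "movie" },
  children: [{ type: "strong", children: ["Arrival"] }, " (2016)"],
});
// html → '<li class="movie"><strong>Arrival</strong> (2016)</li>'
```

Combined with the NDJSON approach above-the-wire, each line of streamed JSON becomes a UI fragment the instant it lands.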

To demonstrate this, he introduced a simple “Movies plus plus” application, which illustrated the capabilities of Generative UI in real time. In this demo, users could enter informal queries, such as searching for films with a strong female lead, and receive relevant results almost instantaneously. Tejas showcased how the application can produce visual components that mirror user intent without excessive manual input, a remarkable step forward in usability. Compared with Netflix’s results for the same prompt, it wasn’t even a contest.

Credit: Tejas Kumar

Tejas asserted, “This is what we can do with AI and UX,” framing AI as a crucial tool for building interfaces that adapt in the moment.

Model Context Protocol: Automating developer workflows

Later in the talk, Tejas revealed the potential of the Model Context Protocol (MCP), a client-server architecture designed to streamline coding automation. MCP allows AI applications to fetch and process only the necessary data on demand, removing the need to click through countless pages or interfaces. By delivering just what’s needed, when it’s needed, MCP simplifies user workflows without sacrificing accessibility.
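MCP frames its client-server messages as JSON-RPC 2.0, so a tool invocation is just a small structured request. The sketch below shows that shape; the tool name and arguments are hypothetical, chosen to echo the WebExpo example, and a real client would send this over an MCP transport such as stdio or HTTP.

```javascript
// Build a JSON-RPC 2.0 request for an MCP tool call.
// "tools/call" is the MCP method for invoking a server-side tool;
// the tool name and arguments here are illustrative only.
function buildToolCall(id, name, args) {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

// Hypothetical call: ask an MCP server for a conference schedule.
const request = buildToolCall(1, "get_talks", { conference: "WebExpo" });
```

The server replies with a result keyed to the same `id`, delivering just the requested data rather than a page the user has to navigate.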

By utilising MCP, developers can build platforms that intelligently gather and present data, significantly enhancing interaction quality. Tejas demonstrated this by querying details of upcoming talks at WebExpo. “Did I have to go browse the internet? Did I have to click on buttons? Did I have to suffer maybe some inaccessible pattern?” he asked, showcasing the immediacy of information retrieval through integrated AI functionalities.

Credit: Tejas Kumar

Looking ahead: The future of user experiences

Tejas concluded his talk by painting a picture of what the future might hold for the intersection of AI and UX. He noted that as technology evolves, the conventions of interface design could shift dramatically. Quoting Kent C. Dodds, he mused, “I don’t know if we’ll need buttons. I don’t know if we’ll need web pages,” suggesting a future where AI-driven interfaces might take centre stage. “That’s his take,” Tejas added. “I still rest in the camp of ‘I don’t know’.”

The core takeaway from Tejas’ enlightening WebExpo talk was that generative AI and its applications offer a pathway for creating more intuitive, human-centric experiences. By accelerating development cycles, generative AI promises to eliminate the frustrations of traditional UI paradigms while enabling a more engaging user journey.

For anyone keen to explore the concepts, demos, and coding techniques Tejas covered in more detail, you can watch the full talk and access the slides just below.
