# GenUI
Code for the paper: Generative Interfaces for Language Models
Stanford University
*Equal contribution <br />
<h3 align="center"> <b><a href="https://salt-nlp.github.io/generative_interfaces/">Homepage</a></b> • <b><a href="https://arxiv.org/abs/2508.19227">Paper</a></b> • <b><a href="https://huggingface.co/datasets/SALT-NLP/GenUI">Dataset</a></b> • <b><a href="https://salt-nlp.github.io/generative_interfaces/dataviewer/data_viewer.html">Data Viewer</a></b> </h3>

## What are Generative Interfaces?
We investigate Generative Interfaces for Language Models, a paradigm in which LLMs respond to user queries by proactively generating user interfaces (UIs), enabling more adaptive and interactive experiences that better support complex user goals.
## Updates
[10/07/2025] We conducted a new user study where 76 participants used their own daily queries to compare generative interfaces with conversational ones. Data is released <a href="https://huggingface.co/datasets/SALT-NLP/GenUI" target="_blank" rel="noopener noreferrer">here</a>. Data viewer is available <a href="https://salt-nlp.github.io/generative_interfaces/dataviewer/data_viewer.html" target="_blank" rel="noopener noreferrer">here</a>.
## How do Generative Interfaces work?
- **Requirement specification** [system prompt], [Code]: First, we parse the input into a requirement specification, capturing the main goal, desired features, UI components, interaction styles, and problem-solving strategies.
- **Structured representation generation** [system prompt], [Code]: Second, we generate a Structured Interface-Specific Representation based on the requirement specification.
- **UI generation** [system prompt], [Code]: To support faithful realization of the structured specification, we use a component codebase containing reusable implementations of common UI elements (e.g., charts, videos, synchronized clocks). In addition, a web retrieval module gathers relevant UI examples and data sources to inform both the representation design and the final rendering. Finally, the entire context, including the natural language query, requirement specification, structured representation, 7 predefined components, and retrieved examples, is passed to a code generation model, which synthesizes executable HTML/CSS/JS code. This completes the pipeline from query to a fully rendered, high-quality interactive interface.
- **Adaptive reward function** [system prompt], [Code]: We use a large language model to automatically generate evaluation criteria based on each user query, such as "clarity" or "concept explanation," assigning weights and verification rules to compute an overall score.
- **Iterative refinement** [system prompt], [Code]: We first generate several UI candidates and score them with the reward function. The best one is selected and used to guide the next round of generation. This process repeats with feedback until a candidate meets the quality threshold.
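The last two steps can be sketched as code. This is an illustrative sketch, not the repo's actual implementation: the names `Criterion`, `scoreCandidate`, and `refine` are invented here, and the real system uses an LLM both to produce the criteria and to generate candidates.

```typescript
// Sketch of the adaptive reward function and iterative refinement loop.
// A Criterion is one query-specific evaluation axis with a weight and a
// verification rule; in the real pipeline an LLM generates these per query.
interface Criterion {
  name: string;                    // e.g. "clarity", "concept explanation"
  weight: number;                  // relative importance
  verify: (ui: string) => number;  // verification rule, score in [0, 1]
}

// Adaptive reward: weighted sum of criterion scores for one UI candidate.
function scoreCandidate(ui: string, criteria: Criterion[]): number {
  return criteria.reduce((sum, c) => sum + c.weight * c.verify(ui), 0);
}

// Iterative refinement: generate several candidates, keep the best one,
// and feed it back as guidance for the next round until the quality
// threshold is met (or a round limit is reached).
function refine(
  generate: (guidance?: string) => string[],
  criteria: Criterion[],
  threshold: number,
  maxRounds = 5,
): string {
  let best = "";
  let bestScore = -Infinity;
  for (let round = 0; round < maxRounds; round++) {
    for (const candidate of generate(best || undefined)) {
      const s = scoreCandidate(candidate, criteria);
      if (s > bestScore) {
        bestScore = s;
        best = candidate;
      }
    }
    if (bestScore >= threshold) break;
  }
  return best;
}
```

The loop mirrors the description above: candidates are scored, the best is carried forward as guidance, and generation stops once the threshold is reached.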
## Setup
### Prerequisites

- **Package Manager**: Yarn
- **API Keys**: Put the following API keys in the `.env` file.
- **Authentication**: Supabase account for authentication
- **LangGraph Server**: LangGraph CLI for running the graph locally
- **LangSmith**: LangSmith for tracing & observability
### Installation
First, clone the repository:

```shell
git clone git@github.com:SALT-NLP/GenUI.git
cd GenUI
```

Next, install the dependencies:

```shell
yarn install
```

After installing the dependencies, set the required values (API keys, authentication information) in `./apps/web/.env.example`. Then copy it to `.env` in the root folder of the project and in `apps/web`:

```shell
# The root `.env` file will be read by the LangGraph server for the agents.
cp ./apps/web/.env.example ./.env
cp ./apps/web/.env.example ./apps/web/.env
```
### Setup Authentication
After creating a Supabase account, visit your dashboard and create a new project.
Next, navigate to the Project Settings page inside your project, then to the API tab. Copy the Project URL and the anon public project API key, and paste them into the `NEXT_PUBLIC_SUPABASE_URL` and `NEXT_PUBLIC_SUPABASE_ANON_KEY` environment variables in the `apps/web/.env` file.
After this, navigate to the Authentication page and the Providers tab. Make sure Email is enabled (and that Confirm Email is enabled as well). You may also enable GitHub and/or Google if you'd like to use those for authentication. (See these pages for documentation on how to set up each provider: GitHub, Google.)
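A missing or empty Supabase variable is a common source of silent login failures. As a minimal sketch (this helper is not part of the repo), you could sanity-check the environment before starting the app:

```typescript
// Hypothetical helper: return the names of any required Supabase variables
// that are missing or blank in the given environment.
function checkSupabaseEnv(env: Record<string, string | undefined>): string[] {
  const required = ["NEXT_PUBLIC_SUPABASE_URL", "NEXT_PUBLIC_SUPABASE_ANON_KEY"];
  return required.filter((key) => !env[key] || env[key]!.trim() === "");
}

// Example usage: fail fast at startup if anything is missing.
const missing = checkSupabaseEnv(process.env);
if (missing.length > 0) {
  console.warn(`Missing Supabase env vars: ${missing.join(", ")}`);
}
```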
### Test Authentication
To verify that authentication works, run `yarn dev` and visit `localhost:3000`. You should be redirected to the login page. From there, either log in with Google or GitHub, or, if you haven't configured those providers, navigate to the signup page and create a new account with an email and password. You will then be redirected to a confirmation page, and after confirming your email, to the home page.
### Setup Server
The first step to running Generating UI locally is to build the application. Because Generating UI uses a monorepo setup, the workspace dependencies must be built before other packages/apps can access them.
- Run the following command from the root of the repository:

  ```shell
  yarn build
  ```

- Navigate to `apps/agents` and run `yarn dev` (this runs `npx @langchain/langgraph-cli dev --port 54367`). You will see something like:

  ```
  Ready!
  - 🚀 API: http://localhost:54367
  - 🎨 Studio UI: https://smith.langchain.com/studio?baseUrl=http://localhost:54367
  ```

- After your LangGraph server is running, execute the following command inside `apps/web` to start the Generating UI frontend:

  ```shell
  yarn dev
  ```

  On initial load, compilation may take some time.

- Open `localhost:3000` in your browser and start trying generative interfaces.
  - Using Claude is recommended. Turn on web search to enable fetching relevant web pages.
  - Generation can take multiple minutes due to iterative generation.
  - You can track the intermediate steps in the terminal where you run `yarn dev` in `apps/agents`.
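The frontend depends on the LangGraph server already being up. A small helper like the following could poll it before starting; this is a sketch, not part of the repo, and the `/ok` health path is an assumption about the local LangGraph dev server:

```typescript
// Hypothetical helper: build the health-check URL for a LangGraph server.
// The `/ok` path is an assumption about the local dev server's API.
function healthUrl(baseUrl: string): string {
  return `${baseUrl.replace(/\/+$/, "")}/ok`;
}

// Poll the server until it responds, or give up after `retries` attempts.
async function waitForServer(
  baseUrl: string,
  retries = 30,
  delayMs = 1000,
): Promise<boolean> {
  for (let i = 0; i < retries; i++) {
    try {
      const res = await fetch(healthUrl(baseUrl), {
        signal: AbortSignal.timeout(2000), // don't hang on a dead port
      });
      if (res.ok) return true;
    } catch {
      // server not up yet; fall through to the retry delay
    }
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  return false;
}
```

For example, `waitForServer("http://localhost:54367")` would return `true` once the LangGraph server from the step above is ready.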
## Troubleshooting
For problems related to `pdf-parse`, you might refer to the solution here.
## Citation
If you find this work useful for your research, please cite our paper:
```bibtex
@misc{chen2025generative,
  title={Generative Interfaces for Language Models},
  author={Jiaqi Chen and Yanzhe Zhang and Yutong Zhang and Yijia Shao and Diyi Yang},
  year={2025},
  eprint={2508.19227},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```