Generating UI with Structured Outputs
It's easy to start building AI-generated UI, but hard to ship.
Here's the core problem: how do we reliably go from natural language to code to a working UI? And even if the UI is functional, how do we make it pleasant to use?
If you ask a model to write UI code, you often end up in purple hell: everything looks the same, and you can immediately tell it was AI-generated. Simon Willison would call this "slop".
[image]: a generic prompt and the generated UI
Strong primitives like shadcn/ui can improve how generated code looks, but the approach is still brittle and tied to your environment. Generating code on the fly is not viable for end users. It is slow: write, build, bundle. It is also failure-prone: type errors, missing imports, and runtime mismatches.
A better approach is to generate JSON, not code.
LLMs are good at JSON when you give them a schema. You can validate it, render it, and recover when something goes wrong.
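As a rough sketch of what that looks like: define a small component schema, validate whatever the model returns against it, and fall back gracefully when validation fails. The node types and field names below are illustrative, not a real library's API; in practice you might use a schema library like Zod instead of hand-rolled checks.

```typescript
// A tiny, closed vocabulary of UI nodes the model is allowed to emit.
// (Hypothetical schema -- your real component set would be larger.)
type UINode =
  | { type: "stack"; direction: "row" | "column"; children: UINode[] }
  | { type: "text"; value: string }
  | { type: "button"; label: string; action: string };

// Validate untrusted model output. Return null instead of throwing,
// so the caller can recover: retry the model, or render a fallback.
function parseUINode(input: unknown): UINode | null {
  if (typeof input !== "object" || input === null) return null;
  const node = input as Record<string, unknown>;
  switch (node.type) {
    case "text":
      return typeof node.value === "string"
        ? { type: "text", value: node.value }
        : null;
    case "button":
      return typeof node.label === "string" && typeof node.action === "string"
        ? { type: "button", label: node.label, action: node.action }
        : null;
    case "stack": {
      if (node.direction !== "row" && node.direction !== "column") return null;
      if (!Array.isArray(node.children)) return null;
      const children = node.children.map(parseUINode);
      if (children.some((c) => c === null)) return null;
      return { type: "stack", direction: node.direction, children: children as UINode[] };
    }
    default:
      return null; // unknown component type: reject rather than guess
  }
}

// Rendering is a pure function of the validated tree -- no build step,
// no bundler, nothing to compile at runtime.
function render(node: UINode): string {
  switch (node.type) {
    case "text":
      return `<span>${node.value}</span>`;
    case "button":
      return `<button data-action="${node.action}">${node.label}</button>`;
    case "stack":
      return `<div class="${node.direction}">${node.children.map(render).join("")}</div>`;
  }
}

// Simulated model output: JSON, not code.
const raw: unknown = JSON.parse(
  '{"type":"stack","direction":"column","children":' +
  '[{"type":"text","value":"Hello"},{"type":"button","label":"OK","action":"confirm"}]}'
);
const tree = parseUINode(raw);
console.log(tree ? render(tree) : "<fallback/>");
```

The key property is that the failure mode is contained: malformed output fails validation and you render a fallback (or re-prompt), instead of shipping code that breaks at build or runtime.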