Most teams treat LLM design tools like magic generators. The teams that get better results treat them like collaborators that need context, constraints, and iteration. This guide walks through the five-step workflow behind good output, using Figma Make as one example of a broader class of tools.
Five artifacts that chain together, each one feeding the next:

1. The problem brief: a markdown document defining the core problem, design priorities, edge cases, and success criteria. Keep it in the tool context so every subsequent session builds on the same problem definition.
2. User personas: structured persona docs with goals, frustrations, behaviours, and one critical design implication each. When the tool can reference them, it designs for real humans instead of generic SaaS defaults.
3. Brand tokens: a living design token reference covering colors, type scale, spacing, tone of voice, and what the brand should never look like. This is the fastest path to consistent output across sessions and tools.
4. The generation prompt: the actual generation prompt, written to reference all three prior artifacts. With the brief, personas, and brand doc loaded, the output quality difference versus a blank prompt is categorical, not incremental.
5. Follow-up prompts: six high-leverage follow-up prompts covering hierarchy, accessibility, responsiveness, data states, and the Persona Check, the most underused review in the kit.
The biggest unlock in LLM design tools isn't a better prompt. It's the context you load before the first generation.
When you open a tool like Figma Make or Claude Design, the intro screen usually offers fast sample generations. These demonstrate what the tool can produce and are useful for exploring range. They're capability demos, not a starting point for real product work. The output reflects no knowledge of your users, your problem, or your brand.
Steps 1–3 of this guide generate markdown documents: your problem brief, user personas, and brand tokens. Keep these in whatever context layer your tool supports: a guidelines folder, project knowledge, attached docs, or pinned reference files. When you finally prompt for a screen, the tool already understands the user, the problem, and the brand. None of that has to be re-explained from scratch.
Build the context files in one session, then start a fresh chat for the first real screen prompt. Long setup history tends to pollute generation quality once you move from framing into layout work.
Your context files handle the "who" and "why"; your prompt handles the "what" and "how." These examples show what a specific, context-informed prompt produces versus a blank one.
You don't have to write your context documents alone. Describe what you're building to Claude first — the screen, the user, what they need to accomplish — and ask it to draft the problem brief or persona doc. A short conversation builds the files that make your first generation session significantly more productive.
Do this before you touch a single frame.
Ask the tool to generate a design brief first. This forces clarity on what you're actually solving and gives you the first document to keep in your reusable tool context. Once it's there, every later generation has the "why" behind the design without you re-explaining it.
The brief becomes a reusable artifact. Keep it in the tool context layer your workflow supports so future sessions start with the same problem definition already loaded.
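As a sketch of what this artifact can look like, here is a minimal brief for a hypothetical Acme Corp account screen (every specific below is an invented placeholder, not output from any real tool or project):

```markdown
# Problem Brief: Account Overview Screen

## Core problem
Customers can't quickly see what needs their attention across
multiple accounts; urgent alerts are buried below balances.

## Design priorities
1. Surface urgent items (failed payments, low balances) first
2. One-glance account health, details on demand
3. Keep cognitive load low for infrequent users

## Edge cases
- New user with a single, empty account
- Power user with 12+ accounts
- Account in a locked or restricted state

## Success criteria
- A user can name their most urgent task within 5 seconds
- No action requires more than two taps from this screen
```

Even a brief this short changes the first generation: the tool now has priorities to rank by and edge cases to design around.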
Give the model a human to design for.
Generate structured persona documents as markdown. These aren't just for your team. They become part of your reusable context so every generated component reflects real user needs, not generic patterns. The more specific the persona, the more distinctive the output.
After generating personas, ask the tool to pick the most underserved persona and explain what a design optimised specifically for them would prioritise differently. This surfaces directions you wouldn't reach any other way.
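For illustration, a persona doc in this structure might look like the following ("Priya" and all details are hypothetical placeholders, following the goals / frustrations / behaviours / implication shape described above):

```markdown
# Persona: Priya, the Time-Pressed Account Manager

## Goals
- Check client account status between meetings
- Catch problems before clients call about them

## Frustrations
- Dashboards that bury alerts under vanity metrics
- Re-learning the interface after a month away

## Behaviours
- Checks in on mobile, 2-3 times a day, under 90 seconds each
- Ignores anything that looks like marketing

## Critical design implication
Every screen must answer "is anything wrong?" before anything else.
```

The single "critical design implication" line is the part the generator acts on most directly, so make it concrete and testable.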
Stop the tool from going off-brand before it starts.
Generate a design token and style reference document and keep it in your reusable tool context. It's the single most efficient way to get consistently on-brand output, far more effective than repeating style instructions in every prompt.
Ask the tool to generate a visual HTML style guide instead of markdown — a living reference page that renders the actual colors, type scale, and component styles in the browser. Far more useful to share with a team or client than a text document.
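A minimal markdown version of the token reference might look like this (all values are invented examples, not a recommended palette):

```markdown
# Brand & Style Reference

## Colors
- Primary: #1A3C8C (actions, links)
- Alert: #C0352B (errors and urgent states only)
- Neutral scale: #111827 / #6B7280 / #F3F4F6

## Type scale
- Display 28/34, Heading 20/28, Body 16/24, Caption 13/18

## Spacing
- 4px base unit; components padded in multiples of 8px

## Tone of voice
- Plain, direct, no exclamation marks

## Never
- Gradients, drop shadows deeper than 2px, stock-photo people
```

The "Never" section is often the highest-leverage part: negative constraints cut off the generic defaults these tools fall back on.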
Now you're ready. And you have context behind you.
This is the prompt most people start with. Starting here without steps 1–3 means the tool guesses at the problem, the user, and the brand. With those three documents loaded into the context layer, this becomes a fundamentally different request. The example below is an Acme Corp financial services screen, but the principle applies to any screen you're building.
With all three documents in the tool context, you're giving the model the equivalent of a design system + user research + creative brief automatically loaded into every session. The output difference compared to a blank prompt is categorical, not incremental.
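A context-informed generation prompt for the Acme Corp example might read like this (the persona name and every detail are placeholders carried over from the hypothetical context docs, not a prescribed template):

```markdown
Using the problem brief, personas, and brand reference already in
project context, design the Acme Corp account overview screen.

- Optimise for Priya (see persona doc): urgent items first,
  one-glance account health, minimal cognitive load.
- Follow the brand tokens exactly; do not introduce new colors.
- Cover the edge cases in the brief (empty account, 12+ accounts,
  locked state) as distinct data states.
- Mobile-first, 375px frame; treat this as structure, not polish.
```

Notice that the prompt itself stays short: it points at the context documents instead of restating them.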
The first output is a skeleton. These prompts shape it.
Follow-up prompts are where most of the craft happens. These six patterns each target a different failure mode in generated output. Use them individually — one at a time — and always describe what you want rather than what went wrong.
The last prompt is the most underused. Asking the tool to critique its own output through the eyes of a specific user — with the persona doc attached — surfaces issues that no amount of visual polish will fix. Run it before you call any screen done.
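To make the patterns concrete, here is one illustrative phrasing per failure mode; these are sketches, not the kit's canonical wording, and the "content" prompt is an assumed sixth pattern (the source names only five areas). "Priya" is the hypothetical persona from earlier examples:

```markdown
1. Hierarchy: "Make the single most important action on this screen
   unmistakable; demote everything that competes with it."
2. Accessibility: "Review contrast, touch-target sizes, and focus
   order against WCAG AA; list what you changed."
3. Responsiveness: "Show this screen at 375px, 768px, and 1280px;
   reflow the layout rather than shrinking it."
4. Data states: "Show empty, loading, error, and overflowing-data
   versions of this screen."
5. Content: "Replace placeholder copy with realistic values from the
   problem brief; no lorem ipsum."
6. Persona Check: "Review this screen as Priya (persona doc
   attached). What would frustrate her in the first 10 seconds?"
```

Each phrasing describes the desired state rather than the flaw, matching the "describe what you want" rule above.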
These patterns produce outputs you'll spend more time fighting than refining.
Run your prompt by Claude first: "I'm about to use this design-tool prompt — what's missing or too vague?" It catches the same gaps the generator will struggle with. This is especially useful for your first prompt on a new screen type, and for writing reusable prompt patterns or skills.
"Treat the first generation as a skeleton, not a final result. Get the structure right first, then iterate on style."