NOTE: Running Mock Example
Every guide in this playbook uses the same feature: the Loan Application Review Table for portfolio LOAN-2024-Q3. A requirement captured in Phase 01 feeds the design in Phase 02, the handoff in Phase 03, and the test cases in Phase 05 — no retranslation needed.
Playbook

Choose Your Craft

Each craft section is a self-contained module. Start at Product if you're building something new. Jump into Design, Handoff, or Development if you're picking up mid-process.

Phase 01
Product
Capture stakeholder input and structure it so every downstream AI tool — Figma Make, Copilot, Claude — can consume it without guessing.
Phase 02
Design
Turn structured requirements into screens using LLM-powered design tools like Claude Design, Figma Make, and Cursor — each suited to a different output and audience.
Phase 03
Handoff
Capture the modern handoff layer between prototype and implementation: working code artifacts, evolved specs, component mapping, and the decisions that should survive into the real codebase.
Phase 04
Development
Learn the AI coding stack end to end: model choice, CRISP prompts, agent workflows, instruction files, MCP, and the release controls that turn agentic delivery into an audit-ready deployment process.
Phase 05
QA & Testing
Use prompts, automated test generation, PR gates, GitHub Actions, Playwright evidence, and requirement-tagged artifacts to make testing faster and more audit-ready.
Architecture

The Full AI-Powered SDLC

Six phases from UX discovery through validation reporting, with AI agents, human gates, and traceability baked in at every step.

Acme Corp AI-Powered SDLC diagram: six phases from UX Discovery through Validation Reporting, showing Figma/Storybook, Azure DevOps, GitHub Actions agents, and human gate approvals.


Why this works

One Requirement, Four AI Outputs

A well-structured requirement cascades through your entire AI toolchain. The 15 minutes you spend structuring it up front saves hours of rework when AI tools need to guess at intent.
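As a sketch of what "structured input" can look like for the running mock example, the record below captures one requirement as a machine-readable artifact. The field names (requirement_id, actor, behavior, acceptance_criteria) and the helper to_design_prompt are illustrative assumptions, not a prescribed schema — the point is that one structured artifact can feed design prompts, code generation, and test generation without retranslation:

```python
# Hypothetical structured-requirement record for the running mock example.
# Field names are illustrative; any consistent schema works, as long as
# every downstream AI tool reads the same artifact.
requirement = {
    "requirement_id": "REQ-LOAN-2024-Q3-001",
    "actor": "Loan Officer",
    "feature": "Loan Application Review Table",
    "trigger": "Officer opens the portfolio review screen",
    "behavior": [
        "Table loads all open applications in portfolio LOAN-2024-Q3",
        "Applications pending more than 30 days show an overdue flag",
        "Officer can sort by days pending and by credit risk flag",
    ],
    "acceptance_criteria": [
        "Given 6 open applications, all 6 rows render with ID, risk flag, days pending, and branch",
        "An application pending exactly 30 days is NOT flagged overdue; 31 days is",
    ],
}

def to_design_prompt(req: dict) -> str:
    """Render the same record as a specific, data-rich design prompt."""
    return (
        f"Design a {req['feature']} for a {req['actor']}. "
        f"Trigger: {req['trigger']}. Behaviors: " + "; ".join(req["behavior"])
    )

print(to_design_prompt(requirement))
```

The same dict can be serialized to YAML for the backlog, rendered as a prompt for Figma Make, or handed to a test generator — structure once, consume everywhere.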

📋 Requirement: structured input
🎨 Figma Make: UI mockups
💻 Copilot / Claude: code generation
🧪 AI Testing: test generation
📊 ADO / Helix: backlog & traceability
Principles

What Makes AI Work Across the SDLC

These principles apply across every craft. The tools are capable — the quality of your input is the constraint.

01 Product
Context is the key input

Your tools are powerful — Claude, Copilot, Figma Make. The gap isn't capability, it's the quality of what you feed them. Structure once, consume everywhere.

02 Product
Structure early, not late

Too often, requirements don't get structured until they're migrated into Helix near the end of development, when AI tools can no longer help. Capture structure at the point of the stakeholder call.

03 Design
Specificity beats brevity

Name the user, the moment, the data. "A Loan Officer reviewing 6 commercial loan applications with overdue credit decisions" gets a better screen than "a monitoring dashboard" every time.

04 Design
Real data changes stakeholder feedback

Stakeholders give vague feedback on Lorem Ipsum. They give specific, actionable feedback on LOAN-0042 — Credit Risk Flag, 38 days pending, Chicago Branch Office.

05 Development
Behavioral over aspirational

If a requirement describes a quality ("user-friendly") without observable behavior, AI can't generate code from it. Describe what happens — what loads, validates, submits.
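As a minimal sketch of the difference: "user-friendly" gives an AI nothing to generate, while a behavioral statement maps directly to code. The validation rule below is a hypothetical example of such a statement ("on submit, the loan amount field rejects non-positive values and values above the branch limit, returning a specific error message"); the $5,000,000 limit is an assumed figure, not from the source:

```python
# Behavioral requirement (hypothetical): "On submit, the loan amount field
# rejects non-positive values and values above the $5,000,000 branch limit,
# returning a specific error message." An AI can generate this directly;
# it cannot generate code from "the form should be user-friendly".
BRANCH_LIMIT = 5_000_000

def validate_loan_amount(amount: float) -> tuple[bool, str]:
    if amount <= 0:
        return False, "Loan amount must be greater than zero."
    if amount > BRANCH_LIMIT:
        return False, f"Loan amount exceeds the branch limit of ${BRANCH_LIMIT:,}."
    return True, ""

print(validate_loan_amount(250_000))  # accepted
print(validate_loan_amount(-1))       # rejected with a specific message
```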

06 QA
Edge cases only exist if written down

AI tools generating tests rely entirely on what you've written. No edge cases in your acceptance criteria means no edge case tests — the gap becomes a compliance risk in SOC 2 Type II and PCI-DSS contexts.
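To make the point concrete, here is a minimal sketch under an assumed rule (an application is overdue when pending more than 30 days). Edge cases written into the acceptance criteria translate one-to-one into generated test cases; an unwritten boundary like "exactly 30 days" simply never becomes a test:

```python
# Hypothetical rule from the acceptance criteria: an application is
# overdue when it has been pending for MORE than 30 days.
def is_overdue(days_pending: int) -> bool:
    return days_pending > 30

# Test rows an AI can only generate if they are written down.
# If the criteria never mention "exactly 30 days", the (30, False)
# row never exists, and an off-by-one ships unnoticed.
cases = [
    (38, True),   # LOAN-0042 from the running example: clearly overdue
    (31, True),   # first overdue day
    (30, False),  # boundary: written into the criteria, so tested
    (0, False),   # brand-new application
]
for days, expected in cases:
    assert is_overdue(days) == expected
print("all written-down edge cases pass")
```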