How I Use Cursor to Validate Ideas
At Wayground, I often need to test ideas before we commit engineering time to them. Figma prototypes work for some things, but when you need real data, real inputs, and real AI responses, you need something that actually works.
That's where Cursor comes in. I use it to build functional prototypes I can put in front of users and learn from.
Instead of writing another "how to use Cursor" guide, I'll walk through an actual project I built recently. Step by step, from planning to deployment.
The Problem We Wanted to Solve
In US middle schools, students write essays in English Language Arts. Narrative, argumentative, informative. Each type has its own rubric traits; for narrative, that means hook and opening, story structure, descriptive details, transitions, voice and tone, and conclusion.
Here's the problem: students practice on pen and paper, then submit their work and wait. They don't know if they're on track until a teacher grades it days later. And teachers? They can't possibly give detailed feedback on every practice essay from every student. There's just not enough time.
We saw a gap. What if students could get instant feedback while practicing? Not a grade, but guidance. "Your opening is strong, but your transitions need work." Something that helps them improve before the test that actually counts.
The question wasn't whether we could build this. The question was whether students would actually use it. Would they understand the feedback? Would it help them improve? Would they care?
To find out, we needed something real to test. Not a Figma prototype with fake data. Something where they could actually write an essay and see what happens.
Before Opening Cursor: The Planning Phase
Framing the Problem
I've learned that the quality of what Cursor builds depends entirely on the quality of what you feed it. Garbage in, garbage out. So before I write a single prompt, I answer four framing questions.
I do this planning in FigJam rather than a document. Something about the visual format helps me think more clearly.
Defining the Structure
Once I know what I'm building, I sketch out the app structure. What pages exist? What happens on each one?
For EssayPulse, it's simple:
PAGE 1: PROMPT SELECTOR
App name and description
Categories of prompts (Life Moments, Relationships, Challenges, etc.)
Click a prompt to start writing
PAGE 2: WRITING PAGE
The selected prompt at the top
Rich text editor on the left
Rubric traits sidebar on the right (six traits, each with a star rating)
"Check your progress" button (enabled after 150 words)
Clicking a trait opens detailed feedback: what's working, what to improve
A rough sketch on paper helps here. Nothing fancy. Just boxes and arrows.
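To make the word-count gate on the writing page concrete, here is a minimal sketch of how the "Check your progress" button could hang off a Tiptap editor. The names (WritingPage, onCheckProgress, MIN_WORDS) are placeholders of mine, not what Cursor actually generated.

```tsx
// A minimal sketch, assuming a Tiptap editor and a hypothetical
// onCheckProgress callback. Not the exact EssayPulse code.
import { useEditor, EditorContent } from "@tiptap/react";
import StarterKit from "@tiptap/starter-kit";

const MIN_WORDS = 150; // threshold before feedback can be requested

function countWords(text: string): number {
  const trimmed = text.trim();
  return trimmed === "" ? 0 : trimmed.split(/\s+/).length;
}

export function WritingPage({
  onCheckProgress,
}: {
  onCheckProgress: (essay: string) => void;
}) {
  const editor = useEditor({ extensions: [StarterKit] });
  const words = editor ? countWords(editor.getText()) : 0;

  return (
    <div>
      <EditorContent editor={editor} />
      <p>{words} words</p>
      <button
        disabled={words < MIN_WORDS}
        onClick={() => editor && onCheckProgress(editor.getText())}
      >
        Check your progress
      </button>
    </div>
  );
}
```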
The Tech Stack
This is where it gets intimidating for designers. But here's the thing: you don't need to know what tech stack to use. You just need to describe what you're building clearly enough that Claude can figure it out.
That said, I've landed on a default stack that works for most prototypes:
FRONTEND
React + Vite + Tailwind CSS
BACKEND
Convex
DEPLOYMENT
Vercel
APIS
OpenAI
RICH TEXT
Tiptap
If you're unsure, just tell Claude what you need and ask it to recommend a stack.
The Prompt That Builds the App
Here's the template I use to describe any app to Cursor. I ask Claude to help me fill it out based on my planning notes:
Para 1 - Who & What Problem
This is a [type of app] for [target user] who want to [goal]. It solves the problem of [pain point].
Para 2 - How It Works
[User] lands on [first page] where they [action]. Once they [complete action], they go to [next page] with [key UI elements]. [Describe the interaction flow]. [User] can [repeat action] as many times as they want.
Para 3 - Pages
The pages to build are [list all pages].
Para 4 - Tech Stack
For the tech stack, use [framework] for the frontend, [library] for [specific feature], [service] for the backend, and [API] for [functionality].
Para 5 - Design Direction
The design should be [style adjectives], using [font type], [spacing], [corners], [shadows], and [color palette].
Para 6 - CTA
Create a thorough plan to build this application.
For EssayPulse, that became:
cursor-prompt.txt
This is a narrative writing practice app for students who want to improve their storytelling skills with real-time AI feedback. It solves the problem of students writing essays without knowing if they're on track until after submission or grading.
Students land on a prompt selector page where they pick from narrative writing prompts organized by category. Once they select a prompt, they go to the writing page with a rich text editor on the left and a rubric traits sidebar on the right. The sidebar shows 6 traits: Hook & Opening, Story Structure, Descriptive Details, Transitions, Voice & Tone, and Conclusion. Each trait displays a star rating out of 5. Once the student writes at least 150 words, they can click "Check your progress" to get AI-powered feedback. Clicking any trait opens a detail view showing "What's Working" and "What to Work On." Students can edit and re-check as many times as they want.
The pages to build are the prompt selector page and the writing page.
For the tech stack, use Vite + React + Tailwind CSS for the frontend, Tiptap for the rich text editor, Convex for the backend, and OpenAI API for scoring and feedback.
The design should be clean and minimal, using serif fonts, generous whitespace, rounded corners, soft shadows, and a muted warm palette.
Create a thorough plan to build this application.
I use Opus 4.5 for planning in Cursor. It breaks everything down into tasks, and if the plan looks right, I hit build.
What Came Out
One prompt. That's all it took to get a working app with:
A prompt selector with categories
A writing page with a rich text editor
Live word and paragraph count
A rubric sidebar with six traits
AI-powered feedback on each trait
"What's Working" and "What to Work On" sections
Did I need to add my OpenAI API key to Convex? Yes. Did the first "Check Progress" click throw an error until I did? Also yes.
But once that was set up, the core functionality worked.
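For reference, the wiring looks roughly like this: a hedged sketch of a Convex action that calls OpenAI, not the code Cursor produced. The file name, function name, model choice, and prompt are all assumptions; the API key lives in Convex's environment variables (set from the dashboard or the Convex CLI).

```ts
"use node";
// convex/essay.ts - illustrative sketch only.
// Assumes OPENAI_API_KEY has been added as a Convex environment variable.

import { action } from "./_generated/server";
import { v } from "convex/values";
import OpenAI from "openai";

export const getFeedback = action({
  args: { essay: v.string() },
  handler: async (_ctx, { essay }) => {
    const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

    const completion = await openai.chat.completions.create({
      model: "gpt-4o-mini", // model choice is an assumption
      messages: [
        {
          role: "system",
          content:
            "Score this narrative essay on six rubric traits (1-5 stars each) " +
            "and return JSON with whatsWorking and whatToWorkOn for each trait.",
        },
        { role: "user", content: essay },
      ],
      response_format: { type: "json_object" },
    });

    return JSON.parse(completion.choices[0].message.content ?? "{}");
  },
});
```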
Problem Statement
Understanding the Landscape
Before building anything, we playtested leading math practice tools with 20 students, interviewed a few teachers, and analyzed Wayground's existing usage data to understand students' behavior in mathematics.
What We Found
65% of students never clicked "Show Solution" because they saw asking for help as failure. IXL cuts points for hints, and DeltaMath shows all steps at once.
"I feel dumb seeing all those steps." Full solutions overwhelmed students and confirmed their fear that the problem was too hard, so they skimmed and moved on.
80% guessed randomly because it was faster than reading long explanations. Learning turned into a guessing game instead of real understanding.
Teachers need tools that reduce "I don't get it" interruptions and clearly show student progress for independent work.
Based on these insights, we framed our challenge:
For Students
How might we reduce fear and build confidence through patient, step-by-step guidance?
For Business
How might we create a high-frequency practice tool that teachers trust and schools adopt at scale?
Approach
Traditional practice tools took two equally bad paths:
Get it wrong → Punish the student, or show the entire text-heavy solution → Student is overwhelmed → Gives up
What we needed to build instead:
Get it wrong → Break into digestible steps, one at a time → Nail the next one
Our approach was simple: mimic how great teachers teach.
1. Break the problem into manageable pieces
2. Show one step at a time, not everything at once (sketched in the snippet after this list)
3. Build on what the student already understood
4. Celebrate small wins along the way
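As referenced in the list, here is a rough sketch of what "one step at a time" means mechanically. The types and names are illustrative, not Wayground's actual implementation.

```ts
// Illustrative only: a problem is an ordered list of small steps.
// Only the current step is expanded; completed steps collapse behind it,
// and finishing a step is the "small win" that gets celebrated.
interface StepGuideProgress {
  stepIds: string[];    // ordered from simple to complex (scaffolding)
  currentIndex: number; // the one step shown in full
}

function advance(progress: StepGuideProgress): StepGuideProgress {
  // Reveal the next step after a correct answer.
  const currentIndex = Math.min(
    progress.currentIndex + 1,
    progress.stepIds.length - 1
  );
  return { ...progress, currentIndex };
}

function isCollapsed(progress: StepGuideProgress, index: number): boolean {
  return index < progress.currentIndex; // completed steps fold away
}
```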
Journey
v1
CHAT STYLE
Six iterations. Each one bringing us closer.
Version 1 started with good intentions. AI-tutor style guidance, helpful hints, encouraging feedback. Students had other ideas.
What we built
Chat-style interface inspired by ChatGPT
Problems broken into 3-4 steps
Hints + MCQs + encouraging feedback
What we learned
Students skipped long context entirely
Hints went unread
Validations disrupted flow
v2
REFINING BASICS
What we changed
Shortened context to one sentence
Highlighted hints visibly
Made step names sticky
What we learned
Students only read hints, skipped main question
Step names completely ignored
v3
STEP FLOW PANE
What we changed
Added left pane showing step flow
Answers carried forward between steps
Full step sequence visible
What we learned
Students ignored the side pane: anything outside the main focus was overlooked
Screen felt noisy and cluttered
v4
CLUTTER REDUCTION
What we changed
Collapsed completed steps
Added visual examples
Cleaner interface
What we learned
Collapsing worked! Students focused better
Examples boosted accuracy
But some students confused the examples with the actual questions
v5
BREAKTHROUGH
What we changed
Dropped chat UI completely
Removed all filler content
Clean pen-and-paper style flow
Visual scaffolding
What we learned
Students FINALLY understood step connections
Could explain full flow after 3-4 problems
This was the breakthrough moment
Just needed one final refinement
v6
THE FINAL VERSION
We removed the example box. That was it—the only change from V5.
In testing, students didn't say "this looks like a lot of work" anymore. They said "wait, that's it?" They stopped skipping. They stopped staring at the screen, paralyzed.
Version 6 didn't feel like homework. It felt like solving a problem with a friend beside you.
What we changed
Removed example box
Kept everything else
What we learned
Universal preference in testing
Intuitive and non-threatening
With the interaction design working, we focused on visual polish. We iterated until StepGuide felt less like software and more like working through problems on paper.
Building the System
We had the design. Now we needed to scale it from 5 skills to 500+ with content designers and subject matter experts.
We established guiding principles, standardized the anatomy of each step, and created guidelines for choosing interactive components.
Guiding Principles
🧱 Scaffolding
Break problems into foundational chunks. Build from simple to complex.
🔗 Continuity
Each step flows into the next. Students see clear connections.
✏️ Replicability
Students can recreate the solution on paper after completing the StepGuide.
📐 Clarity
Each step is a smaller question. Active thinking, not passive reading.
Anatomy of a Step
Every StepGuide follows the same structure, making it predictable and learnable:
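The anatomy itself lived in internal guidelines and diagrams. As a hypothetical sketch of what that standardized structure could look like in code (field names are mine, inferred from the guiding principles above, not the real schema):

```ts
// Hypothetical anatomy of a single step; illustrative field names only.
interface Step {
  name: string;                      // short, sticky step label
  question: string;                  // a smaller question, so students think actively
  hint?: string;                     // optional nudge, kept short
  interaction: InteractionComponent; // how the student responds (union sketched below)
  carryForward?: string;             // answer reused by the next step (continuity)
  encouragement?: string;            // the small win shown on completion
}
```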
Interactive Components
Starting with Fill in the Blanks and Multiple Choice, I established the base interactive components.

The team evolved these into specialized interactions for complex math concepts: factor trees, long division, grids, graphing, number lines, and more.
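A hedged sketch of how that component taxonomy might be typed, say as a discriminated union: the variant names come from the list above, while the payload fields are assumptions.

```ts
// Illustrative component taxonomy; payload fields are assumptions.
type InteractionComponent =
  | { kind: "fib"; template: string; blanks: string[] }       // Fill in the Blanks
  | { kind: "mcq"; options: string[]; correctIndex: number }  // Multiple Choice
  | { kind: "factorTree"; root: number }
  | { kind: "longDivision"; dividend: number; divisor: number }
  | { kind: "grid"; rows: number; cols: number }
  | { kind: "graph"; points: [number, number][] }
  | { kind: "numberLine"; min: number; max: number; target: number };
```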

Behavioral Insight
After launch, we observed three distinct behavioral patterns:
Skippers (50%): Never used StepGuide. Saw it as an interruption.
Burnt (30%): Did at least one StepGuide, skipped the rest.
Religious (20%): Used it consistently. These were rare.
The Skippers were the problem: half of all students never gave StepGuide a chance.
Why skip? Students did the math:
Guessing was faster, easier, and could earn more reward than learning.
Aspect | Current incorrect question (using StepGuide) | Moving to next question (skipping)
Effort required | Higher (several steps to correct the mistake) | Lower
Mastery score impact | None (current setting) | Possible gain if the guess is correct
Step rewards | Half the steps compared to a correct initial answer | Full steps if the answer is correct
Learning potential | Higher (detailed understanding of the mistake) | Lower (missed learning opportunity)
Question limit awareness | Not apparent (students think questions are unlimited) | Possible gain if the guess is correct
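To make the trade-off explicit, here is a toy restatement of the table as the numbers a student implicitly weighs. Every value is illustrative, not measured.

```ts
// Toy numbers only: the comparison as a student experienced it.
// Rewards were hidden, so only the effort column was actually felt.
const comparison = {
  useStepGuide: {
    effortSteps: 4,    // several sub-steps to correct the mistake (illustrative)
    maxStepReward: 2,  // half the steps of a correct initial answer
    masteryGain: 0,    // none under the current setting
  },
  skipAndGuess: {
    effortSteps: 0,    // just pick an option on the next question
    maxStepReward: 4,  // full steps if the guess happens to be right
    masteryGain: 1,    // possible gain if the guess is correct (illustrative unit)
  },
};
```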
The Fix
We repositioned StepGuide from punishment to opportunity:
Before: mountain hidden, no visible rewards, felt like consequences (50% skip rate).
After: mini mountain visible, bonus steps shown, framed as a "reattempt" (20% drop in skip rate).
Impact
500k+ StepGuides completed
30% accuracy boost after 2–3 StepGuide attempts
Stats are cool. Student reactions are better.
What Stuck With Me
Great design isn't just about craft. It's about understanding what drives behavior. Motivation matters more than interface perfection. Failure reveals insights research can't. And scaling to 500+ skills requires thinking in systems, not just screens.
Most importantly: question your assumptions early and often. Every version I built was based on what I thought students needed. Only by watching them use (or skip) it did I discover what they actually needed.



















