Three approaches to a fraction comparison learning experience for high school students working below grade level. Assessed against Jordan's completion and simplicity requirements and Maya's learning science specification.
Three prototypes
Skill Map: Students answer 20 comparison questions in sequence. The system silently weights question types toward wherever accuracy is lowest. Wrong answers show the strategy for that type with a visual, then trigger a same-type retry before advancing. A compact accuracy grid is always visible; tapping Map opens a full breakdown as a modal overlay.
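The silent weighting can be sketched as inverse-accuracy sampling: each question type's weight is its error rate plus a small floor, so weak types surface more often without the strong ones disappearing entirely. The type names, the floor value, and the function shapes below are illustrative assumptions, not the prototype's actual implementation.

```typescript
// Sketch of Skill Map's adaptive weighting (names and floor are assumptions).
type QuestionType = "sameDenominator" | "sameNumerator" | "benchmarkHalf" | "unlike";

interface TypeStats {
  correct: number;
  attempted: number;
}

// Weight each type by its error rate, plus a floor so every type
// keeps some chance of appearing.
function typeWeights(
  stats: Record<QuestionType, TypeStats>,
  floor = 0.1
): Record<QuestionType, number> {
  const weights = {} as Record<QuestionType, number>;
  for (const t of Object.keys(stats) as QuestionType[]) {
    const { correct, attempted } = stats[t];
    // An unseen type counts as weakest, so it gets sampled early.
    const accuracy = attempted === 0 ? 0 : correct / attempted;
    weights[t] = 1 - accuracy + floor;
  }
  return weights;
}

// Sample the next question type with probability proportional to its weight.
function nextType(
  stats: Record<QuestionType, TypeStats>,
  rand: () => number = Math.random
): QuestionType {
  const weights = typeWeights(stats);
  const types = Object.keys(weights) as QuestionType[];
  const total = types.reduce((sum, t) => sum + weights[t], 0);
  let r = rand() * total;
  for (const t of types) {
    r -= weights[t];
    if (r <= 0) return t;
  }
  return types[types.length - 1];
}
```

Because the weighting is a pure function of the running stats, the "silent" adaptation needs no separate diagnostic phase — which is what lets the design fold diagnosis into practice.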
Forge: Students choose a challenge type from a tile map and attempt to build a fraction that satisfies a constraint — between two bounds, closest under a target, equal to a given value. Eight types across two tiers, unlocking at 60% accuracy. Scale bars are hidden by default behind a hint toggle. Progress persists across sessions via localStorage.
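The tier gate and persistence amount to a small amount of state: unlock tier two once tier-one accuracy reaches 60%, and round-trip the counters through localStorage. A minimal sketch, assuming a single aggregate tier-one counter and a hypothetical `forge-progress` storage key; the guarded store lookup lets the same code run outside a browser.

```typescript
// Sketch of Forge's unlock rule and persistence (key name and shape assumed).
interface ForgeProgress {
  tierOneCorrect: number;
  tierOneAttempted: number;
}

const UNLOCK_THRESHOLD = 0.6;

// Tier two unlocks at 60% tier-one accuracy; no attempts means locked.
function tierTwoUnlocked(p: ForgeProgress): boolean {
  if (p.tierOneAttempted === 0) return false;
  return p.tierOneCorrect / p.tierOneAttempted >= UNLOCK_THRESHOLD;
}

// Guarded localStorage lookup so the sketch also runs in tests / Node.
const store:
  | { getItem(k: string): string | null; setItem(k: string, v: string): void }
  | undefined = (globalThis as any).localStorage;

function saveProgress(p: ForgeProgress): void {
  store?.setItem("forge-progress", JSON.stringify(p));
}

function loadProgress(): ForgeProgress {
  const raw = store?.getItem("forge-progress");
  return raw
    ? (JSON.parse(raw) as ForgeProgress)
    : { tierOneCorrect: 0, tierOneAttempted: 0 };
}
```

Note that cross-session persistence is exactly what makes Forge's completion metric fuzzy: there is no single moment at which a student is "done", only an unlock frontier.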
Sort: Each round presents a benchmark fraction and four cards to sort into bigger or smaller zones. Eight rounds cover all four comparison types twice; after round four, the second pass reorders rounds by weakest type accuracy. Wrong answers show the strategy and comparative bar charts for each card placed incorrectly.
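The second-pass reordering is a one-line sort: sequence the four comparison types by ascending first-pass accuracy so the weakest type comes first. A minimal sketch, with assumed type names:

```typescript
// Sketch of Sort's second-pass reordering: weakest type first.
type ComparisonType = "sameDenominator" | "sameNumerator" | "benchmarkHalf" | "unlike";

function secondPassOrder(accuracy: Record<ComparisonType, number>): ComparisonType[] {
  return (Object.keys(accuracy) as ComparisonType[]).sort(
    (a, b) => accuracy[a] - accuracy[b] // lowest accuracy first
  );
}
```

This is adaptation at round granularity only, which is why the table scores Sort as partial (~) on adaptive difficulty: nothing adjusts within a round.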
Against the brief
Ten requirements drawn from Jordan and Maya's briefs, assessed across all three prototypes (✓ = met, ~ = partially met, ✗ = not met).
| Brief | Criterion | Detail | Skill Map | Forge | Sort |
|---|---|---|---|---|---|
| Jordan | Clear completion metric | A single reportable number — did the student finish? | ✓ | ✗ | ~ |
| Jordan | Ship simple | No unnecessary UI complexity or onboarding friction | ✓ | ✗ | ✓ |
| Jordan | No gamification | No XP, leaderboards, characters, or cartoon rewards | ✓ | ✓ | ✓ |
| Jordan | Multi-market | Works for 3rd/4th graders with minimal adaptation | ✓ | ✗ | ✓ |
| Maya | All four types | Same denominator, same numerator, benchmark ½, unlike | ✓ | ✓ | ✓ |
| Maya | Folded diagnostic | Diagnosis happens through practice, not a separate phase | ✓ | ~ | ✗ |
| Maya | Adaptive difficulty | Adjusts within the session based on live performance | ✓ | ✗ | ~ |
| Maya | Visual strategy feedback | Wrong answer shows the rule for that type with a visual | ✓ | ✓ | ✓ |
| Maya | Behavioural hooks | Streak counter, progress bar, implementation intention | ✓ | ~ | ✗ |
| Maya | Mastery retry | Wrong answer triggers same-type retry before advancing | ✓ | ✗ | ✗ |
Recommendation
Skill Map is the only prototype that passes all ten brief criteria without a structural workaround. It does so because its core design decision — an adaptive pairwise drill — directly mirrors what Maya specified and simultaneously satisfies Jordan's requirements.
The completion metric is airtight: 20 questions, finished or not. That's the number Jordan can report. There is no ambiguity about what "done" means, and no onboarding friction between opening the app and the first question.
The mastery retry is the most defensible learning science decision across the three prototypes. It's the only mechanism that ensures a student doesn't carry a wrong answer forward without correction. Forge and Sort both advance regardless of whether the student understood their mistake.
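Mechanically, the mastery retry is a queue discipline: a wrong answer inserts a fresh same-type question at the front of the queue, so the student must face that type again before anything else advances. A minimal sketch under that assumption; `makeSameType` is a hypothetical question generator, not part of the prototype's stated API.

```typescript
// Sketch of the mastery-retry rule: wrong answers block forward progress
// until the same type is reattempted.
interface Question {
  type: string;
  prompt: string;
}

function applyRetry(
  queue: Question[],
  answered: Question,
  wasCorrect: boolean,
  makeSameType: (type: string) => Question
): Question[] {
  if (wasCorrect) return queue; // correct answers advance normally
  // Wrong answer: a fresh same-type question jumps the queue.
  return [makeSameType(answered.type), ...queue];
}
```

Forge and Sort have no equivalent of this front-of-queue insertion, which is the concrete sense in which they "advance regardless".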
The always-visible skill grid creates a mild tension with Maya's preference for invisible adaptation. Students who notice the system may feel tracked rather than supported. The mitigation is the grid's design: it reads as personal progress rather than surveillance, and the Map button keeps the full breakdown out of the way by default.
Neither Forge nor Sort is redundant. Forge's construction challenge mechanic addresses a genuine gap — students who pass comparison questions by rote without number sense will struggle in Forge. Sort's batch mechanic reduces individual decision fatigue and may be preferable for shorter sessions. Both make strong additions to a second release, behind the Skill Map MVP.