Usability Testing of Website Prototypes
Usability testing of prototypes reveals UX problems before a single line of code is written. Fixes at the prototype stage cost pennies compared to frontend rework. The goal of testing is not to confirm that the design is good, but to find the specific barriers that prevent users from completing the target action.
Prototype Types and Testing Methods
| Prototype Type | Tool | Method |
|---|---|---|
| Low-fidelity (wireframes) | Figma, Balsamiq | Moderated testing |
| High-fidelity (hi-fi) | Figma Interactive, Adobe XD | Moderated + unmoderated |
| Clickable HTML | Storybook, CodeSandbox | Unmoderated, A/B |
| Staging version | Real site | Full cycle |
Preparing Test Scenarios
A scenario (task) should describe the goal, not the path. Bad: "Click the 'Add to Cart' button." Good: "You want to buy blue sneakers in size 42. Find them and complete checkout."
A typical task set for e-commerce:
- Find a specific product via search
- Apply price and size filters
- Add a product to the cart and proceed to payment
- Find the section with return conditions
For each task, record: execution time, error count, the abandonment point (where the user gave up), and a subjective difficulty rating on the SEQ scale (Single Ease Question, 1–7).
Recording and Analysis Tools
Hotjar — session recordings with click heatmaps. Not suitable for Figma prototypes; it requires a staging version. The Business plan allows collecting up to 500 sessions per day.
Maze — integrates directly with a Figma prototype. The user receives a link, completes the tasks, and the system automatically builds a funnel, counting misclicks and time to first click. Suitable for unmoderated testing with 30–100 participants.
Lookback.io / UserTesting — for moderated sessions with video recording and live observation.
UserBrain — a cheaper alternative with a panel of testers from different countries.
Recruiting Participants
For a B2C site, 5–8 representatives of the target audience are sufficient — by Nielsen's rule, they discover roughly 85% of issues. For B2B with a narrow audience (e.g., accountants or logistics specialists), recruiting is harder and takes more time.
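The ~85% figure comes from the Nielsen/Landauer model: if each user independently discovers a given issue with probability p (about 0.31 in their original studies), the share of issues found by n users is 1 − (1 − p)^n. A quick check of the numbers:

```python
def share_of_issues_found(n_users: int, p: float = 0.31) -> float:
    """Nielsen/Landauer model: each user independently discovers
    a given issue with probability p (~0.31 in their studies)."""
    return 1 - (1 - p) ** n_users

for n in (1, 3, 5, 8):
    # Prints roughly: 0.31, 0.67, 0.84, 0.95
    print(n, round(share_of_issues_found(n), 2))
```

Five users land at about 84–85%, which is why panels beyond 8 participants give quickly diminishing returns for qualitative testing; the budget is better spent on a second testing round after fixes.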
Sources: existing customers (via CRM or email), professional panels (Toloka, Yandex.Vzglyad), and social media with targeting by profession.
Analyzing Results and Prioritization
After testing, each issue is assigned a severity on the following scale:
- Critical — the user could not complete the task
- Major — completed with difficulty or incorrectly
- Minor — an annoyance, but the task was completed
- Cosmetic — an insignificant remark
Results are organized into an affinity diagram: sticky notes with issues grouped by interface zone. From this, build a prioritized fix backlog with an effort/impact assessment for each item.
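The severity scale plus effort/impact assessment can be turned into a simple ranking. A sketch with a hypothetical scoring scheme — the weights and the score formula are illustrative, not a standard:

```python
# Hypothetical weights: severity and impact push an issue up,
# effort pushes it down. Adjust to your team's conventions.
SEVERITY_WEIGHT = {"critical": 4, "major": 3, "minor": 2, "cosmetic": 1}

def priority(severity: str, impact: int, effort: int) -> float:
    """impact and effort on a 1-5 scale; a higher score means fix sooner."""
    return SEVERITY_WEIGHT[severity] * impact / effort

# Example backlog items: (description, severity, impact, effort)
backlog = [
    ("checkout button invisible on mobile", "critical", 5, 2),
    ("filter labels unclear", "major", 3, 1),
    ("footer link color", "cosmetic", 1, 1),
]
backlog.sort(key=lambda item: priority(*item[1:]), reverse=True)
```

Any monotonic formula works here; the point is to make the effort/impact trade-off explicit and reproducible instead of debating each sticky note from scratch.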
What to Test First
The issues most often found on website prototypes:
- Unclear navigation and information architecture
- Overloaded forms (too many required fields)
- Unclear CTAs (buttons that don't look like buttons)
- Lack of feedback after an action (no confirmation)
- Search problems: zero results, irrelevant suggestions
Timeframes
Preparing scenarios and recruiting participants takes 3–5 days; conducting 5–8 moderated sessions, 1–2 days; analysis and a report with a prioritized backlog, 2–3 days. In total, the full cycle takes 1.5–2 weeks.