Mobile App UX Research
UX research is not "asking users what they like". Users are poor at predicting their own behavior, but they demonstrate it perfectly in real tasks. The gap between what people say and what they actually do is exactly what research should catch. A project that skips this stage spends its budget on features nobody uses and underinvests in what is really needed.
Which methods we apply and when
Method choice depends on the project stage and the type of question. Deep interviews make no sense when you need to test a navigation hypothesis; a tree test works better there. A quantitative survey of 300 people makes no sense when the product has not launched yet and you first need to understand the audience's mental model.
Deep Interviews (Contextual Inquiry): for open questions such as "How do you solve this now? Show me what you do on your phone." Record the participant's screen with their permission (screen recording on iOS via ReplayKit, or AZ Screen Recorder on Android). Analyze actions rather than answers: where the person freezes, where they tap instinctively, what they miss.
Card Sorting: for testing the information architecture before design starts. Sessions run online via Optimal Workshop's OptimalSort: the user sorts function cards into groups and names those groups. With 20+ participants, patterns become obvious. The result is a similarity dendrogram that directly informs the navigation taxonomy.
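As a sketch of where the dendrogram comes from, the pairwise similarity underneath it can be computed from raw sorts: the share of participants who placed two cards in the same group. The data format and card names below are hypothetical; tools like OptimalSort export this matrix for you.

```python
from itertools import combinations

def card_similarity(sessions):
    """Pairwise card similarity: the fraction of participants who put
    two cards in the same group. `sessions` is a list of card sorts,
    each a list of groups, each group a set of card names (assumed format)."""
    counts = {}
    for groups in sessions:
        for group in groups:
            # Count each unordered card pair that shares a group
            for a, b in combinations(sorted(group), 2):
                counts[(a, b)] = counts.get((a, b), 0) + 1
    n = len(sessions)
    return {pair: c / n for pair, c in counts.items()}

# Two hypothetical participants sorting four feature cards
sessions = [
    [{"Orders", "Cart"}, {"Profile", "Settings"}],
    [{"Orders", "Cart", "Profile"}, {"Settings"}],
]
sim = card_similarity(sessions)
# ("Cart", "Orders") co-occur in both sorts -> similarity 1.0
```

Hierarchical clustering over this matrix is what produces the dendrogram.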
Tree Testing: for validating a finished hierarchy. The participant is shown a plain text tree with no visual design and given a task like "find section X". Optimal Workshop's Treejack reports the metrics: success rate, directness rate, time on task. Our benchmark is a success rate above 78% for key tasks; if it is lower, the navigation needs rethinking before wireframes.
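A minimal sketch of how these two metrics fall out of session logs. The attempt format is hypothetical, and computing directness over successful attempts only is one common convention, not a universal definition; Treejack derives these for you.

```python
def tree_test_metrics(attempts):
    """attempts: list of dicts with 'success' (task completed) and
    'direct' (reached the answer without backtracking). Hypothetical format."""
    n = len(attempts)
    success_rate = sum(a["success"] for a in attempts) / n
    # Assumed convention: directness is measured over successful attempts
    successes = [a for a in attempts if a["success"]]
    directness = (sum(a["direct"] for a in successes) / len(successes)
                  if successes else 0.0)
    return {"success_rate": success_rate, "directness_rate": directness}

attempts = [
    {"success": True,  "direct": True},
    {"success": True,  "direct": False},
    {"success": False, "direct": False},
    {"success": True,  "direct": True},
]
m = tree_test_metrics(attempts)
# success_rate = 3/4 = 0.75, below the 78% benchmark -> flag for review
```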
Heuristic Evaluation: an audit of an existing app against Nielsen's heuristics. It runs fast (1–2 days) and yields a prioritized list of specific UX problems. It works well for projects that need to improve a current product rather than build from scratch.
Analytics + Session Recording: for products with an existing user base. Firebase Analytics and Mixpanel for event tracking; Smartlook or UXCam for mobile session recording (with automatic masking of sensitive fields). Mobile heatmaps work differently than on the web: instead of a click heatmap, you get aggregated touch areas.
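A rough sketch of the touch-area aggregation mentioned above: raw touch coordinates get bucketed into grid cells, and the per-cell counts are what a mobile heatmap visualizes. The screen dimensions, grid resolution, and data format here are illustrative assumptions, not any specific tool's implementation.

```python
from collections import Counter

def touch_grid(touches, screen_w, screen_h, cols=10, rows=20):
    """Aggregate raw (x, y) touch coordinates into grid-cell counts,
    the mobile analogue of a web click heatmap. Grid size is arbitrary."""
    grid = Counter()
    for x, y in touches:
        col = min(int(x / screen_w * cols), cols - 1)
        row = min(int(y / screen_h * rows), rows - 1)
        grid[(col, row)] += 1
    return grid

# Hypothetical touches on a 390x844-point screen: two near the
# bottom-left tab bar, one near a top-right icon
touches = [(40, 810), (45, 815), (350, 60)]
grid = touch_grid(touches, 390, 844)
# The bottom-left cell accumulates 2 touches; hot cells are rendered darker
```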
How we run research
A typical process for a new product:
1. Formulating research questions: not "what do users think", but specifically "How do users of the current solution search for [task X]? What mental objects do they create?"
2. Screening and participant recruitment: for B2C apps we use User Interviews, Respondent.io, or local panels. Screening criteria are behavioral rather than demographic: "uses a mobile app for [task] at least 2 times per week".
3. Running sessions: 45–60-minute interviews, held online via Zoom and recorded with consent. The interview protocol is semi-structured: context questions first, then a task-based part.
4. Analysis: affinity mapping in FigJam. Each observation goes on a separate card, grouped by pattern. Thematic analysis, not quote counting.
5. Synthesis: personas (if they did not exist before), jobs-to-be-done formulations, and an insight list prioritized by frequency and criticality.
A full cycle with 8–12 participants takes 3–5 working days, including recruitment and analysis.
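The frequency-and-criticality prioritization from the synthesis step can be sketched as a simple weighted ranking. The field names, sample insights, and the 1–4 criticality scale are assumptions for illustration; any consistent severity scale works.

```python
def prioritize(insights):
    """Rank insights by frequency x criticality.
    frequency: share of sessions where the problem appeared (0-1);
    criticality: 1 (cosmetic) to 4 (blocks the task) - assumed scale."""
    return sorted(insights,
                  key=lambda i: i["frequency"] * i["criticality"],
                  reverse=True)

# Hypothetical insights from an 8-12 participant study
insights = [
    {"name": "Search filter hidden",        "frequency": 0.7, "criticality": 2},
    {"name": "Checkout address form fails", "frequency": 0.4, "criticality": 4},
    {"name": "Icon label unclear",          "frequency": 0.9, "criticality": 1},
]
top = prioritize(insights)
# Scores: checkout 1.6 > filter 1.4 > icon 0.9, so the rarer but
# task-blocking problem outranks the frequent cosmetic one
```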
What the team gets
- A report with insights (not just quotes: interpretation and recommendations)
- A prioritized list of problems and opportunities
- Jobs-to-be-done formulations for key scenarios
- Session source materials (on request): recordings, transcripts, the affinity diagram
Typical mistakes
Recruiting "convenient" participants: colleagues, friends, loyal customers. They give socially desirable answers and do not represent the real audience. The second pattern is running research after the design is already done: at that stage the team seeks confirmation rather than insights and tends to ignore criticism.
Cost and exact timeline depend on the participant count, the methods used, and whether recruitment is needed; they are quoted individually.