Conducting UX Research on a Website's Target Audience
UX research is the collection of data about how real people interact with an interface and how they decide whether to use it. Without research, design rests on team assumptions, and team assumptions systematically diverge from user reality: the team is not its own target audience.
This is not sociology for its own sake. Each method solves a specific design problem: where users get lost, why they don't complete registration, what builds or breaks trust.
Research Methods and When to Apply
Qualitative Methods
User Interviews — 45–60 minute conversation with a user following a semi-structured guide. Goal: understand the user's life context, tasks, pains, current solutions. We don't ask "did you like the site" — we ask "tell me how you last chose a contractor for repairs."
Typical sample: 5–8 people per segment. According to Nielsen's problem-discovery model (Nielsen, 2000), five interviews reveal about 85% of qualitative issues; more interviews are needed when the audience splits into several fundamentally different segments.
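The 85% figure comes from the problem-discovery model 1 − (1 − p)^n, where p is the probability that a single participant uncovers a given problem (roughly 0.31 in Nielsen's published data). A quick sketch of how coverage grows with sample size:

```python
# Nielsen's problem-discovery model: share of usability problems
# found by n participants, assuming each participant independently
# uncovers a given problem with probability p (~0.31 in Nielsen's data).
def problems_found(n: int, p: float = 0.31) -> float:
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 8, 15):
    print(f"{n:2d} participants -> {problems_found(n):.0%} of problems")
```

With p = 0.31 the model gives about 84% coverage at five participants and diminishing returns beyond that, which is why larger samples pay off only when segments differ fundamentally.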
Contextual Inquiry — researcher observes user at their workplace while performing real tasks. Discovers workarounds — solutions the user developed because the interface was inconvenient, but would never mention in an interview.
Usability Testing — user performs specific tasks with real site or prototype while researcher observes and notes difficulties. Think Aloud Protocol — ask user to verbalize their thoughts — reveals mental models.
Quantitative Methods
Web Analytics — Google Analytics 4 / Yandex.Metrica: user paths, exit pages, scroll depth, conversion funnels. Analytics answers "what is happening" but not "why."
Heatmaps and Session Recordings — Hotjar, Microsoft Clarity, FullStory. Heatmaps show click distribution and mouse movement, session recordings show real user behavior. Useful for finding "dead zones" and unexpected click patterns (users click on something that looks like a button but isn't).
A/B Testing — comparing two page versions on real traffic. Requires a clear winner metric and enough traffic to reach statistical significance: the sample you need depends on the baseline conversion rate and the size of the lift you want to detect, but it is rarely fewer than 1,000 unique visitors per variant.
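Whether the difference between two variants is significant can be checked with a standard two-proportion z-test. A minimal stdlib-only sketch with hypothetical conversion counts:

```python
# Two-proportion z-test for A/B results (all numbers are illustrative).
from math import sqrt, erf

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (z, two-sided p-value) for H0: both conversion rates are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal CDF tail
    return z, p_value

# 1000 visitors per variant, 3.0% vs 4.5% conversion:
z, p = two_proportion_z(30, 1000, 45, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

Note that this particular 1.5-point lift at 1,000 visitors per group does not clear p < 0.05, which illustrates why sample size must be planned around the expected effect, not a fixed headcount.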
Surveys — NPS, CSAT, or specific questions at interaction moments (triggered survey). Typeform, Google Forms, Survicate. Good for scaling qualitative hypotheses: check how widespread a problem found in interviews is.
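As a reference point, NPS is computed from 0–10 scores as the percentage of promoters (scores 9–10) minus the percentage of detractors (scores 0–6). A minimal sketch with hypothetical responses:

```python
# NPS = % promoters (9-10) minus % detractors (0-6); passives (7-8) are ignored.
def nps(scores: list[int]) -> int:
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

sample = [10, 9, 9, 8, 7, 7, 6, 5, 9, 10]  # hypothetical survey responses
print(nps(sample))  # -> 30
```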
Deep Dive: Conducting Proper Usability Testing
This is the most needed method, and the one most often done wrong.
Participant Recruiting — the most critical step. Testing with the "wrong" users produces false conclusions. Recruiting criteria should mirror the real audience: domain experience, age, region, device type. Recruiting channels: Respondent.io, UserTesting.com, Userlytics, or VK/Telegram with interest targeting.
Tasks for Testing — tasks must be formulated from user goals, not interface functions. Bad: "Find the checkout button." Good: "You need to buy sneakers in size 42 by Friday. Act naturally."
Moderation — the main rule: don't help. Staying silent while the user struggles is uncomfortable, but that struggle is the data. Acceptable questions: "What are you thinking now?", "What did you expect to see?", "Where would you look next?"
Results Analysis — build an affinity diagram: every observation goes on a sticky note, and notes are grouped by theme. The output is a prioritized list of problems with frequency and severity. Plot them on a prioritization matrix with axes "frequency" and "severity"; problems in the high-frequency, high-severity quadrant need an immediate fix.
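The prioritization step amounts to a sort over (severity, frequency) pairs. A minimal sketch; the problem names, scores, and the quadrant cutoffs below are hypothetical:

```python
# Frequency x severity prioritization of usability findings.
# frequency = how many of N test participants hit the problem,
# severity on a 1-3 scale (3 = blocks task completion).
problems = [
    ("Case studies hidden in Blog section", 4, 3),   # hypothetical entries
    ("Unclear CTA label on services page", 2, 2),
    ("Footer links too small on mobile", 1, 1),
]

# Sort by severity first, then frequency: top of the list gets fixed first.
ranked = sorted(problems, key=lambda p: (p[2], p[1]), reverse=True)

for name, freq, sev in ranked:
    # Hypothetical cutoffs for the "critical" quadrant of the matrix.
    quadrant = "critical" if freq >= 3 and sev >= 2 else "backlog"
    print(f"[{quadrant}] {name} (frequency={freq}, severity={sev})")
```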
Real research example: for an IT outsourcer's corporate site, we conducted 6 in-depth interviews with CTOs and IT directors plus usability testing with 5 participants. Findings: the target audience wants to see the tech stack and architecture examples, but the site showed only services and testimonials. Also, 4 of 5 participants couldn't find case studies, which were hidden in the "Blog" section. After a redesign based on the research, conversion to inquiry increased by 40%.
Personas and Jobs-to-be-Done
Two artifacts built from research:
User Personas — generalized portraits of typical users: name, age, role, goals, fears, usage context. Important: a persona without data is fiction; a persona built from 10+ interviews is a decision-making tool.
Jobs-to-be-Done (JTBD) — an alternative framework. The focus is not on user demographics but on the "job" the user hires the product to do. Formula: "When [situation], I want to [motivation], so that [expected result]." Example: "When I choose a contractor for development, I want to see concrete technical examples of their work, so that I can assess the team's level without meeting in person."
Documentation and Results Delivery
After the research, the team receives:
- Research report — methodology, sample, key findings, recommendations
- Affinity diagram — visualized observation grouping
- Personas or JTBD cards
- Prioritized problem backlog with rationale
Timeline
Planning and recruiting: 1–2 weeks. Conducting 6–8 interviews: 1–2 weeks. Analysis and reporting: 1 week. A full cycle of quality research therefore takes 3–5 weeks; a quick usability test of a prototype (5 participants, online) takes 1–2 weeks.