# Mobile VR App Development for Training/Simulations
VR training works because the brain is poor at distinguishing simulated experience from real. A surgeon practicing an incision in VR engages the same motor patterns as in the operating room. A maintenance technician performing tasks on virtual equipment memorizes the sequence through muscle memory. For mobile VR this imposes a specific requirement: the quality of the simulation matters more than graphical realism.
## Training Scenario Types and Technical Requirements
Different scenarios require different architectures:
| Simulation Type | Technical Focus | Key Challenges |
|---|---|---|
| Step-by-step procedures | Sequence, step verification | State machine, fail conditions |
| Emergency situations | Time pressure, stress test | Timers, branching scenarios |
| Soft skills / communication | Dialog trees, NPC | AI dialogue, facial animation |
| Technical maintenance | Object manipulation | Interaction system, physics |
| Spatial orientation | 3D navigation | Spatial audio, waypoints |
## Scenario Engine: State Machine for Training Scenes
A training module is always a branching scenario with success conditions, errors, and transitions. A hard-coded script doesn't work: the engine must track user actions and respond to them.
```csharp
// Unity: scenario engine based on a ScriptableObject
[CreateAssetMenu(menuName = "Training/Scenario")]
public class TrainingScenario : ScriptableObject {
    public List<TrainingStep> steps;

    // Runtime state: marked NonSerialized so it is not written back
    // into the asset and every session starts from step zero.
    [System.NonSerialized] public int currentStepIndex;

    public TrainingStep CurrentStep => steps[currentStepIndex];

    public StepResult ValidateAction(TrainingAction action) {
        var step = CurrentStep;
        if (step.RequiredAction == action) {
            currentStepIndex++;
            return currentStepIndex >= steps.Count
                ? StepResult.ScenarioComplete
                : StepResult.StepComplete;
        }
        step.ErrorCount++;  // feeds the hint threshold below
        return StepResult.WrongAction;
    }
}
```
TrainingAction is an enum of all possible user actions: GrabObject, PressButton, NavigateTo, ConfirmChoice. Each step may define hints that appear when its ErrorCount exceeds a threshold.
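The scenario code above references TrainingStep fields it never defines. One plausible shape, as a sketch (HintText and HintThreshold are illustrative names, not from the original):

```csharp
// Assumed shapes for the action enum and step type used by TrainingScenario.
public enum TrainingAction { GrabObject, PressButton, NavigateTo, ConfirmChoice }

[System.Serializable]
public class TrainingStep {
    public TrainingAction RequiredAction;
    public string HintText;
    public int HintThreshold = 2;                  // show the hint after this many errors
    [System.NonSerialized] public int ErrorCount;  // runtime state, reset per session

    public bool ShouldShowHint => ErrorCount > HintThreshold;
}
```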
## Interactivity: Object Manipulation in VR
Cardboard has a single button. Full object manipulation requires either an additional Bluetooth device (a gamepad) or an interaction model built purely on gaze + dwell.
Gaze-based interaction for training: the user looks at an object, a progress indicator appears (a filling ring), and after 1.5–2 seconds the object activates. For step-by-step training this works well because each step is predetermined; there is no need to support arbitrary manipulation.
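A minimal sketch of the dwell mechanic in Unity, assuming the main camera's forward ray is the gaze (component and field names are illustrative):

```csharp
using UnityEngine;

// Hypothetical dwell-to-activate component: attach to an interactable object.
public class GazeDwellActivator : MonoBehaviour {
    public float dwellSeconds = 1.5f;  // 1.5–2 s per the text
    public UnityEngine.Events.UnityEvent onActivated;
    float gazeTimer;

    // Drive a filling-ring UI from this value (0..1).
    public float Progress => Mathf.Clamp01(gazeTimer / dwellSeconds);

    void Update() {
        var cam = Camera.main.transform;
        bool gazed = Physics.Raycast(cam.position, cam.forward, out var hit)
                     && hit.collider.gameObject == gameObject;
        gazeTimer = gazed ? gazeTimer + Time.deltaTime : 0f;
        if (gazeTimer >= dwellSeconds) {
            onActivated.Invoke();  // fire once, then reset
            gazeTimer = 0f;
        }
    }
}
```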
For more complex interaction, use a 3DoF Bluetooth controller or move to a 6DoF platform (Meta Quest, though that is beyond Cardboard's scope).
## Evaluation and Analytics
Training without progress measurement is useless. Log every user action:
```csharp
public struct TrainingEvent {
    public string userId;
    public string scenarioId;
    public int stepIndex;
    public TrainingAction action;
    public bool isCorrect;
    public float timeSpent;
    public int attemptNumber;
    public DateTimeOffset timestamp;
}
```
Metrics from this data:
- Completion rate — how far users progress
- Error rate per step — where mistakes happen (signals where hints, instructions, or UI need improvement)
- Time-to-complete — improvement dynamics per iteration
- Drop-off points — where users quit
Send data to analytics (Firebase or a custom backend) asynchronously, and batch-send when the network recovers.
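One way to sketch the batching logic, with the transport (Firebase, custom HTTP) abstracted behind a delegate; all names here are illustrative assumptions:

```csharp
using System.Collections.Generic;

// Events accumulate locally and are flushed in batches when the network
// is available. The send delegate returns true on success.
public class TrainingEventBatcher {
    readonly Queue<TrainingEvent> pending = new Queue<TrainingEvent>();
    readonly System.Func<IReadOnlyList<TrainingEvent>, bool> send;
    readonly int batchSize;

    public TrainingEventBatcher(System.Func<IReadOnlyList<TrainingEvent>, bool> send,
                                int batchSize = 20) {
        this.send = send;
        this.batchSize = batchSize;
    }

    public void Log(TrainingEvent e) => pending.Enqueue(e);

    // Call periodically and from network-recovery callbacks.
    public void Flush() {
        while (pending.Count > 0) {
            var batch = new List<TrainingEvent>();
            while (batch.Count < batchSize && pending.Count > 0)
                batch.Add(pending.Dequeue());
            if (!send(batch)) {
                // Network failed: requeue the batch and stop until the next Flush.
                foreach (var e in batch) pending.Enqueue(e);
                return;
            }
        }
    }
}
```

In production you would also persist the queue to disk so events survive an app restart.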
## Spatial Audio as Instructor
In training simulations, sound is not background. A voice instructor narrates the steps, and positional audio directs attention: sound from the target object grows louder as the user turns toward it.
On Android, use the Resonance Audio SDK; on iOS, AVAudioEnvironmentNode with positional sources. Subtitles are mandatory: some users run the device without headphones.
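On the Unity side, a positional voice source can be sketched like this, assuming Resonance Audio (or any spatializer plugin) is selected in the project's audio settings; the helper name and distance values are illustrative:

```csharp
using UnityEngine;

// Sketch: attach an instructor voice to the target object so the
// spatializer renders it positionally relative to the user's head.
public static class InstructorAudio {
    public static AudioSource AttachVoice(GameObject targetObject, AudioClip clip) {
        var src = targetObject.AddComponent<AudioSource>();
        src.clip = clip;
        src.spatialBlend = 1f;   // fully 3D: louder as the user turns toward it
        src.spatialize = true;   // route through the configured spatializer plugin
        src.rolloffMode = AudioRolloffMode.Logarithmic;
        src.minDistance = 1f;    // full volume within 1 m (illustrative)
        src.maxDistance = 15f;
        src.Play();
        return src;
    }
}
```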
## Content Updates Without Recompilation
Training scenarios change: new equipment, updated procedures. Unity's AssetBundle system loads new 3D assets and TrainingScenario ScriptableObjects from a server without a Store app update.
```csharp
// Load a new training module as an AssetBundle from the server.
// Uses UnityWebRequestAssetBundle, Unity's supported path for remote bundles.
IEnumerator LoadScenarioBundle(string bundleUrl, System.Action<TrainingScenario> onLoaded) {
    using (var request = UnityWebRequestAssetBundle.GetAssetBundle(bundleUrl)) {
        yield return request.SendWebRequest();
        if (request.result != UnityWebRequest.Result.Success) yield break;
        var bundle = DownloadHandlerAssetBundle.GetContent(request);
        onLoaded(bundle.LoadAsset<TrainingScenario>("scenario"));
    }
}
```
## Workflow
1. Analyze the training content: subject domain, action types, required metrics.
2. Develop the scenario engine: state machine, steps, success/failure conditions, hints.
3. 3D content: create or adapt equipment models and the environment.
4. Build the gaze interaction system and the spatial audio instructor.
5. Analytics: event logging, backend transmission, progress dashboard.
6. AssetBundle system for content updates without a release.
## Timeline Estimates
One training module with a linear scenario and gaze interaction: 2–4 weeks. A platform with multiple modules, analytics, LMS integration, and a content-update system: 2–4 months.