UX Case Study - 2025

Maritime Safety VR: Guided vs Unguided Immersive Training

Designed and validated an immersive VR flooding-response training module for commercial fishing safety - with guided instructional scaffolding, delivered via Meta Quest 3, and evaluated through a rigorous between-subjects experiment.

Lab HiMER Lab
Advisor Dr. Heejin Jeong
Platform Meta Quest 3
Study 16 Participants, Between-subjects
Role UX Designer & Researcher
Tools Unreal Engine, Figma
VR / XR Design · UX Research · Usability Testing · Cognitive Load · Maritime Safety · Wilcoxon Signed-Rank · Unreal Engine · Meta Quest 3 · Between-Subjects Design · Instructions

Problem, method, and what we found

This study asked a focused question: does embedding instructional guidance inside a VR training environment improve performance and experience? Here's the answer at a glance.

Problem
Limited Realistic Practice for Flooding Events

Traditional classroom-based safety training offers limited realistic practice for rare but life-threatening events such as flooding. Organizations like AMSEA provide drill-based courses, but trainees rarely practice emergency procedures under realistic, immersive pressure.

Research Motivation
Method
Between-Subjects VR Experiment

16 participants (M=25.4 yrs) randomly assigned to Guided VR (with embedded instructional cues) or Unguided VR (n=8 per group). Measured task completion time, usability, physical comfort, cognitive load, and immersion.

HiMER Lab Research
Outcome
Guided VR Wins on Every Metric

The guided condition outperformed the unguided condition across all six measured dimensions - usability, immersion, comfort, cognitive load, satisfaction, and task completion speed. All results were statistically significant.

p ≤ 0.0156 across all Wilcoxon tests
My Role
  • UX Designer & Researcher
  • VR Interface Designer
  • Experiment Designer
  • Data Analyst
Tools & Tech
  • Unreal Engine
  • Meta Quest 3
  • Figma
  • Wilcoxon Signed-Rank Test
Research Type
  • Between-Subjects Experiment
  • Quantitative + Qualitative
  • Controlled Lab Setting
  • Maritime Safety

How the study was designed and run

This was a rigorous between-subjects experiment, not a subjective opinion survey. Every metric was chosen to answer a specific UX or performance question.

I

Defined the Research Question

Does embedded instructional guidance inside a VR training environment improve procedural performance and subjective experience compared to an unguided VR environment?

We formulated two hypotheses: guided VR would result in faster task completion (H1), and guided VR would produce higher usability, immersion, and comfort, and lower cognitive load (H2).

Research Design · Hypothesis Formation
II

Built the VR Prototype in Unreal Engine

The flooding-response simulation was developed in Unreal Engine and delivered via Meta Quest 3. Two variants were built: one with embedded guidance cues appearing near relevant objects in context, and one without any in-environment instructions.

The scenario: participants were asked to locate the water leakage, perform appropriate response actions (e.g., closing the hole in the pipe), and complete the procedure as quickly and accurately as possible.

Unreal Engine · Prototype · Meta Quest 3
Instructor Maya
VR Prototype Development - Unreal Engine: character animation and facial rigging in the Unreal Engine Sequencer for the virtual instructor NPC.
III

Recruited and Ran 16 Participants

16 participants (mean age 25.4, SD 1.3) were randomly assigned to either the Guided condition (n=8) or the Unguided condition (n=8). Random assignment ensured group equivalence and allowed clean between-group comparison.

All participants completed the same flooding-response scenario. Task completion time was measured. Post-session surveys captured usability, physical comfort, immersion, and cognitive load.

Between-Subjects · Random Assignment · n=16
IV

Measured the Right Things

We used multiple instruments to capture a complete picture: task completion time (objective), a custom usability scale (three sub-dimensions), and custom immersion and comfort scales.

Wilcoxon signed-rank tests were used for the ordinal scale comparisons between groups, and a one-tailed t-test for the task completion time hypothesis. Both suit a small-sample design: the Wilcoxon test makes no normality assumption, and the one-tailed t-test matches the directional hypothesis (H1).

Usability Scale · Wilcoxon Test · One-tailed t-test
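For illustration, the Wilcoxon signed-rank W statistic can be computed by hand. This is a minimal sketch using hypothetical placeholder scores, not the study's data or analysis code:

```python
def wilcoxon_w(a, b):
    """Wilcoxon signed-rank W statistic for two paired score lists.

    W is the smaller of the positive-rank and negative-rank sums;
    W = 0 means every non-zero difference points the same way.
    """
    diffs = [x - y for x, y in zip(a, b) if x != y]  # drop zero differences
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        # group ties on |difference| and give them the average rank
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j + 2) / 2  # average of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)

# Hypothetical scores: guided higher on every pair -> W = 0
guided = [88, 85, 90, 87, 89, 86, 91, 84]
unguided = [42, 40, 45, 39, 44, 41, 47, 38]
print(wilcoxon_w(guided, unguided))  # 0.0
```

When one condition dominates on every pair, as in this sketch, the negative-rank sum is empty and W collapses to zero - the same pattern the study reports.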
V

Analyzed and Interpreted

Descriptive statistics were computed for both groups across all metrics. Between-group comparisons were run to determine statistical significance. Every result was interpreted through the lens of real-world training impact, not just p-values.

Statistical Analysis · Descriptive Stats
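As a sketch of the descriptive step, Python's standard library is enough. The scores below are placeholder values, not the study's data:

```python
import statistics as st

# Hypothetical usability scores for one group of n=8
# (placeholder values, not the study's data)
guided = [84, 90, 86, 88, 91, 85, 89, 87]

summary = {
    "n": len(guided),
    "mean": round(st.mean(guided), 1),
    "sd": round(st.stdev(guided), 2),   # sample standard deviation
    "median": st.median(guided),
}
print(summary)  # {'n': 8, 'mean': 87.5, 'sd': 2.45, 'median': 87.5}
```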

Who we designed for

The study participants included graduate CS students and working product designers. These two groups bring very different analytical lenses to VR training: one approaches it as a system to understand, the other as a design artefact to critique. Both expose failure modes that a purely maritime user group might not surface.

Alex M.
MS Computer Science Student - Human-Computer Interaction Track
Age 25 Graduate Student Very High tech comfort Research-oriented
"I want to understand how VR systems handle real-time interaction and spatial UX - not just use it, but see how the design decisions hold up under pressure."
Goals
  • Understand immersive system design from an HCI research lens
  • Experience how guided vs. unguided paradigms affect task performance
  • Apply VR interaction patterns to his own research or thesis work
  • Compare VR training outcomes against traditional classroom methods
Frustrations
  • Most VR training systems lack rigorous UX research behind them
  • Hard to find real-world datasets on immersive training effectiveness
  • UI feedback in VR is often inconsistent or poorly timed
  • Gap between academic HCI research and deployed safety-critical systems
Behaviors
  • Highly analytical - questions system logic and interaction design choices
  • Comfortable with headsets, controllers, and 3D spatial navigation
  • Explores edge cases and pushes systems beyond intended use
  • Motivated by understanding the "why" behind design decisions

Why two personas matter for this design

Alex comes in with deep technical intuition but no maritime context - he stress-tests the system's logic and interaction design. Sandra comes in with a professional UX lens - she evaluates visual hierarchy, cue timing, and affordance clarity at a level most users never consciously articulate. Together, they surface both functional and experiential failure modes. A VR training system that holds up under both profiles is genuinely robust.

Instructor / Safety Officer
Monitors trainee sessions, reviews cohort performance data, and manages module availability across both student and professional cohorts.
Fleet / Program Administrator
Tracks compliance records, certification status, and training completion rates across multiple trainees and vessels.
Research Team (HiMER Lab)
Collects embedded real-time metrics - usability, immersion, cognitive load - without disrupting the training experience. Dr. Heejin Jeong's team.

The full training experience, end to end

The journey map traces Marco's experience from the moment he puts on the headset to when he receives his post-session report. Every stage revealed specific UX challenges that directly shaped design decisions.

01 Onboarding
02 Navigation
03 Training Start
04 Active Scenario
05 Task Execution
06 Debrief
Action
Puts on Meta Quest 3. Sees vessel environment. Enters ID and access code via controller.
Selects "Start Training" from the 6-option home menu.
Chooses "Flooding Response" module. Sets difficulty to Intermediate.
Reads task list, sees timer, receives guided cues from Virtual Instructor.
Locates leak, activates bilge pumps, patches hull, communicates with crew, preps abandon ship.
Reads live performance metrics. Reviews task completion time, accuracy, cognitive load score.
Thinking
"This looks like a real boat. Okay, I just need to log in - same as any device."
"Six options. Clear labels. I can see exactly what each one does."
"I want intermediate difficulty. I know the basics but I want to be challenged."
"Good - it tells me what to do next. I'm not lost. I know where I am in the process."
"The cue near the pump tells me which valve. I would have guessed wrong without that."
"87% task completion time. 92% accuracy. That's better than I expected. Where did I lose time?"
Emotion
Curious / Cautious
Novelty of VR. Some controller uncertainty.
Comfortable
Clear layout reduces anxiety. Knows where to go.
Focused
Module selection feels purposeful, not random.
Engaged
Guidance reduces disorientation. Immersion kicks in.
High Focus
Critical scenario stress. Guided group stays in control.
Satisfied
Concrete metrics give a sense of achievement and direction.
Pain Points
Controller learning curve; headset fit discomfort
None significant
Difficulty calibration
Unguided: high disorientation; unclear next step
Unguided: wrong object selection; time-pressure anxiety
Wants to compare vs. peers
Design Response
Vessel-first environment. VR login floats in context of the boat deck, not a generic screen.
6-tile spatial menu. Icons + labels + short descriptions. No hidden navigation.
Module list with clear categories. Locked modules visible as future goals.
Three-panel HUD: task list + progress + Virtual Instructor. Always visible, never intrusive.
Contextual object-level text cues. Placed in 3D space near the relevant equipment.
Live metrics panel: task time, procedure accuracy, response speed, cognitive load score.

How the system is structured

The IA was built around three distinct user roles - Trainee, Instructor, and Researcher - each with a different entry path and information need. The goal was a single platform that serves all three without forcing any user through screens irrelevant to them.

Information architecture at a glance:
  • Login / Entry routes three roles: Guest / Observer, Trainee / Learner, Instructor
  • Guest options: Demo Scenarios, Platform Overview, Upgrade → Register
  • Home Dashboard ("What would you like to do today?"): Start Training (→ Module Selection), Assessment (Score & Report), My Progress (Perf. Dashboard), VR Settings (Quest 3 / Audio), Safety Library (Manuals & Procs), Instructor View (Live Monitor)
  • Instructor Hub: Cohort Monitor, Session Data & Analytics, Module Control
  • Module Selection (max depth 3): Flooding Response (active); Onboard Fire, Man Overboard Recovery, Life Raft Deploy, Signal Flares, Immersion Suit (planned)
  • Active Training Session - 3-panel spatial VR interface: left panel, Training Modules (module cards, progress indicators, difficulty levels); center panel, Task Queue + Live Metrics (active task steps, timer & score, environment status); right panel, Instructor + Checklist (instructor feedback, safety checklist, completion status)
  • Debrief + Score

The IA was deliberately kept flat (max 3 levels deep) to reduce navigational cognitive load inside VR. In a spatial interface, every extra level costs more mental effort than on a flat screen. Fewer levels, clearer labels, better VR UX.

VR Training Environment - Commercial Fishing Vessel
Aerial view of the immersive vessel environment built in Unreal Engine.

Designing for VR in a safety-critical context

Every UI decision in a VR safety trainer carries higher stakes than a typical app. Information must appear at the right moment, in the right place, without breaking immersion or overloading the trainee. Here is how the system was designed.

Entry Point: VR Login & Onboarding
The trainee enters the simulation from the first-person perspective of a commercial fishing vessel. The login screen is spatially placed in the environment, establishing context before training begins.
Main Navigation Hub
Six clear action cards presented spatially in the VR environment: Start Training, Assessment, My Progress, VR Settings, Safety Library, and Instructor View. Designed for controller-based spatial interaction.
Guided Training Session
Three-panel layout: Training Modules (left), Active Scenario with real-time task progress (center), Virtual Instructor with live metrics (right).
Live Performance + Environment Data
Scrolled view showing the task list, live performance metrics, environment conditions (wind, sea state, water temperature), and safety checklist - all visible during the scenario.

The experiment in action

A between-subjects study with 16 participants compared guided vs. unguided VR training on a commercial fishing vessel. Each session was recorded, tracked, and analyzed across task completion, accuracy, cognitive load, and usability metrics.

Inside the VR Training Environment
First-person walkthrough of the commercial fishing vessel simulation, showing the immersive training space trainees navigate during live scenarios.
Task Execution (With Guidance) - Locating and Repairing the Leak
Group 1 (n=8): trainees locate the pipe breach and perform repair actions supported by embedded instructor cues, with real-time text prompts placed near relevant objects to guide decisions without breaking immersion.
Task Execution (Without Guidance) - Locating and Repairing the Leak
Group 2 (n=8): trainees navigate the same flooding scenario without embedded cues, relying solely on prior knowledge to locate the breach and complete the repair procedure.
Onboard Fire Suppression Scenario
A planned future training module simulating an onboard fire emergency, requiring trainees to locate extinguishers and follow suppression protocols.

Key UX decisions and why they matter

This wasn't just building a simulation. Every design choice was made with a specific UX goal in mind, rooted in cognitive load theory, immersion principles, and the Technology Acceptance Model (TAM).

Guided Instructions Reduce Cognitive Load

Embedded text cues near relevant objects (e.g., 'SoftWood: Perfect choice' near patching materials) give trainees information exactly when and where they need it. This directly reduces extraneous cognitive load and frees mental resources for procedural learning.

Cognitive Load Theory · Scaffolded Learning

Three-Panel Spatial HUD Layout

The left panel shows module list, center shows the active scenario with task checklist, and right shows the Virtual Instructor with live metrics. This spatial arrangement mirrors physical cockpit design, keeping critical info accessible without requiring head movement to find it.

Spatial UI · Information Architecture

Real-Time Progress Visibility

Task completion progress (40% complete, 1 of 5 tasks) plus time-per-task indicators are always visible. This addresses two known VR training pain points: disorientation about where you are in the process, and anxiety from unclear time pressure.

Visibility of Status · Heuristic Design

Integrated Research Metrics Display

The Virtual Instructor panel shows live Usability (85/100), Immersion (78/100), Comfort (90/100), and Mental Load (45/100) scores during the session. This dual-purpose design serves both the trainee and the researcher, without disrupting the training flow.

Research Integration · Dual Audience Design

Contextual Environment Data

Wind speed (35 kts NW), sea state (6-8 ft), water temperature (42°F), heading, and engine status are displayed in the training environment. These aren't decorative. They establish stakes, increase realism, and prepare trainees for actual operational conditions.

Contextual Fidelity · Immersion Design

Safety Checklist Integration

The right panel includes a live safety checklist (PFD Secured, EPIRB Ready, VHF Radio Operational, Life Raft Accessible, Flares Valid). Grounding the VR training in real regulatory standards gives trainees outcomes they can apply directly to real-world certification.

Standards Compliance · Transfer Learning

The results were unambiguous

Across every single metric, the guided VR condition outperformed the unguided condition. The pattern was consistent and statistically significant.

All Wilcoxon signed-rank comparisons: W = 0, p ≤ 0.0156. Task completion time: t(7) = 2.14, p = 0.034. Every result reached statistical significance.
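The reported p-values line up with the exact Wilcoxon null distribution for small samples: when W = 0, every non-zero paired difference has the same sign, which under the null hypothesis has probability 1/2^n per tail. A quick check (treating n as the number of non-zero differences, which is an assumption about how the tests were run):

```python
def exact_p_w0(n):
    """Exact two-tailed p-value for a Wilcoxon signed-rank W = 0
    with n non-zero differences: 2 * P(all same sign) = 2 / 2**n."""
    return 2 / 2 ** n

print(exact_p_w0(8))  # 0.0078125  (matches the reported p = 0.0078)
print(exact_p_w0(7))  # 0.015625   (matches the reported p <= 0.0156)
```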

  • Usability (instruction clarity, task clarity, interaction ease): Guided 88% vs. Unguided 42%, gap +46 pts (W=0, p≤0.0156)
  • Physical Comfort & Immersion Quality: Guided 82% vs. Unguided 48%, gap +34 pts (W=0, p=0.0078)
  • Cognitive Load (mental demand, lower = better): Guided 28% vs. Unguided 65%, reduction −37 pts (W=0, p=0.0078)
  • Satisfaction & Recommendation Likelihood: Guided 90% vs. Unguided 48%, gap +42 pts (W=0, p=0.0156)
  • Task Completion Time (faster = better): Guided 45% vs. Unguided 72%, time saved −37% (t(7)=2.14, p=0.034)

Where this system goes next

The current module covers flooding response. The vision is a complete commercial fishing safety training system across all major emergency categories. Five additional modules are planned, each grounded in the same established safety curriculum and the same UX principles that proved effective in this study.

Man Overboard

MOB recovery procedures, communication protocols, and crew coordination under time pressure.

Life Raft Deploy

Emergency raft deployment and boarding. High physical-fidelity tasks requiring precise sequencing.

Signal Flares

Distress signal deployment procedures. Visual, timing, and safety-critical decision design.

Immersion Suit

Cold water survival suit donning. Fine motor skill training in a time-critical, anxiety-inducing scenario.

Each future module will follow the same research-informed design approach: guided vs. unguided testing, cognitive load measurement, and safety compliance. The goal is a fully validated, scalable maritime safety training platform.

What participants said after the session

One pattern stood out across both groups: participants wanted the system to know where they were in their learning curve and adjust accordingly. That's not a feature request - it's a design direction. Adaptive, performance-driven guidance is the logical next evolution of this system.

Poster Presentation

Presenting the research findings and VR prototype at the academic poster session, showcasing the design process, methodology, and key outcomes to faculty and peers.

What I learned from this project

Design Insight

Contextual guidance doesn't break immersion when it is designed as part of the environment, not pasted on top of it. The timing and placement of cues matters as much as their content.

Research Insight

Small sample sizes (n=8 per group) can still produce rigorous results when the right statistical tests are chosen and the effect sizes are meaningful. The W=0 results here are about as clean as it gets.

Product Thinking

The three-panel spatial HUD, real-time metrics, and safety checklist integration all point to a bigger opportunity: this system is a promising, scalable complement to traditional drill-based programs like AMSEA for maritime safety training.

The biggest takeaway: in high-stakes, safety-critical VR training, guidance is not hand-holding. It is the difference between a training experience that transfers to real life and one that does not.

What I would push further

The study sample was young (mean 25.4 years) and controlled. Real commercial fishing crews are older, more experienced, and operating in far messier conditions. Future iterations need to test with actual fishing industry workers, in noisier environments, with real fatigue factors. The design should hold up there too, or it needs to change.

I would also explore adaptive guidance: rather than fixed text cues, an AI-driven instructor could modulate the level of guidance in real-time based on the trainee's performance, adapting to both experts and beginners within the same session.

Next Project

Next-Gen Automotive HMI

Designed a production-ready infotainment interface for electric vehicles with modular, driver-focused interaction models — prioritizing safety, clarity, and minimal cognitive load.

Automotive · HMI Design · UX Design
Next-Gen Automotive HMI