Case Study, 2025

Next-Gen Automotive HMI: Intuitive & Adaptive In-Car Experience

Designed a production-ready infotainment interface for electric vehicles with modular, driver-focused interaction models, prioritizing safety, clarity, and minimal cognitive load.

Role UI/UX Design Engineer
Duration 2025
Tools Figma, Illustrator, Photoshop
Platform 10–15" In-Vehicle Display
Automotive HMI · EV UI Design · Figma Prototyping · NHTSA / ISO 15005 · Usability Testing · Glanceability · ADAS Visualization · Multi-Display Systems · State Machine Design · Touch-First Interaction

Cluttered dashboards are a safety hazard

Modern infotainment systems bury critical functions under layers of nested menus. Drivers are forced to take their eyes off the road, increasing cognitive load and accident risk. The challenge was to design a touch-first, glanceable, context-aware HMI that scales across screen sizes and adapts to driving conditions.

Visual Overload

Existing OEM interfaces pack too much information onto single screens, forcing drivers to parse complex layouts while driving at speed.

Glance Time Violation

NHTSA visual-manual guidelines recommend that no single off-road glance exceed 2 seconds. Most current systems routinely exceed this threshold.

Deep Menu Nesting

Essential controls like climate, media, and navigation are buried 3–4 levels deep, requiring multiple taps and prolonged attention away from the road.

No Contextual Adaptation

Interfaces don't adapt to driving conditions. The same UI is shown whether the car is parked, cruising, or navigating heavy traffic.

When a driver takes their eyes off the road for more than 2 seconds, everything changes. Our job wasn't just to design a beautiful interface. It was to design one that could save a life by keeping attention where it belongs.

Understanding drivers, not just users

Research went beyond standard UX methods. The automotive context demanded understanding physical environments, safety regulations, and the unique constraints of interacting with a screen at 70 mph.

Ride-alongs & Observation

Observed real driver behavior in various conditions: when they glance at the screen, how long their eyes leave the road, and what triggers interaction during driving.

User Interviews

Spoke with drivers across different profiles, including commuters, long-haul drivers, EV owners, and tech-savvy vs. tech-averse users, to understand frustrations with existing infotainment systems.

Benchmark Analysis

Studied existing OEM systems (Tesla, Mercedes MBUX, BMW iDrive), analyzing information architecture, interaction patterns, and where they succeed or fail at glanceability.

Safety Standards Review

Deep dive into NHTSA Visual-Manual Guidelines and ISO 15005 to understand regulatory constraints around driver distraction, glance time, and interaction complexity.

Key Insights

Three findings that shaped every design decision:

<2s

Glance Time Budget

Every interaction must be completable within a 2-second glance. This mandated large tap targets, minimal menu depth, and spatial predictability.

24/7

Persistent Climate Access

Users overwhelmingly expected climate controls to be always accessible, never buried in a sub-menu. Temperature is the most frequently adjusted setting while driving.

1:1

Spatial Predictability

Drivers build muscle memory for screen locations. Moving elements between screens or states caused confusion. Consistent spatial layout was non-negotiable.
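The 2-second single-glance budget above, together with the NHTSA total eyes-off-road-time (TEORT) limit of 12 seconds per task, can be expressed as a simple compliance check. The sketch below is illustrative only: the function and field names are assumptions, not the project's actual code.

```typescript
// Sketch of an NHTSA glance-budget check (illustrative, not production code).
// Per the guidelines this study follows: no single glance > 2.0 s off-road,
// and total eyes-off-road time (TEORT) <= 12.0 s per task.
const MAX_SINGLE_GLANCE_S = 2.0;
const MAX_TEORT_S = 12.0;

interface GlanceReport {
  compliant: boolean;
  longestGlanceS: number; // worst single glance in the task
  teortS: number;         // total eyes-off-road time for the task
}

function checkGlanceBudget(glanceDurationsS: number[]): GlanceReport {
  const longestGlanceS = Math.max(0, ...glanceDurationsS);
  const teortS = glanceDurationsS.reduce((sum, g) => sum + g, 0);
  return {
    compliant: longestGlanceS <= MAX_SINGLE_GLANCE_S && teortS <= MAX_TEORT_S,
    longestGlanceS,
    teortS,
  };
}
```

A climate-adjust task logged as three glances of 1.2 s, 0.8 s, and 1.5 s passes both limits; a task containing any single 2.4 s glance fails regardless of its total.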

One driver, four cognitive states

Traditional UX personas model different people. Automotive HMI models something harder: the same person in radically different cognitive states. A driver merging onto a highway and the same driver parked at a charger are effectively two different users. The interface must recognize this shift and adapt in real time.

This framework is grounded in the NHTSA Visual-Manual Driver Distraction Guidelines and ISO 15007 visual behavior standards. Each profile maps a driving context to a measurable attention budget, then prescribes specific HMI responses: zone activation, interaction modality, content restrictions, and layout density.

NHTSA 2013: Max single glance ≤ 2 seconds off-road
NHTSA TEORT: Total eyes-off-road time ≤ 12 sec per task
ISO 15007: Visual behavior quantification framework
Cognitive Load: NASA-TLX demand scale (0–100)
Critical
High-Load Active
Urban merge · Rain · Construction · Unfamiliar route
85%
85% Road 5% HMI 10% Mirrors
≤ 0.8s
Glance Budget
72–88
NASA-TLX
Voice
Primary Input
1 zone
Active Zones
HMI Response
  • Cluster-only mode, center & passenger dim
  • All touch interaction suppressed
  • ADAS alerts escalate to haptic + audio
  • Notifications queue until load drops
Elevated
Monitored Cruise
Highway cruise · ACC active · Clear weather · Familiar route
65%
65% Road 17% HMI 18% Other
≤ 1.5s
Glance Budget
40–55
NASA-TLX
Touch
Primary Input
2 zones
Active Zones
HMI Response
  • Center zone active: nav, widgets, media
  • Touch enabled with 48dp+ targets only
  • Passenger zone independent entertainment
  • Notifications shown but non-modal
Relaxed
Low-Load Idle
Stopped at light · Drive-through · Slow traffic queue
40%
40% Road 38% HMI 22% Other
≤ 2.0s
Glance Budget
18–30
NASA-TLX
Touch
Primary Input
2 zones
Active Zones
HMI Response
  • Center active: nav, widgets, media
  • Passenger zone off (driver only)
  • Widget rearrangement enabled
  • Scrollable lists, smaller targets allowed
  • Quick-reply to messages enabled
Unlocked
Stationary / Parked
Park engaged · Charging · Waiting · Theatre mode
5%
5% Road 75% HMI 20% Other
∞
Glance Budget
5–12
NASA-TLX
All
Primary Input
Full
Active Zones
HMI Response
  • Video playback unlocked on all zones
  • Full browsing, gaming, and media
  • Cluster transforms to content display
  • Theatre mode: ambient lighting sync

State Transition Map: How the HMI shifts between profiles

High-Load (merge / rain detected)
⇄
Cruise (ACC engage / clear road)
⇄
Idle (speed < 5 mph / stop)
⇄
Parked (gear = P / key off)
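The transition rules above reduce to a small state selector fed by vehicle signals. The sketch below is a minimal illustration: the signal names, thresholds, and evaluation order are assumptions, not the shipped state machine.

```typescript
// Illustrative sketch of the four-profile context state machine.
// Signal names and the ordering of checks are assumptions.
type Profile = "HIGH_LOAD" | "CRUISE" | "IDLE" | "PARKED";

interface VehicleSignals {
  gear: "P" | "R" | "N" | "D";
  speedMph: number;
  accActive: boolean;     // adaptive cruise control engaged
  highLoadEvent: boolean; // merge, rain, construction, unfamiliar route
}

function selectProfile(s: VehicleSignals): Profile {
  if (s.gear === "P") return "PARKED";     // gear = P / key off
  if (s.highLoadEvent) return "HIGH_LOAD"; // merge / rain detected
  if (s.speedMph < 5) return "IDLE";       // speed < 5 mph / stop
  return "CRUISE";                         // ACC engage / clear road
}

// Each profile carries the glance budget from the profile cards above.
const glanceBudgetS: Record<Profile, number> = {
  HIGH_LOAD: 0.8,
  CRUISE: 1.5,
  IDLE: 2.0,
  PARKED: Infinity,
};
```

Checks are ordered by safety priority: a high-load event overrides everything except Park, so a merge in slow traffic still yields the most restrictive layout.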

35-minute commute: how the HMI adapts in real time

This timeline traces a single commute from driveway to office parking. It maps the driver's cognitive load, the HMI mode active at each segment, what the driver interacts with, and what the system restricts, showing how the four context profiles translate into a living, breathing interface.

Driveway 0:00–2:00
Residential 2:00–6:00
Highway 6:00–22:00
Merge 22:00–24:00
Urban 24:00–32:00
Parking 32:00–35:00
HMI Mode
PARKED
CRUISE
CRUISE
HIGH-LOAD
CRUISE
PARKED
Driver Action
Sets nav, picks playlist, adjusts seat
Eyes on road, voice confirms route
Glances at speed, taps "next track"
Both hands on wheel, zero HMI contact
Follows nav turn-by-turn, voice "call office"
Reviews trip stats, browses media
Zones Active
All 3
Cluster + Center
Cluster + Center + Passenger
Cluster only
Cluster + Center
All 3 (full unlock)
Cog. Load
LOW
MEDIUM
MEDIUM
PEAK
HIGH
LOW
t = 22:14
Highway Off-Ramp Merge
Speed drops from 65 to 35 mph. Vehicle detects deceleration + turn signal. HMI switches to High-Load: center zone dims, notifications queued, audio volume drops 40%.
t = 8:30
ACC Engagement
Driver activates Adaptive Cruise Control. System detects stable highway speed. Center zone lights up with widgets: weather, calendar, battery. Touch interaction re-enabled.
t = 32:45
Park Gear Detected
Gear shifts to P. Full content unlock: video playback, gaming, browsing. Cluster transforms from driving view to trip summary: energy used, distance, average efficiency.
The interface doesn't ask the driver to configure anything. Sensors, speed, gear position, ADAS state, and occupancy data feed the state machine, and the HMI adapts before the driver even notices the context has changed. That's the design goal: intelligence that's invisible.

Three-zone screen architecture

The 5120 × 1080 px ultra-wide display is divided into three purposeful zones, each serving a distinct user and interaction context. The driver's zone prioritizes safety-critical information, the center handles navigation and vehicle controls, and the passenger side provides entertainment autonomy.

Interaction Screen Layout

Driver's Interaction Screen Layout, Active, Neutral, Inactive

Passenger's Interaction Screen Layout, Active, Neutral, Inactive

Built for darkness, motion, and speed

Every design token was chosen for the automotive context: high-contrast colors that work in direct sunlight and total darkness, Roboto for legibility at arm's length, 48dp+ touch targets for vibration-safe tapping, and a gesture vocabulary that maps to natural driving behaviors.

Color palette and typography system Interactions and gesture vocabulary

Meet Nova: Emotional intelligence on wheels

Nova is the in-car AI assistant designed with emotional awareness. Beyond executing commands, it proactively suggests actions, like offering ambient lighting when playing relaxing music, creating a warmer, more human connection between driver and vehicle.

Nova AI Assistant, emotional design and conversational UI

A touch language built for the road

Automotive touchscreens aren't phones. Drivers interact with gloved hands, bumpy roads, and split attention. Every gesture was designed for large target areas, single-hand operation, and zero ambiguity at speed.

Tap

Primary action. Select widget, toggle control, confirm dialog. Double-tap to quick-zoom maps or expand widgets to full-screen. All tap targets minimum 48dp with 8dp spacing.

48dp minimum

Long Press (Hold)

Reveal secondary actions: widget options, drag-to-rearrange mode, or contextual shortcuts. 400ms threshold.

400ms hold threshold

Swipe Left / Right

Navigate between screens in the same zone, dismiss notifications, cycle through media playlists.

Zone-scoped

Swipe Up / Down

Volume control (passenger zone), scroll lists, pull down quick settings panel. Velocity-mapped for precision.

Velocity-mapped

Pinch In / Out

Map zoom only, restricted to navigation zone. Disabled on driver cluster to prevent accidental triggers.

Nav zone only

Drag

Reorder widgets in customization mode, adjust seat position slider, move map center point. Requires long-press activation.

Requires hold first

Edge Swipe

Swipe from screen edge opens zone control panel. Left edge opens driver settings, right edge opens passenger app drawer.

Edge-triggered only

Three-Finger Gestures

Three-finger swipe down: screenshot current screen state. Three-finger hold: activate Nova voice assistant.

Power user shortcuts
Every gesture was validated against the 1.5-second glance rule. If a driver can't complete an interaction within a single glance away from the road, the gesture doesn't ship.
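The tap / long-press distinction and the target-size rule above are simple to encode. This sketch uses the 400 ms hold threshold and 48 dp minimum stated in the gesture spec; the function names themselves are illustrative assumptions.

```typescript
// Sketch of the press classifier and target-size guard described above.
// 400 ms hold threshold and 48 dp minimum come from this gesture spec;
// the identifiers are illustrative.
const LONG_PRESS_MS = 400;
const MIN_TARGET_DP = 48;

type TouchGesture = "tap" | "long-press";

// A press shorter than the hold threshold is a tap; at or past it,
// secondary actions (widget options, drag mode) are revealed.
function classifyPress(pressDurationMs: number): TouchGesture {
  return pressDurationMs >= LONG_PRESS_MS ? "long-press" : "tap";
}

// Reject any target smaller than the vibration-safe minimum.
function isValidTarget(widthDp: number, heightDp: number): boolean {
  return widthDp >= MIN_TARGET_DP && heightDp >= MIN_TARGET_DP;
}
```

Keeping both constants in one place also makes the Drive-mode expansion to 56 dp targets a single-value change rather than a per-screen audit.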

Screens & Implementation

Every pixel serves a purpose

Card-based, modular layout designed for glanceability. Each screen state maintains spatial consistency. The driver always knows where to look without cognitive re-mapping.

Main driving interface with navigation and media

Primary Driving View

Navigation with 3D map, real-time speed, battery status (75%), ADAS indicators, and the entertainment panel, all visible at a glance. The driver zone shows the road environment while the passenger side offers independent media control.

Navigation Media ADAS Speed Battery
Dashboard widget view with calendar, weather, battery

Dashboard Widgets

Context-rich home screen showing weather (San Francisco, 75°), FM radio, calendar with upcoming meetings, battery status (75%), maintenance schedule (73 days left), and battery usage analytics, surfacing information proactively so drivers don't need to dig.

Weather Calendar Battery Analytics Maintenance
Full music experience with Spotify integration

Music & Entertainment

Spotify-integrated media experience showing top artists, curated mixes (Soul, Pop, Romantic, Electronic, Upbeat, Mega Hit), active playlist "Caviar 2.0" with track list, and playback controls, designed for quick song switching with minimal eye movement.

Spotify Integration Top Artists Playlists Playback
Passenger entertainment and gaming

Passenger Entertainment & Gaming

The passenger zone transforms into a gaming hub with categorized games (Casual, Role Playing, Word, Puzzle, Adventure), independent from driver controls. Shows three layout states: compact, expanded center, and full passenger takeover.

Gaming Passenger Zone Categories Multi-layout
Seat controls: Position, Height, Headrest, Recline, Lumbar, Thigh

Seat Controls

Driver and Passenger seat adjustment with visual feedback: Position, Height, Headrest, Recline, Lumbar Support, and Thigh Support. The 3D seat model provides immediate spatial understanding of each adjustment.

Seat Position Driver / Passenger 3D Preview
Seat massage controls

Seat Massage

Massage modes visualized directly on the seat model: Rolling, Unwind, Wave, Stretch, and Deep with intensity control (0–4). Start/Stop massage control with clear visual state for active zones.

Massage Modes Intensity Visual Feedback
Dual-zone climate control with heating/cooling visualization

Climate Control

Dual-zone (Front 75° / Rear 67°) climate with heating and cooling visualized directly on seat models, warm orange for heat, cool blue for AC. Sync toggle, auto mode, and independent zone control for driver and passenger comfort.

Dual Zone Heat Map Auto Mode Sync

ADAS: When every pixel is a warning

Advanced Driver Assistance Systems require a fundamentally different visual language. During safety events, the interface shifts. Non-critical elements recede, color coding intensifies, and the driver's attention is directed precisely where it needs to be.

ADAS overview: Steering Assist, AEB, Lane Keeping

Steering Assist, Automated Emergency Braking & Lane Keeping

Three critical ADAS states: Steering Assist with green lane markers, AEB with red emergency vehicle highlight and brake indicator, and Lane Keeping Assist (LKA) with green boundary lines showing the safe corridor.

BSM, Object Detection, Weather Adaptation, LDW

Blind Spot, Object Detection, Weather & Lane Departure

Blind Spot Monitoring with red vehicle highlight and rear-camera feed, Advanced Object Detection (animal on road), Weather Adaptation with rain-sensing visuals, and Lane Departure Warning (LDW) with red lane boundary alerts.

Safety isn't a feature. It's the foundation every other design decision sits on top of. ADAS visualizations were designed to escalate progressively: ambient awareness, alert, intervention, emergency. The driver should never be surprised.

Occupancy-aware intelligence: the brain behind the screens

The interface isn't static. It dynamically adapts based on who's in the car, whether the vehicle is moving, and real-time sensor data. An occupancy state machine governs which zones activate, what content is allowed, and how the layout reflows, all without the driver ever needing to configure anything.

01

Decision State Machine

Event Trigger
Door Open ยท Seat Sensor ยท Ignition
System wakes on any occupancy signal: door ajar, weight on seat pad, or ignition cycle. Starts the state evaluation pipeline.
CAN bus · Seat-pad ADC · Belt latch
Processing
Sensor Fusion Engine
Merges seat-pad weight, belt latch, IR cabin camera, and BLE phone proximity. Outputs an occupancy map with confidence scores.
IR Camera · BLE Proximity · ML Classifier · Confidence ≥ 92%
Decision
Driver Present?
✓ YES
✗ NO
Personalization
Profile Recall
Loads driver profile: seat, mirrors, climate, layout. If phone BLE matches, instant personalization. Guest gets defaults.
BLE ID · Layout prefs · Theme
No Driver
Theatre / Parked Mode
No driver detected and vehicle parked: all screens merge into a full canvas. Video, games, and immersive content unlocked across every zone.
Full canvas · Video unlock · Ambient lighting
Decision
Vehicle Moving?
MOVING
◻ PARKED
Active
Drive Mode Layout
Safety-first. ADAS + cluster active. Video locked. Center shows nav + condensed media. Tap targets expand to 56dp.
ADAS on · Video locked · 56dp targets
Stationary
Park Mode Layout
Full content unlock. Browser, video, gaming available. Driver cluster dims after 30s inactivity. Climate persists.
Full unlock · Auto-dim 30s · Climate on
Decision
Passengers Detected?
✓ YES
✗ SOLO
Multi-Occupant
Passenger Adaptations
Passenger screen activates. Co-pilot mode for route tasks. Rear: Kids Mode if child classified. Voice uses zone routing.
Co-pilot · Kids Mode · Zone voice
Solo
Driver-Only Layout
Passenger screen off. Center expands for driver: large nav, full media control, single-zone climate.
FP off · Expanded center
Input Layer
Voice & Gesture Engine
Mic beamforms to active occupant. Multi-seat: wake-word + zone address. IR gesture maps to nearest zone.
Beamforming · Wake-word · IR gesture
Guard
Sensor Fault?
🚨 DETECTED
✓ CLEAR
Degraded
Fail-Safe Default
Sensor failure triggers driver-only mode. All video locked. Non-essential displays disabled. Diagnostic logged to service bus.
Driver-only · Video locked · Diag logged
Renderer
Layout Engine: Dynamic Grid Reflow
All state branches converge here. Widgets resize and reflow based on final occupancy state. No black bars: an adaptive grid fills every pixel. Retract ≤ 1s, 200ms ease transitions.
CSS Grid · 200ms ease · No black bars · Widget scale
Continuous
Monitoring Loop: Re-evaluate on Change
System continuously monitors occupancy. Any seat change, belt event, or speed transition triggers re-evaluation from the top. Hot-swaps at charging transitions happen seamlessly.
↻ LOOP: SENSOR FUSION
Priority
Emergency Signal?
🚨 DETECTED
– CLEAR
🚨
Override
Emergency Screen Protocol
Collision or SOS seizes ALL zones. Emergency HUD overlay. eCall status, rescue ETA, door unlock status, nearest hospital. All entertainment killed.
eCall · SOS · Door unlock · Hospital nav
Recovery
Post-Emergency Handback
Emergency cleared: 5s cooldown, occupancy re-evaluated, pre-emergency layout restored. Event log persisted for service review.
5s cooldown · Layout restore · Event log
↻ RECOVERY: SENSOR FUSION
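The decision flow in section 01 can be condensed into a single resolver whose check ordering mirrors the flowchart: emergency override first, then the sensor-fault guard, then occupancy and motion. All identifiers below are illustrative assumptions, not the production state machine.

```typescript
// Condensed sketch of the occupancy decision pipeline described above.
// Check ordering mirrors the flowchart; names are illustrative.
interface OccupancyState {
  sensorFault: boolean;   // guard: degrade safely on sensor failure
  emergency: boolean;     // collision / SOS priority override
  driverPresent: boolean;
  moving: boolean;
  passengers: number;     // occupants other than the driver
}

type Layout =
  | "EMERGENCY_OVERLAY"
  | "FAIL_SAFE_DRIVER_ONLY"
  | "THEATRE_PARKED"
  | "DRIVE_MULTI_OCCUPANT"
  | "DRIVE_DRIVER_ONLY"
  | "PARK_FULL_UNLOCK";

function resolveLayout(s: OccupancyState): Layout {
  if (s.emergency) return "EMERGENCY_OVERLAY";       // seizes all zones
  if (s.sensorFault) return "FAIL_SAFE_DRIVER_ONLY"; // fail-safe default
  if (!s.driverPresent) return "THEATRE_PARKED";     // no driver: full canvas
  if (s.moving) {
    return s.passengers > 0 ? "DRIVE_MULTI_OCCUPANT" : "DRIVE_DRIVER_ONLY";
  }
  return "PARK_FULL_UNLOCK";                         // stationary: full unlock
}
```

The monitoring loop would simply re-run this resolver on every seat, belt, or speed event and hand the result to the layout engine for reflow.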
02

Screen Behaviour by Occupancy - Interactive Simulator

D
P
RL
RR
Gear = D
Gear = P
Emergency SOS
Driver Only
Any speed, standard driver layout
CLUSTER
NAV + MEDIA
OFF
Driver: Full cluster + nav map
Center: Large nav, condensed media, 1-zone climate
Passenger: Off

Designed for the real world

The interface works across lighting extremes, from direct sunset glare to pitch-black night driving. Day and night themes aren't just color swaps; the entire information hierarchy, contrast ratios, and visual weight shift to maintain readability.

Night mode in sunset car interior

Night Mode

Dark interior with high contrast UI, optimized for pitch-black night driving. Reduced brightness, enhanced glow elements, and warm amber accents minimize eye strain while maintaining full readability.

Dark Theme High Contrast Low Glare
Day mode in daylight car interior

Day Mode

Light theme designed for bright environments: direct sunlight, sunset glare. Increased contrast ratios, bolder typography weight, and adjusted color saturation ensure legibility in all ambient conditions.

Light Theme Sun Readable High Visibility

Results & Reflection

Tested with real drivers, measured results

The Figma prototype was tested with 15 users. Iterations focused on color contrast improvements, minimizing menu depth, and refining climate controls based on direct feedback.

42%

Faster Task Completion

Improvement over previous OEM baseline for core tasks like adjusting climate, changing music, and checking navigation.

10–15"

Display Scalability

Design system and modular layout scaled seamlessly from 10-inch to 15-inch displays without layout degradation.

HUD

Roadmap Consideration

Design language and information architecture were considered for future heads-up display integration.

Where this system goes next

The current design solves the core HMI challenge, but the roadmap extends into emerging technologies that will redefine the in-car experience over the next 3–5 years.

AR Windshield HUD

Navigation directions, speed, and ADAS alerts projected directly onto the windshield as augmented reality overlays, eliminating the need to glance down at the cluster entirely. Currently exploring optical waveguide integration and field-of-view constraints.

Adaptive AI Interface

Nova evolves from reactive assistant to predictive companion, using driving patterns, time of day, and calendar data to pre-configure the interface before the driver even touches the screen. Machine learning models personalize widget priority and layout in real time.

Voice-First Driving Mode

A dedicated high-speed mode where the touchscreen dims to a minimal HUD and all interaction routes through voice and gesture. Reduces visual distraction to near-zero during highway driving, while maintaining full functionality through Nova's conversational layer.

OTA Design Updates

Over-the-air updates that deliver new widget types, theme packs, and interaction refinements, similar to how Tesla ships software. The component architecture already supports hot-swappable UI modules without requiring a full system refresh.

Biometric-Driven UI

Integration with steering wheel heart rate sensors and cabin camera drowsiness detection. The interface would dynamically adjust, increasing contrast, activating alert tones, or suggesting rest stops when fatigue is detected.

V2X Connected Display

Vehicle-to-everything communication feeding real-time traffic light timing, emergency vehicle proximity, and road hazard alerts directly into the cluster, turning the ADAS layer from sensor-only to network-aware.

What I learned

Designing for automotive HMI forced a fundamentally different way of thinking about interfaces, where physics, regulation, and human attention are all design materials.

1

Simplicity equals clarity, not minimalism. Removing features doesn't make an interface better. Making every element instantly understandable does. The goal is comprehension speed, not visual emptiness.

2

In-car UX must respect physics. You're designing for a body in motion, under vibration, with variable lighting and divided attention. Screen design rules that work for mobile don't automatically transfer to automotive.

3

Safety is a design material, not a constraint. Working within NHTSA and ISO standards didn't limit creativity. It focused it. The 2-second glance budget forced better hierarchy, larger targets, and smarter information architecture.

4

Cross-functional collaboration is essential. Automotive HMI can't be designed in isolation. It requires tight alignment with safety engineering, hardware teams, motion designers, and software developers.

The biggest takeaway: in automotive HMI, the best interface is the one the driver never consciously notices. It just works, safely, beautifully, and exactly when needed.