Mobile UX · MRO · Field Operations

MRO Maintenance App

Redesigning the maintenance workflow for aircraft technicians — from 14 screens to 5, cutting input errors by 40%.

−40%
input errors
14 → 5
screens
94%
task completion
01 — Overview

Context & Scope

Role Lead UX Designer
Timeline 9 months
Team 4 people
Platform iOS / Android

Aircraft maintenance technicians — known as AMTs — operate in physically demanding environments: noisy hangars, confined spaces, variable lighting, and protective gloves worn throughout the shift. Their daily tools should help them move fast and stay safe. Instead, they were stuck with a legacy desktop system ported onto tablets — built for office use, not the hangar floor.

The existing MRO (Maintenance, Repair & Overhaul) application forced technicians through 14 sequential screens to complete a single part replacement validation — the most frequent task in their workflow. Touch targets were designed for a mouse cursor. Forms required precise text entry without glove-friendly input methods. Shift handover documentation was fragmented across five separate modules.

My mandate was to lead a full UX redesign of the mobile application, placing field usage patterns at the center of every decision. The constraint was real: the core data model could not change, only the interface layer. The goal was measurable — reduce input errors, reduce screen count, and reach a task completion rate above 90%.

Client
Airbus MRO Division
Year
2024 – 2025
Tools
Figma · Maze
Lottie · Jira
Deliverables
Research report
Prototype · Design system
02 — Research & Discovery

Understanding the field reality

Research was conducted directly in two active maintenance hangars. Observing technicians mid-task — gloves on, tools in hand — revealed usage patterns invisible in any requirements document.

1. Stakeholder kickoff
2. Contextual interviews
3. Cognitive walkthrough
4. Benchmark analysis
5. Synthesis & insights
12

Contextual Interviews

In-hangar sessions across 2 sites (Toulouse & Hamburg). Technicians used the app during real tasks while narrating their experience. Recorded with consent for later affinity mapping.

8

Cognitive Walkthrough

Identified 8 critical failure points in the existing flow — moments where the interface forced incorrect actions or caused technicians to abort a task and restart from the beginning.

3

Benchmark Analysis

Compared 3 competing MRO tools (Boeing Toolbox, Ramco Aviation, SAP EAM Mobile) on task flow structure, touch interaction patterns, and offline capability design.

"With gloves on, I can't type anything into those small fields. I have to take one glove off, enter the data, put it back on — for every single step. It's absurd on a 12-hour shift." — Senior AMT, Airbus A320 line maintenance, 16 years experience
"The shift handover is the most critical moment of the day. I spend 20 minutes hunting through the app to leave notes for the next team. Something is fundamentally broken there." — Line maintenance technician, Hangar 3, 9 years experience
🧤

Gloves make precision input impossible

Technicians work with gloves throughout their shift. Touch targets below 56px were regularly missed, and text fields requiring keyboard input caused workflow interruption multiple times per hour.

🔁

Part validation: the critical daily task

The most frequent task — performed 4 to 12 times per shift — required navigating 14 sequential screens. Technicians had developed workarounds using paper sheets alongside the app.

⚠️

Error rate peaks at shift handover

Input errors were highest during shift handover documentation: technicians were fatigued, under time pressure, and the module required the most complex multi-field input in the entire app.

03 — Define

Primary Persona

Antoine R.
38 years old · Senior Aircraft Technician · 14 years' experience · Airbus MRO
"I need an app that works the way I work — fast, reliable, and glove-friendly. Every extra screen is time I'm not spending on the aircraft. Right now the tool slows me down instead of helping me."
Daily Tasks
  • Part replacement validation (4–12x per shift)
  • Pre-flight inspection sign-off
  • Defect logging and status updates
  • Shift handover documentation
Pain Points
  • Touch targets too small for gloved interaction
  • 14 steps to complete a single validation
  • No offline mode — hangar wifi is unreliable
  • Shift handover module is fragmented and slow
How Might We
HMW 01

How might we enable full task completion with gloves on, without any keyboard input for the most frequent workflows?

HMW 02

How might we collapse the part replacement validation flow into a single gesture-based interaction that remains traceable and audit-compliant?

HMW 03

How might we make shift handover documentation the fastest interaction of the day rather than the most error-prone?

04 — Lo-fi Iterations

Wireframes — 6 key screens

Starting from hand sketches, the core concept emerged quickly: eliminate deep navigation hierarchy, surface the most frequent action on every screen, and design every interactive element for a minimum 56px touch target.

01 — Shift Dashboard
TODAY'S TASKS
RECENT ALERTS
All active tasks visible without sub-navigation. Priority task surfaced with accent color. Shift start action always one tap away.
02 — Part Scan & Lookup
OR SEARCH BY P/N
QR/barcode scan as primary action — no gloves-off required. Manual part number search as fallback. Single large CTA, no nested menus.
03 — Single-screen Validation
PART DETAILS
All validation info consolidated on one screen. Swipe-to-confirm gesture replaces the existing multi-screen flow. Action guarded by haptic confirmation.
04 — Defect Logging
SEVERITY
CATEGORY
VOICE NOTE
Severity via large tap buttons, no keyboard required. Category via dropdown. Voice notes replace free-text fields — critical for gloved technicians in noisy environments.
05 — Shift Handover
AUTO-GENERATED SUMMARY
PENDING ITEMS (2)
Handover report auto-generated from the day's logged events. Technician reviews, adds voice notes to flagged items, signs off in one tap. From 20 minutes to under 3.
06 — Aircraft Status Board
HANGAR A — 3 AIRCRAFT
Fleet overview with color-coded aircraft status. At-a-glance hangar awareness without deep navigation. Tap any aircraft row to see its active task queue.
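Mechanically, the auto-generated handover on screen 05 is an aggregation over the day's logged events. A minimal sketch of that roll-up logic (event fields and statuses here are hypothetical, not the production schema):

```python
from dataclasses import dataclass

@dataclass
class ShiftEvent:
    """One event logged during the shift (illustrative fields only)."""
    aircraft: str
    task: str
    status: str          # "done" or "pending"
    voice_note: str = "" # optional flagged note for the next team

def build_handover_summary(events: list[ShiftEvent]) -> dict:
    """Roll the shift's logged events into a reviewable handover draft."""
    done = [e for e in events if e.status == "done"]
    pending = [e for e in events if e.status == "pending"]
    return {
        "completed": [f"{e.aircraft}: {e.task}" for e in done],
        "pending_items": [f"{e.aircraft}: {e.task}" for e in pending],
        "pending_count": len(pending),
        "flagged_notes": [e.voice_note for e in events if e.voice_note],
    }
```

The technician then only reviews the draft, attaches voice notes to flagged items, and signs off — the "PENDING ITEMS (2)" badge in the wireframe is just `pending_count` surfaced in the UI.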

Core design decisions at lo-fi stage

05 — Hi-fi Design

Mobile-first — built for the hangar

High-fidelity screens designed in Figma with a custom design system built around three non-negotiable constraints: glove-friendly touch targets, high contrast for variable hangar lighting, and full offline capability.

[Hi-fi mockups — four key screens: Shift Dashboard (shift start) · Part Scan (manual entry fallback) · Swipe Validation (swipe-to-confirm) · Shift Handover (auto-summary, one-tap sign-off)]
Touch targets

All interactive elements respect a minimum of 56px height. Primary CTAs reach 64px to ensure reliable activation through thick work gloves. Tested with six common glove types at both hangar sites.

Contrast & lighting

Dark base theme chosen after observing screens wash out under hangar fluorescent lighting. Accent green (#0EA271) reads clearly against the dark background, comfortably exceeding the WCAG AA contrast minimum of 4.5:1 for text.
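The contrast claim can be checked with the WCAG 2.x relative-luminance formula. The background hex below (#121212, a common dark-theme base) is an assumption, since the exact theme color isn't documented here:

```python
def _channel(c8: int) -> float:
    """Linearize one 8-bit sRGB channel per WCAG 2.x."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color: str) -> float:
    """Relative luminance of a #rrggbb color."""
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

def contrast_ratio(fg: str, bg: str) -> float:
    """WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05)."""
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

ratio = contrast_ratio("#0EA271", "#121212")  # assumed background color
print(f"{ratio:.2f}:1")  # clears the 4.5:1 AA minimum for body text
```

Against this assumed background the accent lands well above the AA threshold; the exact ratio shifts with the real theme color, which is why running the check against the shipped palette matters.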

Offline-first

All critical flows function without network connectivity. Data queues locally and syncs when connection is restored. A persistent status indicator communicates sync state clearly at the top of every screen.
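Offline-first here means every write lands in a local, ordered queue and drains to the server only when connectivity returns. A minimal sketch of that queue-and-sync pattern (class and field names are illustrative, not the shipped implementation):

```python
import json
from collections import deque

class OfflineQueue:
    """Buffers writes locally; flushes them in order once online."""

    def __init__(self, send):
        self._send = send        # callable that pushes one record to the server
        self._pending = deque()  # a durable store (e.g. SQLite) in a real app
        self.online = False

    def record(self, event: dict) -> None:
        """Always enqueue locally first: the UI never blocks on the network."""
        self._pending.append(json.dumps(event))
        self.flush()

    def set_online(self, online: bool) -> None:
        self.online = online
        self.flush()

    def flush(self) -> None:
        """Drain the queue in FIFO order while connectivity holds."""
        while self.online and self._pending:
            self._send(json.loads(self._pending.popleft()))

    @property
    def sync_state(self) -> str:
        """Feeds the persistent status indicator at the top of each screen."""
        if not self.online:
            return f"offline - {len(self._pending)} queued"
        return "synced" if not self._pending else "syncing"
```

FIFO draining preserves the audit order of validations, and exposing `sync_state` as a single string is what lets the status indicator stay honest on every screen.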

06 — User Testing

2 rounds · 14 technicians total

Testing was conducted in an active maintenance environment. Participants completed standardized task scenarios using the Figma prototype on a tablet, wearing their standard work gloves throughout the session.

Round 1
8 participants

Focused on the core validation flow and touch target sizing. Revealed that the swipe gesture needed haptic feedback and that the scanner viewfinder required a significantly larger active area.

Round 2
6 participants

Tested the complete end-to-end flow, including the handover module. Some Round 1 participants were re-engaged to measure how well the new workflow was retained between rounds.

Metric | Before redesign | After redesign
Task completion rate | 61% | 94%
Input error rate | 47% | 7%
Time-on-task (part validation) | 4m 12s | 1m 48s
SUS score (System Usability Scale) | 58 / 100 | 88 / 100
Iteration 1 — Round 1 finding

Haptic feedback on swipe confirmation

Without tactile feedback, technicians were unsure whether the swipe gesture had registered. Adding haptic confirmation reduced false completion attempts by 62% in Round 2 testing.

Iteration 2 — Round 1 finding

Enlarged scanner viewfinder

The initial scanner window was 160px wide. With gloves, technicians struggled to align barcodes within the frame. Expanding to full-width with corner guides cut scan failure rate from 31% to 6%.

Iteration 3 — Round 2 finding

Voice note transcription preview

Technicians appreciated voice notes but worried about accuracy in noisy hangars. Adding a transcription preview with a large single "Confirm" button resolved all hesitation around the feature.

07 — A/B Testing

Multi-step vs. single-screen validation

The most significant design bet was consolidating the multi-step validation flow into a single-screen swipe interaction. A/B testing quantified the impact before committing to the engineering handoff.

Version A — Original

Multi-step validation flow

Version A's flow: part details on screen 1, compliance checklist on screen 2, confirmation form on screen 3. Each screen required a tap to advance. Total: 3 screens, average 7 taps to complete.

Avg. completion time
3m 54s
32%
preferred this version
Version B — Redesign ✓

Single-screen swipe validation

All part details, compliance status, and confirmation on one screen. Swipe gesture (64px handle) with haptic feedback completes the validation. Total: 1 screen, 1 gesture, audit-traceable.

Avg. completion time
1m 30s
91%
preferred this version — 62% faster
Metric | Version A | Version B
Average completion time | 3m 54s | 1m 30s
Input error rate | 41% | 5%
User preference | 32% | 91%
Gloves-on completion rate | 58% | 96%
08 — Results & Learnings

Impact & Key Takeaways

−40%
input errors across all validated tasks
14→5
screens in the core validation flow
94%
task completion rate in Round 2 testing
88
final SUS score (vs. 58 baseline)

UX Conclusions

01
Designing for physical constraints first — gloves, hangar noise, variable lighting — produces better interfaces for everyone. The 56px touch target rule that solved the glove problem also made the app significantly more comfortable for all users in unrestricted conditions. Constraints are generative, not limiting.
02
The most impactful redesign decision was not visual — it was architectural. Shifting from multi-screen sequential flow to a single-screen gestural interaction eliminated the root cause of most errors. Screen count is a UX metric as meaningful as SUS score. If your core task takes 14 steps, the problem is structure, not aesthetics.
03
Contextual research in the real environment is irreplaceable. Three weeks of hangar observation revealed the glove problem, the voice note opportunity, and the wifi reliability issue — none of which appeared in the original product brief or in any stakeholder interview conducted in an office setting.
What I would do differently

Involve the compliance and certification team from the research phase rather than at the handoff review. The swipe-to-confirm interaction initially raised audit traceability concerns that required a late-stage redesign of the confirmation data structure. Earlier cross-functional involvement would have saved two full sprints of rework.
