Case Study · mAI Food · 2024

AI DOESN'T NEED
TO BE PERFECT.

When AI makes a mistake, the UX needs a plan. mAI Food didn't — and that was the real problem. This is how I redesigned a broken kiosk experience from the inside out.

3mo
Timeline
5
Systemic failures fixed
MVP
Shipped
Role
Lead Designer (embedded)
Client
izy · via Resimator
Platform
Web · Kiosk POS
Status
MVP Shipped
01
What this was

AN AI PRODUCT WITH NO PLAN FOR FAILURE.

mAI Food is an AI-powered POS product built by izy for corporate cafeterias. It uses a camera to detect food items on a tray and generate a bill automatically — reducing manual input and speeding up checkout.

I was the lead designer, employed by Resimator and embedded within izy's product team. I owned the end-to-end redesign of a broken v1: the AI detection existed, but the UX around it — the feedback, the fallbacks, the edge cases — had no clear design thinking.

“The AI would always have moments of uncertainty. My job was to design those moments — not hide them.”
SCOPE
01
End-to-end checkout UX
From first instruction to payment
02
AI feedback states
Confidence scoring, verification flows
03
Weighted product flow
Sequential modal replacing broken list
04
Design system patterns
New shared component library
02
The problem

EVERY AI FAILURE BECAME A UX FAILURE.

When I mapped the v1 screens for the first time, the problem wasn't hard to find. Every screen was solving the same problems differently. No shared language, no consistent feedback logic, no fallback thinking.

01
Inconsistent feedback states
Severity: High

Users had no way to tell what the AI was doing — scanning, processing, or stuck.

02
No error states when AI failed
Severity: High

When detection went wrong, nothing communicated it. Users were left guessing.

03
Visually inconsistent UI
Severity: Medium

No design system. Each screen handled patterns independently.

04
Weighted product flow was broken
Severity: Medium

Items needing weighing were dumped into a list with no sequence or guidance.

05
Manual mode was hard to find
Severity: Medium

No clear path when AI failed. Users and staff were stuck with no obvious next step.

“Users abandoned checkout. Staff intervened constantly. The system needed more help than it gave.”
03
Context & constraints

THE CONSTRAINTS WERE THE BRIEF.

Understanding what I couldn't change shaped every decision I could make.

NO DIRECT USER ACCESS

Users were based in Norway. All insights came through written PO notes and tickets. The PO became my primary proxy — I designed from patterns in feedback, not direct observation.

AI / ML LOGIC WAS FIXED

I couldn't change how the AI detected items or its confidence thresholds. Failure states were not edge cases — they were core flows.

NO EXISTING DESIGN SYSTEM

v1 had no shared component library. Every screen had drifted independently. I had to establish new patterns while simultaneously redesigning the product.

“With direct user access, I would have explored user-driven AI annotation — where users feed the system when it fails them, making it smarter over time. That became a north star for future iterations.”
04
Research & define

THREE PATTERNS. ALL POINTED THE SAME WAY.

Without direct user access, I built my understanding from PO notes and support tickets, an audit of v1 screens, and a discovery workshop I ran with the cross-functional team.

Confident but wrong
The AI detected items incorrectly with high confidence — and gave users no way out. No correction shortcut. No rescan.
PO notes
Scanning sequence confusion
Users didn't know whether to place food first or press scan first. This filled the AI pool with unidentified images.
Support tickets
No recovery path
When detection failed, no shortcut to correct it — no rescan, no quick edit, no clear route to manual mode.
PO notes · tickets
PRIMARY USER · EMIL

Busy office worker. Picks up food quickly between meetings. Low tolerance for friction or confusion. Drove the primary design decisions.

SECONDARY USER · INGRID

Canteen staff. Manages 100+ employees daily. Needs reliable tools. Steps in when the system fails users. Informed fallback and manual mode flows.

“Trust in AI isn't built by being right every time — it's built by handling mistakes well.”
05
The decisions

FIVE DECISIONS. ALL FROM THE SAME QUESTION.

Every decision came back to one thing: what does the system communicate when it isn't sure?

01
Tap to Verify
Confidence-based trust calibration · only surface correction when AI is uncertain

When the ML team flagged a low confidence score, the temptation was to auto-switch users to manual mode. I rejected this — an abrupt mode switch feels like a failure. Instead: an orange warning on the detected item with a “Tap to verify” prompt. High-confidence items confirm automatically with no interruption.

It preserved user agency. A system failure became a conscious decision moment.
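The routing logic behind this decision can be sketched as a single threshold check. This is an illustrative sketch, not the production code: the threshold value and names here are assumptions, and the real cutoff came from the ML team.

```typescript
// Hypothetical confidence cutoff; the real value was set by the ML team.
const VERIFY_THRESHOLD = 0.8;

type DetectionState = "auto-confirmed" | "tap-to-verify";

interface Detection {
  itemName: string;
  confidence: number; // 0..1 score from the detection model
}

// Route each detection to a UI state instead of switching modes wholesale:
// high confidence confirms silently; low confidence asks the user to decide.
function stateFor(detection: Detection): DetectionState {
  return detection.confidence >= VERIFY_THRESHOLD
    ? "auto-confirmed" // green: added with no interruption
    : "tap-to-verify"; // orange: user confirms or corrects the item
}
```

The key design choice is that the decision happens per item, not per session, so one uncertain detection never forces the whole checkout into manual mode.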
02
Fallback states
Designing for failure first, not last · every AI state has an explicit UX response

In v1, AI failures produced silence. No message, no next step, no guidance. I mapped every possible AI state and designed an explicit response for each — not as afterthoughts, but as first-class flows built into the core interaction model.

This is the work users never notice when it's done well — and can't stop noticing when it isn't.
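One way to make "every AI state has an explicit UX response" concrete is an exhaustive state-to-response map, so no state can fall through to silence. The state names and copy below are illustrative assumptions, not the product's actual strings.

```typescript
// Hypothetical state names; the real product mapped its own detection states.
type AiState =
  | "scanning"
  | "detected"
  | "low-confidence"
  | "no-detection"
  | "camera-error";

interface UxResponse {
  message: string; // what the screen says
  action: string;  // the next step offered to the user
}

// Record<AiState, UxResponse> forces a response for every state at
// compile time: adding a new AI state without a UX response is an error.
const fallbackMap: Record<AiState, UxResponse> = {
  "scanning":       { message: "Detecting food items…",      action: "wait" },
  "detected":       { message: "Added to order",             action: "review" },
  "low-confidence": { message: "Tap to verify this item",    action: "verify" },
  "no-detection":   { message: "No items found on the tray", action: "scan-again-or-manual" },
  "camera-error":   { message: "Scanning unavailable",       action: "enter-manual-mode" },
};

function respond(state: AiState): UxResponse {
  return fallbackMap[state];
}
```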
03
Weighted product modal
Sequential guided flow · one item at a time, connected to live pricing

v1 dumped all weighted items into a list with no sequence and no guidance. I replaced it with a modal-driven sequential flow: one item at a time, with a clear physical instruction and a live connection to the order list.

Abandonment of weighted products dropped after launch.
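The sequential flow can be sketched as a small queue: the modal always shows exactly one unresolved item, and each scale reading resolves it with a live price. Names, prices, and the rounding rule are illustrative assumptions.

```typescript
// Minimal sketch of the sequential weigh flow, one item at a time.
interface WeightedItem {
  name: string;
  pricePerKg: number; // NOK, illustrative
  weightKg?: number;  // set once the scale reports a reading
}

class WeighQueue {
  constructor(private items: WeightedItem[]) {}

  // The modal shows exactly one unresolved item at a time.
  current(): WeightedItem | undefined {
    return this.items.find((i) => i.weightKg === undefined);
  }

  // Resolve the current item with a scale reading; return its live price
  // so the order list can update immediately.
  recordWeight(weightKg: number): number {
    const item = this.current();
    if (!item) throw new Error("no item left to weigh");
    item.weightKg = weightKg;
    return Math.round(item.pricePerKg * weightKg * 100) / 100;
  }

  done(): boolean {
    return this.current() === undefined;
  }
}
```

The list-based v1 made users decide what to weigh next; the queue removes that decision entirely, which is the guidance the modal provides.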
04
Manual mode
Always available — never forced

Manual mode exists as both a contextual prompt and a persistent trigger: never forced, always available, smooth to enter and smooth to exit. In v1 it was hidden with no clear trigger; the redesign surfaces it at the right moment.

05
Design system
Three new patterns on an existing base

AI scanning interaction, manual mode flow with microinteraction, and the order list component — documented and adopted by the team in ongoing development.

06
The end-to-end flow

SIX MOMENTS. ONE GOAL.

The redesigned flow covers six key moments — from the first instruction a user sees to checkout. The system should always have something useful to say, whether the AI is working or not.

End-to-end flow
01
Starting Screen
Place food on tray — clear first instruction
Entry
02
AI Detection
Camera scans tray · order builds live
High confidence · Tap to verify
03
Weigh Modal
One weighted item at a time · live price
If weighted
04
Scan Complete
Scan again · or switch to manual
State
05
Review Order
Verify flagged items · adjust quantities
Decision
06
Continue to Pay
Eat In / Take Away · total
Checkout
Scanning path — AI detection
SCREEN 01 · ENTRY
Starting Screen
"Place your food on the tray to begin" · Start Scanning CTA
SCREEN 02 · AI STATE
Detecting Food Items…
Camera reads tray inside bounding box. Order list populates in real time.
Auto-added (high confidence) · Tap to verify (low confidence)
NORMAL ITEM
Added to Order
Qty +/− · Price shown · Delete option
WEIGHTED ITEM
"Weigh the Item"
Modal appears · place item on scale
Weight detected
e.g. 0.1 kg · price calculated live
✓ Weighing Complete
Item resolved · order updated
SCREEN 04
Scan Complete
"You can scan more items to order."
OPTION A
Scan Again
Re-trigger AI detection
OPTION B
Enter Manual Mode
Browse menu to add items
Manual path — always available
TWO ENTRY POINTS
From starting screen (Chef Mode) · Via "Enter Manual Mode" after scan
SCREEN 05 · MENU
Menu Grid
Highlights · Recommended · Salad · Bakery · Drinks · Tasty Starters · Sandwich · Wraps
INTERACTION
Search + Browse
Filter by name or category · tap item card to add to order
ORDER UPDATES
Item Added
Qty defaults to 1 · +/− to adjust · delete to remove
EXIT OPTION
Exit Manual Mode
Returns to scan view · order preserved
CONTEXTUAL
Banner changes to “Scan your Employee ID” — applies discount / subsidy before checkout.
Persistent throughout session
ORDER PANEL — ALWAYS VISIBLE.
Right-side panel active in every state. Scanned and manually-added items coexist in the same order.
DINING PREFERENCE
Eat In
Take Away
Toggle before paying
ITEM STATES
Normal item — price shown
⚠ Tap to verify — pending
⚖ Weighted — 0.1 kg · NOK X
SUMMARY + CTA
Discount
Subsidy
Sub-Total · NOK —
Continue to Pay →
Screen 01 · Starting screen

Clear instruction before any action. Removes scanning sequence confusion — users know to place food first, then press scan.

Screen 02 · AI detection & verification

Green = high confidence, auto-confirmed. Orange = low confidence, prompts a tap to verify. Colour carries meaning — the user decides, the system doesn't decide for them.

Screen 03 · Weighted modal

One item at a time. A clear physical instruction, sequential guidance, and a live price update directly connected to the order list.

Screen 04 · Manual mode

Contextual prompt and persistent trigger. Never forced, always one tap away — smooth to enter, smooth to exit.

07
What happened next

MVP SHIPPED. DATA STILL EARLY.

The MVP shipped. User data is still early — these outcomes are observational, based on PO feedback and team observation. I'm not claiming metrics I don't have.

The design system built for this project became the foundation for the team's ongoing development.

Faster
Checkout flow, less interrupted
Less
Staff intervention during checkout
Adopted
Design system, used in ongoing dev
08
What I still think about

THE WORK I DIDN'T FINISH.

LOOKING BACK

I'd push for user-driven AI annotation from the start. The Tap to Verify pattern was good for communicating uncertainty — but it's still reactive. What I really wanted was a model where users actively feed the AI when it fails them — making the system smarter with every interaction.

Designing for AI uncertainty taught me to treat every failure state as a design opportunity, not a fallback. Proxy research demands more rigour — working through a PO made me disciplined about finding patterns, not accepting observations at face value.

“Trust in AI is a UX problem. Users don't distrust AI because it's imperfect. They distrust it because it doesn't explain itself. That's entirely designable.”