Tag Archives: interaction design

From Idea to Spaghetti: The UX Gap Killing Home 3D Printing

Here we are, a month on from Christmas, and a new 3D printer hums away in our home office. Our 11-year-old wants to print a simple fidget toy to show his mates on the school bus. Small object, quick reward, low stakes. The marketing and the social shorts imply this is exactly what the printer’s for.

The reality is different. The printer works, of course it does, and the model exists. But the user has hit a wall.

That wall is the missing middle between “I want this object” and “here’s how to manufacture it.”

Consumer 3D printing hardware has improved fast: cheaper, sturdier, more reliable. Model libraries are abundant. The breakdown happens in the software, specifically the slicer. This is the gateway to printing, and it’s built like an expert tool.

The mismatch is structural. A beginner wants a reliable outcome; the slicer demands process control. More specifically:

  1. Language doesn’t map to intent
    Slicers expose machine concepts and internal mechanics. They describe parameters you can change: retraction distance, Z-offset, support interface, seam position. These settings are real, and they matter. But they’re barely framed around what the user is trying to achieve.

Beginners don’t think, “I need to adjust my retraction.” They think, “Dad, why’s it suddenly all stringy?” They don’t think, “support roof.” They think, “Dad, how do I get this off without snapping it?”

When labels map to the machine rather than the outcome, users can’t predict consequences. They can only guess, or disappear down Google rabbit holes.

  2. Choice isn’t prioritised
    Most slicers present “available” and “appropriate” as equals. The result is a dense panel of options with weak hierarchy and next to zero guidance on what matters first.

It may be designed with empowerment and precision in mind. In practice it lands as cognitive burden. For a novice, the implicit message is: if this print fails, it’s because you couldn’t figure out how to configure it correctly.

  3. Feedback arrives too late
    3D printing has a slow loop. Prints take hours and failures often show up late, or worse, out of sight. The cost of learning is time, material, and patience. When you’re 11, with limited downtime in the week and busy weekends, the threshold for giving up is pitifully low.

When things go wrong, the slicer rarely helps you diagnose or recover. And when the workflow itself is fragmented, i.e. slice on one device, move a memory card, print on another, the feedback loop gets even weaker. People end up in forums, LLMs, and YouTube. There they meet the expertise gap: explanations (from well-meaning nerds) built on mental models they don’t yet have.

[Image: A home office with a desktop 3D printer mid-print, tangled filament on the build plate, and a child sitting nearby watching the failed print in silence.]

The net result is the domestic print system collapsing like a soufflé. The child loses interest because the reward is delayed and fragile. The parent becomes a reluctant technician, spending evenings debugging through YouTube and ChatGPT rather than, y’know, making. Eventually the printer becomes background noise, a source of family tension and, ultimately, a dust collector.

None of this requires better hardware. It requires different system behaviour.

A simpler learning curve would start with intent, not settings:

Does this need to be strong, or just look good?
Is speed important, or a reliable outcome?
Are you OK with supports, or should we minimise them?

Translate those answers into parameters quietly, and surface the trade-offs in plain language:

Cleaner finish = harder support removal.
Faster print = higher failure risk.
Stronger part = longer print time.
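
To make the shape of that translation layer concrete, here’s a minimal sketch in Python. Every parameter name, value and threshold here is hypothetical, invented for illustration rather than lifted from any real slicer:

```python
# A minimal sketch of an intent-to-parameters translation layer.
# All parameter names and values are hypothetical, not taken from
# any real slicer.

def translate_intent(strong: bool, fast: bool, allow_supports: bool) -> dict:
    """Map three plain-language answers onto slicer-style settings."""
    params = {
        "wall_count": 4 if strong else 2,         # stronger part = more walls
        "infill_percent": 40 if strong else 15,   # ...and denser infill
        "print_speed_mm_s": 120 if fast else 60,  # speed traded against reliability
        "supports_enabled": allow_supports,
    }
    trade_offs = []
    if strong:
        trade_offs.append("Stronger part = longer print time.")
    if fast:
        trade_offs.append("Faster print = higher failure risk.")
    if not allow_supports:
        trade_offs.append("Fewer supports = some overhangs may droop.")
    return {"parameters": params, "trade_offs": trade_offs}
```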

Then, add risk detection and guided recovery through intelligent prompting:

“First layer contact looks low for this material; this often fails. Increase it?”
“Stringing likely from this preview; reduce temperature or increase retraction?”
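
Under the hood, that prompting needn’t be exotic. Here’s a sketch assuming a rule-based pre-flight check over the sliced job; the thresholds and field names are invented for illustration:

```python
# A sketch of rule-based pre-flight checks over a sliced job.
# Thresholds and field names are invented for illustration.

def preflight_warnings(job: dict) -> list[str]:
    """Flag likely failure modes before the print starts."""
    warnings = []
    if job.get("first_layer_height_mm", 0.2) < 0.15:
        warnings.append(
            "First layer contact looks low for this material; "
            "this often fails. Increase it?"
        )
    if job.get("travel_moves", 0) > 500 and job.get("retraction_mm", 0) < 1.0:
        warnings.append(
            "Stringing likely from this preview; "
            "reduce temperature or increase retraction?"
        )
    return warnings
```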

If a print fails, treat it as evidence, not user incompetence:

“It didn’t stick” – i.e. adhesion failure – propose bed/temp/first-layer changes.
“The layers are in the wrong place” – i.e. layer shift – propose speed/acceleration/belt checks.
“The supports damaged the print” – propose support style/density/contact changes.
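
In code terms that’s little more than a symptom-led playbook, something like the hypothetical sketch below (the mappings are illustrative, not a calibration guide):

```python
# A sketch of symptom-led recovery: the user picks what they saw,
# the system proposes concrete next steps. Mappings are illustrative.

RECOVERY_PLAYBOOK = {
    "It didn't stick": [  # adhesion failure
        "Raise bed temperature slightly",
        "Slow the first layer down",
        "Re-level the bed / adjust Z-offset",
    ],
    "The layers are in the wrong place": [  # layer shift
        "Reduce print speed and acceleration",
        "Check belt tension",
    ],
    "The supports damaged the print": [
        "Switch support style",
        "Lower support density",
        "Increase support contact distance",
    ],
}

def suggest_fixes(symptom: str) -> list[str]:
    """Turn a plain-language symptom into recovery steps."""
    return RECOVERY_PLAYBOOK.get(
        symptom, ["Describe what you saw and we'll narrow it down."]
    )
```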

That’s the missing middle: decision support, progressive disclosure, supervised recovery. As ever, the software work is not adding more controls to the slicer UI. It’s helping novices get to a successful print without turning a weekend hobby into an apprenticeship.

At this point someone will say, “Plenty of crafts are hard.” True. But many have immediate feedback: you see the mess you make with a brushstroke straight away. Others take longer – ceramics, for example – but typically a coach is alongside you, and you start small.

With 3D printing, the existence of model libraries and exciting videos creates a false sense of readiness. You’re effectively handed the Mona Lisa in week two and told to have at it. Or you’re asked to kick a 40-yard conversion in a stiff breeze, with no useful feedback as to why it fell short or why she’s got a wonky eye.

Until slicers take responsibility for the learning curve they impose, home 3D printing will keep making the same breezy social media promise that “anyone can make!” and delivering the same experience: anyone can… eventually.

AI: I used AI for the tags, the excerpt, image generation, and a light sub-edit. The ideas, references, observations, and anecdotes are mine.


Christmas Shopping Observations, Part Two

What happens when the system finally learns to listen.

Last week in Part One, I described why Christmas shopping feels hostile, why even the most basic purchase turns into a strange performance of archaeology, jargon and filters masquerading as understanding. The real problem wasn’t the products but the machinery. The fiction that a PLP (product-listing page) grid is somehow an acceptable translation layer between human intent and retail stock.

This week is the other half of the story: the thing that replaces it.

Because the truth is, we’ve spent twenty years designing for systems that never deserved that level of obedience. We pretended the homepage was the grand entrance, the digital lobby with its scented candles and seasonal banners. We treated it like the flagship store: polished, high-stakes, endlessly debated at internal stakeholder meetings. Meanwhile, almost no one arrived through it, or if they did, they were there for a split second. Most people dropped in sideways, via Google, a WhatsApp link, an email, or a moment of panic at 11 p.m. The homepage was the UX and UI theatre we performed for ourselves and our clients.

Agentic systems make that fiction impossible to sustain. They don’t care about your reception desk and your neatly prioritised wayfinding. They don’t even see it. They take what you mean, “something thoughtful, about forty quid, she hates clutter, nothing scented”, and drop you straight into the one tiny corner of the site where the decision will live or die. A place that, inconveniently, most retailers still treat as a functional afterthought: the product-detail page.

[Image: A minimalist Scandinavian study at dusk, softly lit by a small desk lamp. Snow falls outside the window. On the wooden desk sits an open laptop showing a clean product page with only a few curated gift suggestions; a small, neatly wrapped present rests beside it.]
A glimpse of the future: no endless grids, no filters, no festive panic, just a system that actually starts where you are.

The PDP becomes the real front door because in an agentic journey the start isn’t a place, it’s a sentence.

This is where that old inventory-obsessed model buckles. Catalogue commerce was built on the premise that customers begin at the top and drill down. Agentic commerce begins at intent and works sideways. The sitemap is your fiction, not theirs. The system no longer needs your categories. It needs your clarity.

Be under no illusion though, this ain’t easy. This only works if the agent can explain itself. When a system gives you two options instead of two hundred, you need to know why. Not academically, emotionally. Why this jacket and not the other one? Why this feels like her. Why this fits your mental model of who she is. The explanation is the reassurance loop. Without it, the whole thing becomes another opaque machine; efficient, yes, but untrustworthy in all the ways that matter.

And then there’s the serendipity problem. Efficiency is addictive, but clinical. If we strip out every detour, we drain the pleasure along with the friction. The answer isn’t a return to the grid; it’s controlled looseness. A suggestion or two just off-axis. Something adjacent. Not twelve rows of “you may also like” tat, just enough to keep the experience human. Discovery without the search-and-filter trauma.
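
If you wanted to pin down the shape of that response, it might look something like the Python sketch below: a couple of on-brief options, each carrying its plain-language “why”, plus one deliberately off-axis pick. Every product, price and field name here is invented.

```python
# A hypothetical response shape for an agentic gift query: a small
# number of on-brief options, each with a plain-language explanation,
# plus one deliberately off-axis pick for serendipity. All data invented.

from dataclasses import dataclass

@dataclass
class Suggestion:
    product: str
    price_gbp: float
    why: str              # the reassurance loop: why this fits *her*
    off_axis: bool = False

def respond_to(intent: str) -> list[Suggestion]:
    # In a real system the options would come from intent parsing and
    # retrieval; here they are hard-coded to show the response shape.
    return [
        Suggestion("Linen table runner", 38.0,
                   "Thoughtful and useful, with nothing to dust or smell."),
        Suggestion("Ceramic pour-over set", 42.0,
                   "She hates clutter; this replaces three gadgets with one."),
        Suggestion("Letterpress workshop voucher", 45.0,
                   "Adjacent, not asked for: an experience, zero clutter.",
                   off_axis=True),
    ]
```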

None of this is a theoretical exercise for me. I genuinely spent years trying to push natural-language intent into car retail at JLR, long before the technology was mature enough to meet the ambition. I saw how people really shopped: not by wheelbase or trim code, but by anxiety, context, and use-case. “Capable in the mud.” “Seven-seater that doesn’t look ridiculous.” “Can get all the family crap in it for Cornwall, without a roof box.” All perfectly rational human requests – treated as nonsense by the old machinery. The ideas weren’t wrong. They were simply early.

Now the technology has finally caught up. And with it, the entire structure of how we design retail subtly shifts. From catalogue to conversation. From homepage theatre to product truth. From filters to language. From the warehouse to the person.

None of this saves Christmas, of course. But it does save us from the annual pantomime of pretending that people enjoy buying gifts, and products more generally, through a system that refuses to understand how they think or to take in any of the deeper context that matters. The future isn’t more choice. It isn’t more filters. It isn’t even more intelligence.

It’s fit.

Fit between intent and suggestion.
Fit between the context you’re in and the thing you’re shown.
Fit between the human messiness of December and the machinery that finally stops treating you like a clumsy clinical user story.

Christmas shopping isn’t a test of skill. It’s a test of whether the system knows how to listen. And for the first time in a long time, it might.

AI: This piece was assisted with AI. I used it for the tags, the post excerpt, image generation and some sub-editing. Ideas, references, and anecdotes are all mine.


Christmas Shopping Observations, Part One

Why Christmas shopping feels hostile, and why ‘catalogue commerce’ makes it worse.

December always brings the same rituals. Sitting in front of a website with a sense of mild dread. The kind one reserves for using a train station toilet, or getting into the coffee queue after parkrun. The intended task isn’t difficult or unpleasant in theory, just buy something thoughtful for someone you care about, but Christmas shopping always manages to feel like cognitive trench warfare. Retailers would have it as “the season of gifting”; the rest of us call it problem-solving with a shot glass of Baileys.

So, for some context, let’s go back a couple of weeks to when I was trying to get myself a replacement down jacket. A bit like when I was trying to get Jo some new Asics, this wasn’t an extravagant task. It wasn’t even particularly interesting. Just a like-for-like replacement for a much-abused Rab. All I needed was a sub-expedition-grade jacket. Black, simple. I know my sizes, I knew I needed about 850+ fill power, and I was ambivalent about much else. I had a shortlist of brands I like. But what I found was dozens of models, filters that are inconsistent across brands, categories that mean nothing to people outside the industry, and a product hierarchy that reads like the baffling output of a Content Management System (CMS) operated by a chimp1.

I wasn’t searching as much as performing archaeology. Sifting through layers and brushing off the irrelevant collateral.

[Image: A narrow, snow-dusted street in Stockholm’s Gamla Stan on a muted December afternoon. Warm ochre buildings rise on either side as bundled-up shoppers walk away from the camera. Soft shop-window lights and minimalist Christmas displays glow against the cold.]
The Christmas shopping we think we’re doing, before the dropdown menus, filters, and “Gifts for Her” pages slap us back into reality.


In design terms this is what we might call the Gulf of Execution, or, as my colleagues and I at Dare liked to call it, the Experience Gap: the distance between what a human means and what the system is willing to accept. My intent was simple – “warm, minimalist natural down for standing around on platforms, by sports pitches and walking to the pub” – but the interface insisted I translate that into a dialect of drop-downs, checkboxes and jargonish euphemisms. A human request rendered in machine-and-catalogue syntax. Little wonder the whole thing feels like a joyless chore.

And Christmas retail only amplifies this.

Every major high street site trots out its annual performance of “Gifts for Her”, a festival of generic filler: candles, scarves, bath sets, socks. The occasional novelty gift set embossed with typography that looks like it was designed at 4pm on a Friday whilst sucking on a fetid vape. It’s all indexed by price bands: “Under £10”, “Under £50”, “Over £250” – as if women are primarily sorted by budget code rather than, say, personality or taste.

No mother wants another hand cream selection.
No thirty-something woman wants coordinated gloves.
No partner wants to receive something that clearly began life as a procurement exercise.

The whole structure is built around the warehouse, not the person. It’s inventory logic masquerading as emotional intelligence. And the moment you notice it, you can’t unsee it: most “gift guides” reveal almost nothing about the recipient and everything about what the retailer wants to shift.

This is the failure baked into catalogue commerce. It doesn’t matter which brand you pick; the underlying assumption is the same: that human desire can be expressed through filters, and that personality can be captured in a category label. It’s tidy, rational and optimised. It’s also completely blind to what makes shopping human in the first place.

Because real gift-buying begins long before the visit to the website. It begins in the cluttered, contradictory emotional territory that sits just outside the browser window: What does she already have? What does she love? What has she told me about? What will she pretend to love? What feels thoughtless? What feels too much? What feels like you didn’t think at all (Hint: anything at Boots that comes in a gift box)? Retail ignores all of this and forces you straight into the grid (what we call the Product Listings Page, or PLP), as if the process were orderly. Spoiler alert, it never is.

This is why Christmas shopping feels hostile. It’s not that the options are universally bad, just that the interface tries to convince you it understands and reflects your mental model when it plainly does not, handing you a hundred variants of the same filler and expecting conversion gratitude. Somewhere between the filters, the categories and the bath sets you sense the truth: this isn’t built for you. It’s built to organise the warehouse.

Don’t worry though, there’s a better story coming, and the technology to enable it is finally here. But this isn’t the piece for solutions; it’s about naming the problem plainly, as it is, without the retail gloss.

Next time I’ll get on to the other half of the picture: the system-level shift that’s going to quietly rewrite the entire experience from how we search to where the journey really begins.

For now it’s enough to acknowledge the obvious: Christmas shopping isn’t about solving an indecisiveness problem for dumb consumers. It’s a broken model, designed around systems that were never built to reflect how people think, feel or choose, especially in December.

Part Two: How agentic commerce solves this, and more.

AI: This piece was assisted with AI. I used it for the tags, excerpt, the image generation and some very light sub-editing. The ideas, references, and anecdotes were all mine.

  1. Plot twist. I ended up with the Shackleton Ronne. I browsed online for weeks, did huge amounts of research and comparison, and then went to the wonderful store on Piccadilly and spoke to a great sales assistant there, who worked with me to ensure it was absolutely the right fit and will see me out for probably 5–10 years of use. ↩︎
