I remember when behavioural economics was the clever bloke at the party. Late 2000s. Slightly rumpled, like its genial flag bearer, Rory Sutherland¹. Saying interesting things while everyone else was still banging on about best practice.
And as a one-time Psych grad, I swallowed it whole. Loss aversion, scarcity, social proof, that small but seemingly ever-growing catalogue of cognitive quirks that explained why perfectly rational adults turned into anxious pigeons the moment you asked them to choose between two identical hotel rooms.
Then I did what most of us early adopters did. I took those ideas and applied them to all the booking flows, creating a second layer of UX and UI polish. “Only two rooms left!” “Five people are looking at this right now.” Little interruptions multiplied in the corners and the shouty bits of the checkout. I told myself it was science. But mostly it was just persuasion dressed up in pseudo-academic language.
And the internet did what the internet does. It copied and pasted the same mechanics and ran them into the ground. More fake scarcity. Countdown timers. Urgency theatre. Some of this was just cheeky pestering, the digital equivalent of a shop assistant hovering, but plenty of it crossed a line into deception: designed to manufacture urgency, hide real costs, or make ‘no’ harder than it has any right to be. That was a dishonesty that was technically deniable but emotionally obvious. Users learned the patterns; practitioners got squeamish. ‘Behavioural’ became shorthand for ‘manipulative’, and anything adjacent to nudging got lumped in with deceptive patterns, née dark patterns², for reasons that still feel faintly performative. Sometimes these labels were applied fairly, sometimes lazily.
Meanwhile, Rory didn’t really change. The medium did. His style, heavily anecdotal, contrarian, the world slightly upside down, really suited the algorithmic churn of social feeds far better than it ever suited a conference room. And irritatingly, he’s still right about a few core things: humans are not neat rationalists; context does more work than features; and the “obvious” fix is often the wrong one.
So you end up with this weird stalemate. Practitioners don’t want to touch behavioural ideas because the last decade trained them to associate those ideas with cheap tricks. Users don’t trust anything that looks like psychological leverage. Theorists keep publishing, but the bridge from theory to design practice is messy and full of bad incentives.
So, herewith the awkward admission: I still use behavioural thinking constantly. I just don’t tend to label it. If you’ve worked on complex journeys, you can’t avoid it. Sequencing, defaults, framing, expectation-setting, reassurance, when to show less rather than more, darling, that’s all behavioural design, whether you call it that or pretend you’re simply reducing friction.
Ergo, the real problem is where in the journey it got applied. When behavioural economics becomes synonymous with end-of-funnel UI hacks, it’ll always feel grubby, because there it’s operating at the point of maximum vulnerability and minimum patience. To the numbers-fixated, that’s exactly where the temptation to push is strongest, and where user suspicion is most justified.
I think we should want to bridge the 15-year gap to the bigger ideas, and the way back is boring, structural, and, I hope, therefore credible.
Firstly, move it upstream. Use behavioural insight to shape the service and the whole journey, not just the microcopy. If the product is confusing, no amount of “Only 2 left” pop-ups will rescue it. If the decision is overloaded with complexity, the win is reducing the choice set, clarifying trade-offs, and placing reassurance where anxiety is highest. That’s judgement, not sleight of hand.
Take the UK’s driving-test booking fiasco: on paper it’s “too much demand”, but behaviourally it’s an uncertainty machine that turns normal people into refresh-addicts and slot-hoarders, so it’s hardly surprising when a grey market blooms. When a system is opaque, time-bound, and framed as a win/lose binary (a slot exists or it doesn’t), you don’t get compliant queueing; you get panic economics: people book anything anywhere “just in case”, cling to dates they’re not ready for (because letting go feels like falling off a cliff), and outsource hope to various apps and bots.
The upstream fix is to stop rewarding speed and start redesigning allocation: move away from pure first-come-first-served and towards a batch or lottery mechanism that collects requests over a window and allocates oversubscribed slots randomly, with cancellations rolling into the next batch so you can’t transfer a slot by cancelling and instantly rebooking under someone else’s name. Theory and lab evidence from market-design work on appointment booking show this structure makes scalping unprofitable, because speed stops being the advantage.

Add a small, refundable booking deposit (say £5–£10, returned on attendance or timely cancellation) to put a bit of skin in the game without pricing people out, and you’ve damped the casual “book three and see what happens” behaviour that also fuels the chaos. Then fold in DVSA’s change limit (two changes per booking, including swaps) and the restriction on moving test centres, but actually explain these rules inside the journey so learners don’t experience them as punitive after the fact. Once people can predict the system and trust that releasing a slot doesn’t reset their entire life, the gaming collapses under its own boredom; you don’t need scarcity theatre when you’ve fixed the incentives. See, no need to go crazy in Figma.
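To make the mechanism concrete, here’s a minimal sketch of the batch-lottery step in Python. It’s illustrative, not DVSA’s actual system: the function name, the slot/candidate data model, and the seeding are all my assumptions. The only load-bearing ideas are the ones in the paragraph above: arrival order within a window confers nothing, and losers roll into the next batch.

```python
import random

def allocate_batch(requests, capacity, seed=None):
    """One batching window of a lottery allocator (illustrative sketch).

    requests: slot_id -> list of candidate ids collected over the window;
              arrival order within the window deliberately carries no weight.
    capacity: slot_id -> places available, including any places freed by
              cancellations since the last batch (so a cancelled slot
              re-enters the lottery rather than going to the fastest bot).
    Returns (winners, rolled_over).
    """
    rng = random.Random(seed)
    winners, rolled_over = {}, {}
    for slot_id, candidates in requests.items():
        pool = list(candidates)
        rng.shuffle(pool)                      # the whole point: no speed advantage
        places = capacity.get(slot_id, 0)
        winners[slot_id] = pool[:places]
        rolled_over[slot_id] = pool[places:]   # unlucky requests join the next batch
    return winners, rolled_over

# Ten learners chase three places: the odds are equal however fast anyone clicked.
won, waiting = allocate_batch(
    {"leeds-0930": [f"learner-{i}" for i in range(10)]},
    {"leeds-0930": 3},
    seed=42,
)
print(won, waiting)
```

The design choice worth noticing is that cancellations feed `capacity` for the *next* window instead of reopening a slot instantly, which is what kills the cancel-and-rebook transfer trick.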
Secondly, be explicit about ethics. Not intentions or vibes but the actual lines: what behaviour you’re trying to encourage, who benefits, and what the failure state looks like if it works too well. If you can’t say “this benefits the user” without shifting awkwardly in your Herman Miller, you’ve learned something useful.
Thirdly, replace the anecdote-as-proof culture with evidence that doesn’t insult anyone (this one’s the hardest for me; I love an anecdote). Small experiments tied to meaningful outcomes. Clear reporting. A willingness to bin interventions that, whilst driving short-term conversion, corrode customer trust. Most teams simply need permission to run proper tests and speak plainly about consequences.
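For what “proper tests” might look like in miniature, here’s a hedged sketch: a plain two-proportion z-test run twice, once on conversion and once on a trust guardrail (complaints, refunds, unsubscribes, whatever corrosion looks like for you). The numbers, thresholds, and metric names are invented for illustration, not a recommendation of any particular stats stack.

```python
from statistics import NormalDist

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """One-sided z-test: is the variant's rate higher than control's?"""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return p_b - p_a, 1 - NormalDist().cdf(z)  # (lift, one-sided p-value)

# Conversion went up...
lift, p = two_proportion_z(480, 10_000, 540, 10_000)
# ...but so did the trust guardrail (here: complaint rate).
harm, p_harm = two_proportion_z(60, 10_000, 95, 10_000)

# Ship only if conversion improves AND the guardrail survives.
ship = (p < 0.05) and not (p_harm < 0.05)
print(f"conversion lift {lift:+.2%} (p={p:.3f}); "
      f"guardrail change {harm:+.2%} (p={p_harm:.3f}); ship={ship}")
```

The point is the shape, not the stats: a conversion win that corrodes trust gets binned by default rather than argued about.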
Of course, we never stopped shaping behaviour; we simply got self-conscious about admitting we did. The route back is behavioural thinking with its assumptions stated, its trade-offs owned, and its use grounded in real user conditions; people don’t need to be told “nudges are good” in 2026.
My thanks to Tom Harle for the original provocation.
AI: I used AI for the tags, the excerpt, and a light sub-edit. The ideas, references, observations, and anecdotes are mine.
1. To be clear: Rory didn’t originate behavioural economics. He became its most visible adland interpreter, a jolly and witty TED-friendly translator of work done by Kahneman/Tversky, Thaler, Sunstein, and others. ↩︎
2. The term ‘dark patterns’ was coined by Harry Brignull, who gets too little credit for it. ↩︎