
When Siri launched in 2011, many prophesied the death of the keyboard. Voice would become king, and we’d all be directing our Star Trek-style computers with effortless utterances. However, Siri was underwhelming, never expanding beyond a basic set of commands that it frequently misunderstood. “I’m sorry, I can’t do that yet, but I found this on the web” – no thanks.
Fast forward to today, and I recently came across WisprFlow (not sponsored, unfortunately). It’s an AI speech-to-text service that is mind-bogglingly good. You can talk, or whisper like a psycho, and it transcribes your speech into any text field on your computer. It cleans up your ums and ahs, recognises when you’ve misspoken, and nails difficult names. It is the technology Siri should have been.
Lately, I’ve also been “vibe coding.” For the uninitiated, this is the process of using AI to build apps and websites with nothing but natural language prompts. I’ve been churning out dashboards, animations, and portals just by describing them. I had an upcoming ski trip (I know, oh so Jersey) while also managing a frightening workload. Frolicking in snowy mountains for a week didn’t feel entirely sensible, but this trifecta of tech and circumstance gave me an idea: what if I could code on a ski lift using only my voice?
If you told someone in the 90s that “work” could look like talking to a phone on a mountain, and that you’d be 100x more productive than someone punching keys into a beige box, they’d scarcely believe you. To test the concept, I decided to develop a ski-lunch planning app. A classic middle-class quandary: when you arrive at a new resort, you don’t know where the good snow is or where the best fondue is hidden. I wanted to enter the resort, ski ability, and food cravings, and have the application plan the itinerary and book the table.
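As a rough sketch of that spec in code (every name here is hypothetical – the real app was built entirely by prompting, and I never saw a line of it), the inputs and outputs might be modelled like this:

```python
from dataclasses import dataclass

@dataclass
class TripRequest:
    resort: str   # e.g. "Val Thorens"
    ability: str  # "beginner" | "intermediate" | "advanced"
    craving: str  # e.g. "fondue"

@dataclass
class ItineraryStop:
    time: str
    activity: str

def plan_day(req: TripRequest) -> list[ItineraryStop]:
    """Toy rule-based planner standing in for the AI-generated app."""
    runs = {
        "beginner": "green runs",
        "intermediate": "blue and red runs",
        "advanced": "black runs and off-piste",
    }[req.ability]
    return [
        ItineraryStop("09:00", f"Warm up on the {runs} at {req.resort}"),
        ItineraryStop("12:30", f"Lunch: a restaurant serving {req.craving}"),
        ItineraryStop("14:00", f"Afternoon on the {runs}"),
    ]

plan = plan_day(TripRequest("Val Thorens", "intermediate", "fondue"))
for stop in plan:
    print(stop.time, stop.activity)
```

The point isn’t the planner logic (which here is a three-line stand-in), it’s that the whole interface fits in a sentence you can say out loud.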
Ski Lift 1: The Concept
I laid out the requirements and asked for designs. By the time I reached the top, the AI had given me four different concepts. I chose one, expanded a few features, and told the agent to start building.
Ski Lifts 2-5: The Build
Following a few more iterations, all delivered via chair lifts, I had a functioning app that could plan a day out for every major resort in Europe.
Frighteningly easy, and I didn’t type a word.
Not to be outdone, a friend of mine decided to make my lunch-app experiment look like child’s play. He had OpenClaw running on his laptop back at our base and was using Telegram as a remote control for his AI. Between runs, he was performing heavy-duty data migrations and building out highly scalable infrastructure for a new site, all from his phone. It’s one thing to prompt a design; it’s another to architect a backend whilst swinging 30 feet above the ground.
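The remote-control pattern itself is simple: a bot receives a chat message, hands the text to an agent running on the laptop, and posts the reply back. Here is a minimal, network-free sketch of the dispatch step (I don’t know how my friend wired his up – the update shape mimics the Telegram Bot API’s getUpdates payload, and the agent is a stub):

```python
from typing import Callable, Optional

def handle_update(update: dict, agent: Callable[[str], str]) -> Optional[str]:
    """Pull the message text out of a Telegram-style update and run it
    through the agent. A real setup would poll getUpdates over HTTPS and
    return the reply via sendMessage; both are omitted here."""
    text = update.get("message", {}).get("text")
    if not text:
        return None  # ignore stickers, photos, voice notes, etc.
    return agent(text)

# Stub agent standing in for the AI running back at base.
def echo_agent(prompt: str) -> str:
    return f"agent received: {prompt}"

reply = handle_update(
    {"message": {"chat": {"id": 42}, "text": "migrate the users table"}},
    echo_agent,
)
print(reply)
```

Everything interesting lives in the agent; the chat app is just a thin, phone-friendly pipe to it, which is exactly why it works from a chairlift.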
I will admit, the “social signaling” of voice-messaging an AI bot doesn’t translate perfectly to public spaces. Tapping away at a keyboard is socially acceptable; whispering to your phone on a chairlift makes you look like you’re losing the plot.
However, whilst whispering removes you from immediate social interaction, the speed of it is liberating. You finish your task so much faster that you can return to being present sooner. Plus, not having to stare at a glowing screen in the sun is a revelation.
I’d encourage you to try it (maybe at home first, so you don’t get bullied by your colleagues for whispering at your computer as if you’re trying to enchant it to life). I think you’ll find yourself stepping into what feels like the future of work: faster, more mobile, less screen time. That has to be a good thing.

