AI ascendant, the computer recedes

Analysis · Oct 7, 2024
Computerless Computers

By Darrell Etherington

In the second of a series on the potential impacts of AI as innovation and adoption begin to speed along the upward arc of an exponential curve, I want to look at which fundamental assumptions about how we live and work with computers might change in the near- to mid-term future. The first piece in this series looked at how, for the first time, we have creative collaborators in a technology we ourselves developed – here, I’ll explore how changes in our interfaces and interaction surfaces will make computing tomorrow look drastically different from computing today.

Crawling to walk 

Recently, we got another report about Sam Altman dabbling in an AI-first hardware device, in partnership with renowned designer Jony Ive, the man behind Apple’s most iconic hardware, from the iPod to the iPhone to the original unibody MacBook. While we don’t yet know what that device will look like (if it ever sees the light of day), we do have a growing collection of what I would call preliminary AI-native hardware, and directionally, these devices are a good indicator of what we can probably expect from the interface paradigms that thrive in an era of omnipresent AI.

This class of devices includes Rabbit’s plucky but challenged R1, Humane’s over-designed and overhyped AI Pin, Limitless AI’s Pendant, and a handful of other new and upcoming gadgets that share roughly the same set of design choices and aims.

Common to these is the idea that a graphical user interface, or GUI, is either unnecessary or at least de-emphasized in a world where we have AI at our disposal. Most of these devices are little more than internet-connected microphones: they take speech-based input, turn it into text entry for AI models of varying size, scale and complexity (both local and cloud-based), and then – in the ideal case – take action based on the inference resulting from that input.
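For readers who want a concrete picture of that pipeline, here is a minimal sketch in Python. Every name in it (transcribe, infer, Action, handle_utterance) is a hypothetical stand-in for illustration, not an API from any of these products; the real devices vary in where each stage runs and how actions get executed.

```python
# Hypothetical sketch of the speech -> text -> inference -> action
# pipeline common to this class of devices. All names are stand-ins.

from dataclasses import dataclass


@dataclass
class Action:
    """Something the assistant decides to do on the user's behalf."""
    kind: str      # e.g. "answer", "set_timer", "send_message"
    payload: str


def transcribe(audio: bytes) -> str:
    """Speech-to-text stage (stand-in for a local or cloud ASR model)."""
    return "what's the weather like tomorrow"  # placeholder transcript


def infer(prompt: str) -> Action:
    """Model inference stage (stand-in for a local or cloud LLM call)."""
    return Action(kind="answer", payload=f"Asked about: {prompt}")


def handle_utterance(audio: bytes) -> Action:
    # 1. The microphone's audio becomes text...
    text = transcribe(audio)
    # 2. ...the text becomes an inference, and, in the ideal case,
    #    the inference becomes an action taken on the user's behalf.
    return infer(text)


if __name__ == "__main__":
    print(handle_utterance(b"\x00" * 16000))  # a second of fake audio
```

The interesting design question is how much of each stage runs on-device versus in the cloud – a trade-off between latency, battery life and capability that each of these gadgets answers differently.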

Also common to basically all of the above is that, so far as we’ve been able to test them, they either significantly underperform the capabilities their creators claimed or, in the better case, don’t aspire to do all that much just yet, in a tacit admission of the current limitations of the state of the art.

To their credit, at least some of them seem to be improving through iteration on the software side. In all likelihood, their successors – and the next wave of devices like them from other startups and from giants like Meta, which is doubling down on its smart glasses – will benefit from the lessons gleaned from the stumbles of these first out of the gate.

The ‘Baby Walker’ transition period 

One popular position among tech industry pundits is that these devices are likely an evolutionary dead end; after all, why would anyone want yet another networked, microphone-toting portable electronic when everyone is already walking around with a smartphone in their pocket?

It’s an especially understandable position, given that Google recently put the focus squarely on Gemini for its Pixel 9 launch, and Apple is selling its iPhone 16 lineup on the strength of its ‘Apple Intelligence’ suite of AI features – which, by the way, were delayed and still haven’t shipped, despite underpinning all of Apple’s iPhone 16 go-to-market activity.

I think the case for this being the future steady state is overblown, however. The missteps and fumbles of the crop of devices mentioned in the first section are one reason people hold it; the seeming permanence of smartphones as a fixture in our daily lives is the second. To the first, I’ve already talked about how many of these problems seem to stem from vision outpacing current capabilities. To the second, to believe any computing paradigm has immutable sticking power is to ignore computing’s only constant to date: dramatic change in the interaction model.

From punch cards to keyboards, trackballs and mice, and from desktops to laptops, smartphones, tablets and smartwatches, we’ve seen our means of interfacing with computers shift in just about every dimension imaginable. Some of those methods of interaction have indeed proved to be dead ends, but unlikely paradigms have also ended up becoming defaults so sticky that they would probably confuse the heck out of an extraterrestrial visitor.

AI still needs to be tethered to the smartphone for a few reasons, but the most powerful is probably that its work still consistently needs to be checked for accuracy. And while we see a lot of hype about the coming wave of intelligent ‘agents’ that will act on our behalf, no one has yet shown an agent that can confidently and reliably accomplish what you actually wanted across a wide variety of domains.

The best interface for most is probably none at all 

Ultimately, I think smartphones are training wheels for the computing environment of the future. Some of their attributes, like a screen for consuming video content, might prove stickier than others, but I wouldn’t be surprised to see that functionality shift to a more ‘thin client’ tablet for occasional use. Phones might evolve into an ambient computing core, tied either to a very lightweight edge processor module we carry with us or to cloud compute – sequestered, and perhaps linked in a federated fashion for privacy-preserving, collaborative thought with the private clouds of others.

But while we’ll probably continue to have and use screens for the foreseeable future, I think they’ll get knocked down a rank or two in importance as computing recedes into the background and becomes a primarily ambient affair. The computers of tomorrow will be attendant and attentive, but not imposing, not insistent.

In the final installment of this series, I’ll go into more detail about what that could look like, and in particular how it might upend our working lives and business spaces.