
How GPT-5 surprised me


I seem to be having a very different experience with GPT-5, the newest iteration of OpenAI’s flagship model, from nearly everyone else’s. The commentariat consensus is that GPT-5 is a dud, a disappointment, perhaps even evidence that artificial intelligence progress is running aground. Meanwhile, I’m over here filled with wonder and nerves. Perhaps this is what the future always feels like once we reach it: too normal to notice how strange our world has become.


The knock on GPT-5 is that it nudges the frontier of AI capabilities forward rather than obliterating previous limits. I’m not here to argue otherwise. OpenAI has been releasing new models at such a relentless pace — the powerful o3 model came out four months ago — that it has cannibalised the shock we might have felt if there had been nothing between the 2023 release of GPT-4 and the 2025 release of GPT-5.


But GPT-5, at least for me, has been a leap in what it feels like to use an AI model. It reminds me of setting up thumbprint recognition on an iPhone: You keep lifting your thumb on and off the sensor, watching a bit more of the image fill in each time, until finally, with one last touch, you have a full thumbprint. GPT-5 feels like a thumbprint.


Right now, the AI companies are not making all that much money off these products. If they eventually do make the profits their investors and founders imagine, I don’t think the normal tax structure is sufficient to cover the debt they owe all of us, and everyone before us, on whose writing and ideas their models are built.


Then there’s the energy demand. To build the AI future that these companies and their investors are envisioning requires a profusion of data centres gulping down almost unimaginable quantities of electricity — by some projections, data centres alone will consume more energy in 2030 than all of Japan does now.


If we had spent the past three decades pricing carbon and building the clean energy infrastructure we needed, then accommodating that growth would be straightforward. That, after all, is the point of abundant energy. It makes new technologies possible, and not just AI: desalination on a mass scale, lab-grown meat that could ease the pressure on both animals and land, direct air capture to begin to draw down the carbon in the atmosphere, and cleaner and faster transport across both air and sea. The point of our energy policy should not be to use less energy. The point of our energy policy should be to make clean energy so abundant that we can use much more of it and do much more with it.


But President Donald Trump is waging a war against clean energy, gutting the Biden-era policies that were supporting the build-out of solar, wind and battery infrastructure. There’s something almost comically grim about powering something as futuristic as AI off something as archaic as coal or untreated methane gas. That, however, is a political choice we are making as a country. It’s not intrinsic to AI as a technology.


So what is intrinsic to AI as a technology? I’ve been following a debate between two different visions of how the next years will unfold. In their paper “AI as Normal Technology,” Arvind Narayanan and Sayash Kapoor, both computer scientists at Princeton University, argue that the external world is going to act as “a speed limit” on what AI can do.


In their telling, we shouldn’t think of AI as heralding a new economic or social paradigm; rather, we should think of it more like electricity, which took decades to begin showing up in productivity statistics. They note that GPT-4 reportedly performs better on the bar exam than 90% of test takers, but it cannot come close to acting as your lawyer. The problem is not just hallucinations. The problem is that lawyers need to master “real-world skills that are far harder to measure in a standardised, computer-administered format.” For AIs to replace lawyers, we would need to redesign how the law works to accommodate AIs.


“AI 2027” — by Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland and Romeo Dean, whose backgrounds range from working at OpenAI to triumphing in forecasting tournaments — takes up the opposite side of that argument. It constructs a step-by-step timeline in which humanity has lost control of its future by the end of 2027. The scenario largely hinges on a single assumption: In early 2026, AI becomes adept at automating AI research, and then becomes recursively self-improving — and self-directing — at an astonishing rate, leading to sentences like “In the last six months, a century has passed within the Agent-5 collective.” Even if you believe that AI capabilities will keep advancing — and I do, though how far and how fast I don’t pretend to know — a rapid collapse of human control does not necessarily follow. I am quite sceptical of scenarios in which AI attains superintelligence without making any obvious mistakes in its effort to attain power in the real world.


And yet I am a bit shocked by how even the nascent AI tools we have are worming their way into our lives — not by being officially integrated into our schools and workplaces but by unofficially whispering in our ears. The American Medical Association found that 2 in 3 doctors are consulting with AI. A Stack Overflow survey found that about 8 in 10 programmers already use AI to help them code. The Federal Bar Association found that large numbers of lawyers are using generative AI in their work, and more of them reported using it on their own than through tools officially adopted by their firms. It seems probable that Trump’s “Liberation Day” tariffs were designed by consulting a chatbot.


I find myself thinking a lot about the end of the movie “Her,” in which the AIs decide they’re bored of talking to human beings and ascend into a purely digital realm, leaving their onetime masters bereft. It was a neat resolution to the plot, but it dodged the central questions raised by the film — questions we now face in our own lives.


What if we come to love and depend on the AIs — if we prefer them, in many cases, to our fellow humans — and then they don’t leave? — The New York Times

