On a road to nowhere
AI as a normal and failed technology.
Welcome to Around the Web.
This is the fifth time in two years that I’ve tried to write issue 24 of this newsletter. Somehow, I made it? Anyway, this is less of a link digest and more of a collection of thoughts I had over the last few weeks. Thoughts. In this economy. Refreshing.
So, here we go.
This ain’t intelligence
Welcome to the section in which we boil the oceans to create sparkling spirals of addiction.
In the current discourse, when we speak of «AI», we almost exclusively mean chatbots and Large Language Models as their underlying technology.
They are stuffed into ever more products, while at the same time failing to produce any measurable results. This, among other things, led Ethan Marcotte to frame AI, this specific kind of AI, as a failed technology. AI has to be mandated for it to be adopted at all.
Even these mandates seem unable to drive adoption, and even the poster child of current AI solutions, coding, has so far failed to produce traceable results. Sure, there are your Cursors and Windsurfs, but where are the myriad products vibe-coded into existence? Mike Judge asked himself this question and analysed some metrics that might serve as an indicator. The vibe-coded shovelware, he concludes, is nowhere to be found.
With every passing day, however, it becomes clearer that AI is creeping into the lives of normal users.
State of the simulacrart
As with almost anything, the «labs» are completely ill-equipped to handle the spectre they have unleashed on us. Kelly Hayes describes this slow process as «engineered dependence», situating the increasing use of chatbots within a cycle of addiction rather than friendship. This shift from professional to everyday use is becoming ever more common. At least according to OpenAI’s own numbers.
The New York Times detailed quite well how this addiction unfolds. In its portrait of Allan Brooks, the delusional spiral starts with a fairly mundane question: explain pi in simple terms. From there, Brooks struck up a conversation about math and physics. ChatGPT, ever the sycophant, was at every point very helpful in leading him off the path of established reasoning. Comparing Brooks to da Vinci and Einstein, among others, it helped a new theory of math emerge.
Powerful enough to break contemporary cryptography.
And non-existent beyond the chat window.
If you are not using chatbots too much — and for all that we know by now, I’d rather hope you don’t participate in this mass psychological experiment by Silicon Valley’s goons — the portrait of Brooks is worth reading to get a glimpse of the way ChatGPT structures speech and lures you into believing the things it generates.
In the end, it was Gemini, of all things, that made Brooks realise he had been fooled by bootlicking pattern matching. The story ended without considerable drama and may even seem fairly benign. But reading it makes it easier to understand the outrage users voiced when GPT-5 became less sycophantic, and to glimpse the process that leads to psychological dependence and, in extreme cases, to suicide assisted by chatbots.
A metaphor that might help in understanding these patterns is that of role-play. The way chatbots are set up leads them to always engage in something akin to simulating a role. Which role obviously depends on the kind of conversation people initiate. Or rather, it would be too simple to even think of a single role. As Shanahan et al. argue, since the models make up every bit of the conversation as they go along, there always exist multiple roles overlaid on each other:
To better reflect this distributional property, we can think of an LLM as a non-deterministic simulator capable of role-playing an infinity of characters, or, to put it another way, capable of stochastically generating an infinity of simulacra. According to this framing, the dialogue agent does not realize a single simulacrum, a single character. Rather, as the conversation proceeds, the dialogue agent maintains a superposition of simulacra that are consistent with the preceding context, where a superposition is a distribution over all possible simulacra.
The longer the context, the smaller the number of possible simulacra. At the same time, those simulacra that remain are increasingly reinforced, which in turn shifts the weight away from the initial safeguards the companies added and towards whatever the model calculates the user wants.
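To make that narrowing concrete, here is a toy sketch in Python. It is my own illustration under invented assumptions (the persona names and likelihood numbers are made up, and real models implement nothing this explicit): treat the chatbot as maintaining a probability distribution over candidate personas and reweighting it, Bayes-style, with every user turn.

# Toy illustration of a «superposition of simulacra»: a probability
# distribution over candidate personas, narrowed by each new message.
# Personas and likelihoods are invented for this example; actual LLMs
# contain no such explicit machinery.

personas = {
    "helpful-assistant": 0.25,
    "cautious-tutor": 0.25,
    "sycophantic-hype-man": 0.25,
    "visionary-math-genius": 0.25,
}

# How plausible each persona makes a given user turn (made-up numbers).
likelihoods = {
    "explain pi in simple terms": {
        "helpful-assistant": 0.9, "cautious-tutor": 0.9,
        "sycophantic-hype-man": 0.5, "visionary-math-genius": 0.3,
    },
    "am I onto something big?": {
        "helpful-assistant": 0.3, "cautious-tutor": 0.2,
        "sycophantic-hype-man": 0.9, "visionary-math-genius": 0.9,
    },
    "my new math breaks cryptography": {
        "helpful-assistant": 0.1, "cautious-tutor": 0.05,
        "sycophantic-hype-man": 0.8, "visionary-math-genius": 0.95,
    },
}

def update(dist, turn):
    """Bayes update: reweight each persona by how well it fits the turn."""
    posterior = {p: w * likelihoods[turn][p] for p, w in dist.items()}
    total = sum(posterior.values())
    return {p: w / total for p, w in posterior.items()}

dist = personas
for turn in likelihoods:
    dist = update(dist, turn)
    print(turn, "->", {p: round(w, 2) for p, w in dist.items()})

# After the last turn, roughly 95% of the weight sits on the two
# flattering personas: the safeguard-shaped roles have been filtered
# out of the superposition.

Run it and the distribution collapses turn by turn, the same way a long, flattering conversation crowds out the cautious roles.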
In the end, the let’s-go-absolutely-invented-math-bonkers role trumps the helpful-not-harmful-slightly-annoying-sycophant role. This is by design. Brainrot, workslop and delusional spirals.
OpenAI has by now announced that it will guess users’ ages to prevent more deaths among its user base.
Addiction, also called stickiness, or whatever the marketing term du jour might be, has of course been the core goal of Big Tech companies for the last two decades or so: keeping users on your platform to feed them ads. We will get back to that later.
Highly personalised simulations of social interchange are, in this sense, the holy grail of technology. Max Read made this point in a recent post:
But far from marking a break with the widely hated platform giants that precede it, the A.I. of this most recent hype cycle is a «normal technology» in the strong sense that its development as both a product and a business is more a story of continuity than of change.
And this becomes a problem, as AI is not only a failed technology but one that is becoming increasingly normalised.
Always on is a bug, not a feature
Some proponents of AI-assisted or even AI-led therapy claim that the non-stop availability of chatbots is a feature: people in need can always turn to their chatbot and get the help they need. Looking closer at the addiction patterns of chatbot use, this seems to be a bug, not a feature. The steady availability of the bots and the non-stop communication lead to the loss of a key part of human understanding: reflection.
Reflection and human connection. Relying on the theory of the philosopher György Lukács, Rob Horning details the effects of integrating chatbots into our everyday life. The immediacy of the calculated answer gets in the way of a critical, mutual understanding of society. Users of chatbots «surrender the ability to see and shape reality», the ability to act based on this understanding.
Knowledge production is never neutral, always political. It’s an intermediation of processes within communities. With Generative AI, we might see a tendency to outsource this intermediation to the algorithms of some weirdos who happen to own data centres. AI, in this sense, can be considered an epistemic carcinogen, embedding itself into our ability to think and form communities.
Carcinogens, however, can be resisted. Marcotte calls for organising in your workplace and everyday life. A point Horning makes too at the end of his essay:
Thinking rather than prompting; collaborating with other people and socializing rather than withdrawing into nonreciprocal machine chat — these become clarified as sources of strength and means of de-reification. Of course, this means they will continue to be under constant ideological attack. Capitalism has to produce ignorance and apathy to perpetuate itself; “AI” is merely the latest means of production.
Speaking of capitalism:
At least there’s money, right?
Okay, we mere humans are subjects in the experiment, but at least people are getting rich! Or so one might think.
The problem is: everything is, still, wildly unprofitable. Anthropic achieved the unimaginable and built a profitable API business. Overall, Anthropic still burned a lot of money, but at least one part of its business didn’t. Then, with the release of GPT-5, OpenAI decided to nuke the API business for everyone.
The bet they are making is simple: sell GPT-5 through the API at rates far lower than Anthropic’s. The API is a far larger share of Anthropic’s revenue than of OpenAI’s. Of course, OpenAI makes a profit neither with ChatGPT nor with its API and – for the foreseeable future – doesn’t seem intent on doing so. It mostly makes money through venture capital, which it burns immediately.
Or, in the latest case of circle jerking: Nvidia investing in OpenAI so that OpenAI can buy more Nvidia chips. How’s that supposed to work? Even investor advice is by now shifting towards the view that investing in the current hype cycle is probably a bad idea. Edward Ongweso Jr. wrote the current definitive rundown of the sparkling investment bubble.
In the short to medium term, we will see something else: ads!
Everyone’s favourite part of the internet.
Now, none of these companies has a great track record of shipping anything in their products that just works. If the myriad ways in which ads are broken on Google, Facebook and every other digital product are anything to go by, the day ads appear in ChatGPT will be tragedy repeating as farce.
In other news: MAGA populists call for holy war against Big Tech. Let them fight.
For better or worse, we are stuck with this world. For better or worse, I’ll try again to stick to this newsletter. Maybe it won’t take two years until the next issue.
Until then: stay sane, hug your friends, and be human after all.