The Ed Hardy shirt of rhetorical devices

Around the Web
Issue No. 021

How Large Language Models work, the era of global boiling, passport privileges, a swan song to masculinity, and Barbie’s merchandise.

Welcome to Around the Web.

Pop culture isn’t universally known for its backbone. All the brighter shine those who put integrity before commercial success. This week, Sinéad O’Connor died, and with her, pop lost a good part of its backbone.

Screen grab of Sinéad O’Connor ripping a photo of the pope to pieces.

Rest in power, Sinéad.

This ain’t intelligence

Let’s start this section with a step back. I’ve written a lot about Large Language Models and their impact on society in past issues. But how do they … work? It’s fancy autocomplete, but who puts the fancy in the complete? Timothy B. Lee and Sean Trott explain LLMs with a minimum of math and jargon.
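If you want to see the «fancy autocomplete» at its barest, the core loop is next-token prediction. Here’s a minimal sketch of my own (not from the linked piece), assuming the Hugging Face transformers library and GPT-2 as a stand-in for the much larger models in the news:

```python
# A minimal sketch of next-token prediction with GPT-2 (illustrative only;
# production models are far larger, but the loop is conceptually the same).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "The era of global boiling has"
input_ids = tokenizer(text, return_tensors="pt").input_ids

for _ in range(10):  # generate ten tokens, one at a time
    with torch.no_grad():
        logits = model(input_ids).logits              # a score for every token in the vocabulary
    probs = torch.softmax(logits[0, -1], dim=-1)      # scores for the *next* token, as probabilities
    next_id = torch.multinomial(probs, num_samples=1) # sample one token from that distribution
    input_ids = torch.cat([input_ids, next_id.unsqueeze(0)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

That’s the whole trick: score every possible next token, pick one, append, repeat.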

Another primer, but with a tad more jargon, is this explanation by Jacky Liang of how in-context learning emerges. In-context learning is when LLMs pick up new tasks they were never originally trained for.
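To make «in-context learning» concrete, here is a made-up few-shot prompt of my own (not from Liang’s post). The model is never fine-tuned on this toy task; the examples inside the prompt are all it gets.

```python
# A hypothetical few-shot prompt illustrating in-context learning.
# Nothing here fine-tunes the model; the "training" examples live
# entirely inside the prompt, at inference time.
prompt = """Reverse the word.

word: cat
reversed: tac

word: lamp
reversed: pmal

word: boiling
reversed:"""

# A sufficiently large model completing this with "gniliob" has, in effect,
# learned the task purely from the context it was given.
print(prompt)
```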

Whenever I talked about AI with friends who weren’t following the discourse too closely, one question loomed large: When will AI overpower humans? «Don’t worry, for now,» I said. AI companies and influencers have been very vocal about a prospective future in which a superintelligent AI model will kill us all, essentially creating an illusion of AI as an existential risk.

A group of authors took to Noema to dispel the myth and bring reason back to the discourse. AI isn’t going to kill us, but there are certainly dangers to the fabric of our societies in the here and now which AI might amplify. For all of these problems, like autonomous weapons and the impact of training models on the climate crisis, AI, boring as it may be, is quite capable of acting as a fire accelerant.

As it stands, superintelligent autonomous AI does not present a clear and present existential risk to humans. AI could cause real harm, but superintelligence is neither necessary nor sufficient for that to be the case. There are some hypothetical paths by which a superintelligent AI could cause human extinction in the future, but these are speculative and go well beyond the current state of science, technology or our planet’s physical economy.

If you are worried about AI, read this piece. You won’t be less worried after reading it, but your worry will be better focussed.

ChatGPT losing it?

You might have heard that ChatGPT has gotten worse over the last months. After all, there is a paper saying so, isn’t there? Not really. First, the paper’s methodology is partly questionable: it counts, for example, coding answers that include explanatory text as less proficient code. But maybe most importantly, the paper never claimed that ChatGPT got worse, only that its behaviour changed.

As Arvind Narayanan and Sayash Kapoor explain in AI Snake Oil, there is a difference between model capability and model behaviour.

Chatbots acquire their capabilities through pre-training. It is an expensive process that takes months for the largest models, so it is never repeated. On the other hand, their behavior is heavily affected by fine tuning, which happens after pre-training. Fine tuning is much cheaper and is done regularly.

The authors of the paper found no evidence of capability degradation. However, by documenting the shifting behaviour, the paper highlights a different problem: it’s incredibly brittle to build products and do research on top of OpenAI’s API offerings. There are no changelogs, and the available model snapshots are deprecated and removed frequently. As Narayanan and Kapoor conclude:

It is little comfort to a frustrated ChatGPT user to be told that the capabilities they need still exist, but now require new prompting strategies to elicit. This is especially true for applications built on top of the GPT API. Code that is deployed to users might simply break if the model underneath changes its behavior.
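One partial workaround, for what it’s worth, is to pin a dated model snapshot instead of the auto-updating alias, so behaviour only shifts when you deliberately migrate. A rough sketch with the 2023-era openai Python package (the snapshot name is just an example, and, as noted above, snapshots themselves get deprecated and removed):

```python
# Sketch: pin a dated snapshot rather than the moving "gpt-3.5-turbo" alias.
# This delays the breakage rather than avoiding it: OpenAI still deprecates
# and removes snapshots on its own schedule.
import openai

openai.api_key = "sk-..."  # your API key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",  # dated snapshot, not the auto-updating alias
    messages=[{"role": "user", "content": "Summarise model collapse in one sentence."}],
)
print(response.choices[0].message.content)
```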

There might be another reason that the paper found such fertile ground. A few months after its release and the accompanying PR blitz, the novelty of Generative AI has worn off. Or, as Baldur Bjarnason puts it, «Generative ‹AI› is just fucking boring.»

The only thing that isn’t boring about generative “AI” is the harm tech companies and their spineless hangers on seem intent on inflicting on our society and economy: replacing the variety of human creative work with the tedious sameness of synthetic work in the name of “productivity” or, worse, “cost”.

Or, as Paris Marx concludes in Disconnect:

Once again, the tech industry has deceived us in another bid to expand their power and increase their wealth, and much of the media was all too happy to go along for the ride. Generative AI is not going to bring about a wonderfully utopian future — or the end of humanity. But it will be used to make our lives more difficult and further erode our ability to fight for anything better. We need to stop buying into Silicon Valley fantasies that never arrive, and wisen up to the con before it’s too late.

Faded – collapsing new models, watch them – collapsing

While it’s highly unlikely that existing models are losing their capabilities, the popularity of these models will present a different problem for future ones.

As model output becomes more prevalent across more and more domains, it will be harder to train new models on datasets that do not contain output of other AI models.

This matters because generative models, be they for language or images, rely on massive amounts of varied data. And that variety needs to include long-tail, rare data, which AI models are not good at producing. Remember, they compute what is most probable, not what is most creative. So the more AI-generated content there is, the rarer low-probability outputs become.
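The dynamic is easy to reproduce in miniature. A toy sketch of my own (not taken from the research discussed below): fit a Gaussian to some data, let the next «generation» train only on samples drawn from that fit, and repeat. The spread, and with it the rare events in the tails, drifts away.

```python
# Toy sketch of model collapse: every generation is "trained" (a Gaussian fit)
# only on data generated by the previous generation. Finite samples
# systematically under-represent the tails, so the fitted spread drifts
# downward and rare events become ever less likely.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0      # generation 0: the "real" data distribution
n_samples = 200           # each generation only ever sees 200 samples

print(f"generation   0: sigma = {sigma:.3f}")
for generation in range(1, 501):
    data = rng.normal(mu, sigma, n_samples)  # data produced by the previous model
    mu, sigma = data.mean(), data.std()      # fit the next model on that data
    if generation % 100 == 0:
        print(f"generation {generation:3d}: sigma = {sigma:.3f}")
```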

In The Curse of Recursion: Training on Generated Data Makes Models Forget, researchers found evidence for exactly this issue, which they call model collapse. This «refers to a degenerative learning process where models start forgetting improbable events over time, as the model becomes poisoned with its projection of reality».

This “pollution” with AI-generated data results in models gaining a distorted perception of reality. Even when researchers trained the models not to produce too many repeating responses, they found model collapse still occurred, as the models would start to make up erroneous responses to avoid repeating data too frequently.

The AI feedback loop: Researchers warn of ‘model collapse’ as AI trains on AI-generated content

If you remember the explanation of in-context learning above: that, too, relies on long-tail, rare data points. So future models might not only collapse and produce nonsense, they might also lose some capabilities today’s models have.

Having Real Human Data™ will be even more valuable in the future. Shame that click-workers on platforms such as Mechanical Turk are already using language models to produce the data that trains language models.

At one point in the future, you might see AI researchers standing in dark street corners, dealing with 2022 Common Crawl backups like it’s crack.

Self-serving regulation

«Look at us!» some AI vendors scream, «We are regulated!» Some large AI companies pledged to let government institutions inspect their code and, among other things, to watermark the output of their models. As we’ve just seen, they have a considerable incentive for watermarking that goes beyond regulatory compliance. And, of course, the agreement isn’t legally binding. If only there were some kind of precedent for what happens when companies pinky-promise not to do evil things.

So, yeah. Welcome to the bare minimum. Again.

In an unexpected contribution to the AI ethics discourse, the Pentagon claimed that its AI-driven war machines are more ethical than other AI-driven war machines because of the Judeo-Christian society they are rooted in. You know straight away that absolute bullshit is being said when someone justifies something with «Judeo-Christian». The Ed Hardy shirt of rhetorical devices.

Mustafa Suleyman, co-founder of Inflection AI, proposed a new Turing test for AI models. Instead of faking being human, the goal shall henceforth be to fake capitalism. That’s probably more illustrative of the limits of the AI zealots’ imagination than Suleyman intended.

Speaking of capitalism, Microsoft announced ridiculously high prices for its Copilot office suite, and the stock market went «yay, money, who cares if anyone’s actually going to pay».

Benedict Evans tries to situate automation through machine learning models in capitalism’s wider history of automation and changing jobs. It’s certainly an interesting read, and it gives better historical grounding than the often ahistorical takes in other publications.

Facebook published the second version of its Large Language Model, Llama 2. In contrast to OpenAI’s models, Llama 2’s weights are freely available under a comparatively generous usage license, though no information about the dataset it was trained on has been published yet.

One of the applications of AI that falls squarely into the «That’s useful» category is reviving ancient languages and making translation of their texts easier. One recent example where researchers managed to do just this is Akkadian cuneiform. Note, however, that this is no Large Language Model, but neural machine translation, the technology that powers, among others, Google Translate.

Certainly no useful application: the Free Democratic Party in Switzerland faked an image of climate protesters blocking an ambulance, because actual climate protesters don’t block ambulances, but you somehow need to cultivate the image of your enemy, I guess.

EOL of Humanity

Amid an excruciating heat wave, climate researchers warned that global warming could push the Atlantic’s currents past a tipping point this century. Or as early as 2025. Which, yes, is in two fucking years.

July 2023 has been the hottest month on record. Or, in the words of the UN secretary general, António Guterres: «The era of global boiling has arrived.»

Isn’t it romantic that, while we have to read all this, we get live pictures of a cargo ship full of (electric) cars burning in the North Sea? The Zeitgeist certainly knows a thing or two about drama. Maybe someone in Hollywood should hire this thing.

I’d rather watch Tenacious D running along a beach, being as happy as happy can be. Luckily, this is totally possible.

What are you looking at?

For AlgorithmWatch, Josephine Lulamae travelled to the German city of Mannheim to report on the video surveillance of its city center, where an automated system reports hugs to the police.

In reality, the surveillance program highlights a perfect storm of lofty promises, police violence and racist sentiments.

One lady who works at the square shares that officers made her teenage son stand facing the wall outside her shop and empty his pockets. “Do you know how much this hurt me?” she says. Around two years ago, she filmed officers tying a 15-year-old boy’s hands behind his back with a cable while he was lying on the ground of the square, because, she says, the officers had felt threatened by the puppy that he and his friends had been playing with.

The encrypted European radio standard TETRA, used by police and critical infrastructure throughout Europe, contains massive security flaws, researchers found.

In the UK, the Home Office plans to push for facial recognition systems to surveil shoppers.

Can I talk to you about e-mails?

Thanks for making it this far. Maybe you are interested in getting Around the Web as an e-mail whenever a new issue is published?

There’s also an RSS feed. It’s like e-mail, but better (imho).

Powered by Buttondown.

The Border Regime

Thought about your passport lately? Given the assumed demographics of this newsletter’s readers, probably only when you needed it for travelling. In Passports and Power, Rafia Zakaria takes a closer look at the power dynamics embedded in this seemingly innocuous document.

What could better illustrate the sheer entitlement of the wealthy and the increasing lack of moral shame or outrage at the reduction of one group of humans to a subordinate category while others can afford to reduce anything to an “experience” for “making content”? Both faddish words are examples of the awkward lingo that is meant to sound uplifting. There is no moral shame attached to the consumption of these “experiences,” in which the “thrilling” nature bypasses the depravity with which others with a different set of documents have no choice but to contend.

In the EU’s current push to tighten the border regime to the point where basically no one uninvited can reach its territory anymore, gigantic sums of money are being poured into countries such as Tunisia, despite widespread reports of abuses like torture and refugees being driven out into the desert.

Libya seemingly tried to ban Frontex planes and drones from its airspace.

Social Mediargh

Elon Musk finally did it. He killed the bird. And replaced it with a logo so generic that every corporate sans serif typeface of the last twenty years seems incredibly unique. X, as it is now called, will be, if all goes according to plan (it won’t), the US equivalent to WeChat.

Ryan Broderick summarises the dumpster fire (incredibly, still burning):

And so, the answer to “why is he turning Twitter into WeChat” is because he simply cannot imagine an internet beyond Twitter, just like all the users still using it currently. He wants his own WeChat because he wants to control all of human life both on Earth and beyond and he can’t conceive of other websites mattering more than Twitter because Twitter makes him feel good when he posts memes. As far as I’m concerned, Musk is simply doing the billionaire equivalent of when someone breathlessly explains insular Twitter drama at you irl like it’s the news. He thinks Twitter is real life and he’s willing to light as much of his fortune on fire as possible to literally force that to be true. No matter how cringe it is.

Elon Musk thought he was buying the whole internet

Now we have Threads, whose sole raison d’être seems to be that Facebook can sell more ads; X, fka Twitter; and some smaller alternatives such as Bluesky, somehow locked in beta limbo and with a flailing content moderation approach.

It’s perfectly fine to be a “feminine” man. Young men do not need a vision of “positive masculinity.” They need what everyone else needs: to be a good person who has a satisfying, meaningful life: Against Masculinity

Zuckerberg, meanwhile, thinks that Threads can attract a billion users.

I haven’t watched Barbie yet (I definitely will soon), and I appreciate Jessica Defino’s explanation of how the sprawling merchandise industry undermines the movie’s feminist aspirations.

No matter. Barbie profits from both the feel-good performance of embracing cellulite and wrinkles and the practical tools of erasing them.

“Things can be both/and,” Gerwig has said. “I’m doing the thing and subverting the thing.” But in terms of production and consumption, they can’t be, and she’s not.

CrowdView is a search engine that only searches in forums.

In The Nib, Tom Humberstone illustrates (really!) why we should all be Luddites to build a better future, with technology that serves humans rather than corporations: I’m a Luddite (and So Can You!) (Unfortunately, the images lack alt text, which is a shame.)

An illustration of a person holding a sledgehammer on their shoulder. The person is seen from diagonally behind. The lighting indicates a sunset. In the bottom half of the illustration the words «Welcome to the future. Sabotage it.» are written in comic style.

That’s it for this week. Stay sane, hug your friends, and nothing compares 2 u.
