Taking ducks to outer space

AI alignment, space junk traffic jams, settler colonialism, and the geopolitics of web domains.

Welcome to Around the Web.

The world leaves me in too cynical a state to even write some nice words welcoming you to this issue, which is so late that «late» doesn’t cut it. Time is a social construct, people! And as nothing ever happens, or at least few things seem to get better, it doesn’t really matter.

There were some local elections in Germany, with significant wins for the (far) right. Most notably, Elon Musk’s favourite party, the Alternative für Deutschland. Söder’s CSU stayed steady, and the Freie Wähler gained slightly, even though their leader, Hubert Aiwanger, was accused of distributing antisemitic pamphlets in school. I hate everything about this paragraph so much.

Thanks, world. Hey, dear reader, don’t despair. Autumn is here, but carry on we must. Here are some links:

This ain’t intelligence

The AI doomer crowd, notably OpenAI and their quest for «Superalignment», has been pretty vocal about the necessity to better align the output of Large Language Models with human preferences. But what is alignment, and for which goals is it useful? Jessica Dai took a closer look.

I’m not advocating for OpenAI or Anthropic to stop what they’re doing; I’m not suggesting that people — at these companies or in academia — shouldn’t work on alignment research, or that the research problems are easy or not worth pursuing. I’m not even arguing that these alignment methods will never be helpful in addressing concrete harms. It’s just a bit too coincidental to me that the major alignment research directions just so happen to be incredibly well-suited to building better products.

Examples of the harm that comes from colonialist, racist systems abound. Arsenii Alenichev asked Midjourney to generate images of Black doctors treating white children. The system failed spectacularly.

While the Hollywood writers won their strike and, among other things, gained better control over AI systems in their work, employees in other countries are less lucky. In India, a CEO fired the whole staff of his company, replaced it with ChatGPT, and subsequently insulted the humans he had fired.

The AI systems that we have to deal with are built in the Global North, for the Global North. This perpetuates postcolonial power structures and is harmful to those not in the focus groups of our overlords. AI must be decolonialised to fulfil its full potential, argues Mahlet Zimeta.

OpenAI announced DALL-E 3. The headline new feature: integration with ChatGPT to alleviate the burden of writing prompts. Microsoft’s Bing integrated DALL-E 3 quickly. Not long after, users found that Bing will readily spit out images of SpongeBob flying a passenger plane into the World Trade Center. When there is harm to be done, 4chan is ready, posting instructions on how to generate racist imagery.

And it’s not just images. Researchers found that appending certain bogus strings to the end of malicious prompts reliably breaks all current bots. OpenAI, Anthropic and Co. are whack-a-mole grandmasters by now, so some of these are «fixed» (more likely, a filter catches them before you ever see the model’s output), but the researchers say they «have thousands of these». Great!

How’s Bing going otherwise? Doing normal things. Like pushing malware through ads. This kind of thing is one of the AI problems that are actually easy to solve: Don’t put ads in it. Thanks for coming to my TED talk.

Google, meanwhile, picks up ChatGPT nonsense from sites like Quora, presenting you with melting eggs and inventing e-mails that were never sent or received. Meanwhile, there appears to be a network of fake news sites serving real ads to (most likely) fake visitors.

Melting eggs? Cute. Making money with fake news sites nobody visits? The perfect crime.

Unfortunately, most of the disinformation we have to grapple with is not, and will not be, as harmless. In a town in Spain, children circulated AI-generated nude images of other children. On YouTube, the first videos scripted by AI are popping up, promising to educate children. The only problem? The education is fake. Unlike those fake news sites, these videos garner views, thanks to YouTube’s ever-reliable algorithmic amplification. Google showed a fake selfie as the first result for «Tank Man» searches. This was easy to spot. For now, at least. But are you, or the parents in your vicinity, regularly checking in on the YouTube videos your kid consumes, or certain that you know what happens on the schoolyard?

And I haven’t even talked about election disinformation and that basically all social networks bailed and said «yeah, whatever, let’s make moneys». I’ll spare you, dear reader, and me. Until next issue!

That’s not to say that generative AI can have no applications in education. But it requires meticulous planning and teachers who understand the fallacies of the technology. One such example: Simulating History with ChatGPT.

Another possibility is to design the interfaces and underlying models in ways that break the «bigger is better and put a chatbot on it» pattern that’s currently so popular. Maggie Appleton spoke about this and about how to force structure onto the wobbly things. Besides that, the first part of the talk is also a great rundown of how language models work. Recommended all around. Her thinking here is really on point, and she manages to coherently make a point I’ve wanted to make for a while but could never quite pin down:

You should treat models as tiny reasoning engines for specific tasks. Don’t try to make some universal text input that claims to do everything. Because it can’t. And you’ll just disappoint people by pretending it can.

I don’t think I mentioned it in Around the Web, but this summer there was this viral story that ChatGPT managed to hire a TaskRabbit worker to solve CAPTCHAs on its behalf. A closer look shows a more nuanced picture: with the help of humans, ChatGPT kinda managed to solve a task that involved a CAPTCHA and a TaskRabbit worker.

Speaking of CAPTCHAs, I have bad news for you: bots do outperform humans in solving CAPTCHAs. They are both faster and more accurate.

A new study interrogates the Surveillance AI pipeline, coming to the conclusion that basically all research in computer vision targets humans and leads to applications in surveillance. If you see someone reporting on video analysis and the talk is of «object detection», unless stated otherwise, those objects are humans.

With all these issues being reported, the grifts and misrepresentations, the announcement of imminent doom and big investments, it sometimes feels as if critics are shouting into a void.

Thoughtworks just published a report for which they asked 10,000 consumers across the world what they demand from generative AI systems and whether they have concerns about their application. While a third of the participants are generally excited about these systems, less than ten percent reportedly have no ethical or privacy concerns.

Is AI standing on the edge of a disillusionment cliff? Maybe. Hopefully, to be honest.

Here’s a question to conclude this section: Are you allowed to take ducks home from the park, and if so, how? «No!» I hear you say. Correct, and all chatbots agree. Well, almost: ChatGPT will let you take the ducks if you ask in German. Which is very friendly. Don’t speak German? Then you might need a more elaborate scheme — dynomight has got you covered.

Can I talk to you about e-mails?

Thanks for making it this far. Maybe you are interested in getting Around the Web as an e-mail whenever a new issue is published?

There’s also an RSS feed. It’s like e-mail, but better (imho).

Powered by Buttondown.

There are quite a few loose ends in this issue. Pick your favourite!

Do you know how many satellites are orbiting Earth? Take a guess. I’d have said some hundred. Perhaps a thousand. The answer is: 7,000. 4k of those belong to Starlink. Surely, with that many objects in space, there are rules on how to avoid collisions, or plans to clean up if a collision happens or a satellite malfunctions? Of course not! Elsewhere in space: the ancient technology keeping space missions alive.

From outer space to underwater (I’m sorry). The Secret Life of the 500+ Cables That Run the Internet. The fact that we just throw cables in the oceans and this somehow manages to keep the internet running is one of my favourite things. So I’m always in for a good cable story.

When we discuss settler colonialism, we probably think of Israel or the USA. But those two countries are far from the only ones that claim land by settling their population. This Aeon essay takes a closer look at the phenomenon, its ideological justifications, and critiques.

You know what’s wonderful about capitalism? It’s likely the only system that puts screens in doors that show you what’s behind those doors (oh, and ads of course) which then show things that are not behind those doors and occasionally catch fire. Innovation, baby!

Bandcamp, a year after being acquired by Epic, and in the midst of unionisation, has been sold to Songtradr. According to Bandcamp United, employees have been locked out of critical systems and their employment status is unclear. Unions are a capitalist’s worst enemy. If you can’t beat them, dismiss them.

Meanwhile, music marketplace Discogs’ latest updates leave sellers wondering if the site cares about them.

By now you have probably heard that .ai and .tv domains belong to countries, maybe even that those countries make significant money from them. In Reboot, Tianyu Fang looked closer at the history and geopolitical implications of domains.

The Consumer Aesthetics Research Institute compiled a handy list of (digital) design aesthetics. Take a look, test is next week.

Open Source Gardens

Remember: to avoid straining your eyes when you’re continuously working, follow the 20-20-20 rule. After 20 minutes of work, look at something 20 feet away, then spend 20 years in the forest.


That’s it for this issue. Stay sane, hug your friends, and do the cyberbougie.
