Around the Web
Issue No. 012
Bots, AI regulation advances, Facebook does not find its legs, cops are disgusting, and how QR codes work.
Welcome to Around the Web, the newsletter generated not by an AI model but by cynicism.
I went to Stuttgart on Thursday to talk about artificial intelligence and its impact on labour and state surveillance. I will link to the recording once it is published.
With this issue Around the Web passes 500 linked stories from all corners of the web. If any of you clicked on all the links (and have the browser history to prove it), I owe you one. If you have that history, however, you might want to think about deleting it, or at least getting rid of the cookies.
Before we get to the usual links, I take a closer look at the impact bots have on social media specifically.
Beep Beep Bot?
Bots have been a topic of heated discussion since long before the train wreck that is Musk’s attempted takeover of Twitter. Musk claims that there are more bots on Twitter than Twitter says, which served as a pretext for him to try to bail out of buying Twitter, until he changed his mind and now wants to buy Twitter again.
Bots are said to destroy democracy, and CAPTCHAs turn ever weirder in their never-ending quest to tell computers and humans apart. Wired recently published a series of articles on the topic, called Bots Run The Internet.
So, let’s take a moment and talk about bots. And humans. And the internet. And COVID-19. And fun, too. 🎢
The weirdest incarnation of the idea that the Internet has been taken over by bots is the Dead Internet Theory. I’ve linked to it before, but I’ll never not link to it if I have the chance. It simply is the best conspiracy theory ever: it stipulates that the Internet in its entirety is run by bots. Which is genius and completely bogus at the same time.
But, as we see reflected in the Twitter takeover, bots are believed to be a very common phenomenon on social media. And, undoubtedly, they are, right?
The answer to this is more nuanced than it seems at first glance. The main reason is that it’s rather hard to differentiate between bots and humans. When we discuss bots, we often mean a certain behaviour rather than a technical implementation. If an account posts frequently, maybe even advocating a political view we don’t subscribe to, it’s easy to mark it as a bot.
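To see why behaviour alone is a shaky signal, here is a toy sketch in Python. The attributes and thresholds are entirely made up for illustration; no real detector is this simple.

```python
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float
    reply_ratio: float      # share of posts that are replies to others
    account_age_days: int

def looks_like_a_bot(a: Account) -> bool:
    """Naive behavioural heuristic: high volume, few replies, young account.
    Prolific humans trip rules like this all the time, which is the problem."""
    return (a.posts_per_day > 100
            and a.reply_ratio < 0.05
            and a.account_age_days < 30)

# A freshly created, very active human account gets flagged ...
print(looks_like_a_bot(Account(120, 0.02, 14)))   # → True
# ... while an old, low-volume spam account sails through.
print(looks_like_a_bot(Account(5, 0.0, 400)))     # → False
```

Which is roughly why «bot» in public discourse so often describes behaviour we dislike rather than actual automation.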
In Bot or Not Brian Justie traces the history of CAPTCHA systems which are built to achieve exactly this distinction, as well as the role of the bot accusation in public discourse.
But those wielding “bot” as a pejorative seem largely agnostic about whether their targets are, in fact, automated systems simulating human behavior. Rather, crying “bot!” is a strategy for discrediting and dehumanizing others by reframing their conduct as fundamentally insincere, inauthentic, or enacted under false pretenses.
In his talk Social Bots, Fake News und Filterblasen, the data journalist Michael Kreil analysed «bots» on Twitter and the difficulties of recognising bots and measuring their impact. Spoiler: it’s complicated. In a follow-up talk he analysed the impact of bot networks on elections, and whether such networks exist in the first place.
In the wake of the COVID-19 pandemic, the New York Times took a closer look at the topic.
“So, even if there are a lot of bots in a network, it is misleading to suggest they are leading the conversation or influencing real people who are tweeting in those same networks,” Dr. Jackson said.
While bots on social media might not be as prevalent or impactful as it seems on the surface, there is, however, an increasing volume of automated traffic. This led the developer of the search engine Marginalia to proclaim a Botspam Apocalypse.
The only option is to route all search traffic through this sketchy third party service. It sucks in a wider sense because it makes the Internet worse, it drives further centralization of any sort of service that offers communication or interactivity, it turns us all into renters rather than owners of our presence on the web. That is the exact opposite of what we need.
The sketchy third-party service is, of course, Cloudflare.
There is also the problem (though I wouldn’t really call it a problem) of online ads which are only seen by bots.
While there certainly are bots and a problematic amount of automated traffic, we should be cautious about equating their existence with political influence. Evidence for this claim is thin. At this point, bots – at least on social media – seem to be more of an insult than an injury.
After all this, we shouldn’t forget that bots can be incredibly funny and entertaining. To the bots mentioned in the article, I’d like to add Threat Update, which combines a colour-coded threat level with a more or less nonsensical request. It’s one of the best parts of my timeline.
This ain’t intelligence
Facebook announced their video generation model Make-A-Video. Google followed suit. It should not be a surprise that the data for these models has been scraped from whatever sources Facebook could find. Andy Baio used the release for a closer look at AI Data Laundering, as he calls the practice of big tech companies masking their products as science only to later put them to commercial use.
It’s currently unclear if training deep learning models on copyrighted material is a form of infringement, but it’s a harder case to make if the data was collected and trained in a non-commercial setting.
As more and more models do more and more things, the AI hype will get louder and louder. To resist this cycle, stick to these tips for reporting on AI (which are also very handy when reading reporting on AI).
Adrienne Williams, Milagros Miceli and Timnit Gebru took another close look at the – often precarious – human labour that powers AI and argue that this labour should become the centre of AI ethics.
This episode of The Gradient Podcast with Laura Weidinger on Large Language Models and their ethical implications offers a wealth of knowledge. It should have been in the last issue, but slipped through the cracks.
Two ex-Google engineers started their own company called Character.ai, which lets users chat with bot versions of Donald Trump or Elon Musk. The company is a symptom of a trend in which developers start their own companies to avoid those pesky ethical questions about technological advances. Or, as Timnit Gebru puts it:
We’re talking about making horse carriages safe and regulating them and they’ve already created cars and put them on the roads.
A robot gave evidence in the House of Lords of the UK parliament. It shut down in the middle of giving a (pre-recorded) answer. Which is an apt symbol for the state of robotics, and for how overblown Terminator fears are at the moment. Why you would let a robot testify in parliament in the first place … whatever.
Tesla also did something with a robot, and I believe it looked something like this.
Tesla unveils an “ai bot”
Boston Dynamics and five other robot companies pledged that they won’t weaponise their robots. The pledge is a response to several incidents over the last months. In an interview with IEEE Spectrum, Brendan Schulman of Boston Dynamics calls for legislation to enforce this. How it could be enforced in the first place remains an open question. In essence, it’s another example of how tech companies get the order of things wrong. Maybe, instead of building potentially harmful products and realising the ethical complications after the fact, this order should be reversed? What a world that would be.
Tales from the law
Good news from Brussels. According to reporting by Euractiv, European lawmakers seem to favour a broad ban on facial recognition technology through the upcoming AI Act.
While the purpose of the preliminary discussion was precisely to get the views of the political groups out in the open, two European Parliament officials told EURACTIV that there appeared to be a clear majority in favour of the ban.
Keep in mind though that it’s still early in the discussion, and the lobby power of technology companies is nothing to sniff at. Keeping up the pressure will be important in the coming months until the law is passed.
Accompanying the AI Act is the AI Liability Directive. This directive would allow European citizens to sue if they are harmed by AI systems. The problem here is the need to prove that the harm is a direct consequence of AI.
“In a world of highly complex and obscure ‘black box’ AI systems, it will be practically impossible for the consumer to use the new rules,” Pachl says. For example, she says, it will be extremely difficult to prove that racial discrimination against someone was due to the way a credit scoring system was set up.
This is difficult enough to prove on its own. On top of that, research shows that humans tend to view discrimination by automated systems with less outrage than discrimination by humans. Bigman et al. call this the algorithmic outrage deficit.
The paper further finds that people are «less likely to find the company legally liable when the discrimination was caused by an algorithm». The AI Liability Directive needs to account for this if it is to have an impact.
The Biden administration in the USA announced the AI Bill of Rights, a white paper which could serve as the beginning of a legal framework similar to the AI Act. For now, it is non-binding, though, and reactions have been mixed.
What are you looking at?
The US Department of Defense extended their contract with Palantir. Palantir came under criticism when Bloomberg unveiled their strategy to «buy their way into» contracts with the British NHS. In this scheme, Palantir sought to buy smaller companies which already had contracts with the NHS, enabling it to expand its business with lower levels of scrutiny.
A state police contract with Palantir in North Rhine-Westphalia, Germany, exploded in costs and time. Meanwhile, the Gesellschaft für Freiheitsrechte filed a constitutional complaint against the so-called «Palantir paragraph» in NRW’s police law, which allows police to compile and analyse a broad swath of personal data.
And, to end on a good note, Palantir’s stock price crashed by more than 60% year over year. No tears.
In the US, a cop used state surveillance technology to gather data on women, had them hacked, and extorted them with sexually explicit imagery stolen from their Snapchat accounts.
According to a sentencing memorandum, Bryan Wilson used his law enforcement access to Accurint, a powerful data-combing software used by police departments to assist in investigations, to obtain information about potential victims. He would then share that information with a hacker, who would hack into private Snapchat accounts to obtain sexually explicit photos and videos.
Prosecutors recommended the lowest sentence. Fuck, and I can’t stress this enough, all of this.
Cops using data available to them through official means to be bad people is – of course – no isolated incident. In Germany, police came under scrutiny for allegedly supplying personal information available to them to the right-wing author(s) of letters threatening people of colour and left-wing politicians.
Apple AirTags are now used to track stolen election campaign signs.
Social, they said
Facebook held their Connect conference, touting virtual reality, the Metaverse, and a hype that just won’t happen. For a brief moment, it even seemed like legs had finally arrived in Facebook’s famously torso-centric Horizon Worlds. But, alas, no legs. The sequence showing legs was made with motion capture technology, not the real imagined shizzle.
The virtual reality revolution is so revolutionary that even Facebook’s employees aren’t on board. Likely because they don’t like revolutions? Nah. They don’t use it because it’s buggy and bad. At least it has found a «creative» new method of tracking: facial expressions.
Anyway. Facebook not finding its legs is a pretty adequate metaphor for its current state. And I’ll leave it at that.
Nieman Lab had a look at the state of echo chambers and found that most Twitter users don’t have one … because they don’t consume political content in the first place.
In other words: Most people don’t follow a bunch of political “elites” on Twitter — a group that, for these authors’ purposes, also includes news organizations. But those who do typically follow many more people they agree with politically than people who they don’t. Conservatives follow many more conservatives; liberals follow many more liberals. When it comes to retweeting, people are even more likely to share their political allies than their enemies. And when people do retweet their enemies, they’re often dunking on how dumb/terrible/wrong/evil those other guys are. And conservatives do this more than liberals, overall.
The tool Cover Your Tracks is a handy little helper to see if your browser fingerprint is unique.
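The idea behind fingerprinting is simple: no single browser attribute identifies you, but in combination they often do. A small Python sketch with illustrative, made-up entropy numbers (Cover Your Tracks reports real, measured ones):

```python
import hashlib

# Hypothetical bits of identifying information per attribute; the real
# values on Cover Your Tracks are measured from visitor statistics.
attribute_entropy = {
    "user_agent": 10.0,
    "screen_resolution": 4.8,
    "timezone": 3.0,
    "installed_fonts": 7.0,
}
total_bits = sum(attribute_entropy.values())
print(f"combined entropy: {total_bits:.1f} bits")       # combined entropy: 24.8 bits
print(f"roughly 1 in {2 ** total_bits:,.0f} browsers share this combination")

# The fingerprint itself is just a stable hash over the concrete values.
raw = "Mozilla/5.0|1920x1080|Europe/Berlin|Arial;Helvetica"
fingerprint = hashlib.sha256(raw.encode()).hexdigest()[:16]
print(fingerprint)
```

Entropies add up, so a handful of individually boring attributes quickly narrows things down to one browser in tens of millions.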
Loose ends in a list of links
Sara Bazoobandi and Dastan Jasim wrote about the socio-economic and ethnic background of the protests in Iran.
Ever wondered how QR codes really work? Me neither. Dan Hollick explained it nonetheless, and now I’m all the wiser and impressed by the technology.
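One detail from that explainer fits in a few lines of Python: masking. After the data and error-correction bits are placed, a mask pattern is XORed over the grid to break up large same-colour areas that would confuse scanners. Mask pattern 0 flips every module where row + column is even (toy 2×2 grid here, not a real QR matrix):

```python
def apply_mask0(modules):
    """XOR mask pattern 0 ((row + col) % 2 == 0) over a matrix of 0/1 modules."""
    return [[bit ^ ((r + c) % 2 == 0) for c, bit in enumerate(row)]
            for r, row in enumerate(modules)]

# An all-dark block becomes a checkerboard:
print(apply_mask0([[1, 1], [1, 1]]))  # → [[0, 1], [1, 0]]
```

A real encoder tries all eight mask patterns and keeps the one that scores best on the spec’s penalty rules.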
This infographic visualises noise pollution through car traffic, and it’s bad.
Two climate activists threw canned tomato soup at a van Gogh painting and glued themselves to the wall of the museum. On social media, they were quickly ridiculed. While you do not have to agree with those actions, you need to defend them and direct your outrage at climate change instead, Nathan J. Robinson argues in Current Affairs.
Why ask what’s wrong with them rather than asking what’s wrong with everyone else? Is not climate change an act of vandalism (and ultimately, theft and murder) far, far worse than the spilling of the soup? If we are sane, should we not discuss the thing they were protesting about rather than the protest itself?
The internet was fun once, and this Internet Explorer 1.0 ad clearly shows this.
That’s all for the last weeks. Read you next time. Stay sane, hug your friends, and enjoy the colours of autumn.