
Nothing to lose but our fear
Around the Web
Issue No. 018
A crisis prayed into existence, the end of writing, how not to fight the climate crisis, and mechanical cows.
Welcome to Around the Web.
Around the Web is one issue away from its first anniversary. Here’s a little wish: If you have a friend (or two) who might enjoy this little newsletter, why not recommend a subscription to them? It’s free (very), fun (maybe kind of), and informative (mostly).
Another note: I’m struggling a bit with winter and getting my brain to think straight. So writing this has been rather laborious and took longer than it should have. I’m glad I made it, though. Hope you enjoy and, as always, thanks for reading.
Lay-offs or A crisis prayed into existence
On January 20th, Google laid off 12,000 workers, around 6% of its total headcount. Microsoft got rid of 10,000 people. Amazon laid off 18,000. Sundar Pichai wrote a letter that sounded roughly like every other such letter, which McSweeney’s succinctly parodied.
Speaking of: PagerDuty CEO Jennifer Tejada wrote a letter after laying off part of her staff so blunt that ChatGPT somehow managed to generate almost the same thing. She apologised later, while keeping her own job.
Only weeks after laying off 11,000 workers, Facebook announced a $40 billion stock buyback program. It’s hard to imagine pressing economic reasons to lay off this many people when a company plans to spend this much money on its own stock. The company also lost another $13.7 billion in the Metaverse experiment nobody but Zuckerberg is interested in.
All of this is blamed on some crisis, some recession, which – realistically – keeps failing to materialise.
Most if not all of the people let go from these companies could be retained, but corporations – and in particular tech companies – have consciously colluded with each other to push a false narrative about how they are the victims of an economy that continues to enrich them. And that’s because their leadership isn’t judged by how well they treat their employees, but rather by how they protect the interests of their shareholders.
To have an excuse, upper management is praying a crisis into existence not because they have to, but because they want to. In the end, they are not the ones bearing the brunt of it. On the contrary, the top 1% of the population took two thirds of all new wealth created since 2020. This disparity is combined with a cost of living crisis, during which food companies rake in record profits.
Capitalism is alive and kicking. There is no crisis. There is money to be made. Prices are not rising, they are being increased.
But why all the lay-offs, then?
The goal, besides increasing shareholder value (shareholders love layoffs), is instilling fear in the workforce (you, yes you, might be next).
Layoffs suck for those laid off, obviously, but they also work as a disciplinary measure for those left behind, leading to a condition that Anne Helen Petersen, reflecting on her time at BuzzFeed, recently described as Layoff Brain:
Layoffs are the worst for the people who lose their job, but there’s a ripple effect on those who keep them — particularly if they keep them over the course of multiple layoffs. It’s a curious mix of guilt, relief, trepidation, and anger. Are you supposed to be grateful to the company whose primary leadership strategy seems to be keeping its workers trapped in fear? How do you trust your manager’s assurances of security further than the end of the next pay period? If the company actually “wishes the best” for the employees it let go, why wouldn’t they fucking recognize the union whose animating goal was to create a modicum of security for when the next layoff arrived, as we all knew it would?
That’s why companies are so afraid of powerful unions. A perspective of solidarity and comradeship is their ultimate enemy. There’s a cruelty involved, and the cruelty is the point. After years of generous compensation and free coffee, tech CEOs remembered that there have to be chains and discipline.
It was easy to be disgusted when Musk took over Twitter and to blame him for being a bad manager (which he is, don’t get me wrong). The past months have shown that he is but a symptom of a capitalist reality that does not care about you. If you blame Musk but don’t mention the systemic issues behind all of this, you miss the point.
Marx and Engels said that we have nothing to lose but our chains. To lose those chains, we must lose our fear.
[whispers] strike
This ain’t intelligence
Microsoft has bought 49% of OpenAI. How exactly the investment is supposed to turn a profit is unclear so far. For now, OpenAI has announced it will charge $20 a month for a premium offering.
Billy Perrigo published a piece in Time Magazine which shines a light on the working conditions behind making ChatGPT slightly less toxic. To achieve this, OpenAI hired Sama, a content moderation company relied upon by many western technology companies. Sama’s workers in Kenya were paid as little as $2 an hour to label toxic content to improve OpenAI’s filters.
To get those labels, OpenAI sent tens of thousands of snippets of text to an outsourcing firm in Kenya, beginning in November 2021. Much of that text appeared to have been pulled from the darkest recesses of the internet. Some of it described situations in graphic detail like child sexual abuse, bestiality, murder, suicide, torture, self harm, and incest.
Someone with the face of a politician had ChatGPT write a speech to deliver in the US Congress. It’s boring.
With ChatGPT being used for homework and university exams, OpenAI announced a ChatGPT detector. Problem: it’s incredibly bad at detecting:
In our evaluations on a “challenge set” of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as “likely AI-written,” while incorrectly labeling human-written text as AI-written 9% of the time (false positives).
I’m bad at maths, but 100% thankful for Rusty deciphering the math in Today in Math:
If the numbers don’t leap right out at you, imagine a college class with 100 students where 10 of them use ChatGPT to write an essay. If you run all 100 essays through the OpenAI classifier, it will correctly flag 26% of the AI essays—2 or 3 of the 10. But of the 90 human-written essays, it will incorrectly flag 9%, which is 8, as AI-generated.
Great.
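If you want to double-check that arithmetic yourself, here’s a minimal back-of-the-envelope sketch in Python. The 100-student class and the 10 ChatGPT essays are just the illustrative assumptions from Rusty’s example; the 26% and 9% rates are the ones OpenAI reports for its own classifier.

```python
# Back-of-the-envelope: how useful is a classifier with a 26% true-positive
# rate and a 9% false-positive rate on a class of 100 essays?
total_essays = 100        # illustrative class size (assumption)
ai_written = 10           # essays actually written with ChatGPT (assumption)
human_written = total_essays - ai_written

true_positive_rate = 0.26   # OpenAI's reported rate for AI-written text
false_positive_rate = 0.09  # OpenAI's reported rate for human-written text

flagged_ai = ai_written * true_positive_rate          # ~2.6 correctly flagged
flagged_human = human_written * false_positive_rate   # ~8.1 wrongly flagged

precision = flagged_ai / (flagged_ai + flagged_human)
print(f"Correctly flagged AI essays:  {flagged_ai:.1f}")
print(f"Wrongly flagged human essays: {flagged_human:.1f}")
print(f"Chance a flagged essay is actually AI-written: {precision:.0%}")
```

Run it and you get roughly 2.6 correct flags against 8.1 false alarms, so only about a quarter of flagged essays would actually be AI-written.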
With Microsoft’s investment into OpenAI finalised and Google playing catch-up, we’ll soon see chat-based search interfaces. The problem remains: those models are bad at search, they weren’t designed for it, and traditional search should stick around.
ChatGPT is also the newest craze for get-rich-quick hustlers on YouTube and TikTok. Elsewhere in scamtown: some kind of ML implementation is, for now, a sure-fire way to boost your stock value.

OpenAI, Midjourney and so forth will not stop shoving their creations down our throats. Their models will increasingly produce the world, subverting our sense of truth and reality. It’s an issue Rob Horning investigates, drawing on Baudrillard’s theory of hyperreality.
More generally, the fact that AI models will give plausible answers to any question posed to them will come to be more valuable than whether those answers are correct. Instead we will change our sense of what is plausible to fit what the models can do. If the models are truly generative, they will gradually produce the world where they have all the right answers in advance.
So, now that we have all this linear algebra pushing into our lives, where does that leave us as humans?
Before we all get sucked into that black hole, let’s remember the idea of human language. Language connects us. Language connects one human being to another. Through space and time. Language transports meaning between minds, sense between bodies, it can make us understand each other and ourselves. It can make us feel what others feel. Language is a bridge.
Maybe we aren’t in such a bad spot after all if we keep communicating. The whole piece is really great, go read it.
CNET already tried the end-of-writing thing, and it didn’t really end well. At first they tried to hide it; then it was discovered that a model was writing articles full of «very dumb errors». And that wasn’t the end of it: CNET’s AI-written articles aren’t just riddled with errors, they also appear to be substantially plagiarised. It stands to reason that this is what happens when private equity converts journalism into a cash cow. Futurism did a stellar job pursuing the story, leaking internal Red Ventures communication as a follow-up to the follow-up to the follow-up.
In legal trouble: Getty Images is suing Stability AI, whose Stable Diffusion tends to reproduce Getty’s famous watermark in its output.
Maybe AI can represent people in court? Startup DoNotPay certainly thinks so, failed at it, and consequently updated its Terms of Service to ban people from testing its claims.
This overview of the inventions that led to the state of the art in generative AI we are witnessing now is nonetheless worth your time. I’ll let you, dear reader, decipher the parts where the author falls into the hype trap.
Social Mediargh
AlgorithmWatch is trying to better understand TikTok’s algorithm. If you are based in Germany and use TikTok, do the right thing and donate your data.
I had lost interest in Twitter for a bit. Sure, publishers were annoyed, everything was slowly falling apart, apps had to shut down. It had all the signs of slowly turning into tumbleweed.
How things changed this week! First, accounts started to see an increase in tweet visibility once they locked their account. Which led to Musk locking his account to «investigate the issue». If only he had something like an engineering department. Shortly after, Twitter Dev announced that the free API tier will be shut down on February 9th, bringing an end to a whole host of third-party services. While the changes for the regular API are still forthcoming, it seems like the research API has been shut down already. Per the EU’s Digital Services Act, Twitter is required to allow researchers access to its data.
The Financial Times set up and tore down a Mastodon server, in a kind of chaotic move in which nothing really adds up.
What are you looking at?
Uber’s drivers in Geneva are trying to better understand Uber’s systems – using Uber’s data. The data is a mess, though, so making sense of it is basically impossible without the help of data scientists and additional data sources.
Over the past week, Uber drivers have been turning up at the University of Geneva's FaceLab to get an independent analysis of their data. The drivers have all been offered individual compensation packages by Uber for the back-dated pay and expenses they are owed, after a court found in May last year that drivers in Geneva, Switzerland, were employees, not independent contractors.
Meanwhile, Uber attempts to add gamblification elements to gig work. You are promised a nice bonus after completing one hundred rides, but the algorithm gives you fewer rides the closer you get to the bonus marker.
The police in Mecklenburg-Western Pomerania (that’s in northern Germany) have been told to look a bit less into the lives of citizens: Germany’s highest court ruled parts of a «security law» unconstitutional. Politicians from the Greens and the CDU in Hesse are, meanwhile, pushing forward with a new assembly law which would drastically restrict the right to free assembly.
EOL of Humanity
The push towards electric vehicles is well under way. They won’t solve the crisis, though: the solution to overconsumption isn’t more consumption. IEEE Spectrum ran a whole series on EVs and the hopes and problems tied to them. Besides the first linked piece, I’d recommend the one on their impact on the job market.
Oil companies, incidentally, have reported record earnings across the board. They were also very active in lobbying around the latest COP summit in Egypt and bought ads on Facebook and Instagram for around $4 million.
At least we have carbon offsets, which magically turn money into climate action. Right? Of course not.
The research into Verra, the world’s leading carbon standard for the rapidly growing $2bn (£1.6bn) voluntary offsets market, has found that, based on analysis of a significant percentage of the projects, more than 90% of their rainforest offset credits – among the most commonly used by companies – are likely to be “phantom credits” and do not represent genuine carbon reductions.
But what’s an apocalypse if it doesn’t turn a profit?
Okay, maybe we just give up, let extinction prevail, and use genetic engineering to bring back all those animals we drove extinct. It sounds like a bad idea because it is a bad idea, which of course does not stop companies with a «vision» from thinking that it’s a good idea.
It’s called de-extinction, and its newest goal is the Dodo. Not that any of the animals they tried to de-extinct previously have actually been de-extincted.
Loose ends in a list of links
Crucial Computer Program for Particle Physics at Risk of Obsolescence. It’s the XKCD comic about open-source dependencies in real life.
Here’s a tab about cows and their intersection with machines. Cow tabs tend to be excellent.
Perhaps it is helpful to consider mechanical cows as a window to a worldview. Plant-based milks or automatic milking systems might play a significant role in the agriculture and food policies we’d like to see in the world. A mechanical cow can be a starting point to examine identity, climate anxiety, or animal welfare, and an opportunity to exercise skepticism towards promising food technologies and the people who control them.
Molly Nilsson wants a world without billionaires. Me too, Molly, me too.
One of the weirdest news cycles right now is the ongoing drama around George Santos in the USA. Mother Jones tried to call top donors to his 2020 campaign. Somehow, they don’t seem to exist.
Second contender for the oddest news cycle is the outrage over M&Ms. Parker Molloy summarised the malaise: Right-Wing Rage About «Wokeness» at Candy Company Known for Using Child Labor Gets Results!
Marie Kondo gives up. Chaos will prevail.
Trans women athletes have no unfair advantage under current rules, report finds. It’s almost like TERFs make shit up out of thin air. Who would have thought.

That’s it for this issue. Stay calm, hug your friends, and be human after all.