How Do We Escape Today’s AI?

by Christopher Newfield

Published on: September 3rd, 2025


Another of my perversely non-escapist beach reads this month was Karen Hao’s new book, Empire of AI. You can tell she means the word “empire” literally from her subtitle, Inside the Reckless Race for Total Domination. Hao has written a superb work of deep reporting and reflection. By centring the story on OpenAI, she offers a systematic corrective to the sense of so-called “Artificial Intelligence” that you get from the breathless fluff and propaganda of our mediasphere.

The branch of computer programming mislabelled "artificial intelligence" has infiltrated mainstream thinking across many domains—labour, capital investment, social development, and human consciousness itself. Hao has an engineering degree and broke some of the early stories about OpenAI while working for the MIT Technology Review and the Wall Street Journal. Yet these qualifications don't quite explain her immunity to technology bullshit.

The key source may be her grasp of the fact that technology has value only when properly shaped by interactive relations with culture and society. One might think of David Edgerton’s The Shock of the Old (2006), in which tech is inseparable from its culturally-determined adoption. Hao notes that “technology revolutions” typically have a double nature: “their promise to deliver progress and their tendency instead to reverse it for people out of power, especially the most vulnerable” (89). Her immunity to bullshit comes in part from her empirical finding that AI, for the majority, is the reversal of progress.

I’m going to recount some of the broader AI history before getting to the OpenAI era whose confusions we suffer. The history helps explain our conceptual chaos. It also helps explain the persistence of late neoliberalism, of which the Silicon Valley of crypto and AI is the leading champion.

 

***

Computer programming had long focused on the automation of control, particularly of big systems. Computers have always been better than people at handling very large amounts of data, repetitious procedures, very long strings of calculations, very large quantities of rules, and at doing such things with high reliability and at high speeds. Computers excelled at speed and quantity within a pre-established set of “unambiguous instructions” commonly known as an algorithm. The computer’s powers of automation are extremely useful for all sorts of things and have constructed much of the contemporary world.

But for some, automation was not enough. In 1956, a computer scientist at Dartmouth College named John McCarthy issued a call for papers on a topic he called “automata studies.” He was disappointed with the quality of the papers that came in and, as Hao tells it, after some discussions with colleagues, renamed the topic of the conference as “artificial intelligence.”

Two years later, Frank Rosenblatt, a computer scientist at Cornell University, devised what he called the “Perceptron,” which

could perform basic pattern matching to tell apart cards based on whether they had a small square printed on their left or their right. Over his main collaborator’s objections, Rosenblatt advertised his system as something akin to the human brain. He even ventured to say that it would one day be able to reproduce and begin to have sentience. (Hao 90)

The New York Times immediately popped up to play the role the press has played with this technology ever since: reporting an unfounded marketing claim as truth. It "announced that the Perceptron would in the future 'be able to walk, talk, see, write, reproduce itself and be conscious of its existence.'"
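It is worth pausing on how modest the underlying mechanism actually was. Below is a minimal sketch of the perceptron's logic in present-day Python rather than Rosenblatt's hardware: a weighted sum of inputs, a threshold, and an error-driven weight update. The four-pixel "cards" and every name in it are my own illustrative inventions.

```python
# A minimal, illustrative sketch of the perceptron's logic (not Rosenblatt's
# hardware). The "cards" are reduced to four binary pixels, with a mark on
# either the left or the right half; all names here are invented for clarity.

def predict(weights, bias, x):
    # Fire (output 1) if the weighted sum of inputs crosses the threshold.
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train(samples, epochs=20, lr=0.1):
    weights, bias = [0.0] * len(samples[0][0]), 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            # Nudge the weights towards inputs the rule misclassified.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Label 0: mark on the left half; label 1: mark on the right half.
cards = [([1, 0, 0, 0], 0), ([0, 1, 0, 0], 0),
         ([0, 0, 1, 0], 1), ([0, 0, 0, 1], 1)]
w, b = train(cards)
print([predict(w, b, x) for x, _ in cards])  # expected: [0, 0, 1, 1]
```

Pattern-matching of this kind is real and useful; what it is not, on any ordinary definition, is consciousness.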

Hao cites an historian describing this renaming of automation as intelligence as the field’s original sin. (Having reviewed Alan Blackwell’s excellent book Moral Codes, I’d say the original sin was Alan Turing’s six years earlier, when he encouraged people to see computing as a semblance of human intelligence.)

Both the term "AI" and Turing's "Imitation Game" created the public expectation that programmes wouldn't just become faster and more powerful but more intelligent and more human-like until, like humans surpassing apes, computers would surpass humans. This fantasy, though lacking any basis in an accepted definition of intelligence, became fundamental not just to science fiction but to the cultural imagination of computers.

 

***

 

What’s sometimes called Good Old-fashioned AI (GOFAI) depended on complete instruction sets detailing each response to each stimulus, which gradually proved to be unworkable. This was an opportunity, starting around 1980, for a previously lagging branch of programming to move to the fore. Hao has a particularly lucid summary of the split within the “powerful elite” that has always shaped AI:

Following the Dartmouth gathering, two camps emerged with competing theories about how to advance the field. The first camp, known as the symbolists, believed that intelligence comes from knowing. . . . Achieving AI must then involve encoding symbolic representations of the world’s knowledge into machines, creating so-called expert systems. The second camp, called the connectionists, believed that intelligence comes from learning. . . . Developing AI should focus instead on creating so-called machine learning systems, such as by mimicking the ways our brains process signals and information. This hypothesis would eventually lead to the popularity of neural networks, data-processing software loosely designed to mirror the brain’s interlocking connections, now the basis of modern AI, including all generative AI systems. (Hao 94)
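To make the first camp concrete: a symbolist "expert system" is, in caricature, a stack of hand-written rules. The toy classifier below is my own invented illustration rather than anything from Hao's book, but it shows both the appeal of encoding knowledge directly and the brittleness that made complete instruction sets unworkable.

```python
# A caricature of the symbolists' approach: knowledge hand-encoded as explicit
# rules, with nothing learned from data. The rules and facts are invented.

def classify(animal):
    # Each rule is a symbolic statement an expert wrote down in advance.
    if animal.get("lays_eggs") and animal.get("has_feathers"):
        return "bird"
    if animal.get("has_fur") and animal.get("gives_milk"):
        return "mammal"
    return "unknown"  # anything the rules fail to anticipate falls through

print(classify({"lays_eggs": True, "has_feathers": True}))  # bird
print(classify({"has_fur": True, "gives_milk": True}))      # mammal
print(classify({"lays_eggs": True, "gives_milk": True}))    # unknown: a platypus defeats the rules
```

The connectionists' alternative, by contrast, dispenses with hand-written rules altogether.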

One devoted connectionist was the now-renowned Nobelist Geoffrey Hinton, who in the 1980s moved this camp towards multiple stacked layers of nodes, also known as a "deep neural network."

Hinton also authored another fateful marketing term, converting the stacking of data-processing nodes, which he'd called "deep neural networks to perform machine learning," into "deep learning." This shift from the symbolists' coding to the connectionists' learning made it much easier to see the programmes as thinking, acquiring and developing knowledge over time.
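To make the stacked layers concrete, here is a toy two-layer network written in Python with numpy and trained by gradient descent on the XOR pattern, which a single perceptron cannot capture. It is an illustrative sketch of my own, not any production system; the point is only that "learning" here means adjusting numerical weights to fit examples.

```python
# A toy "deep" network: two stacked layers of nodes trained by gradient
# descent on XOR. Illustrative only; sizes and learning rate are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer: 8 nodes
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer: 1 node

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass: each layer transforms the previous layer's signals.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: push the error back through the layers (backpropagation).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0] as the weights adjust
```

No rules are written down anywhere; the "knowledge" is a grid of numbers fitted to examples, which is precisely what makes the language of "learning" so seductive.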

This shift from GOFAI to the connectionists' "second wave" AI has been well-studied (Brian Cantwell Smith), but Hao judges that its success has been driven less by superior tech than by a superior fit with Silicon Valley venture capitalism. Symbolic AI had been very hard to commercialise. Although connectionist neural networks were less efficient and accurate, and could not show causal relationships, they were easier to commercialise. "Strong statistical pattern-matching and prediction go a long way in solving financially lucrative problems" (99). The path to statistical solutions was shorter and thus cheaper than that to machine reasoning, increasing return on investment. And the necessary training of neural networks "affords the greatest competitive advantage to players with the most data" (100).

The pieces were falling into place for the dominance of the connectionist paradigm: increases in processing power in the 2000s, the triumph of the image-recognition techniques of Hinton and his students Ilya Sutskever and Alex Krizhevsky in the 2012 ImageNet competition, the founding of OpenAI in 2015 by Sutskever, Sam Altman, Elon Musk, Greg Brockman, and others, and the publication of the transformer architecture in 2017. These led through a long series of important technical developments and corporate dramas to the unexpected turning point for the whole industry—the launch of ChatGPT in late 2022.

 

***

 

Hao covers the technical and the corporate sides of OpenAI's evolution, including the departure of Musk, the failed coup against Altman, the mind-boggling boom in GPU (Graphics Processing Unit) chip demand and production and in data centres, and the rise of a global data-annotation industry, all intertwined via her skilful novelisation of palace politics.

I was particularly moved by Hao's investigation of the data-annotation companies on which LLM training depends. As others have also explained, these operate in the Global South while being owned and run in the North. They have been built up for training software for self-driving cars and for the content moderation of social media companies; similarly, the training of neural networks requires labelling and other forms of human feedback on an epic scale. It is generally outsourced to contractors who specialise in hiring the impoverished cognotariat of the Global South. Firms with names like Scale AI, Sama, and Together follow economic disaster into places like Venezuela and Kenya, where they pay micro wages to desperate people with good educations, like destitute Venezuelan nurses, teachers, and engineers. The firms also drop workers and communities at the first sign of ordinary labour demands, such as employment contracts, sometimes abandoning an entire country. (In this spirit, the tech industry funded a ballot proposition that successfully overturned California state legislation that had mandated employment status for the "independent contractors" who drive for Uber and similar companies.)

I’d note a feature of OpenAI that I previously hadn’t fully appreciated. Its founding rationale was not to build a better Siri but to achieve “superintelligence.” Influenced in part by a philosopher’s 2014 book by that title, the OpenAI founders sought to build a thinking technology that would completely surpass the human mind. Many of the technologists were and are true believers: some were doomers, like Elon Musk, who worried about AI’s ability to destroy humanity; others were boomers, like Sutskever, Brockman, and Altman, foreseeing revolutions in effective psychotherapy, Star Trek level healthcare, and the abundance of everything in a coming “Age of Intelligence.” All thought superintelligence was arriving fairly soon.

With society not so far notably improved, people now feel obliged to ask, “Are we all about to lose our jobs to AI?”  But for the dominant technologists in this field, everyone losing their jobs is the point. They won’t say it this way, and instead they get us to wade around in the cultivated ambiguities of “extending” vs. “replacing” human labour. But global capital won’t line up with tens of billions for OpenAI, Anthropic and similar companies if they are simply building a digital assistant, so the companies must keep “replace labour” in play. Machine superintelligence is to be so superior to humans that the machine must take your job if the benefits of AI are to be realised.

Given this capital-driven promise, it’s not surprising that human labour is simultaneously exploited and disposable—and also hidden. Take this Wikipedia overview of Fei-Fei Li’s crucial contribution:

While at Princeton in 2007, Li led the development of ImageNet, a massive visual database designed to advance object recognition in AI. The project involved labeling over 14 million images using Amazon Mechanical Turk and inspired the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), which catalyzed progress in deep learning and led to dramatic improvements in image classification performance. The database addressed a key bottleneck in computer vision: the lack of large, annotated datasets for training machine learning models. Today, ImageNet is credited as a cornerstone innovation that underpins advancements in autonomous vehicles, facial recognition, and medical imaging.

Who the hell labelled 14 million images? This typical text says Amazon Mechanical Turk, but of course it wasn’t; it was who knows how many thousands of people hired anonymously through Amazon Mechanical Turk. The Great Automata rests on the mass labour of people.

In other words, AI depends on unprecedented armies of unknown workers doing mind-numbing labour for the lowest, most insecure wages the billionaires of Silicon Valley could arrange. That includes the work of its most respected academic researchers. Meanwhile, the corporate contributions to the surrounding societies in Africa and elsewhere are non-existent. There is none of the taxation-based social development that capitalism once proudly claimed to create in its mid-20th century welfare phase. Instead, AI capitalism intends to bypass the material creation of infrastructure and mass employment, replacing them with the absurd claim that AI will make or do "abundance" for us.

The version of AI embodied by Large Language Models was created by exploiting less-skilled, unprotected labour while explicitly aiming to make a superintelligence that will replace high-skill labour. Hence Hao's use of "empire" (like Rachel Adams') and her regular references to colonial practices by these firms. Meanwhile, the playing around with "extending" vs "replacing" human labour adds no clarity at all while inspiring fear and deference. The scepticism of economists plays little role in the discourse.

Public toleration for this Valley vision makes little sense, but Hao provides the empirical detail to confirm that replacement really is the industry's goal.

 

***

 

The goal has attracted more capital investment than any product in recent memory. The turning point was the launch of ChatGPT, whose only real innovation was to link the LLM to a chatbot. Taking OpenAI completely by surprise, the product set a record for rate of adoption, convincing investors and the media that this would change everything about the economy and the future.

But why? It wasn’t the quality of the output, which was (and is) famous for “hallucinations” that would be better termed “fabrications.” This is in the nature of probabilistic sequence completion algorithms, the “connectionist” methodology, which finds patterns that may or may not correspond to reality. The method was always inefficient in this way. Three years later, further study suggests that the need to check your bot’s automated work makes good work slower, even in the core use case of programming itself.

The key factor for the capital boom seems to have been anthropomorphic projection—the Turing trick in which the human user readily imagines that the machine’s response comes from a kind of person. This allowed the world to see LLMs as people who could do the work of people, including all the correspondence and communicating as well as the learning and thinking that happens in any kind of organisation.

The most famous use case, certainly amongst academics, has been students deploying ChatGPT to plagiarise their homework, and the thoughtless tech disruption of education has yet to be reckoned with, even as educators scramble in their usual way to find silver linings in this latest ed-tech assault on human cognitive gain. But each increase in users intensified the sense of competition: everyone else now had to use the digital assistant too, lest they fall behind. Many companies came to require employees to use it, driven by competition anxiety while also perhaps imagining a Great Replacement in which they could stop paying their workforce by firing most of its humans.

Commentators often call for wider social discussion of AI design and policy, and I’ll mention Hao’s version below, but the entire industry is predicated on secrecy and autocracy. A small network of Silicon Valley venture capitalists makes all the important financial decisions, small numbers decide tech directions, and opacity is the power principle inside and out. Neural networks are both unreliable and “inscrutable,” as Hao observes (107). These traits are confirmed by expert discussions of Deep Neural Networks that involve topics like “generalization collapse,” encountered by specialists trying to make large corporate LLMs reasonably reliable yet failing to understand what’s inside. Rather than generalised discussion, the public gets concealment and ambiguity, cutting AI World off from the world at large.

By Spring 2023, Sam Altman had been anointed prophet of the age of superintelligence. He piled up the information asymmetries between High Church AI and the views of the rest of society, having “personally met with at least one hundred US lawmakers” by June, and having appeared at every high-level global conference anyone could think of (302).

On the day in May that he testified before the US Congress, he bumped from the schedule a group of Hollywood concept artists who'd been planning to testify about the threat posed to their sector by the AI industry.

They had crowdsourced funding for their airfare and accommodation and . . . had planned to speak candidly about the devastating effects that generative AI was already having on their profession. Generative AI developers had trained on millions of artists’ work without their consent in order to produce billion-dollar businesses and products that now effectively replaced them. Those jobs that were being erased were solid middle-class jobs—as many as hundreds of thousands of them. (303)

Having lost out to Altman’s testimony on the first day,

That second day, they were once again competing for attention. Altman was attending an exclusive dinner with sixty House members at the Capitol, feasting on an expertly prepared buffet with roast chicken. At the same time, the artists were hosting an interactive cocktail hour and trying to attract as many staffers with the best their budget could buy: wine and Chick-fil-A. (303)

The lawsuits of artists, writers, and newspapers are ongoing, but the knowledge of commercial LLMs’ effects on creative workers is swamped by the AI sector’s power.

The more open the publicity machine, the more closed the science. OpenAI started as a non-profit to share research in the tradition of Mertonian science. It has wound up guarding its research as a trade secret in a competition for capital and markets.

As open science was replaced by closed science, plural avenues of research withered. The most visible experts had financial stakes in the ideas they were pitching as true. The open-source AI movement carried on, but with little input to policymakers in Washington D.C., Westminster, and elsewhere who were looking for magic bullets.

“Independent AI expertise had atrophied,” Hao reports. In September 2023, Deborah Raji, a UC Berkeley AI accountability researcher,

found herself the singular academic, with financial ties neither to the industry nor Doomer community, testifying to Congress next to Altman, Musk, Nadella, Gates, Zuckerberg, Pichai, and Jack Clark, among other tech executives. . . As her fellow witnesses spouted spectacular, unbacked claims about the promises and perils of AI, peppered with well-timed references to beating China that straightened the backs of attending senators, what shocked Raji the most was how much many in the audience appeared to buy into everything. (311)

 

***

 

Will independent research be outgunned forever? Investors have already sunk such a crazy amount of money into this magical tech that they need critique kept on the sidelines. Counter-mechanisms need to be built anyway, to make serious research something more than throwing mud at the bullet train as it flashes by.

In the tradition of critical books, Hao’s solutions are crammed in at the end. They offer two kinds of socialisation of LLMs and related tech—that is, of re-embedding the technology in society in order to bring its collective intelligence to bear.

One involves Indigenous practices that refuse the Silicon Valley-backed split between society and technology. A Maori company in New Zealand/Aotearoa, Te Hiku Media, is using speech-recognition software to transcribe an audio repository of native speakers of the Maori language. The key difference is that the project is rooted in “consent, reciprocity, and the Maori people’s sovereignty—at every stage of development” (411). One of ISRF’s collaborators is part of a similar project now underway in Canada. Such projects can be multiplied.

Indigenous projects based on consent, reciprocity, and people’s sovereignty reject AI as “a land grab all over again” (412). Models can be small and their training data visible and understandable to the user. Such projects imagine an alternative to what Hao has come to hate: not AI as such, but “a vision for the technology that requires the complete capitulation of our privacy, our agency, and our worth, including the value of our labour and art, towards an ultimately imperial centralization project” (413).

The other socialisation path is a massively rebuilt educational infrastructure. This isn’t how Hao puts it, but she argues that knowledge about tech needs to be redistributed, and this means “greater funding to support its production outside the empire” (419). Yes, absolutely!

This means public and state involvement, but decentralised through a range of institutions like Te Hiku Media. The infrastructure will also need access to corporate detail about training data, weights, and other technical specifications that the companies currently—and legally—withhold. This will involve state regulation, which some executives like Jack Clark say they would support. And it will involve "broad-based education"—"The antidote to the mysticism and mirage of AI hype is to teach people about how AI works, . . . about the worldviews and fallibility of the people and companies developing these technologies" (420).

This would have to be a combined cultural-technological education. It would be a new combination of high-quality and mass education, publicly funded, in de-privatised colleges and universities, since most people who will need to get this knowledge can't afford it out of pocket. To reject "two cultures" tech supremacy would amount to a cultural revolution. Actually, so would de-privatisation. But that's what's required by the social pathway to post-AI intelligence: Hao is right about this. Hundreds of billions in annual AI investment is a powerful blockage. It needs redirection through the tax system.

If it seems like the society vs. capital battle is always won by capital, an alternative is capital vs. capital. Financial analysts like the unstoppable Ed Zitron project capital’s self-immolation on the altar of AI. I agree that AI is a capital bonfire, but I expect a bottomless taxpayer bailout, thanks to the marks who run education and government, and to the world’s Departments of Defence.

Ultimately, escape will need all the research offered up by the assembly of researchers: Te Hiku and Zitron certainly, others linked above, Kosmyna et al. on ChatGPT increasing “cognitive debt,” Gerlich on AI and “cognitive offloading,” plus learned societies such as the Institute of Electrical and Electronics Engineers (IEEE) warning its members of AI’s threats to critical thinking.

This assembly has Karen Hao as one of its strongest voices, and the last words of her book are "rise up and build." The building can draw strength from a slogan of Taiwan's Digital Minister (2016-2024) Audrey Tang: "The People are the Superintelligence."

Feature image by Ron Lach, via Pexels.

Bulletin posts represent the views of the author(s) and not those of the ISRF. Unless stated otherwise, all posts are licensed under a CC BY-ND 4.0 license.
