Published on: March 4th, 2025
I love technology—airplanes, smartphones, electric vehicles, vaccines, computers, the internet—when it expands human capabilities. I hate technology as an imposed force which we must laboriously gear up to confront for the sake of safeguarding our already impaired agency.
The “Artificial Intelligence” rollout is as imposed a technology as we have faced in recent years. ISRF is hosting a workshop this month called Social and Cultural Frameworks for ‘Artificial Intelligence’, which is a good deadline for me to organise my thoughts about how we got here.
Figure 1. Source: Google ImageFX
AI is an old concept whose current wave is marked by a new literal-mindedness about how foundation models really think, and whether they really are as intelligent as humans. The claim that machine learning has produced intelligence has been inescapable since the public release of ChatGPT on November 30, 2022.
But the claim to intelligence goes back much further than that. In October 2015, for example, DeepMind's AlphaGo program beat a professional human Go player. In his January 2016 announcement, DeepMind co-founder Demis Hassabis wrote, "Our goal is to beat the best human players, not just mimic them. To do this, AlphaGo learned to discover new strategies for itself, by playing thousands of games between its neural networks, and adjusting the connections using a trial-and-error process known as reinforcement learning." Hassabis describes AlphaGo as an intelligent agent learning to improve its own thinking. (The accompanying Nature paper is more muted.)
This claim to genuine intelligence was bolstered by AlphaGo’s defeat in March 2016 of arguably the world’s best Go player, Lee Se-Dol, four matches to one. Lee later retired, saying, “Even if I become the number one, there is an entity that cannot be defeated.”
An even more important push for AI came from the business world. Two weeks before Hassabis’s AlphaGo announcement, the Founder and Executive Chairman of the World Economic Forum (Davos), Klaus Schwab, proclaimed the Fourth Industrial Revolution on the basis of “cyber-physical” systems that would go beyond automation and achieve true intelligence.
Schwab built on claims going back to the 1990s that the digital economy had created a new (post-)industrial revolution, and on recent work such as Erik Brynjolfsson and Andrew McAfee’s The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (2014). Schwab is aware of the social and cultural problems raised by the tech-driven revolution, but insists that people and their governments must simply adapt. “The Fourth Industrial Revolution, finally, will change not only what we do but also who we are.”
These business prophecies of the mid-2010s unleashed a steady bombardment of predicted transformations: US$15.7 trillion in new AI-based value for the global economy by 2030; “30% annual growth of gross world product (GWP), occurring by 2100”, or “US$2.6 trillion to US$4.4 trillion in economic benefits annually when applied across industries.” In other words, Generative AI would create a new economy the size of Britain’s.
Academic research centres, think tanks and governments agreed that a revolution was happening. In 2017, the House of Lords asked the British people if they were "Ready, Willing, and Able?" The only questions were how big it would be and how quickly you'd climb on board. This new revolution and its untold riches had AI at its heart, and the surest way to miss out was to have doubts, raise problems, ask questions or hesitate.
The estimates of “total economic potential” are speculative fictions. But they appeared close enough to the top of the knowledge pile to be taken seriously. Meanwhile, those of us at the bottom—writers of actual fiction, or teachers of writing—were having a very difficult time getting our expertise heard. Help comes at the end of this story, but from an economist.
My root worry about AI has always been that while it was making machine learning better, it was also making human learning worse. I am not alone in this. Teachers, who are responsible for helping students think, were increasingly furious about what AI was doing to the student brain.
A week before ChatGPT was released, Jane Rosenzweig, director of Harvard College’s Writing Center, made what should be an obvious point: “Writing—in the classroom, in your journal, in a memo at work—is a way of bringing order to our thinking or of breaking apart that order as we challenge our ideas. If a machine is doing the writing, then we are not doing the thinking.”
This message has often been reiterated, but to little effect. A recent international survey by the Digital Education Council found that while two-thirds of surveyed faculty see AI as a positive opportunity for education, only 5% thought their institutions had “comprehensive guidelines” for using AI in teaching. 82% of faculty are worried “that students may become too reliant on AI”. The number one skill “that educators need in the age of AI and digital” is “facilitating students’ critical thinking and learning”. Putting these results together suggests most educators share Rosenzweig’s concern that AI allows students to bypass critical thinking, as in, “If I don’t have Chat, I don’t have a paper”. They are also convinced that universities aren’t facing this fact. One can now read weekly warnings in major dailies about the link between declining capacity for thinking and the use of AI tools.
I learned during my decades of teaching university-level writing that students can mostly find a general topic that interests them. But they struggle with the next question: what do you want to say about your topic? What’s your thesis, your claim, about it? This stage turns out to be very hard, and the simple reason is that it’s where independent thinking has to happen. It’s where the student diverges, however slightly, from what has already been said. If a GPT product is available, the student—or anyone, myself included—will be tempted to use it to skip this thinking stage.
This issue was nicely summarised in the spring after ChatGPT's release by Owen Kichizo Terry, an undergraduate at Columbia University in New York City. The messaging around ChatGPT was mixed. On the one hand, it could write your paper as well as or better than you could, and "in today's world" you had to know how to use it; these themes were aimed at governments and investors. On the other hand, you shouldn't use it to plagiarise and cheat. This was a sop to education, but it avoided the issue of cognitive development.
Terry, the undergraduate, wrote that Columbia students were doing their papers with GPT, and he wasn’t happy about it. He also wasn’t happy that professors didn’t know how students were using it. Students don’t ask ChatGPT to write a whole paper from one topic prompt, he explained:
The more effective, and increasingly popular, strategy is to have the AI walk you through the writing process step by step. You tell the algorithm what your topic is and ask for a central claim, then have it give you an outline to argue this claim. Depending on the topic, you might even be able to have it write each paragraph the outline calls for, one by one, then rewrite them yourself to make them flow better.
Already, a major chunk of the thinking had been done for me. As any former student knows, one of the main challenges of writing an essay is just thinking through the subject matter and coming up with a strong, debatable claim. With one snap of the fingers and almost zero brain activity, I suddenly had one.
In other words, this use of AI doesn't improve thinking but blocks essential stages of it. Writing is a way of discovering your own thoughts and thinking; this use of AI allows the student to bypass the core cognitive processes that writing assignments seek to develop.
It also teaches students that they don't have to think to get their work done. To the extent that this AI bypass is routinised in universities, it damages higher education's ability to increase the level of intelligence in the general population. It dumbs education down—and society with it.
AI does not have to play out like this. Large Language Models (LLMs) and machine learning more broadly are powerful tools that can be used to expand thinking and develop everyone’s power to do it.
In his new book, Moral Codes: Designing Alternatives to AI, Alan F. Blackwell invites us to differentiate between two kinds of AI. The first is cybernetics or control systems that measure and automate responses. The second kind “is concerned not with achieving practical automated tasks in the physical world, but with imitating human behaviour for its own sake”. Later, he summarises this distinction as “practical automation versus fictional imitation”. In this distinction, the Artificial General Intelligence (AGI) often heralded by advocates is a Type 2+.
The AI business has gone all-in on the second kind of AI. It presents ChatGPT and related models not as a powerful way for writers and scientists of all stages to automate selected subroutines in research and related tasks, but as artificial intelligence. One widely-referenced paper located “sparks of artificial general intelligence” in GPT-4. It is apparently now one minute to midnight for full AGI. And AI luminaries alarmed the planet with warnings that the intelligence was so powerful that it would surpass the human mind and perhaps enslave or murder us all.
This raises the question of how we can push the public conversation towards (1) making Blackwell's type of distinction, then (2) demystifying the Type 2 and Type 2+ claims to impending autonomous machine intelligence, leading to the happy result of (3) using machine learning to support and expand general human intelligence throughout society.
The way to develop these elements is not simply for educators, scholars, and critics to work harder at getting the word out.
To repeat, we are at the bottom of the knowledge pile. And the slope of that pile got much steeper with the election of Donald Trump and his executive assistant Elon Musk, who are conducting a war on knowledge. It’s a war specifically on knowledge workers whose expertise offers perspectives that compete with corporate information technology, or what we might call managerial IT.
The new administration is going after judicial expertise, of course, but also agricultural scientists, public health physicians, public works engineers, education researchers and general science funding. The damage is taking place with the backing—explicit or tacit—of many, if not most, tech executives. We all saw the world's leading tech barons—founders and/or CEOs of Amazon, Apple, Microsoft, Meta/Facebook, Tesla—line up behind Donald Trump at his inauguration. They are not objecting to Musk's dismantling of the federal administrative state or to the firing of untold thousands of technical knowledge workers.
At a time when we need cultural and social frameworks to enable AI to do what we want it to do, the dismantling has already increased tech's power to decide its development unilaterally. Ethnicity and gender are being erased back to a state of "bare life" by one wing of the new regime; non-managerial IT expertise is being erased by the other, including through the neutralising of the U.S. Digital Service.
Judging from these top-level actions, the outcome is not going to be the cultural embedding of AI but AI that transcends and subjugates culture. The measures desired by leading AI critics such as Timnit Gebru, Emily Bender, Safiya Umoja Noble, Lauren Goodlad, Ruha Benjamin, Wendy Hui Kyong Chun, and Meredith Whittaker among others, are being deleted in advance.
Or such is the endeavour. I’m fairly sure it won’t succeed. Educators, scholars, and critics are going to continue their offensive. And some other disciplines are also helping. One of them, at least occasionally, is economics. This is important since arguably the strongest pillar of “fictional imitation” AI is the claim of revolutionary profitability and accumulation.
One especially helpful paper appeared in 2024, written by Daron Acemoglu, the prominent MIT economist, and entitled “The Simple Macroeconomics of AI”. He generates estimates that are much more disciplined and also lower than those coming from McKinsey and other consultancies and research centres. The paper demonstrates that announcements of the AI wealth revolution were premature.
Technology doesn't create economic value by existing but by being used in ways that increase people's productivity (without just making them work harder and longer). Acemoglu and collaborators had already specified several channels. I cite the whole summary typology because the channels illustrate Blackwell's two-AIs distinction as a simple set of workplace pathways:
Automation (or more precisely extensive-margin automation) involves AI models taking over and reducing costs in certain tasks. In the case of generative AI, various mid-level clerical functions, text summary, data classification, advanced pattern recognition and computer vision tasks are among those that can be profitably automated.
Task complementarity can increase productivity in tasks that are not fully automated and may even raise the marginal product of labour. For example, workers performing certain tasks may have better information or access to other complementary inputs. Alternately, AI may automate some subtasks, while at the same time enabling workers to specialise and raise their productivity in other aspects of their job.
Deepening of automation can take place, increasing the productivity of capital in tasks that have already been automated. For example, an already-automated IT security task may be performed more successfully by generative AI.
New tasks may be created thanks to AI and these tasks may impact the productivity of the whole production process. (p. 5)
The first and third of these pathways replace human labour (and intelligence) with automation. The second and fourth have humans using AI as productivity tools. This is Acemoglu's first break with a common kind of AI determinism in which, for example, artists are replaced by DALL-E-style image generators. AI, he assumes, will often work under humans rather than replace them.
Acemoglu's second crucial assumption is that we must identify the specific fraction of the workforce that will actually be affected by AI. He calculates "the GDP share of tasks impacted by AI within the next 10 years" to be "4.6% of all tasks (or occupations)" (p. 26). This isn't the same as the percentage of workers affected, but it is a small number.
His third crucial assumption intersects with the issue of AI “intelligence.” Acemoglu distinguishes between “easy tasks” and “hard tasks”.
Easy-to-learn tasks, which are relatively straightforward for (generative) AI to learn and implement, are defined by two characteristics:
there is a reliable, observable outcome metric and
there is a simple (low-dimensional) mapping between action and the outcome metric.
How to boil an egg (or providing instructions for boiling an egg), the verification of the identity of somebody locked out of a system and the composition of some well-known programming subroutines are easy tasks. The desired outcome—an egg that is boiled to the desired level, allowing only authorised people to access the system, or whether the subroutine works or not—is clear. In none of these cases do the successful outcomes depend on the complex interaction of many dimensions of actions. (p. 17)
You can imagine what “hard tasks” are. But I again block-quote Acemoglu because he cuts through a lot of nonsense about GenAI intuition and the like.
‘Hard tasks’ typically do not have a simple mapping between action and desired outcome. In hard problems, what leads to the desired outcome in a given problem is typically not known and strongly depends on contextual factors, or the number of relevant contexts may be vast, or new problem-solving may be required. Additionally, there is typically not enough information for the AI system to learn or it is unclear exactly what needs to be learned. Diagnosing the cause of a persistent cough and proposing a course of treatment is a hard problem. There are many complex interactions between past events that may be the cause of the lingering cough and many rare conditions that should be considered. Moreover, there is no large, well-curated dataset of successful diagnoses and cures. In hard tasks, AI models can still learn from human decision-makers, but because there is no clear metric of success, identifying and learning from workers with the highest level of expertise will not be straightforward either. As a result, there will be a tendency for the performance of AI models to be similar to the average performance of human decision-makers, limiting the potential for large productivity improvements and cost savings. (p. 18)
Hard tasks are hard to solve even with billions of parameters. They generally demand human input. If we accept more rigorous definitions of intelligence than those that operate in the AI domain, most if not all "high-skill" jobs will require human input for the foreseeable future.
Acemoglu estimates that AI will add at best a quarter of the value to hard tasks that it might add to easy tasks. He offers a range of provisional quantifications of AI's benefits to the economy. These include:
Average overall cost savings in exposed occupations are less than 20% (p. 29). Savings in "hard tasks" are smaller still.
AI is likely to add about 0.53% to total factor productivity over the next ten years (p. 33).
The positive impact of AI on growth in gross domestic product is likely to be about 0.93% over ten years (p. 34).
Effects on wage rates and wage inequality are ambiguous.
AI’s likely economic impact is modest at best.
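A rough back-of-envelope reading shows why the headline numbers come out so small. This is my own simplification, built only from the figures quoted above rather than a reproduction of Acemoglu's full model: the economy-wide gain is roughly the GDP share of exposed tasks multiplied by the average cost saving within them.

\[
\underbrace{4.6\%}_{\text{GDP share of exposed tasks}} \times \underbrace{10\text{--}20\%}_{\text{average cost saving in those tasks}} \approx 0.5\text{--}0.9\% \ \text{over ten years.}
\]

On this reading, the 0.53% and 0.93% estimates are not pessimistic outliers; they are roughly what the exposure and savings figures imply once multiplied together.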
I draw several conclusions here.
First, there is no AI economic revolution that justifies social deregulation so that AI can “be all that it can be” as defined unilaterally by its corporations.
Second, the high-value economic benefits of AI require fully empowered human use of AI as a tool. Benefits will depend on society devoting much more effort than it now does to the expansion of human capabilities, rather than seeing technology as rescuing society from the self-inflicted enshittification of its human systems. The rigorous teaching of writing and thinking is more essential than ever.
Third, the same goes for the social and cultural effects of AI. Acemoglu’s type of research should catalyse vast numbers of intensive and informed public deliberations about which kinds of AI people do and don’t want, which kinds to encourage, which kinds to prohibit completely, and how to socialise what we keep.
The general framework sometimes called the “social construction of technology” is a mainstream result of decades of science and technology studies and yet each big tech wave erases it from the public memory banks. But it’s still here, and it should be used to adapt AI to social requirements as much as vaccines or nuclear fission or fossil fuels or anything else.
I prompted Google’s AI service ImageFX to picture the situation. My prompt was “annunciation tech angel gives AI to the people”.
Figure 2. Source: Google ImageFX
In this image, the technology is so great as to inspire prayer in a permanently humble people—who are monitored and patrolled by drones.
What this AI “knows” is a techno-feudal future that a range of knowledge workers can align to prevent—and build a great alternative to.
Feature image by Taylor Vick via Unsplash.
Bulletin posts represent the views of the author(s) and not those of the ISRF.
Unless stated otherwise, all posts are licensed under a CC BY-ND 4.0 license.