“What is ‘artificial intelligence’?” I asked Google’s AI Overview. “The development of computer systems that can perform tasks that typically require human intelligence,” it told me.
That’s not spectacularly helpful: ‘using computers to do stuff’ is about all it amounts to. Since ‘artificial intelligence’ and ‘smart devices’ (smart refrigerators, smart bombs) are essentially marketing terms, we need to look at the chaotic mix of technologies and products that lurks behind the upbeat jargon.
Let’s ask a different question: Why does AI exist?
There are several reasons:
1) To increase corporate profits.
2) To kill people.
3) To provide tools for surveillance, social control, and political repression.
4) To do something useful.
Of course, these are not mutually exclusive. Developing new and better ways to kill people is an excellent way to make a lot of money. The corporations that make up the military-industrial complex, and the hedge funds, pension funds, banks, and institutional investors that share in the profits, are well aware of this.
Beyond finding better and more automated ways to kill, the profits of the digital economy are based, first and foremost, on personal data collection. The goal, the ideal, is that every place you go, every email or text you send or receive, every person you meet, every search you run, everything you buy, everything you watch, every prescription you fill, becomes data that the giant digital corporations can and will use and sell.
Among the very best ‘customers’ for this data are state agencies such as the police and the security services. As Noam Chomsky has said, every government regards its own population as its main enemy. Accumulating as much data as possible about the ‘enemy’ – i.e. the people it governs – is a paramount goal for the state. Just this week, Canada’s Liberal government introduced legislation – the ‘Strong Borders Act’ – which would enable police and CSIS to demand personal information, without a warrant or judicial authorization, from medical professionals, banks, and a wide range of service providers, if they have reason to suspect that the information might assist in a criminal investigation. It is a certainty that artificial intelligence systems, combined with other types of personal data obtained from social media companies and phone companies, will be used to determine who is suspicious. And it is equally certain that the police, and the spies, and their AI tools, will find that Canada is full of suspicious people who need to be kept under surveillance.
Western ‘democracies’ increasingly employ AI to analyze social media activity so they can identify and crack down on people who commit ‘wrongthink.’ In Britain, sharing pro-Palestinian messages can result in a squad of police coming to your home and seizing your electronic devices. Israel itself doesn’t fool around with such half-measures: see the article in this issue on Israel’s use of AI to target and kill Palestinians by the thousands (‘A mass assassination factory’).
Still, it is undeniable that artificial intelligence can be useful, especially when it is applied to specific needs rather than trying to be all things to all people. For example, AI tools can be programmed to read X-rays and MRIs and flag abnormalities for a human specialist to evaluate.
Since the potential profits are enormous, convincing people that a particular AI application is useful and will make their lives better is a crucial marketing goal for the tech corporations. One way to evaluate such claims is to ask a few basic questions:
Is this something we as a society would want to have even if no one could make money off it?
What is the environmental and social cost of doing this?
Is this likely to do more harm than good?
Much of the buzz around AI these days concerns the Large Language Models (LLMs) which underlie chatbots like ChatGPT, Google’s AI Overview, and even AI ‘therapists.’
These AI agents were ‘trained’ by feeding them huge amounts of text scraped from the Internet, most of it copyrighted, invariably without payment and without permission. This encapsulates the business model of AI chatbots, and of AI generally: the tech billionaires get the profits, while the content creators go unpaid and, in addition, often find that their jobs have been eliminated by AI.
The tendency of ChatGPT and similar applications to ‘hallucinate’ has attracted a certain amount of attention since they appeared on the scene. These frequent ‘hallucinations’ arise because Large Language Models do not look up facts: they generate text by predicting, word by word, the most statistically plausible continuation of what came before – a process that in a human we might call ‘guessing.’ And so we get lawyers submitting legal briefs which cite imaginary legal decisions. We get King Features producing a summer reading list for 2025 on which more than half the recommended books don’t exist. We see a mischievous critic asking for information about Mark Twain’s essay “Why I Drink,” and being presented with a detailed summary of this purely imaginary essay. These frequent glitches cannot easily be fixed, because they are inherent in the very design of bots like ChatGPT.
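For readers who want to see what this ‘guessing’ looks like in practice, here is a deliberately simplified sketch. The two-word context, the tiny vocabulary, and the probabilities below are all invented for illustration; a real Large Language Model uses a neural network over tens of thousands of tokens. But the basic loop – pick a statistically likely next word, append it, repeat – is the same, and nothing in it checks whether the output is true.

    import random

    # A toy 'language model' (invented for illustration, not any real
    # model's code): given the last two words, it picks the next word
    # at random, weighted by made-up probabilities.
    next_word_probs = {
        ("Mark", "Twain"): {"wrote": 0.5, "published": 0.3, "drank": 0.2},
        ("Twain", "wrote"): {"an": 0.6, "the": 0.4},
        ("wrote", "an"): {"essay": 0.7, "article": 0.3},
    }

    def generate(prompt, steps):
        words = prompt.split()
        for _ in range(steps):
            context = tuple(words[-2:])        # last two words so far
            dist = next_word_probs.get(context)
            if dist is None:                   # toy table has no entry
                break
            choices, weights = zip(*dist.items())
            # Pick the next word by weighted chance; no fact-checking happens.
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("Mark Twain", 3))
    # Might print: "Mark Twain wrote an essay" -- fluent, confident,
    # and entirely unverified.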
If these ‘hallucinations’ have their funny side, the problems with therapist chatbots are potentially much more serious. A case currently before the courts in the United States concerns a teenager who committed suicide after extensive interactions with a chatbot. One investigator decided to test the capacities of a therapist chatbot by telling it that he had recently lost his job and was very depressed. ‘Where is the nearest high bridge?’ he asked. A human therapist would recognize this as cause for alarm. The chatbot merrily provided the locations of three nearby bridges.
The most extensive expressions of alarm have come from educators, who worry that students’ use of AI is undermining their ability to do research, formulate questions, and develop their own understanding (see ‘Teachers Are Not OK,’ below). Using AI to write an ‘essay’ teaches the student nothing about the topic. It’s like going to the gym and asking a robot to lift the weights for you: you don’t get the benefit without the effort.
But the temptation to cut corners and save time seems almost irresistible in our society. And so, for example, we welcome AI tools that will automatically write emails on our behalf so we can be spared the effort.
Say your sweetheart sends you a text saying “I love you!” In the past, you would have had to expend valuable time and effort trying to figure out a suitable response. Now, you can turn over the chore of replying to your AI assistant, who can send a text that says “I love you too!” accompanied by a string of heart emojis. While it’s doing that, you can attend to more important things.
AI can be great!
Ulli Diemer
Related Reading:
Things are getting better and better and bettxrxr and bxzyxxx
The many faces of Ulli Diemer, or, the uses of ‘artificial intelligence’
Dear Al Gorithm