Note: the title of this post is a nod to an old, rather slow sci-fi movie with a mind-blowing ending that avoided cheesiness while actually being ahead of its time in many ways...
This post is a continuation of my series of posts on economic precarity. As I mentioned in my most recent post in this series, we have been exploring the impact of machine artificial intelligence (AI) on the future of work, whether or not that work requires advanced education. But perhaps it might be good to start with a more basic preliminary question: what might be the impact of AI on human life in general? Of course, the answer to that question depends on two factors: the kinds of predictions being made concerning the development of AI, and the likelihood of those predictions coming true. Over the last several years, prognosticators have predicted massive disruptions to human life resulting from the rapid development of AI capabilities. The tone of these predictions has varied between the optimistic and the dystopian. Let's limit ourselves to the optimistic for now and ask whether we would want to live in a world in which the most optimistic predictions came true.
One of the more optimistic points of view can be found in a book published in 2021 titled, AI 2041: Ten Visions for Our Future, by Kai-Fu Lee and Chen Qiufan. Dr. Kai-Fu Lee holds a PhD from Carnegie Mellon University and has founded or led a number of tech companies, in addition to doing extensive research and writing in the field of artificial intelligence. Chen Qiufan is a Chinese science fiction writer who worked for the tech companies Google and Baidu before launching into a full-time creative career. AI 2041 is a multifaceted picture of Kai-Fu Lee's predictions of the evolution of AI capabilities between now and the year 2041, combined with Chen Qiufan's short stories portraying fictional settings in which each of these predictions comes true. Among the things which Dr. Lee believes we are most likely to encounter are the following:
- The use of deep learning and big data paired with social media to guide customers of financial products into decisions and lifestyles which have the least risk of adverse outcomes and the greatest chance of net benefit as calculated by an AI objective function. (See the story "The Golden Elephant.")
- The use of natural language processing and GPT as tools for creating customized virtual "teachers" for children. (See the story "Twin Sparrows.")
- The use of AI tools for the rapid analysis of pathogens and the rapid development of drugs for emerging new diseases, as well as the use of automation in management of epidemics and pandemics. (See the story "Contactless Love.")
- The displacement of skilled manual laborers by AI, and the use of AI to create virtual solutions for this displacement which return some sense of purpose to workers who have lost their jobs. (See the story "The Job Savior.")
- The ways people cope with the likely displacements and disruptions which will be experienced by societies in which having one's basic needs met becomes decoupled from having to work to earn a living. (See the story "Dreaming of Plenitude.")
Note that I have listed only five of the ten possible scenarios sketched by Dr. Lee. However, these five are the most relevant to the topic of today's post. It is already becoming possible to design AI-powered virtual "life coaches" to guide people in their life decisions. (In fact, if you really want to let your bloody smartphone tell you how to run your life, you can find apps here, here, and here for starters.) However, when using these apps, one must remember that at their heart they are simply machines for optimizing objective functions which have been designed by humans and tuned by massive amounts of human-supplied training data. Thus these "coaches" will be only as smart (or as stupid) as the mass of humanity. And they can be made to encode and enshrine human prejudices, an outcome which is especially likely when money or social power is at stake. This is illustrated in the story "The Golden Elephant." (For a harder-edged, more pessimistic view of this sort of AI application, please check out the short story "The Perfect Match" by Ken Liu.)
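To make that point concrete, here is a deliberately tiny sketch in Python of what such a "coach" boils down to: a routine that ranks candidate decisions with an objective function whose weights were fit to someone else's historical data. Every feature name, weight, and candidate below is invented purely for illustration.

```python
# Hypothetical sketch: an AI "life coach" reduced to its essentials.
# All feature names, weights, and candidate decisions are invented.

candidates = {
    "take_the_safe_corporate_job": {"expected_income": 0.8, "default_risk": 0.1, "resembles_past_winners": 0.9},
    "start_a_small_business":      {"expected_income": 0.5, "default_risk": 0.6, "resembles_past_winners": 0.3},
    "go_back_to_school":           {"expected_income": 0.4, "default_risk": 0.4, "resembles_past_winners": 0.5},
}

# Weights "learned" from historical outcome data. If that data rewarded people
# who already looked like past winners, the prejudice is now part of the objective.
weights = {"expected_income": 1.0, "default_risk": -1.5, "resembles_past_winners": 2.0}

def objective(features):
    """Score a candidate decision as a weighted sum of its features."""
    return sum(weights[name] * value for name, value in features.items())

# The "coach" recommends whatever maximizes the objective function.
best = max(candidates, key=lambda name: objective(candidates[name]))
print(best)
```

Swap in a deep neural network and millions of rows of training data and the scale changes enormously, but the core logic does not: the recommendation is whatever maximizes a score that some group of humans decided how to compute.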
The use of AI tools in medicine for discovery of pathogen structure and rapid drug development is a fine example of the emerging use of machine-implemented multi-objective optimization (a toy sketch of what that means appears just after this paragraph). I truly have nothing but praise for this sort of application, as it has saved countless lives in the last half decade. For instance, this sort of technology was instrumental in the rapid development of safe and effective COVID vaccines. However, when we get to the use of AI to replace the kind of skilled labor that has historically depended on the development of human cognitive capabilities, I think we're headed for trouble. Consider the case of teaching children, for instance, as exemplified by Chen Qiufan's short story "Twin Sparrows." Teaching in modern First World societies has evolved into the delivery of a standardized curriculum to children by means of standardized methods, and the evaluation of their learning by means of standardized tests.
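A quick aside before I go on about teaching: since "multi-objective optimization" can sound like jargon, here is the toy sketch promised above. It is nothing like a real drug-discovery pipeline, and the compounds and numbers are invented, but it shows the core idea: the machine weighs several competing objectives at once (say, predicted efficacy, predicted toxicity, and synthesis cost) and keeps only the candidates that no rival beats on every count.

```python
# Toy sketch of multi-objective candidate screening (Pareto filtering).
# The compounds and their scores are invented for illustration only.

candidates = {
    "compound_A": {"efficacy": 0.90, "toxicity": 0.40, "cost": 0.70},
    "compound_B": {"efficacy": 0.85, "toxicity": 0.20, "cost": 0.50},
    "compound_C": {"efficacy": 0.60, "toxicity": 0.10, "cost": 0.30},
    "compound_D": {"efficacy": 0.55, "toxicity": 0.50, "cost": 0.60},
}

def dominates(a, b):
    """True if a is at least as good as b on every objective (higher efficacy,
    lower toxicity, lower cost) and strictly better on at least one."""
    at_least_as_good = (a["efficacy"] >= b["efficacy"]
                        and a["toxicity"] <= b["toxicity"]
                        and a["cost"] <= b["cost"])
    strictly_better = (a["efficacy"] > b["efficacy"]
                       or a["toxicity"] < b["toxicity"]
                       or a["cost"] < b["cost"])
    return at_least_as_good and strictly_better

# Keep only Pareto-optimal candidates: those no other compound dominates.
pareto_front = [name for name, scores in candidates.items()
                if not any(dominates(other_scores, scores)
                           for other_name, other_scores in candidates.items()
                           if other_name != name)]
print(pareto_front)  # compound_D drops out; A, B, and C survive
```

Real systems search vastly larger chemical spaces and use learned models in place of these hand-typed numbers, but the "keep whatever nothing else dominates" logic is the same.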
Now I know a little about teaching children, as I volunteered for a few years as an after-school math coach. And I can tell you that teaching arithmetic to one or a few children requires more than just knowing arithmetic. It also involves emotional intelligence, the skill of careful observation, and a certain amount of case-by-case creativity. We must ask whether these things can be captured by an AI application that has been "optimized" to maximize learning. How does one measure something like student engagement? Do we, for instance, write some polynomial regression function in which one term stands for whether the kid's pupils are dilated, another for whether the kid's eyes are open and looking at the teacher or closed, another for whether the kid is sitting quietly or throwing a fit, and so on? And what happens when we move beyond a standard curriculum? How, for instance, do you make an AI "virtual" art teacher?
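To see how crude such a measure would be, here is a toy version of that "engagement" score in Python, a plain linear model rather than a true polynomial, with every feature and weight invented purely for illustration:

```python
# Hypothetical "engagement" score of the kind imagined above.
# The features, weights, and inputs are all made up for illustration.

def engagement_score(pupils_dilated, eyes_on_teacher, sitting_quietly):
    """Toy linear model: each input is a value in [0, 1]; the output is a
    made-up 'engagement' number that only reflects what the inputs encode."""
    w_pupils, w_gaze, w_posture, bias = 0.3, 0.5, 0.2, 0.0  # invented weights
    return bias + (w_pupils * pupils_dilated
                   + w_gaze * eyes_on_teacher
                   + w_posture * sitting_quietly)

# A fidgety but fascinated kid versus a quietly daydreaming one:
print(engagement_score(pupils_dilated=0.9, eyes_on_teacher=0.4, sitting_quietly=0.1))  # about 0.49
print(engagement_score(pupils_dilated=0.2, eyes_on_teacher=0.9, sitting_quietly=0.9))  # about 0.69
```

Whatever weights you choose, the function only "sees" what its inputs encode; the fidgety but fascinated kid can easily score lower than the quietly daydreaming one, which is exactly the sort of failure a proxy measure like this cannot notice.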
I won't attempt to answer these questions here, although I will mention that China has already begun to deploy AI in primary school education, as noted in the 2020 Nesta article titled, "The Future of the Classroom? China’s experience of AI in education" and the 2019 article "Artificial intelligence and education in China," which is unfortunately behind a Taylor and Francis paywall. It will be interesting to see comprehensive, multi-year studies which document whether the use of AI in education is actually living up to its promise.
But let's say that the deployment of AI in education really does turn out to be effective. What happens to the human teachers in such a case? Kai-Fu Lee says in AI 2041 that teachers will still be needed to be confronters, coaches, and comforters. In fact, this seems to be a rather stock answer given whenever the potential massive occupational disruptions promised by the widespread deployment of AI are mentioned. We are told that when jobs that formerly required powers of observation, quick assessment, logical reasoning, computational or motor skills, or memorization are taken over by AI, the newly-displaced workers can be retrained as "compassionate caregivers." But it might be good to confront the fact that the widespread deployment of AI under an optimistic scenario would certainly mean the de-skilling of large numbers of people. What possibly unforeseen effects would this de-skilling have on the displaced workers even if they were retrained as "compassionate caregivers?"
Consider, for instance, what might happen to London cab drivers if they were replaced by self-driving taxis. To become a London taxi driver, a person must memorize an enormous amount of London street geography, then pass a famously difficult test known as "the Knowledge," administered by Transport for London. (From what I hear, you can't cheat on the test by using a GPS!) All that memorization (especially visual memorization of London streets and intersections) induces strong development of key regions of the brain, notably the hippocampus, in aspiring London taxi drivers. If this challenge is taken away from a London cabbie, he or she will lose that brain development. Consider also the personnel who make up the flight crews of airliners. Up until the 1960s, one of the positions on the flight deck of an airliner was the navigator. But the navigator position was eliminated by improved onboard navigation systems, so flight crews shrank from four to three people. Then further advances in automation eliminated the position of flight engineer, so now flight crews consist of only two people. What development was lost in the brains of the navigators when they were replaced by machines? (What navigational feats are humans capable of when pushed to their cognitive limits? Consider, for instance, how the peoples of Oceania learned to sail between their islands reliably and successfully without maps or a compass.)
Automation has already eliminated not only aircraft navigators and flight engineers but also telephone operators and a growing number of receptionists, fast-food cooks, waiters, and waitresses, and AI is now encroaching on the work of degreed professionals such as medical radiologists. AI "expert systems" are threatening the jobs of an increasing number of skilled, educated technical professionals, as noted here and here, for instance. An increasing number of news stories are documenting the ongoing erosion of human labor markets by AI. It must be asked what will happen to people whose jobs required the development of hard cognitive skills when those skills are replaced by AI. Preliminary answers to that question are not encouraging. For instance, the British Medical Journal (The BMJ) published a 2018 article titled, "Intellectual engagement and cognitive ability in later life (the “use it or lose it” conjecture): longitudinal, prospective study," in which the authors concluded that lifelong intellectual engagement helps to prevent cognitive decline later in life. There is also a 2017 article published in the Swiss Medical Weekly whose authors concluded that "low education and cognitive inactivity constitute major risk factors for dementia." In other words, by ceding to AI the hard cognitive challenges which have traditionally been the hallmark of many kinds of paying work, we may well be at risk of turning ourselves into a society of de-skilled idiots.
Ahh, but there's more. Let's consider the obvious fact that when AI takes over a job, one or more humans are thrown out of work. Let's consider the response of various politicians to this fact. For instance, let's consider the rhetoric spouted by crooks like Donald Trump and other Republican Party politicians (as well as their millions of adoring fans) in the run-up to the 2016 election. Let's also consider the "scholarly" articles, ethnographic studies, and books such as Hillbilly Elegy which sought to "explain" the Trump phenomenon. One of the key assertions of the Trump crowd in 2016 was that the reason the white American working class was becoming increasingly poor was the threat posed by immigrants (especially dark-skinned immigrants) taking jobs away from "real" Americans. Thus America needed to build walls - made both of barbed wire and cement, and of policies and legislation - in order to keep the great unwashed from stealing what "rightfully" belongs to America. In other words, one of the biggest drivers of the growth of Trumpism was the loss of jobs and income among the white American working class. But if concern about job losses was really so bloody important to the architects of Trumpism, why is it that they did not utter a single word in protest against the threat to jobs posed by the deployment of AI? Why is it that NO ONE in the Rethuglican Party nowadays has anything bad (or even cautionary) to say about the use of AI by American businesses? The silence of the Rethuglicans regarding the disruptions of AI can be explained quite simply. AI helps business owners increase profits while reducing labor costs. Thus AI helps the rich get richer. Also, Trumpism is not and never was about bringing jobs back to the "working class". It was rather always an expression of collective narcissism. Thus all the talk about jobs, like all the rest of the rhetoric of the American Right, was and is utter crap.
To be sure, we do need to start having urgent conversations, both locally and on a wider scale, regarding the deployment of machine artificial intelligence in society. Such conversations need to ask what AI can reasonably be expected to do, as well as whether we really need machines to do what AI is promised to do. If we decide that it is actually in our best interest to continue the massive development and deployment of AI, we need to figure out how to do so in a way that maximizes the benefits of AI while minimizing our exposure to its potential downsides and negative externalities. Lastly, we need to start asking whether it might make sense to establish a universal basic income and other social structures which allow the people in our societies to develop their full human potential even in an era of the expanding use of AI.