Monday, May 13, 2024

Precarity and Artificial Intelligence: What "HAL" Might Do To "Dave's" Future

Note: the title of this post is a nod to an old, rather slow sci-fi movie with a mind-blowing ending that avoided cheesiness while actually being ahead of its time in many ways... 

This post is a continuation of my series of posts on economic precarity.  As I mentioned in my most recent post in this series, we have been exploring the impact of machine artificial intelligence (AI) on the future of work, whether that work requires advanced education or not.  But perhaps it might be good to start with a more basic preliminary question: what might be the impact of AI on human life in general?  Of course, the answer to that question depends on two factors: the kinds of predictions that are being made concerning the development of AI, and the likelihood of those predictions coming true.  Over the last several years, prognosticators have predicted massive disruptions to human life resulting from the rapid development of AI capabilities.  The tone of these predictions has varied between the optimistic and the dystopian.  Let's limit ourselves to the optimistic for now and ask whether we would want to live in a world in which the most optimistic predictions came true.

One of the more optimistic points of view can be found in a book published in 2021 titled AI 2041: Ten Visions for Our Future, by Kai-Fu Lee and Chen Qiufan.  Dr. Kai-Fu Lee holds a PhD from Carnegie Mellon University and has founded or led a number of tech companies in addition to doing extensive research and writing in the field of artificial intelligence.  Chen Qiufan is a Chinese science fiction writer who formerly worked for the tech companies Google and Baidu before launching into a full-time creative career.  AI 2041 is a multifaceted picture of Kai-Fu Lee's predictions of the evolution of AI capabilities from now to the year 2041, combined with Chen Qiufan's short stories portraying fictional settings in which each of these predictions comes true.  Among the things which Dr. Lee believes we are most likely to encounter are the following:
  • The use of deep learning and big data paired with social media to guide customers of financial products into decisions and lifestyles which have the least risk of adverse outcomes and the greatest chance of net benefit as calculated by an AI objective function.  (See the story "The Golden Elephant.")
  • The use of natural language processing and GPT as tools for creating customized virtual "teachers" for children.  (See the story "Twin Sparrows.")
  • The use of AI tools for the rapid analysis of pathogens and the rapid development of drugs for emerging new diseases, as well as the use of automation in management of epidemics and pandemics.  (See the story "Contactless Love.")
  • The displacement of skilled manual laborers by AI, and the use of AI to create virtual solutions for this displacement which return some sense of purpose to workers who have lost their jobs.  (See the story "The Job Savior.")
  • The ways people cope with the likely displacements and disruptions which will be experienced by societies in which having one's basic needs met becomes decoupled from having to work to earn a living. (See the story "Dreaming of Plenitude.") 
Note that I have listed only five of the ten possible scenarios sketched by Dr. Lee.  However, these five are the most relevant to the topic of today's post.  It is already becoming possible to design AI-powered virtual "life coaches" to guide people in their life decisions.  (In fact, if you really want to let your bloody smartphone tell you how to run your life, you can find apps here, here, and here for starters.)  However, when using these apps, one must remember that at their heart they are simply machines for optimizing objective functions which have been designed by humans and tuned by massive amounts of human-supplied training data.  Thus these "coaches" will be only as smart (or as stupid) as the mass of humanity.  And they can be made to encode and enshrine human prejudices, an outcome which is especially likely whenever money or social power is at stake.  This is illustrated in the story "The Golden Elephant."  (For a harder-edged, more pessimistic view of this sort of AI application, please check out the short story "The Perfect Match" by Ken Liu.)
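To make concrete what I mean by "machines for optimizing objective functions," here is a minimal sketch (every feature name, weight, and data point is hypothetical, invented purely for illustration) of how a prejudice baked into human-supplied weights simply becomes part of the machine's arithmetic:

```python
# A toy "AI life coach": it does nothing but maximize a scalar objective
# function whose weights were "tuned" from historical human decisions.
# If those decisions penalized people from a poor postcode, the coach
# faithfully enshrines that prejudice. All names and numbers are made up.

def learned_objective(option, weights):
    """Score an option as a weighted sum of its features."""
    return sum(weights[name] * option.get(name, 0.0) for name in weights)

def coach_recommend(options, weights):
    """Recommend whichever option maximizes the learned objective."""
    return max(options, key=lambda opt: learned_objective(opt, weights))

# Hypothetical learned weights. Note the large negative weight on a
# feature that merely proxies for social class.
weights = {"expected_income": 1.0, "risk": -2.0, "poor_postcode": -5.0}

options = [
    {"expected_income": 3.0, "risk": 0.5, "poor_postcode": 1},  # better deal, "wrong" postcode
    {"expected_income": 2.0, "risk": 0.5, "poor_postcode": 0},
]

# The coach steers the user away from the objectively better deal,
# because the training data taught it to.
print(coach_recommend(options, weights))
```

The point of the sketch is that there is no malice anywhere in the code; the bias lives entirely in the weights, which is exactly what makes it so easy to enshrine and so hard to see.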

The use of AI tools in medicine for discovery of pathogen structure and rapid drug development is a fine example of the emerging use of machine implementation of multi-objective function optimization.  I truly have nothing but praise for this sort of application, as it has saved countless lives in the last half decade.  For instance, this sort of technology was instrumental in the rapid development of safe and effective COVID vaccines.  However, when we get to the use of AI to replace the kind of skilled labor that has historically depended on the development of human cognitive capabilities, I think we're headed for trouble.  Consider the case of teaching children, for instance, as exemplified by Chen Qiufan's short story "Twin Sparrows."  Teaching in modern First World societies has evolved into the delivery of a standardized curriculum by means of standardized methods to children, and the evaluation of the learning of these children by means of standardized tests.  

Now I know a little about teaching children, as I volunteered for a few years to be an after-school math coach.  And I can tell you that teaching arithmetic to one or a few children requires more than just knowing arithmetic.  It also involves emotional intelligence and the skill of careful observation as well as a certain amount of case-by-case creativity.  We must ask whether these things can be captured by an AI application that has been "optimized" to maximize learning.  How does one measure things like student engagement?  For instance, do we write some polynomial regression function in which one term stands for whether the kid's pupils are dilated, another for whether the kid's eyes are open and looking at the teacher or whether they're closed, another for whether the kid is sitting quietly or throwing a fit, and so on?  And what happens when we move beyond a standard curriculum?  How, for instance, do you make an AI "virtual" art teacher?
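Just to make the absurdity concrete, here is what such an "engagement" regression might look like as code (a linear rather than polynomial version, for brevity; every feature name and coefficient is hypothetical, and the crudeness is the point):

```python
def engagement_score(features, coeffs):
    """A deliberately crude linear 'engagement' model.

    Each feature is a binary proxy for something a human teacher would
    judge holistically: pupils dilated, eyes open and on the teacher,
    sitting quietly rather than throwing a fit.
    """
    return coeffs["intercept"] + sum(
        coeffs[name] * value for name, value in features.items()
    )

# Hypothetical coefficients an optimizer might have fit.
coeffs = {"intercept": 0.0, "pupils_dilated": 1.0,
          "eyes_on_teacher": 2.0, "sitting_quietly": 1.5}

attentive = {"pupils_dilated": 1, "eyes_on_teacher": 1, "sitting_quietly": 1}
distracted = {"pupils_dilated": 0, "eyes_on_teacher": 0, "sitting_quietly": 0}

print(engagement_score(attentive, coeffs))   # scores higher
print(engagement_score(distracted, coeffs))  # scores lower
```

Notice what the model cannot see: a kid who sits quietly while understanding nothing scores "engaged," while a kid arguing passionately about a math problem might score as "throwing a fit."  That is what gets lost when observation is reduced to an objective function.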

I won't attempt to answer these questions here, although I will mention that China has already begun to deploy AI in primary school education, as noted in the 2020 Nesta article titled, "The Future of the Classroom? China’s experience of AI in education" and the 2019 article "Artificial intelligence and education in China," which is unfortunately behind a Taylor and Francis paywall.  It will be interesting to see comprehensive, multi-year studies which document whether the use of AI in education is actually living up to its promise.  

But let's say that the deployment of AI in education really does turn out to be effective.  What happens to the human teachers in such a case?  Kai-Fu Lee says in AI 2041 that teachers will still be needed to be confronters, coaches, and comforters.  In fact, this seems to be a rather stock answer given whenever the potential massive occupational disruptions promised by the widespread deployment of AI are mentioned.  We are told that when jobs that formerly required powers of observation, quick assessment, logical reasoning, computational or motor skills, or memorization are taken over by AI, the newly-displaced workers can be retrained as "compassionate caregivers."  But it might be good to confront the fact that the widespread deployment of AI under an optimistic scenario would certainly mean the de-skilling of large numbers of people.  What possibly unforeseen effects would this de-skilling have on the displaced workers even if they were retrained as "compassionate caregivers?"

Consider, for instance, what might happen to London cab drivers if they were replaced by self-driving taxis.  To become a London taxi driver, a person must memorize a huge amount of local London geography, then pass a special test (the famous "Knowledge of London") administered by Transport for London.  (From what I hear, you can't cheat on the test by using a GPS!)  All that memorization (especially visual memorization of London streets and intersections) induces strong development of key regions of the brains of aspiring London taxi drivers.  If this challenge is taken away from a London cabbie, he or she will lose that brain development.  Consider also the personnel who comprise the flight crews of airliners.  Up until the 1960's, one of the positions on the flight deck of an airliner was the navigator.  But the navigator position was eliminated by advances in automated navigation systems.  So flight crews shrank from four to three people.  But then, further advances in automation eliminated the position of flight engineer.  So now flight crews consist of only two people.  What development was lost in the brains of the navigators when they were replaced by machines?  (What navigational feats are humans capable of when those humans are pushed to their cognitive limits?  Consider for instance how the peoples of Oceania learned to sail between their islands reliably and successfully without needing maps or a compass.)

Automation has eliminated not only aircraft navigators and flight engineers, but also an increasing number of receptionists, telephone operators, fast-food cooks, waiters, and waitresses, and AI now threatens degreed professionals such as medical radiologists.  AI "expert systems" are threatening the jobs of an increasing number of skilled, educated technical professionals, as noted here and here, for instance.  An increasing number of news stories are documenting the ongoing erosion of human labor markets by AI.  It must be asked what will happen to people whose jobs required the development of hard cognitive skills when those skills are replaced by AI.  Preliminary answers to that question are not encouraging.  For instance, the British Medical Journal (The BMJ) published a 2018 article titled, "Intellectual engagement and cognitive ability in later life (the “use it or lose it” conjecture): longitudinal, prospective study," in which the authors concluded that lifelong intellectual engagement helps to prevent cognitive decline later in life.  There is also a 2017 article published in Swiss Medical Weekly whose authors concluded that "low education and cognitive inactivity constitute major risk factors for dementia."  In other words, by ceding to AI the hard cognitive challenges which have traditionally been the hallmark of many kinds of paying work, we may well be at risk of turning ourselves into a society of de-skilled idiots.

Ahh, but there's more.  Let's consider the obvious fact that when AI takes over a job, one or more humans are thrown out of work.  Let's consider the response of various politicians to this fact.  For instance, let's consider the rhetoric spouted by crooks like Donald Trump and other Republican Party politicians (as well as their millions of adoring fans) in the run-up to the 2016 election.  Let's also consider the "scholarly" articles, ethnographic studies and books such as Hillbilly Elegy which sought to "explain" the Trump phenomenon.  One of the key assertions of the Trump crowd in 2016 was that the reason why the white American working class was becoming increasingly poor was the threat posed by immigrants (especially dark-skinned immigrants) taking jobs away from "real" Americans.  Thus America needed to build walls - made both of barbed wire and cement, and of policies and legislation - in order to keep the great unwashed from stealing what "rightfully" belongs to America.  In other words, one of the biggest drivers of the growth of Trumpism was the loss of jobs and income among the white American working class.  But if concern about job losses was really so bloody important to the architects of Trumpism, why is it that they did not utter a single word in protest against the threat to jobs posed by the deployment of AI?  Why is it that NO ONE in the Rethuglican Party nowadays has anything bad (or even cautionary) to say about the use of AI by American businesses?  The silence of the Rethuglicans regarding the disruptions of AI can be explained quite simply.  AI helps business owners increase profits while reducing labor costs.  Thus AI helps the rich get richer.  Also, Trumpism is not and never was about bringing jobs back to the "working class".  It was rather always an expression of collective narcissism.  Thus all the talk about jobs, like all the rest of the rhetoric of the American Right, was and is utter crap.

To be sure, we do need to start having urgent conversations, both locally and on a wider scale, regarding the deployment of machine artificial intelligence in society.  Such conversations need to ask what AI can reasonably be expected to be able to do, as well as asking whether we really need machines to do what AI is promised to do.  If we decide that it is actually in our best interest to continue the massive development and deployment of AI, we need to figure out how to do this in such a way that we maximize the benefits of AI while minimizing our exposure to the potential downsides and negative externalities of AI.  Lastly, we need to start asking whether it might make sense to establish a universal basic income and other social structures which allow the people in our societies to develop their full human potential even in an era of the expanding use of AI.

Wednesday, May 8, 2024

Precarity and Artificial Intelligence: A Four-Wheeled Reason to be Skeptical about AI Optimism

The most recent post in my series on economic precarity hinted that the wildly optimistic claims of what artificial intelligence can do or is about to be able to do may be a bit overblown.  A case in point just surfaced this week: Tesla (and its CEO Elon Musk in particular) is now being investigated by Federal prosecutors over claims made by Musk that Tesla's "self-driving car" AI technology has actually produced cars that drive themselves without any human input.  It seems this claim is not quite true, as "hundreds of crashes and dozens of fatalities" have proven over the last few years.  Musk may soon find himself the target of State-sponsored vengeance - a vengeance carried out by human prosecutors, plaintiffs, judges, and juries instead of robots.  They may optimize their "objective function" to return a guilty verdict.  Could this be the start of a rocky road for Musk ... ?

Tuesday, May 7, 2024

The "Principled" Boneheads

Today I ran into some members and organizers of the protests against Israeli violence in Gaza.  I had known about the war between Israel and Hamas, and had heard of the extremely disproportionate response of Israel, but I had not been personally involved in any protest action.  And on a certain level, today was no different for me in that my contact with the protesters was entirely a chance encounter that came about because we just happened to be in the same place at the same time.  During my encounter I saw that the protesters had printed a bunch of flyers urging voters in the Democratic primary to write "Uncommitted" in the ballot choice for the Presidency of the United States.

Now I can quite understand public outrage among many people in the United States over Israel's actions in this present war.  I can also understand why many people would characterize those actions as attempted genocide.  The brunt of Israel's violence has fallen on poor Palestinian civilians, especially women and children, and Israel has caused many tens of thousands of casualties among them.  Let me say right now that although I am a Christian and I believe Israel is God's earthly people, I most emphatically do not believe that God has created Israel to be a special pet who gets to trash the other peoples of the earth.  There is no nation on earth that has a right to make itself great by oppressing the powerless.  Therefore to the extent that I can, I am committed to such things as researching the country of origin of the things that are offered to me for sale, so that I can boycott those products which are made by nations that oppress.  That includes boycotting Israel.

However, when protesters in this country begin to urge withdrawing support from the Democratic party as a means of pressuring the Biden administration to withdraw military aid from Israel, I am reminded of how Russian operatives and propaganda organs managed to weaken and depress the Democratic vote in 2014 and 2016 by promoting "principled" spokespersons who pointed out to us all the weaknesses of Barack Obama and of Hillary Clinton.  I am also reminded of how the candidacy of Bernie Sanders weakened the candidacy of Hillary Clinton in 2016.  (Although Hillary won the 2016 popular vote by nearly 2.9 million votes, evidently this was not enough of a margin to prevent Donald Trump from capturing the White House.  Go figure!)  I am also reminded of how operatives from the Right used mouthpieces such as Umair Haque in 2020 to try to do the same thing to Biden.

Such things as this make me wonder what it is that some of the present protesters really want.  If they really want to put the powerless into a better position to resist the predations of the powerful, they should consider what will happen to the powerless in the United States in the event of a wave of Republican victories in the November elections.  Among the things we are all likely to receive from such victories are the following:
  • The continued erosion of the rights of women and dark-skinned ethnic minorities in the United States
  • The continued concentration of wealth and corruption among the richest Americans
  • The continued impoverishment of the poor and the continued expansion of economic precarity in the United States
  • The continued expansion of fascism and the continued development of an American police state
  • The continued erosion of American democracy
  • The continued expansion of Russian power (including the possible loss of Ukraine to Russia)
  • Oh, and by the way: the Republicans will also continue to support Israeli militancy, in case no one noticed.
Given these possible outcomes, why are the antiwar protesters trying to tamper with the 2024 American elections?  I can think of only two possible reasons.  First, they may be incredibly stupid in their idealism.  It is far too easy to make an emotional yet senseless response to an evil situation.  Then, when one's emotional response turns out to have evil consequences, the person who made the response can try to comfort himself by claiming that he is paying the price of martyrdom.  To such people I say, please go to school and enroll in a crash course in strategic thinking.  Then think of some other way to put irresistible pressure (including economic pressure) on the powers that be in this country to force them to withdraw military support for Israel without engaging in actions that endanger the 2024 U.S. elections.  Just to be clear: I am all for pressuring the powers that be to withdraw military support for Israel as long as Israel continues to attempt genocide against the Palestinian people.

But maybe the attempt to tamper with the 2024 elections is itself an example of fine strategic thinking - although the strategy in question has an actual aim that is very different from the aim which its creators claim they want to see.  In that case, maybe some of the antiwar protesters are themselves being disingenuous.