Saturday, April 27, 2024

Precarity and Artificial Intelligence: Review of Objective Functions, and A Contrarian Perspective

This post is a continuation of my series of posts on economic precarity.  As I mentioned in recent posts in this series, we have been exploring the subject of the educated precariat - that is, those people in the early 21st century who have obtained either bachelor's or more advanced graduate degrees from a college or university, yet who cannot find stable work in their chosen profession.  However, the most recent post in this series began to explore the impact of machine artificial intelligence (AI) on the future of all work, whether that work requires advanced education or not.  

In my most recent post in this series, I wrote that the greatest potential for the disruption of the future of work through machine AI lies in the development of machine AI tools that can tackle the increasingly complex tasks that are normally associated with human (or at least highly developed animal) intelligence.  Such tasks include machine vision (such as recognizing a human face or an animal), natural language processing, construction of buildings, navigating physically complex unstructured and random environments (such as forests), and optimization of problems with multiple objectives requiring multiple objective functions to model.  Today I'd like to amend that statement by saying that there is another potentially massive disruptive impact of machine AI on the future of work, namely, the ways in which the wide deployment of machine AI in a society might condition and change the humans in that society.  I'll have more to say on that subject in another post.  What I'd like to do right now is to take another look at mathematical objective functions and their place in the implementation of machine AI.  But even before that, let's review the two main types of applications we are talking about when we talk about "machine AI".  Also, let me warn you that today's post will move rather deep into geek territory.  I'll try to have some mercy.

As I mentioned in the most recent post in this series, AI applications can be broken down into two main categories.  The first category consists of the automation of repetitive or mundane tasks or processes in order to ensure that these processes take place at the proper rate and speed and thus yield the appropriate steady-state or final outcome.  This sort of AI has been around for a very long time and first came into existence in entirely mechanical systems such as steam engines with mechanical governors that regulated the speed of the engines.  A mid-20th century electromechanical example is the autopilot, which combined mechanical gyroscopes and accelerometers with simple electronic computers to guide airliners and early guided missiles during long-distance flights.  Other early electronic examples include the programmable logic controllers (PLC's) which were developed in the 1960's to regulate assembly line processes in industrial plants.  In his 2022 paper titled "The two kinds of artificial intelligence, or how not to confuse objects and subjects," Cambridge Professor Alan F. Blackwell characterizes these systems as servomechanisms, which Merriam-Webster defines as "an automatic device for controlling large amounts of power by means of very small amounts of power and automatically correcting the performance of a mechanism" and which Blackwell himself defines as "...any kind of device that 'observes' the world in some way, 'acts' on the world, and can 'decide' to 'behave' in different ways as determined by what it observes."  

The construction (and hence the behavior) of a servomechanism becomes increasingly complex as the number and type of processes it regulates increases, but that does not mean that the servomechanism possesses any real native intelligence.  Consider, for instance, a very simple servomechanism such as the thermostat from a heating system in a house built during the 1950's.  Such a thermostat would regulate the timing and duration of the burning of fuel in a heating furnace, and would most likely consist of a simple switch with a stationary contact and a movable contact attached to a bimetallic strip.  Because the shape of the bimetallic strip changes with the temperature of the air, when the air temperature drops, the movable switch contact eventually touches the stationary contact, closing the switch and turning on the flow of fuel to the furnace.  We can say that the thermostat "decides" when the furnace turns on or off, but that's all this thermostat can "decide" to do.  You certainly wouldn't want to rely on the thermostat to help you decide what movie to watch with your spouse on a weekend!  Servomechanisms are what Blackwell calls "objective AI" which "measures the world, acts according to some mathematical principles, and may indeed be very complex, even unpredictable, but there is no point at which it needs to be considered a subjective intelligent agent."  In other words, all a servomechanism can do is mechanically or electronically regulate a physical process on the basis of process measurements provided to the controller by physical sensors that sense a process variable.  It can't think like humans do.
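To make the point concrete, here is a minimal sketch in Python of the thermostat's entire decision-making repertoire, modeled as a simple on/off control loop.  This is my own toy illustration, not anything from Blackwell's papers or any real thermostat's design; the temperatures, setpoint, and "deadband" (standing in for the hysteresis of the bimetallic strip) are made up.

```python
# A toy sketch of a thermostat as a servomechanism: it observes one variable,
# compares it to a setpoint, and "decides" only whether the furnace runs.

def thermostat_step(room_temp_f: float, setpoint_f: float = 68.0,
                    deadband_f: float = 1.0, furnace_on: bool = False) -> bool:
    """Return the new furnace state given the current room temperature.

    The deadband keeps the furnace from rapidly cycling on and off near the
    setpoint, much as the bimetallic strip's hysteresis does.
    """
    if room_temp_f < setpoint_f - deadband_f:
        return True          # too cold: close the "switch", burn fuel
    if room_temp_f > setpoint_f + deadband_f:
        return False         # warm enough: open the "switch"
    return furnace_on        # inside the deadband: leave the state alone


# The whole repertoire of this "intelligence" is a single yes/no decision:
furnace_on = False
for temp in [70.2, 68.5, 66.4, 67.1, 69.3]:
    furnace_on = thermostat_step(temp, furnace_on=furnace_on)
    print(temp, "->", "furnace ON" if furnace_on else "furnace OFF")
```

However complex we make the measurements and the switching logic, the device never does anything but regulate the process it was built to regulate.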

The second type of AI is designed to make value judgments about the world in order to predict how the world (or some small subset of the world) will evolve.  In the most optimistic cases, this AI uses these value judgments to generate the most appropriate response to the world which is supposedly evolving according to prediction.  But is this really a native intelligence created by humans, yet now embodied in a machine and existing independently of humans?  A possible answer to that question can be found in another paper written by Blackwell and published in 2019, titled, "Objective functions: (In)humanity and inequity in artificial intelligence."

The value judgments and predictions made by the second type of AI are made by means of objective functions.  These objective functions are mathematical abstractions: functions of several independent variables, each of which represents an independently controllable parameter of the problem.  If the purpose of the objective function is to predict the numerical value of an outcome from historical values of independent input variables, then optimizing the function means making sure that for a given set of historical inputs, the objective function yields an output value that is as close as possible to the historical outcome associated with those particular inputs.  The hope is that the optimized function will then accurately predict the output for any set of possible future inputs.  Two levels of objective functions are needed: a first level which makes a guess of the value of an output based on certain values of inputs, and a second, supervisory level which evaluates how close each guess is to the historical output value associated with the corresponding set of inputs.  The output of this second, supervisory objective function is used to adjust the weights (in the case of a polynomial function, the coefficients) of the primary objective function in order to produce better guesses of the output value. 
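To make these two levels concrete, here is a minimal sketch in Python - my own toy example, with made-up historical data, not anything drawn from Blackwell.  A one-variable linear "primary" objective function guesses an outcome, a "supervisory" objective function (here a mean squared error) scores the guesses against the historical outcomes, and a plain gradient-descent loop uses that score to adjust the primary function's weights.

```python
# Toy illustration of the two levels of objective functions described above.
# Primary function: guesses an output from an input, using adjustable weights.
# Supervisory function: measures how far the guesses are from historical outcomes.

historical_inputs   = [1.0, 2.0, 3.0, 4.0]    # an independently controllable parameter
historical_outcomes = [2.9, 5.1, 7.0, 9.2]    # roughly outcome = 2*x + 1, with noise

def primary(x, w, b):
    """Primary objective function: a guess of the outcome for input x."""
    return w * x + b

def supervisory(w, b):
    """Supervisory objective function: mean squared error of the guesses."""
    errors = [(primary(x, w, b) - y) ** 2
              for x, y in zip(historical_inputs, historical_outcomes)]
    return sum(errors) / len(errors)

# Adjust the weights so the supervisory score shrinks (plain gradient descent).
w, b, lr, n = 0.0, 0.0, 0.01, len(historical_inputs)
for _ in range(5000):
    grad_w = sum(2 * (primary(x, w, b) - y) * x
                 for x, y in zip(historical_inputs, historical_outcomes)) / n
    grad_b = sum(2 * (primary(x, w, b) - y)
                 for x, y in zip(historical_inputs, historical_outcomes)) / n
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned weights: w={w:.2f}, b={b:.2f}, error={supervisory(w, b):.4f}")
print("prediction for a new input 5.0:", round(primary(5.0, w, b), 2))
```

Real AI implementations use far more elaborate primary functions (millions or billions of weights) and more sophisticated supervisory functions, but the division of labor is the same: one function guesses, another function grades the guess, and the grade drives the adjustment of the weights.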

Objective functions are mathematical expressions; hence, the second type of AI is a primarily mathematical problem which just happens to be solved by means of digital computers.  This also includes the implementation of multi-objective optimization, which is really just another mathematical problem even though it is implemented by machines.  Thus, the second type of AI is really just another expression of human intelligence.  This is seen not only in the development of the objective functions themselves, but also in the training of the supervisory objective function to recognize how close the output of the primary objective function is to a value that actually reflects reality.  This training takes place by several means, including supervised learning (in which humans label all the training data), and partially-supervised and unsupervised learning (in which the training data exists but is unlabeled or only partly labeled, so a human still has to create the algorithms by which the machine processes it).  
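As a rough illustration of that distinction - again my own toy sketch, with invented data - in supervised learning the humans supply the labels, while in unsupervised learning the humans supply no labels but still write the rule by which the machine organizes the data (here, a crude two-group clustering).

```python
# Supervised: a human has labeled every training example.
labeled_messages = [
    ("win a free prize now", "spam"),
    ("lunch at noon?",       "not spam"),
]

# Unsupervised: no labels, but a human still wrote the grouping rule the
# machine applies -- here, sorting numbers around two moving centers.
readings = [1.1, 0.9, 1.3, 9.8, 10.2, 9.9]

def two_means(data, iterations=10):
    """Crude 1-D clustering with two centers; the 'learning' is just the
    update rule a human chose, applied repeatedly."""
    c1, c2 = min(data), max(data)
    for _ in range(iterations):
        g1 = [x for x in data if abs(x - c1) <= abs(x - c2)]
        g2 = [x for x in data if abs(x - c1) >  abs(x - c2)]
        c1 = sum(g1) / len(g1)
        c2 = sum(g2) / len(g2)
    return c1, c2

print("labels supplied by humans:", labeled_messages)
print("cluster centers found by a human-designed rule:", two_means(readings))
```

Either way, the "intelligence" on display traces back to human choices: either the labels or the processing rules.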

An example that illustrates what we have been considering is the development of large language models (LLM's) such as ChatGPT, which predict text strings based on inputs from a human being.  A very, very, very simple model for these AI implementations is that they consist of objective functions that guess the probability of the next word, phrase, sentence, or paragraph in a string on the basis of what a human has typed into an interface.  These AI implementations must be trained using data input by human beings so that they can calibrate their objective functions to reduce the likelihood of wrong guesses.  Cases like these lead scientists such as Alan Blackwell to conclude that the second type of AI is not really a separate "intelligence" per se, but rather the embodiment and disguising of what is actually human intelligence, reflected back to humans through the intermediary of machines.  The calibration of the objective functions of these AI deployments (or, if you will, the training of this AI) is performed by you every time you type a text message on your smartphone.  For instance, you start by typing "Hello Joshiro" [the phone suggests "Jo", "John", and "Joe", but the person you're texting is actually named "Joshiro", so as you type, your phone keeps making wrong guesses like "Josh", "Joshua", and "Josh's" until you've finished typing "Joshiro"].  You continue with "I'm at the gym right now, but I forgot my judo white belt" [the phone guesses almost everything correctly, even auto-correcting "at" after you misspell it as "st"; however, it chokes when you start typing "judo", so you have to type that word out yourself].  You finish with "Can you grab it out of my closet?"  The next time you text anyone whose first name starts with the letters "Jo", your phone will be "trained" to think you are texting Joshiro about something related to judo - or more accurately, the generative LLM in your phone will have determined that there is a statistically higher likelihood that your message will contain the words "Joshiro" and "judo".  Your phone's LLM is thus "trained" every time you correct one of its wrong guesses when you type a text.
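Here is a grossly simplified sketch in Python of that guess-and-correct cycle.  A real phone keyboard or LLM uses vastly more elaborate statistical machinery; the word-pair counting below is just my own stand-in for it, and the sample message is invented.  The model guesses the most frequent follower of the previous word, and every word the human actually types updates the counts.

```python
from collections import defaultdict

# Counts of which word has followed which word in the user's own typing.
next_word_counts = defaultdict(lambda: defaultdict(int))

def predict(previous_word):
    """Guess the most frequently seen follower of previous_word, if any."""
    followers = next_word_counts[previous_word]
    return max(followers, key=followers.get) if followers else None

def observe(previous_word, actual_word):
    """'Training': every word the human actually types adjusts the counts."""
    next_word_counts[previous_word][actual_word] += 1

# The user types a message; each typed word both tests and trains the model.
message = "hello joshiro i forgot my judo white belt".split()
for prev, actual in zip(message, message[1:]):
    guess = predict(prev)
    observe(prev, actual)
    print(f"after '{prev}': guessed {guess!r}, user typed {actual!r}")

# Next time, the model is biased toward this particular user's habits:
print("prediction after 'hello':", predict("hello"))   # -> 'joshiro'
print("prediction after 'my':", predict("my"))         # -> 'judo'
```

Nothing in those counts came from the machine itself; every "smart" guess is a statistical echo of what humans typed earlier.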

Of course, the longer the predicted phrase or sentence or paragraph the AI is supposed to return, the more training data is required.  The boast of the developers of large language models and other similar AI implementations is that given enough training data and a sufficiently complex statistical objective function, they can develop AI that can accurately return the correct response to any human input.  This unfortunately rests on an unavoidable assumption behind the second type of AI: that the universe, reality, and life itself are all deterministic (and thus that there is no free will in the universe).  Why?  Because the claim assumes that the kind of intelligence which can accurately predict how the universe and everything in it will evolve - and thus generate the most appropriate local response to the moment-by-moment evolution of your particular corner of the universe - can always be modeled by an appropriately elaborate statistical objective function trained on an appropriately huge set of training data.  In other words, given enough data, a statistical objective function can be derived which accurately predicts that your spouse's sneeze at the dinner table tonight will provoke an argument which ends with you sleeping on the couch tomorrow night, and that this will lead to the invention of a new technology later in the week that causes the stock market of a certain country to crash, with the result that a baby will cry on a dark and stormy night five days from now, and that this cry will enable a computer to predict all the words that will be in the novel you sit down to write a week from today...  This is my rather facetious illustration of the "generative AI" of ChatGPT and similar inventions.  The fallacy of determinism is that life, the Universe, and reality itself are full of phenomena and problems that can't easily be modeled by mathematics.  Scientists call these "wicked problems."  Thus the claims made about the second kind of AI may be overblown - especially as long as the implementation of this second kind of AI remains primarily dependent on the construction and optimization of appropriately complex statistical objective functions.

Yet it can't be denied that the second type of AI is causing some profound changes to the world in which we live, and even the first type of AI - the implementation of servomechanisms, or the science of cybernetics - has had a profound effect.  The effects of both types of AI to date - especially on the world of work - will be the subject of the next post in this series.

P.S. Although I am a technical professional with a baccalaureate and a master's degree in a STEM discipline, I am most definitely NOT an AI expert.  Feel free to take what I have written with a grain of salt - YMMV. 

P.P.S. For more commentary on ChatGPT and other LLM's, feel free to check out a 2021 paper titled, "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" by Emily Bender and others.  Two of its co-authors, Timnit Gebru and Margaret Mitchell, were Google AI researchers who were pushed out of Google for saying things that their bosses didn't want to hear regarding LLM's.   Oh, the potential dangers of writing things that give people in power a case of heartburn...
 
