Saturday, November 11, 2023

Precarity and Artificial Intelligence: The Foundations of Modern AI

This post is a continuation of my series of posts on economic precarity.  As I mentioned in recent posts in this series, we have been exploring the subject of the educated precariat - that is, those people in the early 21st century who have obtained either bachelor's degrees or more advanced graduate degrees from a college or university, yet who cannot find stable work in their chosen profession.  Today's post, however, will begin to explore a particular emerging impact on employment for everyone, whether formally educated or not: the impact of machine artificial intelligence on the future of work.

As I mentioned in a previous post:
Labor casualization has been part of a larger tactical aim to reduce labor costs by reducing the number of laborers...This reduction of the total number of laborers can be achieved by replacing employees with machines.  That replacement has been occurring from the beginning of the Industrial Revolution onward, but in the last two or three decades it has accelerated greatly due to advances in artificial intelligence (AI).  A long-standing motive behind the recent massive investments in research in artificial intelligence is the desire by many of the world's richest people to eliminate the costs of relying on humans by replacing human laborers with automation.

So it is natural to ask what sort of world is emerging as the result of the use of increasingly sophisticated AI in our present economy.  Here we need to be careful, due to the number of shrill voices shouting either wildly positive or frighteningly negative predictions about the likely impacts of AI.  I think we need to ask the following questions:
  • First, what exactly is artificial machine intelligence?  What is the theoretical basis of AI?  How does it work? ...

Today we'll start trying to answer the questions stated above.   And at the outset, I must state clearly that I am not an AI expert, although my technical education has exposed me in a rudimentary way to many of the concepts that will be mentioned in our discussion of AI.

First, let's paint a picture.  One of the original motives for trying to invent intelligent machines was the desire for machines that would reliably do the kind of mundane tasks that humans find distasteful or unpleasantly difficult.  This desire actually has a very long history, but it was popularized in the science fiction of the mid-20th century.  Think of a kid from the 1950's or 1960's who wished he had a robot that could vacuum the living room carpet or take out the garbage or do homework or shovel snow out of the driveway, so that the kid could play without being bothered by parents demanding that he do those tasks himself.  What kind of "brain" would the robot need in order to know what tasks had to be performed, and how would that brain know when the tasks had been performed to acceptable standards?

The question of the kind of "brain" required was solved by the invention of the first programmable all-electronic digital computer during World War Two.  This computer was itself an evolution of principles implemented in earlier mechanical and electromechanical computers.  Once engineers developed digital computers with onboard memory storage, these computers became capable of automating tasks that had formerly been handled by cruder mechanical and electromechanical means such as relays.  Thus the 1950's saw the emergence of digital control systems for automation of chemical processes at refineries; the 1950's and 1960's saw the emergence of computer-assisted or computer-based navigation for ships, aircraft, missiles, and spacecraft; and the late 1960's saw the emergence of programmable logic controllers (PLC's) for automation of factory processes at industrial assembly plants.

The digital electronic automation systems that have been developed from the 1950's onward have thus formed a key component of the development of modern machine AI.  But another key component consists of the principles by which these systems achieve their particular objectives.  These principles are the principles of mathematical optimization, and they also have a rather long history.

Mathematical optimization is the collection of techniques and methods for finding the maximum or minimum value of a mathematical function of one or more independent variables.  Some of the earliest methods of mathematical optimization were based on calculus.  More complex problems require numerical methods, such as iterative techniques for solving the nonlinear equations that arise when derivatives are set to zero; these methods only became practical to implement once electronic digital computers became available.
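As a minimal illustration of the calculus-based approach (the function and numbers here are my own, invented for illustration): to find the minimum of f(x) = x^2 - 4x + 7, set the derivative f'(x) = 2x - 4 equal to zero, which gives x = 2; substituting back in gives the minimum value f(2) = 3.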

The first step in optimizing a real-world problem is to turn it into a mathematical abstraction: a function of several independent variables, where each independent variable represents an independently controllable parameter of the problem.  Then optimization techniques are used to find the desired maximum or minimum value of that function.  To put it another way,
"Optimization is the act of obtaining the best result under given circumstances.  In design, construction, and maintenance of any engineering system, engineers have to take (sic) many...decisions.  The ultimate goal of all such decisions is either to minimize the effort required or to maximize the desired benefit.  Since the effort required or the benefit desired in any practical situation can be expressed as a function of certain decision variables, optimization can be defined as the process of finding the conditions that give the maximum or minimum value of a function."  - Engineering Optimization: Theory and Practice, Rao, John Wiley and Sons, Inc., 2009

The function to be optimized is called the objective function.  When we optimize the objective function, we are also interested in finding those values of the independent variables which produce the desired maximum or minimum value of the function.  These values represent the amounts of the various inputs required to get the desired optimum output from the situation represented by the objective function.
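To make this concrete, here is a minimal sketch in Python using the scipy.optimize library (the objective function and its numbers are invented purely for illustration).  The function is built so that its minimum value of 2 occurs at x1 = 3 and x2 = -1, and a numerical optimizer recovers both the minimizing values of the variables and the minimum value itself:

```python
import numpy as np
from scipy.optimize import minimize

# A made-up objective function of two decision variables x = (x1, x2).
# By construction its minimum value is 2.0, attained at x1 = 3, x2 = -1,
# so we can check that the optimizer reports the right answer.
def objective(x):
    x1, x2 = x
    return (x1 - 3.0) ** 2 + (x2 + 1.0) ** 2 + 2.0

result = minimize(objective, x0=np.array([0.0, 0.0]))  # start from an arbitrary guess
print("minimizing values of the variables:", result.x)    # approximately [3, -1]
print("minimum value of the objective:    ", result.fun)  # approximately 2.0
```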

A simple case of optimization would be figuring out how to catch up with a moving object in the shortest amount of time, starting from an arbitrary position.  To use optimization techniques, you'd turn this problem into an objective function and then use calculus to find the minimum value of that function.  Since your velocity and acceleration are the independent variables of interest, you'd want to know the precise values of these (in both magnitude and direction) which would minimize the value of the objective function.  Note that for simple trajectories or paths in only two dimensions, adult humans tend to be able to do this automatically and intuitively - but young kids, not so much.  Try playing tag with a five or six-year-old kid, and you will see what I mean.  The kid won't be able to grasp your acceleration from observing you, so he will run to where you are at the moment he sees you instead of anticipating where you'll end up.  Of course, once kids get to the age of ten or so, they're more than likely to catch you in any game of tag if you yourself are very much older than 30!
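If you want to see what that looks like as an actual computation, here is a toy sketch (all positions, speeds, and the ten-second horizon are numbers I made up).  It treats the chaser's heading angle as the single decision variable and minimizes, by brute-force search over a grid of headings, the closest approach to a target moving at constant velocity; the heading that drives the closest approach to zero is the intercept heading:

```python
import numpy as np

# Toy intercept problem (illustrative numbers only):
# a target starts at (10, 5) and runs with constant velocity (1.5, 0);
# the chaser starts at the origin with a fixed speed of 4.
target_p0 = np.array([10.0, 5.0])
target_v = np.array([1.5, 0.0])
chaser_speed = 4.0
t = np.linspace(0.0, 10.0, 2001)          # discretized time horizon

def closest_approach(theta):
    """Smallest chaser-to-target distance achieved for a given heading angle."""
    chaser_v = chaser_speed * np.array([np.cos(theta), np.sin(theta)])
    rel = target_p0 + np.outer(t, target_v - chaser_v)   # relative position over time
    return np.linalg.norm(rel, axis=1).min()

# Crude optimization: evaluate the objective on a grid of headings and keep
# the best one.  (A real solver would refine this much more efficiently.)
thetas = np.linspace(0.0, 2.0 * np.pi, 3600)
best_theta = min(thetas, key=closest_approach)
print(f"best heading ~ {np.degrees(best_theta):.1f} degrees, "
      f"miss distance ~ {closest_approach(best_theta):.3f}")
```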

The easiest AI problems are those that can most easily be turned into mathematically precise objective functions with only one output variable.  Examples of such problems include the following: reliably hitting a target with a missile, winning a board game in the smallest number of moves, traveling reliably between planets, simple linear regression, regulating the speed or rate of industrial or chemical processes, and control of HVAC and power systems in buildings in order to optimize interior climate, lighting, and comfort.
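One item in that list, simple linear regression, is easy to show end to end.  Here is a minimal sketch (the data are synthetic, generated inside the script) that frames regression as minimizing a single objective function, the sum of squared errors, over two decision variables: the slope and intercept of the fitted line.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic data: a noisy line with true slope 2.5 and true intercept 1.0.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.5 * x + 1.0 + rng.normal(scale=1.0, size=x.size)

def sum_squared_error(params):
    """Objective function: total squared error of the candidate line."""
    slope, intercept = params
    residuals = y - (slope * x + intercept)
    return np.sum(residuals ** 2)

fit = minimize(sum_squared_error, x0=np.array([0.0, 0.0]))
print("fitted slope and intercept:", fit.x)   # close to (2.5, 1.0)
```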

Harder AI problems include machine vision (such as recognizing a human face or an animal), natural language processing, construction of buildings, navigating physically complex, unstructured, and random environments (such as forests), and optimization problems with multiple objectives that require multiple objective functions to model.  The machine vision and natural language processing problems are harder because they rely on logistic regression-style classification functions as objective functions, and in order to accurately assign the appropriate "weights" to each of the variables of these objective functions, the AI which implements them needs massive amounts of training data (a toy sketch of what fitting such weights looks like follows the list below).  However, these and other harder problems are now being solved with increasing effectiveness through deep learning and other advanced techniques of machine learning.  It is in the tackling of these harder problems that AI has the greatest potential to disrupt the future of work, especially cognitively demanding work that formerly only humans could do.  In order to assess the potential magnitude and likelihood of this disruption, we will need to examine the following factors:
  • The current state of the art of machine learning
  • The current state of the art of designing objective functions
  • And the current state of the art of multi-objective mathematical optimization.
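Before moving on, here is the toy sketch promised above of what "assigning weights from training data" means in the simplest possible setting: an ordinary logistic regression classifier with just two weights, fitted by gradient descent on synthetic data I generate on the spot.  Real vision and language systems work on the same basic principle, but with vastly more weights and far larger training sets.

```python
import numpy as np

# Synthetic training data: two input features per example, with labels
# generated from a "true" weight vector plus some noise.
rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 2))
true_w, true_b = np.array([2.0, -1.0]), 0.5
labels = (X @ true_w + true_b + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient descent on the average log-loss: the "learning" step that
# adjusts the weights to fit the training data.
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(2000):
    p = sigmoid(X @ w + b)              # predicted probabilities
    grad_w = X.T @ (p - labels) / n     # gradient of the average log-loss
    grad_b = np.mean(p - labels)
    w -= lr * grad_w
    b -= lr * grad_b

print("learned weights:", w, "bias:", b)  # roughly aligned with true_w and true_b
```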
I'll try tackling these questions in the next post in this series.
