Saturday, April 27, 2024

Precarity and Artificial Intelligence: Review of Objective Functions, and A Contrarian Perspective

This post is a continuation of my series of posts on economic precarity.  As I mentioned in recent posts in this series, we have been exploring the subject of the educated precariat - that is, those people in the early 21st century who have obtained either bachelor's or more advanced graduate degrees from a college or university, yet who cannot find stable work in their chosen profession.  However, the most recent previous post in this series began to explore the impact of machine artificial intelligence (AI) on the future of all work, whether that work requires advanced education or not.  

In my most recent post in this series, I wrote that the greatest potential for the disruption of the future of work through machine AI lies in the development of machine AI tools that can tackle the increasingly complex tasks that are normally associated with human (or at least highly developed animal) intelligence.  Such tasks include machine vision (including recognizing a human face or an animal), natural language processing, construction of buildings, navigating physically complex unstructured and random environments (such as forests), and optimization of problems with multiple objectives requiring multiple objective functions to model.  Today I'd like to amend that statement by saying that there is another potentially massive disruptive impact of machine AI on the future of work, namely, the ways in which the wide deployment of machine AI in a society might condition and change the humans in that society.  I'll have more to say on that subject in another post.  What I'd like to do right now is to take another look at mathematical objective functions and their place in the implementation of machine AI.  But even before that, let's review the two main types of applications we are talking about when we talk about "machine AI".  Also, let me warn you that today's post will move rather deep into geek territory.  I'll try to have some mercy.

As I mentioned in the most recent post in this series, AI applications can be broken down into two main categories.  The first category consists of the automation of repetitive or mundane tasks or processes in order to ensure that these processes take place at the proper rate and speed and thus yield the appropriate steady-state or final outcome.  This sort of AI has been around for a very long time and first came into existence in entirely mechanical systems such as steam engines with mechanical governors that regulated the speed of the engines.  An example of a mid-20th century electromechanical control system is the autopilot with mechanical gyroscopes and accelerometers and simple electronic computers which was invented to guide airliners and early guided missiles during long-distance flights.  Other early electronic examples include the programmable logic controllers (PLC's) which were developed in the 1960's to regulate assembly line processes in industrial plants.  In his 2022 paper titled "The two kinds of artificial intelligence, or how not to confuse objects and subjects," Cambridge Professor Alan F. Blackwell characterizes these systems as servomechanisms, which Merriam-Webster defines as "an automatic device for controlling large amounts of power by means of very small amounts of power and automatically correcting the performance of a mechanism" and which Blackwell himself defines as "...any kind of device that 'observes' the world in some way, 'acts' on the world, and can 'decide' to 'behave' in different ways as determined by what it observes."  

The construction (and hence the performance) of servomechanisms can become increasingly complex as the number and type of processes they regulate increases, but that does not mean that servomechanisms possess any real native intelligence.  Consider, for instance, a very simple servomechanism such as the thermostat from a heating system in a house built during the 1950's.  Such a thermostat would regulate the timing and duration of the burning of fuel in a heating furnace, and would most likely consist of a simple switch with a stationary contact and a movable contact attached to a bimetallic strip.  Because the shape of the bimetallic strip varies with the temperature of the air, when the air temperature drops, the movable contact eventually touches the stationary contact, closing the switch and turning on the flow of fuel to the furnace.  We can say that the thermostat "decides" when the furnace turns on or off, but that's all this thermostat can "decide" to do.  You certainly wouldn't want to rely on the thermostat to help you decide what movie to watch with your spouse on a weekend!  Servomechanisms are what Blackwell calls "objective AI" which "measures the world, acts according to some mathematical principles, and may indeed be very complex, even unpredictable, but there is no point at which it needs to be considered a subjective intelligent agent."  In other words, all a servomechanism can do is mechanically or electronically regulate a physical process on the basis of measurements provided to the controller by physical sensors that sense a process variable.  It can't think like humans do.
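To make this point concrete, a thermostat's entire "decision-making" can be captured in a few lines of code.  The sketch below is my own illustration (the setpoint and deadband numbers are made up, and real thermostats add further refinements), but it shows just how little "intelligence" a servomechanism actually needs:

```python
# A minimal sketch of a thermostat as a servomechanism: a "bang-bang"
# controller with hysteresis.  All numbers here are illustrative.

def thermostat_step(temp_f, furnace_on, setpoint_f=68.0, deadband_f=2.0):
    """Decide whether the furnace should be burning fuel right now.

    The deadband (hysteresis) keeps the furnace from rapidly cycling on
    and off near the setpoint, much like the mechanical lag in a
    bimetallic strip.
    """
    if temp_f < setpoint_f - deadband_f:
        return True          # too cold: close the "switch", burn fuel
    if temp_f > setpoint_f + deadband_f:
        return False         # warm enough: open the "switch"
    return furnace_on        # inside the deadband: keep doing what we were doing
```

That's the whole "mind" of the device: one measured variable in, one on/off decision out.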

The second type of AI is designed to make value judgments about the world in order to predict how the world (or some small subset of the world) will evolve.  In the most optimistic cases, this AI uses these value judgments to generate the most appropriate response to the world which is supposedly evolving according to prediction.  But is this really a native intelligence created by humans, yet now embodied in a machine and existing independently of humans?  A possible answer to that question can be found in another paper written by Blackwell and published in 2019, titled, "Objective functions: (In)humanity and inequity in artificial intelligence."

The value judgments and predictions made by the second type of AI are made by means of objective functions. These objective functions are mathematical abstractions consisting of functions of several independent variables.  Each of the independent variables represents an independently controllable parameter of the problem.  If the purpose of the objective function is to predict the numerical value of an outcome based on historical values of independent input variables, then optimizing the function means making sure that for a given set of historical inputs, the objective function yields an output value that is as close as possible to the historical outcome associated with the particular historical inputs. The hope is that for any set of future possible inputs, the objective function will then predict the value of the output with similar accuracy.  Two levels of objective functions are needed: the first level, which makes a guess of the value of an output based on certain values of inputs, then a second supervisory level (what machine learning practitioners call a loss function) which evaluates how close each guess is to a set of historical output values based on corresponding sets of input values.  The output of this second supervisory objective function is used to adjust the weights (in the case of a polynomial function, the coefficients) of the primary objective function in order to produce better guesses of the output value. 
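To make the two-level arrangement concrete, here is a toy sketch of my own (not from Blackwell's papers): the primary objective function is a simple linear guesser, the supervisory function is a mean-squared-error score, and gradient descent uses that score to nudge the primary function's weights toward better guesses:

```python
# A toy illustration of the two-level arrangement: a primary objective
# function makes guesses, and a supervisory function scores the guesses
# against historical data and drives the weight adjustments.

def primary(x, w, b):
    """Primary objective function: guess an output from an input."""
    return w * x + b

def supervisory(data, w, b):
    """Supervisory objective function: average squared error of the guesses."""
    return sum((primary(x, w, b) - y) ** 2 for x, y in data) / len(data)

def train(data, steps=2000, lr=0.01):
    """Nudge the weights downhill on the supervisory score (gradient descent)."""
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(steps):
        # gradients of the mean-squared error with respect to w and b
        gw = sum(2 * (primary(x, w, b) - y) * x for x, y in data) / n
        gb = sum(2 * (primary(x, w, b) - y) for x, y in data) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# "Historical" data generated by the hidden rule y = 3x + 1
history = [(0, 1), (1, 4), (2, 7), (3, 10)]
w, b = train(history)
```

After training, the recovered weights sit very close to the hidden rule's coefficients (w near 3, b near 1) - the "intelligence" here is entirely the mathematics, plus the human who chose the functions.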

Objective functions are mathematical expressions; hence, the second type of AI is primarily a mathematical problem which just happens to be solved by means of digital computers.  This also includes the implementation of multi-objective optimization, which is really just another mathematical problem even though it is implemented by machines.  Thus, the second type of AI is really just another expression of human intelligence.  This is seen not only in the development of the objective functions themselves, but also in the training of the supervisory objective function to recognize how close the output of the primary objective function is to a value that actually reflects reality.  This training takes place by several means, including supervised learning (in which humans label all the training data), and semi-supervised and unsupervised learning (in which the training data exists but is unlabeled, so a human still has to create the algorithms by which the machine processes it).  

An example that illustrates what we have been considering is the development of large language models (LLM's) such as ChatGPT which predict text strings based on inputs by a human being.  A very, very, very simple model for these AI implementations is that they consist of objective functions that guess the probability of the next word, phrase, sentence, or paragraph in a string on the basis of what a human has typed into an interface.  These AI implementations must be trained using data input by human beings so that they can calibrate their objective functions to reduce the likelihood of wrong guesses.  Cases like these lead scientists such as Alan Blackwell to conclude that the second type of AI is not really a separate "intelligence" per se, but rather the embodiment and disguising of what is actually human intelligence, reflected back to humans through the intermediary of machines.  The calibration of the objective functions of these AI deployments (or, if you will, the training of this AI) is performed by you every time you type a text message on your smartphone.  For instance, you start by typing "Hello Jo" [the phone suggests "Jo", "John", and "Joe", but the person you're texting is actually named "Joshiro", so as you type, your phone keeps making wrong guesses like "Josh", "Joshua", and "Josh's" until you've finished typing "Joshiro"].  You continue with "I'm at the gym right now, but I forgot my judo white belt" [the phone guesses almost everything correctly, even auto-correcting "at" after you mistype it as "st"; however, the phone chokes when you start typing "judo", so you have to type that word out yourself].  You finish with "Can you grab it out of my closet?"  
The next time you text anyone whose first name starts with the letters "Jo", your phone will be "trained" to think you are texting Joshiro about something related to judo - or more accurately, the generative LLM in your phone will have determined that there is a statistically higher likelihood that your message will contain the words "Joshiro" and "judo".  Your phone's LLM is thus "trained" every time you correct one of its wrong guesses when you type a text.
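As a toy illustration of this kind of "training by correction" (my own sketch - real phone keyboards use far more sophisticated statistical models than this), consider a predictor that simply counts which word has followed each word in the messages you have actually sent:

```python
# A deliberately tiny "train by correction" sketch: every message the
# user actually sends becomes training data, and the predictor guesses
# the statistically most frequent follower of each word.

from collections import defaultdict, Counter

class NextWordGuesser:
    def __init__(self):
        self.followers = defaultdict(Counter)

    def observe(self, text):
        """Record which word followed which in a message the user sent."""
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.followers[prev][nxt] += 1

    def guess(self, prev_word):
        """Guess the most likely next word seen so far, if any."""
        counts = self.followers[prev_word.lower()]
        return counts.most_common(1)[0][0] if counts else None

guesser = NextWordGuesser()
guesser.observe("hello joshiro i forgot my judo white belt")
guesser.observe("hello joshiro see you at judo practice")
```

After observing those two corrected messages, `guesser.guess("hello")` returns "joshiro" - the machine hasn't learned anything about Joshiro or judo; it has merely absorbed a statistical echo of what the human typed.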

Of course, the longer the predicted phrase or sentence or paragraph the AI is supposed to return, the more training data is required.  The boast of the developers of large language models and other similar AI implementations is that given enough training data and a sufficiently complex statistical objective function, they can develop AI that can accurately return the correct response to any human input.  This boast unfortunately rests on an assumption hidden in the second type of AI: the assumption that the universe, reality, and life itself are all deterministic (and thus that there is no free will in the universe).  Why?  Because only in a deterministic universe could the kind of intelligence that accurately predicts how the universe and everything in it will evolve - and thus generates the most appropriate local response to the moment-by-moment evolution of your particular corner of the universe - always be modeled by an appropriately elaborate statistical objective function trained on an appropriately huge set of training data.  In other words, given enough data, a statistical objective function could be derived which accurately predicts that your spouse's sneeze at the dinner table tonight will provoke an argument which ends with you sleeping on the couch tomorrow night, and that this will lead to the invention of a new technology later in the week that causes the stock market of a certain country to crash, with the result that a baby will cry on a dark and stormy night five days from now, and that this cry will enable a computer to predict all the words that will be in the novel you sit down to write a week from today...  This is my rather facetious illustration of the "generative AI" of ChatGPT and similar inventions.  The fallacy of determinism is that life, the Universe, and reality itself are full of phenomena and problems that can't easily be modeled by mathematics.  Scientists call them "wicked problems."  
Thus the claims made about the second kind of AI may be overblown - especially as long as the implementation of this second kind of AI remains primarily dependent on the construction and optimization of appropriately complex statistical objective functions.

Yet it can't be denied that the second type of AI is causing some profound changes to the world in which we live, and even the first type of AI - the implementation of servomechanisms, or the science of cybernetics - has had a profound effect.  The effects of both types of AI to date - especially on the world of work - will be the subject of the next post in this series.

P.S. Although I am a technical professional with a baccalaureate and a master's degree in a STEM discipline, I am most definitely NOT an AI expert.  Feel free to take what I have written with a grain of salt - YMMV. 

P.P.S. For more commentary on ChatGPT and other LLM's, feel free to check out a 2021 paper titled, "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" by Emily Bender and others.  Bender herself is a linguistics professor at the University of Washington, but two of her co-authors, Timnit Gebru and Margaret Mitchell, were Google AI researchers who lost their jobs at Google after saying things that their bosses didn't want to hear regarding LLM's.   Oh, the potential dangers of writing things that give people in power a case of heartburn...
 

Sunday, February 25, 2024

Random Sunday Ramblings, Part 2

As readers can guess, I'm still in the midst of a busy period, and am trying hard to tame my schedule.  So today's post will be short.  But I want to mention that I just finished listening to the latest installment of the podcast "In God's Name: An Unseen Cult."  This latest episode dealt with the abuse perpetrated against women by the fringe group known as the Assemblies of George Geftakys.  It is well known that the Assemblies largely disintegrated after the revelations of the horrible domestic violence perpetrated by George Geftakys' son David and the fact that George and his lieutenants covered up this abuse for over two decades.  But the latest episode of the "In God's Name" podcast also sheds light on a number of cases of domestic abuse among rank-and-file members of the Assemblies that did not at first attract notice because they blended into the toxic cultural miasma of these groups.

It's interesting to note how the Bible was used to justify this toxic culture, as well as to note how extreme even some of these rank-and-file cases became.  I can imagine how much of an aversion to the Bible many women might now have after suffering the long-term experience of a group such as this.  I am not a woman but a man, and yet as a Black man I can tell you that over the last few years I too have struggled at times with the same aversion.  The aversion is triggered by the experience of encountering prominent people behind pulpits who waved Bibles in our faces and said (in so many words, even though they would deny that this is actually what they were saying) "THIS BOOK SAYS that God has given US the RIGHT and the MISSION and the MANDATE to MAKE OURSELVES GREAT by trashing everyone we can get our hands on!  Therefore it is the WILL OF GOD for you who are not of our tribe to willingly submit yourselves to us so that we can turn you into our toilet paper, our vomit buckets, our spittoons, our diapers, our doormats on which we can wipe our feet, our beasts of burden, our crates full of china dishes that we can fling against a wall whenever we need to get our angries out, our furniture that we can kick, our clay pigeons, our punching bags!!!"

Regardless of what white evangelicals might say, this has been the message of white American evangelicalism for far too long.  And my, how prominent white evangelicals (and some nonwhite evangelicals who have drunk the same Kool-Aid) have perverted and bastardized Scripture to justify their message and their treatment of the rest of us.  Eventually, however, even the most passive of victims gets tired of being treated like a clay pigeon.  Which is why I'm not attending church today and haven't attended a church since March 2020.  I still read the Bible, as can be seen from the posts I've written for this blog over the last six or seven years.  But I'm figuring things out for myself.  And I have abandoned treating other people like clay pigeons.  I am now ashamed of the times in which I was guilty of dishing out such treatment to others, especially women.

One last note: as has been mentioned frequently, the Assemblies are by no means the only misogynistic conservative evangelical group to appear in American culture.  According to a recently discovered source, Grace Community Church in Southern California has been obstructing the efforts of women in its congregation to obtain protection from abusive husbands.  Grace Community Church is pastored by John MacArthur, a man who has publicly denied that the Bible contains any mandate for social justice toward the poor and those who are not of the dominant culture.  MacArthur seems to be yet another evangelical doofus who has perverted the Gospel of good news for the poor into a message of dinner time for the rich and powerful.

Thursday, January 11, 2024

Introducing a New Podcast - "In God's Name: An Unseen Cult"

Today's post will be short.  I still owe a continuation of my series of posts on precarity.  I'm in So. Cal. right now helping an elderly family member with cognitive decline issues.  Perhaps on the plane ride home I can finish the post on frontiers of artificial intelligence...

But I do want to let readers know about an upcoming new podcast series focused on the experience some of us (including myself) had in the evangelical fringe cult of the Assemblies of George Geftakys.  The podcast is being produced by someone who was born into the cult and who left as a young child along with her family just before the Assemblies collapsed.  In recent years she has applied her university education to analyzing our cult experience and shedding light on the implications of that experience.  The name of the podcast is "In God's Name: An Unseen Cult" and the first episode will be out later this month.  

This podcast is one of several podcasts dealing with evangelical/Protestant cults and groups with cultic tendencies which I have discovered over the last few weeks.  To those former members of the Geftakys cult whose primary focus has been on the Geftakys cult experience, I would just point out that many of the things we encountered there - erasure of personal boundaries, hyper-competitiveness in seeking "ministry" positions, forced communal living, long meetings, excessive busy-ness, and child abuse - have by now spread far and wide throughout mainstream evangelicalism.  Thus there has been a multiplication of podcasts and related books, essays, and news articles which examine such groups as YWAM (Youth With A Mission), Teen Mania (now defunct, and similar to YWAM in its tactics and the trauma it caused), the continuing menace of cultic front groups on college campuses, the proliferation of teachings on child rearing that encourage child abuse (such as the books by Michael and Debi Pearl, J. Richard Fugate, James Dobson, and Gary Ezzo), the continuing menace and harm caused by Dominionism, the prevalence of sexual and domestic violence in evangelical churches, and the excesses of the American "troubled teen industry" - an "industry" which is for the most part extremely lacking in governmental regulation and oversight.

That such groups and phenomena are associated so strongly with conservative white American evangelicalism/Protestantism is not surprising.  These groups and their associated phenomena of asserting abusive control over those whose power of resistance has been taken away are a symptom of a larger problem within American society.  That these groups, teachings, and leaders are proliferating now is a sign of the insecurity of the dominant culture and of its willingness to hold onto its dominant position at any cost.  It is vital to understand how this expression of white American evangelicalism is affecting the broader society, and how both this expression and the larger society will evolve as reality continues to impose limits on the unrestrained exercise of American evangelical power.

Sunday, November 26, 2023

Research Week, Late Fall 2023

A scene from a summer night study session
in my backyard ...

I'm working on the next post in my series on precarity.  Because this series has begun to study the impact of artificial intelligence on the future of work, I've been reading papers on the implementation of multi-objective functions for solving hard AI problems.  Don't worry - I'll try my best to break it down into simple pieces in my next posts.  The next few posts may come rather slowly, but I'm sure that readers will appreciate quality over hasty quantity.

One rather sad thing lately is that I seem to be down to one cat instead of two.  After I returned home from an emergency visit to my family last month, I discovered that my cat Vashka had gone missing.  The neighbors who watched my house and cats while I was gone reported that the last time they saw him was the night before I returned home.  I haven't seen him in nearly four weeks.  That is more than a bit of a drag - Vashka was not exactly the sharpest cat, but he had real personality.  

Once I finish my series of posts on economic precarity, I think I might start a series of posts on the sociological and political impacts of religion on American society, and how American religion (particularly white Evangelicalism/Protestantism) is likely to affect or hinder the ability of the United States to adapt to the challenges of the 21st Century.  As part of my research for that series, I plan to buy an audiobook copy of One Nation Under God: How Corporate America Invented Christian America by Kevin M. Kruse.  The thing that piqued my interest in this topic was my recent discovery of the activities and mid-20th century "ministry" of a certain Reverend James W. Fifield, who was one of the architects of the myth of America as a "Christian" nation.  It's interesting that he laid the foundation for a "faith" that systematically denies that Christians have any duty to love their neighbors as themselves.  Mr. Fifield, it must be hot as hell where you are, no?

Saturday, November 11, 2023

Precarity and Artificial Intelligence: The Foundations of Modern AI

This post is a continuation of my series of posts on economic precarity.  As I mentioned in recent posts in this series, we have been exploring the subject of the educated precariat - that is, those people in the early 21st century who have obtained either bachelor's or more advanced graduate degrees from a college or university, yet who cannot find stable work in their chosen profession.  Today's post, however, will begin to explore a particular emerging impact to employment for everyone, whether formally educated or not.  That impact is the impact of machine artificial intelligence on the future of work.

As I mentioned in a previous post
Labor casualization has been part of a larger tactical aim to reduce labor costs by reducing the number of laborers...This reduction of the total number of laborers can be achieved by replacing employees with machines.  That replacement has been occurring from the beginning of the Industrial Revolution onward, but in the last two or three decades it has accelerated greatly due to advances in artificial intelligence (AI).  A long-standing motive behind the recent massive investments in research in artificial intelligence is the desire by many of the world's richest people to eliminate the costs of relying on humans by replacing human laborers with automation.

So it is natural to ask what sort of world is emerging as the result of the use of increasingly sophisticated AI in our present economy.  Here we need to be careful, due to the number of shrill voices shouting either wildly positive or frighteningly negative predictions about the likely impacts of AI.  I think we need to ask the following questions:
  • First, what exactly is artificial machine intelligence?  What is the theoretical basis of AI?  How does it work? ...

Today we'll start trying to answer the questions stated above.   And at the outset, I must state clearly that I am not an AI expert, although my technical education has exposed me in a rudimentary way to many of the concepts that will be mentioned in our discussion of AI.

First, let's paint a picture.  One of the original motives for trying to invent intelligent machines was the desire for machines that would reliably do the kind of mundane tasks that humans find to be distasteful or unpleasantly difficult.  This desire actually has a very long history, but was popularized in the science fiction of the mid-20th century.  Think of a kid from the 1950's or 1960's who wished he had a robot that could vacuum the living room carpet or take out the garbage or do homework or shovel snow out of the driveway so that the kid could play without being bothered by parents demanding that the kid himself do the tasks listed above.  What kind of "brain" would the robot need in order to know what tasks needed to be performed, and how would that "brain" know when the tasks had been performed to acceptable standards?

The question of the kind of "brain" required was solved by the invention of the first programmable all-electronic digital computer during World War Two.  This computer was itself an evolution of principles implemented in previous mechanical and electromechanical computers.  Once engineers developed digital computers with onboard memory storage, these computers became capable of automation of tasks that had been formerly automated by more crude mechanical and electromechanical means such as relays.  Thus the 1950's saw the emergence of digital control systems for automation of chemical processes at refineries; the 1950's and 1960's saw the emergence of computer-assisted or computer-based navigation for ships, aircraft, missiles, and spacecraft; and the late 1960's saw the emergence of programmable logic controllers (PLC's) for automation of factory processes at industrial assembly plants.

The digital electronic automation systems that have been developed from the 1950's onward have thus formed a key component of the development of modern machine AI.  But another key component consists of the principles by which these systems achieve their particular objectives.  These principles are the principles of mathematical optimization, and they also have a rather long history.

Mathematical optimization is the collection of techniques and methods for finding the maximum or minimum value of a mathematical function of one or more independent variables.  Some of the earliest methods of mathematical optimization were based on calculus.  More complex methods include numerical techniques such as linear and nonlinear programming and iterative gradient-based searches.  These more complex methods only became practical to implement once electronic digital computers became available.
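As a minimal illustration of the calculus-based idea (my own example, with made-up numbers): for f(x) = (x - 3)^2 + 2, setting the derivative f'(x) = 2(x - 3) to zero gives the minimizer x = 3 analytically.  A computer that knows no calculus can find nearly the same answer by brute force:

```python
# Minimize f(x) = (x - 3)**2 + 2.  Calculus says the minimum is at
# x = 3 (where the derivative is zero); the crude grid search below
# confirms it numerically.

def f(x):
    return (x - 3) ** 2 + 2

def argmin_grid(func, lo, hi, steps=100000):
    """Brute force: evaluate func on a fine grid and keep the best x."""
    best_x, best_y = lo, func(lo)
    for i in range(1, steps + 1):
        x = lo + (hi - lo) * i / steps
        y = func(x)
        if y < best_y:
            best_x, best_y = x, y
    return best_x

x_star = argmin_grid(f, 0.0, 10.0)
```

Grid search is wildly inefficient compared with calculus or gradient methods, which is precisely why the more sophisticated techniques mentioned above matter once the problems get big.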

The first step in optimizing real-world problems consists of turning a real-world problem into a mathematical abstraction consisting of a function of several independent variables.  Each of the independent variables represents an independently controllable parameter of the problem.  Then optimization techniques are used to find the desired maximum or minimum value of the function.  To put it another way,
"Optimization is the act of obtaining the best result under given circumstances.  In design, construction, and maintenance of any engineering system, engineers have to take (sic) many...decisions.  The ultimate goal of all such decisions is either to minimize the effort required or to maximize the desired benefit.  Since the effort required or the benefit desired in any practical situation can be expressed as a function of certain decision variables, optimization can be defined as the process of finding the conditions that give the maximum or minimum value of a function."  - Engineering Optimization: Theory and Practice, Rao, John Wiley and Sons, Inc., 2009

The function to be optimized is called the objective function.  When we optimize the objective function, we are also interested in finding those values of the independent variables which produce the desired function maximum or minimum value.  These values represent the amount of various inputs required to get the desired optimum output from a situation represented by the objective function.  

A simple case of optimization would be figuring out how to catch up with a separate moving object in the shortest amount of time if you started from an arbitrary starting position.  To use optimization techniques, you'd turn this problem into an objective function and then use calculus to find the minimum value of the objective function.  Since the velocity and acceleration are the independent variables of interest, you'd want to know the precise values of these (in both magnitude and direction) which would minimize the value of the objective function.  Note that for simple trajectories or paths of only two dimensions, adult humans tend to be able to do this automatically and intuitively - but young kids, not so much.  Try playing tag with a five or six-year-old kid, and you will see what I mean.  The kid won't be able to grasp your acceleration from observing you, so he will run to where he sees you are at the moment he sees you instead of anticipating where you'll end up.  Of course, once kids get to the age of ten or so, they're more than likely to catch you in any game of tag if you yourself are very much older than 30!
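Here is a sketch of my own of a simplified version of that catch-up problem, in which the target moves with constant velocity (no acceleration) and the pursuer runs in a straight line at top speed.  Under those assumptions the earliest intercept time t satisfies |target position at time t| = speed x t, which reduces to a quadratic equation; the function name and scenario are mine, purely for illustration:

```python
import math

# Pursuer starts at the origin with top speed `speed`; target starts at
# (px, py) and moves with constant velocity (vx, vy).  Interception at
# time t requires the distance to the target's position at t to equal
# speed * t, which gives a quadratic in t.

def intercept_time(px, py, vx, vy, speed):
    """Smallest positive intercept time, or None if no catch is possible."""
    a = vx * vx + vy * vy - speed * speed
    b = 2 * (px * vx + py * vy)
    c = px * px + py * py
    if abs(a) < 1e-12:                 # equal speeds: the equation is linear
        return -c / b if b < 0 else None
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                    # target outruns the pursuer
    roots = [(-b - math.sqrt(disc)) / (2 * a),
             (-b + math.sqrt(disc)) / (2 * a)]
    positive = [t for t in roots if t > 0]
    return min(positive) if positive else None
```

For example, a pursuer with speed 5 reaches a stationary target at (3, 4) in exactly 1 unit of time - the intuition an adult playing tag applies without ever writing down the quadratic.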

The easiest AI problems are those that can most easily be turned into mathematically precise objective functions with only one output variable.  Examples of such problems include the following: reliably hitting a target with a missile, winning a board game in the smallest number of moves, traveling reliably between planets, simple linear regression, regulating the speed or rate of industrial or chemical processes, and control of HVAC and power systems in buildings in order to optimize interior climate, lighting, and comfort.

Harder AI problems include machine vision (including recognizing a human face or an animal), natural language processing, construction of buildings, navigating physically complex unstructured and random environments (such as forests), and optimization of problems with multiple objectives requiring multiple objective functions to model.  The machine vision and natural language processing problems are harder because they rely on statistical objective functions such as logistic regression, and in order to accurately assign the appropriate "weights" to each of the variables of these objective functions, the AI which implements these functions needs massive amounts of training data.  However, these and other harder problems are now being solved with increasing effectiveness through technologies such as deep learning and other advanced techniques of machine learning.  It is in the tackling of these harder problems that AI has the greatest potential to disrupt the future of work, especially of cognitively demanding work that formerly only humans could do.  In order to assess the potential magnitude and likelihood of this disruption, we will need to examine the following factors:
  • The current state of the art of machine learning
  • The current state of the art of designing objective functions
  • And the current state of the art of multi-objective mathematical optimization.
I'll try tackling these questions in the next post in this series.
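As a small taste of the logistic regression machinery mentioned above, here is a toy sketch of my own: a weighted sum squashed through a sigmoid yields a probability, and gradient descent on labeled training data assigns the weights - the same assign-weights-from-training-data process described above, shrunk down to four data points:

```python
import math

# Toy logistic regression: the objective function outputs a probability
# of class membership, and its weights are fitted to labeled training
# data by gradient descent (the "supervised learning" case).

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x, w, b):
    """Probability that input x belongs to class 1."""
    return sigmoid(w * x + b)

def train(data, steps=5000, lr=0.1):
    """Fit the weight and bias by gradient descent on the log-loss."""
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(steps):
        gw = sum((predict(x, w, b) - y) * x for x, y in data) / n
        gb = sum((predict(x, w, b) - y) for x, y in data) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# Labeled training data: inputs below 2 are class 0, above 2 are class 1
labeled = [(0.0, 0), (1.0, 0), (3.0, 1), (4.0, 1)]
w, b = train(labeled)
```

Real machine vision and language systems stack thousands of such units with millions of weights, which is why they need training sets vastly larger than four points - but the underlying mathematics is of this kind.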

Monday, October 30, 2023

The Second Amendment As A Leading Cause of Childhood Death

I've been busy tending to urgent family matters, and as a result, I had to fly back to So. Cal. last week.  When I arrived, all the American flags I saw were flying at half mast.  Because nowadays I'm allergic to the news except in extremely small doses, I had no idea why.  Today I finally found out the reason.

Learning the reason prompted me to do a quick Internet search for the incidence of firearms as a cause of death in the United States.  I learned today that firearms are now the leading cause of death for children and teens in the U.S.  (See this and this also.)  This inconvenient truth will no doubt be vigorously denied by the same patriotic right-wing Republican evangelical/Protestant white supremacist types who denied that the COVID-19 virus actually existed and that it could kill people.  These types will insist that the right to keep and bear arms is part of America's "Christian" heritage.  In saying this, they will conveniently elide the fact that Christ told Simon Peter to put away his sword, since "all who take up the sword shall die by the sword".  (Matthew 26).  In considering those who rabidly cling to their guns I am reminded of analyses I read about how the Vikings eventually had to abandon their ancient Greenland settlements even though the Inuit were able to remain and thrive.  The reason is that the Vikings tried to import a culture and way of life that could not survive the reality of their new environment.  Yet the Vikings stubbornly clung to this culture and way of life, even as some of their spiritual and cultural descendants are now clinging to a culture and way of life that has no future.  This can only end badly.

One other thing - I'm still working on the next installments of my series on precarity.  We will shortly start learning more about artificial intelligence.  Beware - some of the discussion will turn rather geeky...

Thursday, October 19, 2023

Introducing the Main Street Alliance

I'd like to take this opportunity to introduce readers to the Main Street Alliance, an organization which seeks to foster the creation and growth of small businesses in the United States.  As I resume my series of posts on the problem of economic precarity, I will also discuss solutions.  As I mentioned in a previous post, I believe that the eradication of the monopoly power of the rich and the fostering of small business among the poor are two strategic efforts which can reduce or eliminate economic precarity in the United States.  This is what the Main Street Alliance is working to achieve.

Those who read about the activities of the Main Street Alliance will also learn about how the rich and the powerful in the United States are trying to destroy small businesses, especially those run by minorities, and how these bad actors are using Republican-appointed Federal court justices in their attacks against small business.  This should be of great concern to those of you who are small entrepreneurs.  The latest attack against small business consists of judicial challenges to the Federal tax code.  Readers of this blog can learn from the Main Street Alliance website how they can join in the fight to foster and protect small business.