International Monetary Fund Warns of Potential Ramifications of the Adoption of AI in Economic Development, but Is That the Biggest Threat?

By The American Contemporary, January 15, 2024

            As reported across a variety of business and economic media outlets, a recent publication by the IMF (the International Monetary Fund, a major financial agency of the United Nations whose stated mission is “working to foster global monetary cooperation, secure financial stability, facilitate international trade, promote high employment and sustainable economic growth, and reduce poverty around the world”) has raised concerns about, and recommended caution around, the continued development and use of artificial intelligence in the workplace.

            The report, which was released on Sunday, January 14, 2024, estimates that upwards of 40% of all global jobs could be impacted in some capacity by AI. Specifically, the IMF projects that technologically developed countries could see as much as 60% of their workplaces affected by AI, whereas less developed nations with reduced infrastructure and fewer technological professions could be less directly impacted. While the report acknowledges the potential of AI to streamline work and automate redundant, and potentially even specialized, tasks, thereby increasing productivity and thus profitability, it focuses primarily on AI’s negative impacts, such as widening income disparities between individuals and between nations.

            As speed and capability become more centralized behind AI-assisted systems, competition between lower-income nations and developed nations could grow staggeringly lopsided, as labor-heavy markets would function only as rapidly as their human workforces. Domestic income disparity could also grow, as older workers and those without formal technological training are at higher risk of being replaced by AI automation. The NY Post, in its reporting, accompanied its story with several interviews with business owners, highlighting the concern many professionals have about surviving in the marketplace without access to these technologies, which could make competition more challenging for new businesses and would incentivize firms to hire only those educated in the implementation and upkeep of these systems.

            It seems undeniable that AI as a tool has extreme potential to reshape the global economy and could radically impact the flow and distribution of capital. From a political perspective, however, one is left to wonder whether this is truly the whole story. While the dollar does drive the economy and the quality of life for the people of a nation, there is perhaps a far more concerning ramification of AI adoption: the consolidation of power, influence, and access to information in the hands of a few companies and individuals.

            Many have critiqued technology companies for influencing access to information, as well as altering or shaping the information itself to advance a particular agenda. Google, for example, has been accused of influencing the outcomes of elections in the United States via the search engine manipulation effect (SEME), a postulation based on the research of Dr. Robert Epstein. Facebook has been criticized for censoring individuals based on their opinions and beliefs. Needless to say, the fear of powerful tech conglomerates using their influence and reach over the everyday lives of citizens to advance their agendas is nothing novel. However, the integration of AI as a tool for complex work that is integral to the function of a company (tasks such as diagnosing patients in a hospital, aggregating and analyzing market data for stock investment, or organizing governmental logistics) opens the door for those who publish and code the AI software to weave their own agendas and objectives into the very fabric by which otherwise neutral companies operate, further increasing the influence these select few actors would have over the daily reality of ordinary individuals and potentially even government officials.

            To help convey the point, consider a theoretical scenario. A financial advising company seeks to improve the speed and quality of its investment analysis using AI, a logical choice, considering that human analysts require time and manpower and can be biased, slow, or misguided, resulting in lost revenue for the company and its shareholders. To do this, the firm lays off some of its analysts and replaces them with AI from Google. While the AI performs the analysis requested by the investment company, the underlying mechanics remain the property of Google. As such, if Google as a company prioritized something such as environmental advocacy, it might code a bias, consciously or unconsciously, into the AI, tilting the mathematical analysis in favor of companies that are environmentally conscious. As the AI recommends eco-friendly companies over others, those that are not green would begin to suffer disproportionately, irrespective of their financial strength, producing a market-wide impact centralized in a single corporate actor. In this case, it may be argued that the technology creates the potential for stock manipulation.

            As another example, consider governmental use of AI by those who plan housing, zoning, and construction. If a technology company such as Amazon were to supply the AI systems, those systems could harbor inherent biases that suit the company, such as aggregating data or making recommendations that facilitate its delivery logistics, or permitting mixed-use zoning so that companies such as Amazon could purchase property in residential areas where land may be cheaper. Amazon would thereby directly shape the information and decision-making processes of government officials.

            Now, it must be stated that these hypotheticals are nothing more than speculation. Various legal and ethical barriers should, in theory, preclude such events from transpiring. However, the risk of power consolidation is ever present, and even if 99% of developers are honest and attempt to remove any and all bias toward their own agendas, it takes only a single person introducing a piece of errant code to dramatically alter the flow of global business and the distribution of information available to various companies. Coupled with the risk to the livelihoods of such a substantial segment of the population, AI appears to be far more than a threat to finances alone. Without human-integrated systems, the population becomes more dependent on both the government and private companies to survive, a dangerous and morally dubious conundrum. Yet we must also acknowledge that these technologies continue to advance, and that without some degree of implementation there are real risks to global economic performance and national security. The challenge, from a political fundamentals perspective, is balancing the natural advancement of technology with the intrinsic obligations of the state and the inherent liberties of the people, ensuring that no party infringes upon the others or accumulates undue power or influence.

            Thus, we are left wondering: what is the correct solution as it pertains to AI? Perhaps setting an agreed-upon limit on its implementation could be beneficial, though such a thing would constrain the free market and could arguably infringe on the rights of individuals and corporations to compete. Perhaps it is a matter of slowing its implementation, giving workers time to learn new skills and find alternative jobs in the marketplace, though this does little to address the concern of consolidation. Or perhaps a total ban on AI development could protect the rights of the people while also ensuring continued employment for non-programmers; however, limiting technology seldom succeeds, and such a ban could stifle innovations made elsewhere through streamlined workflows.

            The true answer likely lies somewhere in between: an agreed-upon series of terms and conditions limiting the share of the market any one company may supply, to ensure an oligarchy of information does not form. A limit on corporate automation, protecting the jobs of those unskilled in programming. An industry-specific allowance, such as for academia, permitting the study and ethical distribution of AI technology. And regulations limiting the government’s ability to restrict these tools or take control of them for itself. Truly, much more thought is needed in this regard; however, the start is simple: we must ask ourselves how we foresee AI integrating into society, and how we ensure we do not lose ourselves along the way.