Artificial Intelligence: A Guide for Students & Faculty — Problems of AI

Taking Jobs

One of the most prominent concerns regarding AI is the question of its ramifications for employment. The fear has long been that increasing automation will reduce the number of available jobs, and while previous waves of technological development have ultimately increased total employment, there are indicators that the current course of AI may directly endanger livelihoods. Goldman Sachs, for example, estimates that AI could automate the equivalent of a quarter of current work.

Kochhar, Rakesh. “Which U.S. Workers Are More Exposed to AI on Their Jobs?” Pew Research Center, Pew Research Center, 26 July 2023, www.pewresearch.org/social-trends/2023/07/26/which-u-s-workers-are-more-exposed-to-ai-on-their-jobs/.

“Generative AI Could Raise Global GDP by 7%.” Goldman Sachs Research, 5 Apr. 2023, www.goldmansachs.com/intelligence/pages/generative-ai-could-raise-global-gdp-by-7-percent.html.

Inheriting Biases

Because AI is trained on human-produced data, it will often make decisions based on biases in that data. For example, when an MIT student prompted an AI to turn a photo of her into a professional headshot (as you might find on LinkedIn), the AI lightened her skin and gave her blue eyes, indicating that it had picked up a pattern in its training data that light skin and blue eyes were considered more professional than her existing features: a clear demonstration of racial bias being reproduced. Similar fears revolve around the use of AI by law enforcement to predict criminality: if enforcement of the law applies disproportionately to some groups, then the records of that enforcement fed to the AI will carry the same disproportion.

Nicoletti, Leonardo, and Dina Bass. “Humans Are Biased. Generative AI Is Even Worse.” Bloomberg Technology + Equality, 14 June 2023, www.bloomberg.com/graphics/2023-generative-ai-bias/.

Diffusing Responsibility

When something is done by an AI, it becomes harder to determine which human being is responsible for the action. There are clear existing examples of this with similar technologies: the revenue management software RealPage analyzes aggregated real estate data from its clients and its own database to recommend rent adjustments and the rates at which units should be sold and filled, and it is currently embroiled in a series of lawsuits alleging that the software indirectly facilitates collusion to fix prices. AI presents an even greater opportunity to disclaim responsibility for decision-making by obscuring both the reasoning behind a decision and the process by which individuals judged its consequences acceptable or unacceptable.

Scarcella, Mike. “RealPage Must Face Renters’ Price-Fixing Lawsuit over Multifamily Housing | Reuters.” Reuters, Thomson Reuters, 29 Dec. 2023, www.reuters.com/legal/litigation/realpage-must-face-renters-price-fixing-lawsuit-over-multifamily-housing-2023-12-29/.

Easy Misinformation and Other Abuses

One major concern surrounding generative AI in the immediate future is that it makes fabricating seemingly legitimate sources of information, including images and video, extremely easy. Consider how damaging it could be if elections were swayed by soundbites from speeches a politician never gave, or if someone lost their job over the dissemination of sexually explicit images depicting them that they never created, much less shared. Consider also how AI-driven bots on social media can and do fuel echo chambers, sway public opinion, and manipulate algorithms to amplify someone's platform.

Cavaciuti-Wishart, Ellissa, et al. “Global Risks Report 2024.” World Economic Forum, 10 Jan. 2024, www.weforum.org/publications/global-risks-report-2024/in-full/.