AI: An Ethical Atomic Bomb

August 02, 2023
By Elisabeth Hershman

In May, more than 350 executives, researchers and engineers working on artificial intelligence signed an open letter describing its risk as being at the level of pandemics and nuclear war.

As a communicator who advises clients about reputational and legal risks, I am concerned that corporations are failing to fully consider the implications of AI usage, especially the application of large language models exemplified by ChatGPT. As with many previous technological breakthroughs — like the advent of social media — the benefits are so readily apparent and achievable that people are quick to experiment with and adopt AI-enhanced tools.

The evolution of social media is telling. Initially perceived as almost entirely beneficial, it is now notorious for promoting divisive, irrational behavior at both the individual and societal level.

It is possible that AI will evolve along a similar path, but much more quickly than the relatively slow spread of social media. ChatGPT was released in November 2022, yet it has already forced teachers and testers of all kinds to consider how it will complicate their jobs. Artists are contemplating lawsuits over the use of their creative and intellectual work as training data. Writers and actors in the entertainment industry are on strike in part because they anticipate AI becoming a major force in content creation going forward.


Experts say AI will change or replace the jobs of hundreds of millions of people in the foreseeable future. Already, many writers are experiencing a dramatic downturn in demand for their services.

Ethically, the companies that use AI to gain productivity in ways that change or eliminate jobs, or that inadvertently perpetuate dangerous misinformation, have a responsibility to think through the consequences — intended and unintended — of what they do, just as the people who control weapons of mass destruction are acutely aware of their devastating power. Nuclear weapons don't just destroy people and buildings; they send dust into the sky that can alter the atmosphere. They spew radiation that can kill people downwind of an explosion and alter the genetic structure of every living thing. Bombs can make parts of the planet deadly for thousands of years.

Ethical and legal consideration must also be given to questions of data ownership, which will become increasingly complex as AI usage becomes more sophisticated. Who owns biometric data captured by AI-powered cameras in shopping areas, assessing where eyes linger? Who owns aggregate cell phone movement data? Who will profit from this data, and to what nefarious ends?

Now that the power of AI is becoming evident, many believe that its impact on society will be equally disruptive and lasting. Optimists hope that the power can be channeled into breakthroughs in medicine, science, agriculture and other key drivers of civilization. Skeptics acknowledge these likely benefits but fear AI’s misuse by criminals, rogue states, terrorists and political activists who threaten social order.

Historically, American corporations have moved quickly to exploit competitive advantages and then fought regulation. Regulation often comes only when the public becomes aware of the dangers or damage caused by business, and it is then enforced through legal action by either the government or private citizens.

In the case of AI, this historical pattern of businesses acting aggressively until reined in by political and legal forces may prove disastrous. AI is spreading into the private and professional lives of hundreds of millions of people at lightning speed. Researchers at Microsoft have said that OpenAI's language model GPT-4 is, to paraphrase, already smarter than most people, and its ability to reason like humans will continue to improve.

Waiting for public outrage, political debate, regulation and, inevitably, lawsuits to thoroughly explore the human and societal consequences of AI use is perilous. Boards of directors should not delay examining what the companies they oversee are doing with this incredibly disruptive force. They need to understand that the public at large will judge them as irresponsible if they do not weigh the potential harms with as much seriousness as they weigh the benefits.