The “Oppenheimer Moment”

By Roger Paradiso

THERE IS MUCH EUPHORIA COMING FROM AI TECH COMPANIES, just like the overexuberance on social media with the advent of Facebook.

My sense of distrust back then led me to call Facebook a marketing scheme. People looked at me like I was some fool. But I ask you today: what has Facebook given you that changed the world? It has made its founders very wealthy and is a powerful force in culture and politics. But for regular folks, it hasn’t done much.

I love new tech when it helps people. I find Apple’s phone and camera revolutionary; they have created a vigilante news agency out of thin air. Plain citizens can turn the lens on government overreach. So, with the appropriate guardrails, Apple has been a friendly tech company.

As for AI, I think I should fight the early euphoria and look into what experts are telling us are the danger signs. I was fascinated by an article by Mike Thomas for Built In entitled “15 Dangers of Artificial Intelligence (AI).” It leads to what I call “The Oppenheimer Moment”: that moment in time when individuals have to speak up against danger or forever live with their silence. Here are some excerpts from the article.

What is AI?
AI (artificial intelligence) describes a machine’s ability to perform tasks and mimic intelligence at a similar level as humans.

Is AI dangerous?
AI has the potential to be dangerous, but these dangers may be mitigated by implementing legal regulations and by guiding AI development with human-centered thinking.

Can AI cause human extinction?
If AI algorithms are biased or used in a malicious manner — such as in the form of deliberate disinformation campaigns or autonomous lethal weapons — they could cause significant harm toward humans. Though as of right now, it is unknown whether AI is capable of causing human extinction.

What happens if AI becomes self-aware?
Self-aware AI has yet to be created, so it is not fully known what will happen if or when this development occurs.

Is AI a threat to the future?
AI is already disrupting jobs, posing security challenges and raising ethical questions. If left unregulated, it could be used for more nefarious purposes. But it remains to be seen how the technology will continue to develop and what measures governments may take, if any, to exercise more control over AI production and usage.

As noted in the Built In article, these are some of the dangers of AI.

  • Automation job loss
  • Deepfakes and social manipulation
  • Privacy violations
  • Algorithmic bias caused by bad data
  • Socioeconomic inequality
  • Weapons and military automatization
  • Market volatility
  • Increased criminal activity and child safety risks
  • Psychological harm and overreliance

Here’s what former presidential candidate Andrew Yang had to say on the matter: “It’s going to get bad. I certainly don’t think 99% bad.”

Using his 44% vulnerability benchmark, Yang offered a rough projection: if the U.S. “churns through” even half of those jobs over the next decade, the country could see 30 to 40 million positions eliminated. Yang suggested that, to subsidize displaced workers, the major beneficiaries of AI should pay them $1,000 a month. – Business Insider


I also happen to think AI may destroy the commercial entertainment business. A few weeks ago, Pope Leo had an audience with creative artists at the Vatican. He said, “The logic of algorithms tends to repeat what works, but art opens up what is possible.” He urged filmmakers to defend “slowness, silence and difference” when they serve the story.

Built In wrote that “social manipulation also stands as a danger of artificial intelligence. This fear has become a reality as politicians rely on platforms to promote their viewpoints, with one example being Ferdinand Marcos, Jr., wielding a TikTok troll army to capture the votes of younger Filipinos during the Philippines’ 2022 election.”

According to Built In, “The key question for humanity today is whether to start a global AI arms race or to prevent it from starting.” They also wrote, “If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.”

I would add some humor by noting that the great Dudley Moore, comedian and Hollywood star, appeared in a revue called Beyond the Fringe in London and on Broadway in the nuclear 1960s. In one sketch, the cast said the safest way to protect yourself during a nuclear catastrophe would be to put a paper bag on your head. Thank God we haven’t had to do that yet.

Perhaps an astute future Congress and White House will come up with some guardrails to protect us. Our European friends have already been putting laws and guardrails in place for their countries.

In these days of buffoonery in Congress, I would say to save all the strong paper bags you have and practice putting them on your head.


Source: “15 Dangers of Artificial Intelligence (AI),” Built In