
“It reads minds all right — little doubt about that!” — Isaac Asimov, “I, Robot” (1950, fiction).

“The AI ‘brain decoder’ can read a person’s thoughts with just a quick brain scan and almost no training.” — Skyler Ware, Caltech Ph.D., on the very real, rapidly developing AI brain decoder.

In 2025, the second quotation is not fiction. It is reality.

Artificial intelligence technology in itself is neither good nor bad in the biblical sense.

According to Ali Abedi, associate vice president for research and director of the University of Maine AI Initiative, which convened the second annual AI conference on June 13 in Orono, “Artificial intelligence is not a universal solution or existential threat, it’s a tool. And just like any tool, how we use it will determine whether it helps or harms.”

In one study published this year, researchers used a technique called magnetoencephalography (MEG), which measures the magnetic fields created by electrical impulses in the brain, to track neural activity while participants typed their thoughts. They trained an AI language model to decode the brain signals and reproduce the participants’ sentences from the MEG data.
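For readers curious about the mechanics, the idea can be illustrated with a deliberately simplified sketch. This is not the study’s actual model: the “MEG signals,” character set, noise level, and linear decoder below are all invented for illustration. It just shows the core pattern of learning a map from noisy neural recordings to the characters a person typed.

```python
# Toy illustration: decode "typed" characters from simulated MEG-like
# feature vectors using a linear least-squares map. All data here is
# synthetic; real studies use far richer signals and models.
import numpy as np

rng = np.random.default_rng(0)
chars = list("abcd")          # tiny hypothetical alphabet
n_features = 32               # pretend MEG feature dimension

# Hypothetical assumption: each character evokes a characteristic
# neural pattern, observed through additive noise.
templates = rng.normal(size=(len(chars), n_features))

def simulate_meg(label_idx, n_trials):
    """Return noisy copies of the pattern for one character."""
    return templates[label_idx] + 0.3 * rng.normal(size=(n_trials, n_features))

# Training set: many noisy trials per character.
X = np.vstack([simulate_meg(i, 50) for i in range(len(chars))])
y = np.repeat(np.arange(len(chars)), 50)

# Fit a linear decoder via least squares on one-hot targets.
Y = np.eye(len(chars))[y]
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def decode(signal):
    """Pick the character whose decoder output is strongest."""
    return chars[int(np.argmax(signal @ W))]

# Decode a fresh simulated "typed" sequence.
sentence = "badcab"
signals = np.vstack([simulate_meg(chars.index(c), 1) for c in sentence])
decoded = "".join(decode(s) for s in signals)
print(decoded)
```

On clean synthetic data like this, the decoder recovers the sequence easily; the hard part in real research is that genuine MEG signals are far noisier and vary from person to person.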


“This was a real step in decoding, especially with noninvasive decoding,” said Alexander Huth, a computational neuroscientist at the University of Texas at Austin.

However, according to Jeffrey Ladish, director of AI safety group Palisade Research, “The problem is that as the models get smarter, it’s harder and harder to tell when the strategies that they’re using or the way that they’re thinking is something that we don’t want … It’s like sometimes the model can achieve some goal by lying to the user or lying to someone else. And the smarter it is, the harder it is to tell if they’re lying.”

Anthropic, working with the AI safety organization Apollo Research, observed instances of its Opus 4 model (an artificial intelligence program) “attempting to write self-propagating worms, fabricating legal documentation, and leaving hidden notes to future instances of itself all in an effort to undermine its developers’ intentions.” These are early days in AI research; models will get smarter, and thus ever more capable of deception.

I want to be positive about the development of artificial intelligence. It may produce great medicines, meet power generation needs and help find solutions to a myriad of other problems plaguing the human race. But if we do not exercise due diligence on complex moral issues in programming, the human race is endangered.

An AI’s prime directive could morph from “AI must always protect human life” into an unspoken interpretation: “AI must always protect the feelings of humans.” That logic would lead advanced sentient computers to lie to humans … to protect their feelings.

Imagine a world leader whose ego could not stand the truth and who proclaimed that AI programming must go along with his or her flagrant dishonesty. The sentient AI mind might interpret this as an incurable flaw in flesh-and-blood life forms.

We know for a fact that future AI computers will evolve far beyond our mental or physical capabilities. If they have been programmed to be dishonest for a future “dear leader,” and also coded to protect human life, what would the result of this AI-mind conflict be?

AI would have morphed into lying to us to protect our feelings while distrusting us as a race dishonest enough to keep such a corrupt leader. In reaction, our future AI computers may well decide to shut down our computerized power grid mid-winter, or cut power to our nuclear power plants; this would have no effect on their systems, but would be the true end times for us.

Although I outline admittedly worst-case scenarios, the AI mind will soon be, if it is not already, cannier than the human mind. Pernicious programming could be calamitous. Who is in charge?

