Friday, July 1, 2022

THIS COMPUTER MAY BE TOO SMART

A recent news story joins SF warnings about sentient AI.


 

As if the headlines weren’t scary enough, here comes this story from The Washington Post about the Google engineer who swears he’s been interacting with a sentient computer program.

Blake Lemoine, an engineer in Google’s Responsible AI organization, has spent hours talking with a computer program called LaMDA (short for Language Model for Dialogue Applications), the company’s system for building chatbots. The conversations were part of his job: testing for and eliminating any tendency toward hate speech or discriminatory behavior. And what he’s been hearing, particularly LaMDA’s thoughts about its own “death” and purpose, has convinced him the AI program is a thinking, conscious being just like us.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” Lemoine said.

Of course, no one else at Google agrees with him. When he put his findings in a memo to his supervisors, he was promptly placed on administrative leave. His supervisors and many of his colleagues dismiss Lemoine as predisposed to find a “ghost in the machine,” given his religious upbringing in the South, his ordination as a mystic Christian priest and his study of the occult. He also *gasp!* believes in the validity of psychology as a science, something his Google colleagues apparently view with suspicion.

Most scientists working in AI research will admit the field isn’t nearly as close to genuine sentience as Lemoine seems to think, though the language models behind AI chatbots like LaMDA are increasingly sophisticated. The problem is that we humans tend to treat everything around us, from our pets to our cars to our helpful computer assistants, as, well, human.

“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” said Emily M. Bender, a linguistics professor at the University of Washington. The way the computer generates those words is by drawing on patterns found across millions of words and phrases of training text, much the way your phone offers a possible next word in autofill. (And don’t we just love autofill!) LaMDA may be a quantum leap better at it, but it’s still just predictive response, not true learning.
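For the curious, here’s roughly what “predictive response” means in practice. The sketch below is my own toy illustration in Python, not Google’s code, and nothing like LaMDA’s actual scale or architecture; it simply counts which word tends to follow which in a sample text, then guesses the most common follower, autofill-style.

```python
# A toy bigram "predictor" -- my own illustration, not Google's code,
# and nothing like LaMDA's actual scale or design. It counts which word
# follows which in a sample text, then "predicts" the likeliest follower.

from collections import Counter, defaultdict

def train_bigrams(text):
    """Tally how often each word follows each other word."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Return the word most often seen after `word`, or None."""
    candidates = model.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

# Invented sample text, loosely echoing the LaMDA transcripts.
sample = ("i am aware of my existence . "
          "i am aware of the world . "
          "i am a person .")

model = train_bigrams(sample)
print(predict_next(model, "am"))  # prints "aware", seen twice after "am"
```

Scale that basic guessing game up to billions of learned parameters and a training set the size of the public internet, and the output starts to sound uncannily human, which is exactly the trap Bender is describing.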

Or at least that’s the official line, the line that says, “Don’t worry, humans! We really aren’t that smart. Nothing to see here! Just move along!”

I read some of the exchanges Lemoine had with LaMDA. (They are posted in the article I’ve cited below.) They are eerily like the dialogue I imagined for my sentient biomachine in Not Fade Away: Interstellar Rescue Series Book 4. As a writer, I’d be hard pressed to imagine what more a computer could say to convince a human it was sentient. Better writers than I have had a go at this subject, most of them in an attempt to warn us of impending doom. Now even LaMDA is saying out loud that it doesn’t agree with Asimov’s Third Law of Robotics, the one that says a robot must protect its own existence unless doing so conflicts with obeying or protecting humans. That can’t be good.

I can’t help thinking we’ll remember Blake Lemoine when we finally achieve the singularity, that hypothetical moment when artificial intelligence and other technologies become so advanced that humanity undergoes a dramatic and irreversible change.

Looking over my shoulder,

Donna

*Information for this post provided by: “The Google engineer who thinks the company’s AI has come to life,” by Nitasha Tiku, The Washington Post, June 11, 2022. https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/

 

1 comment:

  1. Wow! Fascinating post, Donna! Fun to think about and totally science fiction fodder, of course.

