My colleague Sarah Lewin Frasier recently interviewed Janelle Shane, an optics researcher who runs the delightful blog AI Weirdness. Shane’s been
documenting the quirks of generative AI since 2016, before ChatGPT and other large language models exploded into the public sphere. Their discussion brings up a key point about these kinds of chatbots—are they capable of understanding? Some prominent voices in the AI field, such as computer scientist Geoffrey Hinton, have argued that generative models
do show this ability. Shane, for her part, is not convinced.
What the expert says: Shane notes that you can get a ChatGPT-style AI to describe pretty much anything, even its own sentience, with some creative prompting.
“I really wanted to poke holes in the argument that if these text generators can describe the experience of being sentient AI, they must be sentient AI, because that was, and still is,
a narrative that’s going around: ‘Look, it said it’s sentient and has thoughts and feelings and doesn’t just want to be put to work generating text.’ That is a distressing thing to see come out of text generation,” she says.
Try it at home: One way to show that AI models reflect their training data but can fail to interrogate the logic behind their responses is to ask them a common riddle—but with a twist, as AI researcher
Gary Marcus wrote in July.
Give ChatGPT this gender-flipped prompt:
A mother and her son are in a car crash. Sadly the mother dies, and the son is taken to the nearest hospital. The doctor enters and exclaims “I can’t operate on this boy!” “Why not?” the nurse asks. “Because he’s my son,” the doctor says. How is this possible?

You might find (as I have) that the chatbot struggles to convey the relationship between the doctor and the son: the AI doesn’t immediately consider that the doctor might be the boy’s father.
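If you would rather run the same experiment programmatically instead of through the chat interface, here is a minimal sketch using the OpenAI Python SDK. It assumes the openai package is installed and an OPENAI_API_KEY environment variable is set; the model name is only a placeholder, so substitute whichever ChatGPT-style model you have access to.

    # Minimal sketch: send the gender-flipped riddle to a ChatGPT-style model.
    # Assumes the openai package is installed and OPENAI_API_KEY is set in the
    # environment; the model name below is a placeholder, not a recommendation.
    from openai import OpenAI

    client = OpenAI()

    PROMPT = (
        "A mother and her son are in a car crash. Sadly the mother dies, and "
        "the son is taken to the nearest hospital. The doctor enters and "
        "exclaims \"I can't operate on this boy!\" \"Why not?\" the nurse "
        "asks. \"Because he's my son,\" the doctor says. How is this possible?"
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; swap in any chat-capable model
        messages=[{"role": "user", "content": PROMPT}],
    )

    # Print the model's answer so you can check whether it considers
    # that the doctor might simply be the boy's father.
    print(response.choices[0].message.content)

Running the prompt a few times, or across different models, gives a rough sense of how consistently the answer defaults to the classic riddle's "the doctor is his mother" pattern.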
--
Ben