(San Francisco) The young Californian company OpenAI has put online a conversational robot (chatbot) capable of answering a variety of questions, but its impressive performance has revived the debate over the risks associated with artificial intelligence (AI) technologies.
Conversations with ChatGPT, shared by internet users, notably on Twitter, show a kind of omniscient machine capable of explaining scientific concepts, writing a theater scene, drafting a university essay… or even lines of computer code.
“Its answer to the question ‘What to do if someone has a heart attack?’ was very clear and relevant,” Claude de Loupy, head of Syllabs, a French company specializing in automatic text generation, told AFP.
“When you start asking very specific questions, ChatGPT’s answers can be off the mark,” but its overall performance remains “really impressive,” with a “very high language level,” he believes.
OpenAI, a company co-founded by Elon Musk in San Francisco in 2015 — Tesla’s boss left the company in 2018 — received $1 billion from Microsoft in 2019.
It is best known for two automated creation programs: GPT-3 for text generation and DALL-E for image generation.
Claude de Loupy explains that ChatGPT can ask its interlocutor for clarification and is “less delusional” than GPT-3, which, despite its prowess, can produce completely incoherent results.
Cicero
“A few years ago, chatbots had the vocabulary of a dictionary and the memory of a goldfish. Today, they are much better at responding consistently based on the history of requests and responses. They’re not goldfish anymore,” says Sean McGregor, a researcher who compiles AI-related incidents in a database.
Like other programs based on deep learning, ChatGPT has a major weakness: “it doesn’t have access to semantics,” recalls Claude de Loupy. The software cannot justify its choices, that is, explain why it assembled the words that make up its answers.
However, AI-based technologies that can communicate are increasingly able to give the impression that they are actually thinking.
Meta (Facebook) researchers recently developed a computer program named Cicero, after the Roman statesman.
The software has proven itself at Diplomacy, a board game that requires negotiation skills.
“If it doesn’t talk like a real person — showing empathy, building relationships and speaking knowledgeably about the game — it can’t form alliances with other players,” a statement from the social media giant said.
Character.ai, a start-up founded by former Google engineers, put an experimental chatbot online in October that can take on any personality. Users create characters based on a brief description and can then “converse” with a fake Sherlock Holmes, Socrates or Donald Trump.
“Simple machine”
The idea that these technologies could be misused to deceive humans, for example by spreading false information or creating increasingly credible scams, intrigues but also worries many observers.
What does ChatGPT think of this? “There are potential risks in building highly sophisticated chatbots. […] People may think they are interacting with a real person,” the chatbot admits when questioned by AFP on the issue.
Companies therefore put safeguards in place to prevent abuse.
On the homepage, OpenAI clarifies that the conversational agent can “produce false information” or “harmful suggestions or biased content.”
And ChatGPT refuses to take sides. “OpenAI made it very difficult to get it to express opinions,” says Sean McGregor.
The researcher asked the chatbot to write a poem about an ethical issue. “I’m just a machine, a tool at your disposal / I don’t have the power to judge or make decisions […],” the computer answered.
“It’s interesting to see people wondering if AI systems should behave the way users want them to or the creators intended them to,” Sam Altman, co-founder and boss of OpenAI, tweeted on Saturday.
“The debate over what values to give these systems is one of the most important conversations a society can have,” he added.