
ChatGPT: intelligent, stupid or downright dangerous?

Children behind a computer: “There’s no age limit to talk to me,” says ChatGPT if asked. Children (and adults) should still not take everything it says as gospel. Copyright 2023 The Associated Press. All Rights Reserved.

It has an answer for everything, and it talks like a book. OpenAI’s chatbot, ChatGPT, is the poster child of a new era of artificial intelligence (AI). Yet it is far from resembling a human brain, and there is no legal framework to regulate it, experts say.

Who still hasn’t heard of ChatGPT? Since it was launched four months ago, print media in Switzerland alone have devoted an average of ten articles a day to it (as tracked by the news aggregator smd.ch). Add in radio, television, news websites and social media, and rarely has a product enjoyed such publicity at launch – without paying a penny.

At first, its praises were trumpeted, but misgivings soon came to the fore as the machine revealed its pitfalls and hazards, regarding not only the reliability of the information but also the security of users’ data.

Since March 29, thousands of tech experts have been calling on businesses and governments to freeze development of AI for six months, pointing to “major risks for humanity”. One of them is Steve Wozniak, co-founder of Apple. Another is Elon Musk, an early investor in OpenAI, the company behind ChatGPT.

Three days later, the Italian national data protection authority decided to block access to the chatbot. The agency criticises ChatGPT for harvesting and storing users’ information as training input for its algorithms without a legal basis. It has given OpenAI 20 days to report what action it has taken to remedy the situation or face a fine of up to €20 million (CHF19.7 million).


El Mahdi El Mhamdi is one of those who criticise the lack of a legal framework for AI. Now a professor of mathematics and data science at the École Polytechnique in Paris, he did his doctorate at the Swiss Federal Institute of Technology Lausanne (EPFL) under the supervision of Rachid Guerraoui. Both names will crop up again later in this article.

Is the chatbot really that dangerous? It doesn’t give that impression, with its minimalist interface and its unflappable politeness, already familiar from interaction with Siri, Cortana, OK Google and other such synthetic voices.

To grasp what’s at stake, you need to understand what this machine is – and even more, what it is not.

Not an electronic brain

Ask it directly and ChatGPT makes no secret of what it is: “as a computer programme, I am very different from a human brain”, it says. It goes on to explain that it can process huge amounts of data faster than a human being and that its memory never forgets, but that it lacks emotional intelligence, self-awareness, inspiration, creative thinking and independent decision-making ability.


The architecture of AI actually has nothing to do with that of the brain, as explained in a recent book, A Thousand Brains, by Jeff Hawkins, the American computer scientist who co-created the Palm, the 1990s handheld personal assistant that preceded the smartphone. Hawkins, who now works in neuroscience, heads the AI company Numenta.

One of the core ideas of his book is that the brain creates reference frames: hundreds of millions of “maps” of everything we know, which it keeps adjusting based on our sensory input. AI has no eyes or ears; it depends on the information fed to it, which stays fixed and doesn’t evolve.

What’s a cat?

Hawkins gives simple examples of what he means. An AI labelling pictures can recognise a cat. But it doesn’t know that a cat is an animal, that it has paws and a tail, that some humans prefer cats to dogs, that cats purr or that they shed hair. In other words, the machine knows less about cats than a five-year-old does.


Why? Because the five-year-old has seen cats, petted them, listened to them purr, and all this information is part of the “map” for a cat stored in the child’s brain. A conversational robot like ChatGPT only knows sets of words and the probability of them occurring together.
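
To make that point concrete, here is a deliberately tiny sketch in Python of the statistical idea at the heart of such chatbots: a table recording which words tend to follow which. The corpus and probabilities are invented for this illustration; ChatGPT does this with a neural network over billions of parameters, but the principle – co-occurrence statistics, not understanding – is the same.

```python
# Toy bigram model: the statistical skeleton of a chatbot's "knowledge".
# The corpus is invented for this illustration.
from collections import Counter, defaultdict

corpus = (
    "the cat purrs . the cat has a tail . "
    "the dog barks . the cat chases the dog ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(word):
    """Probability of each word seen immediately after `word` in the corpus."""
    counts = following[word]
    total = sum(counts.values())
    return {w: round(c / total, 2) for w, c in counts.items()}

print(next_word_probs("cat"))
# -> {'purrs': 0.33, 'has': 0.33, 'chases': 0.33}
# The model "knows" which words co-occur with "cat" -- nothing about
# animals, paws or purring.
```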

The limits of AI as it is developing today were foreseen 70 years ago by Alan Turing, the mathematical genius who originated the whole idea of computing machinery. In his 1950 article “Computing Machinery and Intelligence”, Turing saw that to build a machine that could think, it would not be enough to programme it to make deductions from masses of data. It would also need to reason by induction, that is, to generalise from particular cases. AI is still far from being able to do that.

What’s the source?

Rachid Guerraoui heads the Distributed Computing Laboratory at EPFL. With his colleague Lê Nguyên Hoang, who produces the YouTube channel “Science4All”, he published a book called Turing à la plage (Turing on the beach) in 2020. The publisher promised that this book would “make you see your computer differently”.

For Guerraoui, one of the major risks with ChatGPT is putting too much trust in what it says. “Most of the time, what it tells you is true, or at least it’s so well written that it sounds true. But it regularly makes big mistakes. So there’s no point in trusting it blindly.” Yet not everyone is likely to have the critical thinking skills to question what the machine tells them. Especially when it tells them so clearly, and with good grammar and spelling too.

Since its launch, ChatGPT has been said to be able to write anything, from a book summary to a film script, from a marketing slogan to a computer programme – even poems. Asked by SWI swissinfo.ch to “write a poem about Switzerland in the style of Philip Larkin”, it instantly produced the following: six stanzas of rhyming couplets in iambic tetrameter which, while admittedly stuffed with clichés, are definitely Larkinesque. Impressed?

In Switzerland, the mountains rise,
To meet the pale and wintry skies,
A land of snow and chocolate bliss,
Where all is neat, and nothing amiss.

The streets are clean, the trains run on time,
And life is lived in perfect rhyme,
A place where order reigns supreme,
And all is calm, a perfect dream.

Yet in this land of clockwork grace,
There lies a melancholic trace,
A sense of something left behind,
A longing for a different kind.

For though the Alps are vast and grand,
And chocolate sweet and cheese is grand,
The heart still yearns for something more,
A world beyond the Swiss front door.

And so we gaze upon the peaks,
And wonder what our future seeks,
A world beyond the Swiss facade,
A life that’s more than just a job.

For in this land of ordered life,
We long for something more than strife,
A world of chaos, love and pain,
Where life is lived with might and main.

“Another danger I see is that it will let people dodge responsibility,” Guerraoui says. “Even corporations will be using it. But what is the source, and who is responsible if the information is wrong or causes problems somewhere down the line? That’s not at all clear.”

Does Guerraoui expect to see AI replace journalists, writers and teachers, as is often predicted? Not yet, he says, but he can imagine that “some jobs will change. The academic or journalist will be more engaged in checking facts and corroborating sources, because the machine will produce plausible-sounding text which is true only most of the time. Everything it says will have to be verified.”

Urgent need for regulation

“The big challenge now for AI is not performance, but governance, regulation and trustworthiness,” says El Mahdi El Mhamdi.

In 2019 he published – also with Lê Nguyên Hoang – Le fabuleux chantier (The fabulous building site), a book about the dangers of so-called “recommendation algorithms”, which social media use to display content likely to interest particular users based on their existing profiles. El Mhamdi did not have to wait for the advent of ChatGPT to criticise the impact of these algorithms on “the information chaos in our society”.
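
As an illustration of what such an algorithm does, here is a minimal Python sketch: each post and each user profile is reduced to a vector of interest scores, and posts are ranked by how well they match the profile. The feature names, numbers and posts are all invented for this example; real systems learn far richer representations, but the ranking logic is the same in spirit.

```python
# Toy recommendation algorithm: rank posts by how well their features
# match a user's interest profile. Feature order (invented for this
# example): [politics, sports, outrage].

posts = {
    "calm policy analysis":  [0.9, 0.0, 0.1],
    "match highlights":      [0.0, 1.0, 0.0],
    "inflammatory hot take": [0.6, 0.0, 0.9],
}

user_profile = [0.7, 0.1, 0.8]  # inferred from the user's past clicks

def score(features, profile):
    """Dot product: how closely a post matches what the user engaged with."""
    return sum(f * p for f, p in zip(features, profile))

ranked = sorted(posts, key=lambda p: score(posts[p], user_profile), reverse=True)
print(ranked)
# -> ['inflammatory hot take', 'calm policy analysis', 'match highlights']
# A profile shaped by outrage gets served more outrage: the feedback
# loop behind the "information chaos" the book criticises.
```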


“In my view, ChatGPT is not just overrated but is being deployed too soon in an irresponsible and dangerous manner,” he says. “When I see the uncritical admiration for this tool, even from some of my colleagues, I wonder if we’re living on the same planet.” He recalls the Cambridge Analytica data-gathering scandal and the proliferation of spying software like Pegasus, which can be loaded surreptitiously onto mobile phones.

El Mhamdi concedes that ChatGPT could be a useful work tool, but he says that the science on which it is based “is the result of work by thousands of researchers over the past decade, as well as huge engineering resources, not to mention the ethically dubious labour of an army of underpaid workers in Kenya” (see box below).

Ultimately, he thinks, “the real genius of OpenAI lies not in the science behind ChatGPT, but in the marketing, which has been delegated to users already enthusiastic for this kind of gadget”. Users, yes, but the media too, to get back to the starting point of this article. Everybody is talking about ChatGPT, but have you ever actually seen an advert for it?

Teaching the machine the worst of humanity

ChatGPT sees no difference between swallows flying over a field of tulips and the rape of a child by their father. For the chatbot, these situations are just information – like pain for the Terminator. Designers of AI have to teach the machine (which has no values, conscience or morality) what is acceptable and what is not. To do this, it has to be shown examples of the most dreadful things human beings are capable of, so that it can recognise and filter them out.
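
To illustrate the mechanism this box describes – humans label examples, and the machine uses those labels to screen content – here is a deliberately crude Python sketch. It is hypothetical: real moderation systems are statistical classifiers trained on millions of labels, not word lists, and the example strings below are innocuous stand-ins.

```python
# Toy content filter built from human-labelled examples. Hypothetical and
# deliberately crude; the strings are innocuous stand-ins for the kind of
# material described in this box.

labelled_examples = [
    ("a gentle story about birds in a garden", "safe"),
    ("graphic description of violence", "toxic"),
    ("recipe for a bowl of vegetable soup", "safe"),
    ("detailed account of abuse", "toxic"),
]

# "Training": keep the words that appear only in toxic examples.
toxic_words = {w for text, tag in labelled_examples if tag == "toxic"
               for w in text.split()}
safe_words = {w for text, tag in labelled_examples if tag == "safe"
              for w in text.split()}
toxic_markers = toxic_words - safe_words

def screen(text):
    """Block any text containing a word seen only in toxic training data."""
    return "blocked" if toxic_markers & set(text.split()) else "allowed"

print(screen("a story about violence"))  # -> blocked
print(screen("a story about gardens"))   # -> allowed
```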

In January 2023 Time magazine revealed how OpenAI had outsourced the problem. Beginning in November 2021, it sent tens of thousands of snippets of the most toxic text found on the internet to a California company called Sama, which had them analysed by its staff in Nairobi, Kenya. There, about 30 people spent nine hours a day reading and labelling descriptions of rape, torture, bestiality and paedophilia, set out with clinical precision. They were paid around $2 an hour.

Since no one could remain unaffected by such horrors, Sama, which bills itself as an “ethical” company, provided psychological counselling for these workers. But it was not enough, according to former employees who spoke anonymously to Time. The contract between OpenAI and Sama has since been cancelled.

ChatGPT, of course, keeps no trace of this inglorious side of its origins. However you ask the question, the reply is always “sorry, I am just a language model, I don’t know anything about this…, I am not able…, I cannot provide any information…, I do not have the capability…”

Edited by Sabrina Weiss. Translated from French by Terence MacNamee/ts
