“Artificial intelligence will be used to interfere in elections”

Barcelona | 10/10/2023, 07:07 CEST


AI regulation remains an unfinished business for many nations around the world.

On May 16, half a year after the ChatGPT phenomenon broke out, the US Congress held a session to address the future regulation of artificial intelligence (AI). The event brought together industry leaders such as Sam Altman, founder of OpenAI, but also one of the most prominent critics of the use of this technology, the cognitive neuroscientist Gary Marcus.

The 53-year-old American, professor emeritus at New York University, is one of the most influential figures in the field of AI. After more than two decades of research, he has published key works and dozens of scientific studies, and founded two companies, one of which he sold to Uber.

This Monday, Marcus attended the Parliament of Catalonia to take part in the EPTA Conference, a meeting to debate the opportunities, challenges and risks of AI and how to address them through public policy. EL PERIÓDICO, of the Prensa Ibérica group, spoke with him.

We call it AI, but is it really intelligence?

It depends on how you define intelligence. I would say it’s about being able to solve many different types of problems flexibly. Machines are reasonably good at finding information, they can play chess very well and do math better than people. But they don’t really understand how the world works, they aren’t very logical, and when faced with something new they often break down. They are very limited.

Last year you wrote a paper saying that AI is stagnating. Why can't we trust large language models like ChatGPT?

Because they do not have the ability to verify facts. One of those systems said that Elon Musk had died in a car accident. For a human it is easy to prove that this is not true, because he is on Twitter every day. They cannot reason or think; they only create that illusion because they reproduce phrases that other people have said.

“ChatGPT cannot reason or think, it only creates that illusion because it reproduces other people’s phrases”

Like parrots?

It’s an acceptable comparison, but not perfect. Parrots can learn a bit of language and relate it to objects in the world around them, and they can solve certain problems they have never seen before. These systems cannot.

ChatGPT and others can present misinformation as if it were true. Will large-scale disinformation accelerate?

Misinformation is the biggest threat posed by AI. These systems frequently make mistakes, but they never hesitate and respond with absolute confidence. If they don’t know something, they make it up. The most famous case was when a system invented a claim that a law professor had committed sexual harassment and cited as evidence an article in ‘The Washington Post’ that did not exist. This is how you can accidentally slander, defame and destroy someone’s reputation.

On the other hand, there is deliberate disinformation. When I appeared before Congress, I asked the system to make up a story about how the US senators I was meeting with were part of a conspiracy with aliens to raise the price of fuel so that humanity would never explore the universe. It created an entire narrative, with made-up quotes from Musk and from a Yale professor who didn’t exist. Disinformers don’t care if their hoaxes aren’t perfect. If only 20% of people believe them, that can give enormous strength to those who want to disrupt an election.

So it poses a threat to democratic processes.

My most immediate concern is what will happen in the nearly 70 elections taking place in 2024. It is very likely that fake videos of Joe Biden falling down stairs or saying something stupid will be created to try to rig the election, and this will happen in many or all of those races. Using AI doesn’t require much expertise, so bad actors will use it to disrupt elections.

By generating false but credible information, models like ChatGPT are poisoning the information that web search engines return. Will it go further?

This is an immense problem. What should worry Google is not its future as a search engine, but that the quality of the internet will decline. And that is already happening, for multiple reasons. One is that ChatGPT, Bard and others often make things up, and those falsehoods leak out and are incorporated into other systems. There are also people using AI to churn out books and guides to make money. Cory Doctorow has described that loss of information quality as ‘enshittification’. This kind of garbage is polluting the web, and it will only get worse.

Has ChatGPT’s potential been overstated?

Absolutely. It was a fad: its use skyrocketed, but it has since declined, and some companies have even banned it. Besides, the reality is that these companies don’t make nearly as much money as people think. It was thought that AI-based search would be the big economic driver, but people are realizing that it doesn’t actually work that well because it invents so many things.

Companies have launched products into society that make mistakes. What incentives do they have to develop systems we can trust?

That’s why we need governments to intervene. Companies have realized that they basically have the power to launch anything, no matter how risky it may be. That’s dangerous. We have no power to stop them, other than some after-the-fact lawsuits. That is what pushed me to move my career from research to policy, to educate governments about what we can do. If you want to deploy something to 100 million people, you first have to demonstrate that the benefits outweigh the risks.

“Companies throw anything onto the market, no matter how risky, and we have no power to stop them”

Companies like OpenAI talk about Artificial General Intelligence (AGI), which would equal or exceed human intelligence. That Terminator idea is science fiction, but how far are we from it?

It is unlikely that we will have a Terminator scenario where machines alone decide to take control. I’m not sure we’ll see it, although it’s good that there are people studying that.

I think we should be more concerned about the biases that AI can carry, the mistakes it makes, and how bad actors can already use it to interfere in elections, manipulate markets or invent new weapons. Also because more and more people are hooked on these things and believe that the systems understand them. And that has many risks. On one occasion, a chatbot encouraged someone to commit suicide.

The cognitive neuroscientist and AI researcher Gary Marcus. | Parliament of Catalonia / Jordi Garcia Monte

So, is the fact that experts and large companies talk about “non-human minds that could replace us” a diversionary tactic to avoid discussing more real problems, such as the concentration of power?

It’s very possible that this is the case, but I don’t know what their motivations are. Some people may truly think that AGI is close, and others believe that saying so will increase their company’s valuation. The reality is that we are no closer to machines having intelligent purpose or to being able to trust them. It’s easier to worry about science-fiction scenarios than to face the fact that misinformation is already here and we don’t have a good answer to stop it. If I ran a large company, I would want you to think about the abstract problems and not about the concrete, immediate ones that I cannot solve.

“It is easier to worry about science-fiction scenarios and avoid talking about other dangers, such as misinformation, which is already here and which we do not know how to counter”

AI lives on data mining. Does its use expand surveillance as a business model?

We have grown used to everyone, for whatever reason, agreeing to give up their right to privacy. Many people are putting all kinds of data into ChatGPT, and that will create a lot of security problems. When I appeared before the Senate, Sam Altman, the founder of OpenAI, said they won’t use that data to sell ads, but that doesn’t mean they won’t use it in other ways. Using these tools means handing a lot of information to the companies behind them.

You are in favor of creating a global organization to supervise AI, as already happens with nuclear energy. Is the law the EU is preparing the model to follow?

In general, yes. I like it quite a bit, but it is not law yet, and it could still fail to pass or be weakened. Even if it’s not perfect, it is one of the best attempts there is. I like that it sets obligations around transparency. We can have hope.

“Even if it is not perfect, the AI law being prepared by the EU is one of the best attempts there is. We can have hope”

Now big tech companies say they want some regulation, but they have launched lobbying campaigns to limit the law, and Altman even threatened to pull ChatGPT out of the EU. Is this a public-relations strategy?

Partly, yes; they want to appear to care. They actually see some value in regulation, but they want to shape it in a way that protects them and keeps other companies out. The worst thing we can do is have no regulation; the second worst is regulation dictated by companies.

We need transparency around the data being used so that artists and writers can be compensated, so that sources of bias can be detected and mitigated, and so that scientists can understand what these systems do. Governments are going to have to insist on this and will have to penalize non-compliance because companies will not do it voluntarily.

Some people have good intentions and some don’t. But companies play the lobbying game: they have a lot of money, and they are trying to tip the balance so that regulation favors them. We can’t let that happen. It really bothers me every time a government official gets together with a bunch of big-tech leaders and takes photos. That sends the wrong message. We need independent scientists and ethicists in these meetings, not just companies and governments behind closed doors.

Even so, much of the scientific research in this field is financially controlled by giants like Google or Meta…

It is definitely a problem that there is not enough independent funding and that so much of the money comes from companies, because they set the agenda. The economics hugely favor industry, and academics who do not accept that money find it harder to compete. The entire agenda of science has been eclipsed by economics. The fact that the personalized ads you can build with AI are so financially profitable turned Google and Meta into huge companies, and that has distorted research.
