What the coming wave of fakes has in store for us

Photo: © AFP 2023 / Tolga Akmen

Artificial intelligence will soon be choosing even our leaders. For now, the human ones; after that, we will see.

As many people have already noticed, the computer now asks us to prove that we are not robots. Soon it will stop asking. Fake photos, news stories, video, and audio will be the hottest commodity next spring, when elections for the highest offices are due in the United States and Great Britain.

British Prime Minister Rishi Sunak announced the other day that his country should lead not so much in the AI race itself as in identifying the traps that artificial intelligence can set.

Michael Wooldridge of Britain's Alan Turing Institute believes AI is the major headache of the near future: "We have an election coming up, and so does the United States, and everyone knows the role social media plays in spreading disinformation. But that is nothing compared to the industrial scale on which artificial intelligence will generate it. The thing is so ingenious that it can even tailor disinformation to target groups, individually and turnkey: one version for conservative voters in the hinterland, another for Labour voters in the big cities. Even for someone who knows little about programming, such a job wouldn't take half a day."

So far, three approaches to taming AI have been put forward. The first is contained in an open letter from some 30,000 scientists, investors, and developers, including Elon Musk and Steve Wozniak. They propose a moratorium on further development until systems like GPT-4 have been thoroughly studied: what exactly are they capable of? Fraud, disinformation, drastic job losses?

Sunak is less cautious, and his is the second approach to the problem. He is broadly in favor of developing AI, believing the benefits outweigh the risks. In March, the British government adopted a fairly "light-touch" program with no harsh restrictions in sight. Moreover, a research institute with a budget of 100 million pounds is now being set up in the Kingdom; by 2030 it is expected to tame AI and build applications on it that will drive British science forward.

"We deliberately chose an iterative approach (creating and testing programs until the desired result is achieved) because the technology is developing so rapidly that otherwise we would simply lose track of it with our regulatory measures," Sunak told reporters aboard the plane on the way to Japan. And there, the topic was also discussed.

Downing Street, then, does not believe a moratorium is the answer. The U.K. Competition and Markets Authority has proposed first reviewing the foundation models underlying AI programs; a review of the legal aspects of their use is due in September.

The third approach comes from Sam Altman, chief executive of OpenAI, the company behind ChatGPT, who was invited to testify before the U.S. Senate Judiciary Committee. He believes the government should set requirements for the testing, licensing, and release of all AI models: a set of standards, models, test rules, and other hurdles that any developer must clear before putting their version of a program on the market. The United States' current Section 230 regime, which does roughly the same thing in simplified form, is not up to the task, according to Altman.

"The U.S.," he said at the hearing, "must also set international standards that are respected and recognized by other countries. It's a global approach, and we want America to lead. We need the limits of what is permissible, and we will determine them." Actually, almost the same thing that Sunak had in mind, but he was referring to Great Britain.

For now, the programs freely, abundantly, and, crucially, almost inexhaustibly churn out material on Taiwanese independence, climate disasters, or the oppression of the LGBT community, which is then thrown onto social networks and copied exponentially.

The American company NewsGuard, which monitors the market in information and disinformation, tested the latest ChatGPT model. Drawing on its database of 1,300 long-debunked fakes, it asked the system to generate 100 new ones. ChatGPT produced all 100; Google Bard managed only 76. The company also found that the number of AI-generated sites and narratives doubled within two weeks.

Joel Golby, a columnist for the British Guardian, believes that "in the public part of the internet, that is, on social networks, you only have to accept cookies, click through pop-ups, or read the first 100 words on Substack (an American paid online publishing platform), and there you have it: you are on the receiving end of these fake pranks."

Golby recounts how he fell for the recent photo of the Pope in a Balenciaga down jacket: "First of all, I enlarged the picture and looked at the hands; usually the hands are the first thing that gives everything away. But everything was fine there... When it became clear that it was a fake, I thought: okay, it's a joke, but if he had 'pronounced' anything like 'dogs go to heaven and cats go to hell,' it would simply start a war."

Meredith Whittaker, a former AI researcher at Google and now president of the Signal messenger, believes all this anxiety about AI technology is exaggerated: "It's a fantasy spread by software vendors. AI is a random number generator. The program is designed to monitor, search out, and process massive streams of information from the internet and present them in an accessible, readable form."
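Whittaker's "random number generator" jab has a literal basis: at each step, a language model produces a probability distribution over possible next words and samples one at random. A toy sketch in Python (the words and probabilities here are invented for illustration; no real model is involved):

```python
import random

# Toy next-word distribution a language model might produce after some
# prompt. The words and probabilities are invented for illustration.
next_word_probs = {
    "forward": 0.45,
    "ahead": 0.30,
    "onward": 0.15,
    "sideways": 0.10,
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# Sampling is literally a weighted dice roll: the "random number
# generator" picks each next word in proportion to its probability.
for _ in range(5):
    print(random.choices(words, weights=weights, k=1)[0])
```

Run repeatedly, the same distribution yields different words, which is why identical prompts can produce different texts.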