The regulatory race: how the world plans to control artificial intelligence


The rapid development of artificial intelligence (AI) has sparked intense debate over how to regulate it.

There is clearly no common approach yet, and many countries are still studying the issue.

The British regulator tasked with developing new guidelines on the use of AI, for example, is consulting the Alan Turing Institute and other legal and academic institutions to deepen its understanding of the technology.

On May 4, Britain's competition regulator said it would begin studying the impact of AI on consumers, businesses and the economy, as well as the need for new control measures. Officials had previously said that responsibility for AI governance would be divided among the human rights, health, security and competition regulators; there are no plans to create a new body.

The Australian government is also consulting the country's main scientific advisory body on next steps, a representative of the Minister of Industry and Science said in April.

The Italian Data Protection Authority plans a more detailed study of AI and intends to involve independent experts. ChatGPT was previously banned temporarily in the country over regulatory concerns, but it is now available again.

The leaders of the G7 are also studying the issue. At a meeting in Hiroshima on May 20, they recognized the need to control AI and virtual reality technologies and agreed that ministers would discuss these issues and report the results by the end of the year. The countries are expected to adopt common rules on artificial intelligence.

In April, the Irish Data Protection Commission likewise spoke of the need for regulation that avoids rigid mechanisms.

In a number of countries, the relevant authorities have already settled on a regulatory direction. In April, for example, China's cyberspace regulator published draft measures for managing generative artificial intelligence services. The agency said that AI developers would be expected to provide security information before launching new solutions.

The regulator said in a statement that China supports AI developments and encourages the use of secure software, tools and information resources, but that content created by artificial intelligence must be consistent with the country's core socialist values.

The regulator noted that providers would be responsible for the legitimacy of the data used to train AI and should take measures to prevent discrimination both in the design of algorithms and in the training data itself.

The regulator also said that service providers should require users to provide their real identification data.

In case of non-compliance, providers will be fined or even prosecuted.

If a platform has already generated inappropriate content, the company must update the technology within three months to prevent similar content from being produced again, the regulator said in a statement cited by Reuters.

In May of this year, European lawmakers agreed to tighten the draft rules on generative artificial intelligence. The European Parliament is expected to vote on the AI Act as early as June.

Russia is also working to study and regulate artificial intelligence. In April of this year, a representative of the relevant committee of the State Duma announced that the United Russia party was working on a bill to guarantee a safe digital environment for citizens.

The law is expected to cover AI-based solutions as well, but officials stressed that it is not a ban on AI technologies; rather, it lays the groundwork for their development.

An analysis of the current regulatory landscape worldwide shows that most government agencies have taken a wait-and-see attitude, concentrating on safety issues.

This contrasts with statements by prominent figures in the IT industry who have actively called for bans on artificial intelligence or its strict regulation. For example, Geoffrey Hinton, the former Google researcher who worked on neural networks at the company (he is even called the "godfather of neural networks") and who left it over ethical concerns, has openly warned that generative AI could fuel the spread of misinformation. Moreover, the very notion of truth is becoming blurred, the developer notes with dismay.

Alongside these and other problems, the technology holds great positive potential: artificial intelligence, which has long been called the "new Internet," could bring significant dividends, and not only financial ones, to those who learn to apply it competently, from cutting costs to improving people's quality of life.

However, it cannot be ruled out that marketing will greatly overstate its practical impact, as happened with the metaverse, and even the most useful developments can get out of hand. Unfortunately, there is no guarantee that large Western IT companies, which are currently going through hard times, will stop if something conflicts with moral and ethical principles.