The dark side of artificial intelligence

Photo: NBC

Widely discussed online and in the media, Jennifer DeStefano’s story is shocking from the very start. She recounts how a man called and told her that her daughter, who was away at a ski competition, was with him, and that if Jennifer told anyone, he would drug the girl, drop her off in Mexico, and her mother would never see her again.

«You need to pay $1 million if you want to see her again,» the voice on the phone threatened.

Jennifer agreed to pay the man $50,000: «I was told that if I didn’t bring the money in cash, I would be taken to Mexico and left there dead along with my daughter. I had no doubts; every cell in me seemed to be screaming that I had to do anything to save my child’s life,» she recalls.

There is no telling how far the story would have gone had one of the women near Jennifer not called 911 and been told that it looked like a scam involving artificial intelligence. DeStefano then reached her daughter, who turned out to be fine.

The case shocked the American public and even drew the attention of lawmakers. Still, there is little to be encouraged about: anonymization tools let scammers hide their location (especially since they usually operate from abroad), and artificial intelligence makes their schemes all the more convincing.

A survey conducted by the computer security software company McAfee found that 70% of respondents were not confident they could tell the difference between a cloned voice and a real one.

The company also said that it takes only three seconds of recorded audio to reproduce a human voice using artificial intelligence.
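To give a sense of how low the barrier has become, here is a minimal sketch of zero-shot voice cloning in Python. It assumes the open-source Coqui TTS package and its XTTS v2 model, both illustrative choices on our part; the McAfee report does not name any particular tool, and the file reference.wav stands in for a short sample of someone’s voice.

```python
# Minimal voice-cloning sketch. Assumes the open-source Coqui TTS
# package (pip install TTS); the model choice and file names are
# illustrative, not taken from the McAfee report.
from TTS.api import TTS

# Load a multilingual model capable of zero-shot voice cloning.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# A few seconds of recorded speech is all the "training data" needed.
tts.tts_to_file(
    text="This is a demonstration of a cloned voice.",
    speaker_wav="reference.wav",  # hypothetical short sample of the target voice
    language="en",
    file_path="cloned_voice.wav",
)
```

That a handful of lines and a few seconds of audio can suffice is precisely what makes scams like the one described above so easy to mount.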

Beyond the material and moral damage done when fraudsters wield it, AI also harms the environment, and that risks earning it bitter and implacable enemies among the numerous eco-activists around the world.

The reason is that generative AI tools run on GPUs: complex computer chips that can handle the billions of calculations per second required to run applications like ChatGPT, Google Bard, and many others.
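A rough back-of-envelope estimate makes it clear how quickly the electricity use adds up. Every figure below is an illustrative assumption rather than a measurement of any specific service:

```python
# Back-of-envelope estimate of a GPU inference cluster's energy use.
# Every number here is an illustrative assumption, not a measurement
# of ChatGPT, Google Bard, or any other specific service.

GPU_POWER_KW = 0.4       # assumed draw of one data-center GPU (~400 W)
NUM_GPUS = 10_000        # assumed cluster size for a popular service
HOURS_PER_DAY = 24       # traffic is served around the clock
CO2_KG_PER_KWH = 0.4     # assumed grid carbon intensity

daily_kwh = GPU_POWER_KW * NUM_GPUS * HOURS_PER_DAY
daily_co2_tonnes = daily_kwh * CO2_KG_PER_KWH / 1000

print(f"Energy per day: {daily_kwh:,.0f} kWh")         # 96,000 kWh
print(f"CO2 per day: {daily_co2_tonnes:,.1f} tonnes")  # 38.4 tonnes
```

Even under these rough assumptions, a single cluster burns through as much electricity in a day as several thousand households.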

Technology experts are looking for ways to power artificial intelligence without enormous energy consumption, but there are no ready-made solutions yet.

A study published in July of this year notes that many of the workarounds proposed so far still trade AI performance for a gentler environmental impact.

This puts the AI sector in a difficult position. Users are already complaining that generative AI tools such as ChatGPT are getting worse. It is hard to say whether this reflects subjective impressions or a real degradation, but either way it indirectly points to a slowdown in the technology’s development.

Large companies are also concerned about the use of artificial intelligence by their employees.

It’s no secret that many people use AI, ChatGPT in particular, for routine work tasks, and that is exactly what worries employers. In the US, Microsoft and Google in particular have already restricted its use in the workplace.

This is because AI can reproduce data it absorbed during training, which creates a potential privacy risk. Users generally do not understand where a neural network’s output comes from or how the data they enter is used; meanwhile, many employees rely on free versions that carry no formal contractual obligations to the user at all.
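Companies that do allow such tools often try to reduce the risk by filtering prompts before they leave the corporate network. The sketch below illustrates that idea in its simplest form; the regex patterns are our own illustrative assumptions, and real data-loss-prevention filters are far more thorough:

```python
# Minimal sketch of scrubbing sensitive data from a prompt before it is
# sent to a third-party AI service. The patterns below are illustrative
# assumptions; production systems use much more thorough PII/DLP filters.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),       # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD_NUMBER]"),  # card-like digit runs
]

def scrub(prompt: str) -> str:
    """Replace obviously sensitive substrings before the prompt leaves the network."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(scrub("Summarize the call with jane.doe@example.com, card 4111 1111 1111 1111."))
# -> "Summarize the call with [EMAIL], card [CARD_NUMBER]."
```

The same idea, scaled up, is what dedicated gateways sitting between employees and external AI services attempt to do.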

Employees of large companies interviewed by Reuters cite numerous examples of neural networks in use, suggesting that artificial intelligence has, in effect, reached every area of business. A Tinder employee in the US, for example, said his colleagues use ChatGPT for «innocuous tasks» such as writing emails, even though the company doesn’t officially allow it. Many employees of large companies also use AI to transcribe meeting audio, monitor media, and analyze documentation.

Not everyone thinks about privacy, though there are exceptions: a Coca-Cola spokesperson in Atlanta, Georgia, acknowledged that employees use AI but noted that the data stays within the company’s firewall. «Internally, we recently launched our enterprise version of Coca-Cola ChatGPT to improve productivity,» the spokesperson said, adding that Coca-Cola plans to use AI to make its teams more efficient and productive.

Nevertheless, for now, it seems that AI poses a serious threat to corporate privacy, and there is no general solution to this problem yet.

Serious regulation might help, but lawmakers are understandably reluctant to impose it without fully understanding the technology. Yet there is no time for hesitation: events like those described in this article show what a poorly understood and poorly managed technology is already capable of.

One way or another, life will force states and big companies to draw their conclusions quickly; the question is what, in this situation, will become the critical turning point.