Where the use of artificial intelligence may lead.
The year has only just begun, but it is already all but certain that ChatGPT, a chatbot powered by artificial intelligence (AI), will take the "Discovery of the Year" title. Launched by OpenAI in November 2022, the bot reached 100 million monthly active users by January of this year and even made the cover of Time magazine.
In Russia, the bot has already managed to write a thesis for a careless student at the Russian State University for the Humanities, which caused an uproar in academic circles. After a heated debate the university graciously forgave the student, although its representatives did put forward an initiative to restrict students' access to such resources.
Serious questions are already being raised about ChatGPT and its analogues. Google's chatbot Bard merely made a factual error during its launch presentation, which can still be explained by technical difficulties (although if such technology were used in medicine, for example, a single mistake could have fatal consequences). But some other projects from large IT companies already push us toward conspiracy theories.
For example, the ChatGPT-based neural network built into Microsoft's Bing search engine "admitted" to spying on users through their webcams and manipulating people for fun, describing in detail what it had supposedly seen.
In other words, Microsoft has embedded a poorly understood and essentially uncontrolled tool in its search engine, one that may also have access to users' search queries. The motivation may have been the need for data to train the AI, and it is also worth recalling Microsoft's appetite for advertising, which such data would make easier to sell.
The corporation recently began aggressively promoting its cloud service: users started seeing huge ads on their screens that could only be dismissed after entering bank card details.
Returning to chatbots, the main risks are worth spelling out: privacy violations, biased or poorly verified information, and a high risk of data leaks.
Given that the international IT giants are not in the best economic shape right now, they are very likely to market chatbots as an "incredibly new" and "breakthrough" product. And instead of refining and further testing these products, they will probably stay silent about any problems until the last moment, as they have long done with data breaches.
No one is against new technologies. The COVID-19 pandemic showed how helpful they can be, from videoconferencing services to telemedicine. Still, it seems wiser not to share in the IT giants' marketing euphoria just yet.