Russian expert explains how neural networks threaten people's reputations

Russian IT expert Rodion Kadyrov said neural networks can generate fake data.

Last week, the ChatGPT neural network included law professor Jonathan Turley on a list of lawyers accused of misconduct. The accusation was false, and the information was attributed to a Washington Post article that never existed. Russian IT expert Rodion Kadyrov spoke to the news outlet Gazeta about the misinformation surrounding the use of neural networks.

The expert noted that the lawyer's case was "not an isolated incident" but "an example of how modern language models, such as OpenAI's ChatGPT, can generate false information that endangers people's reputations and even lives." What is particularly striking, Kadyrov noted, is that such "hallucinations" are a normal feature of neural networks, arising from the very way they work.

The expert explained that these failures occur when the AI model starts to "synthesize" information in its attempt to answer every request: when a response calls for a certain amount of information and the neural network lacks the data to supply it, it fills the gap with fabrications.

"For example, if you ask a neural network to name 10 experts, it may invent seven names even though it knows only three," the expert explained.

The problem with this kind of misinformation is not limited to text. In March, images circulated showing former US President Donald Trump being arrested. The fake pictures were created with artificial intelligence and then spread across social media.

The expert also highlighted the lack of effective moderation on social media platforms and called for stricter standards and regulations on the use of AI in such sensitive areas. He urged people to pay attention to the quality and detail of the content they encounter, since fakes are often lower in quality than the originals they imitate.
