Russian expert explains how neural networks are dangerous to human reputation

MONews

Russian IT expert Rodion Kadyrov said neural networks can generate fake data.

Last week, the ChatGPT chatbot included law professor Jonathan Turley in a list of lawyers allegedly involved in illegal conduct. The accusations were false, and the claim was attributed to a Washington Post article that does not exist. Russian IT expert Rodion Kadyrov spoke to Gazeta about the misinformation produced by neural networks.

The expert noted that the lawyer’s case was “not a simple incident” but “an example of how modern language models, such as OpenAI’s ChatGPT, can generate false information that endangers people’s reputations and even lives.” What is particularly striking, Kadyrov noted, is that such “hallucinations” are normal for neural networks and stem from the way these models work.

Experts explain that these failures occur when the AI model starts to “synthesize” information in its attempt to answer every request. This happens when a prompt calls for a certain amount of information and the neural network does not have enough real data to supply it, so it fills the gap with invented details.

“For example, if you ask a neural network to name ten experts, it may make up seven of the names even though it actually knows only three,” the expert explained.

The problem with this kind of misinformation isn’t limited to text. In March, images showing former US President Donald Trump being arrested circulated widely; the fake pictures had been generated with artificial intelligence and then spread on social media.

The expert also pointed to the lack of effective moderation on social media platforms and called for stricter standards and regulation of AI in such sensitive areas. He urged people to look closely at the quality and detail of content, since fakes are often of lower quality than the originals.
