Artificial intelligence created a mind trap for humans: scientists warned of the danger

AI algorithms limit people’s access to opposing points of view

The widespread use of artificial intelligence algorithms to recommend content and products based on users' previous online activity has given rise to new phenomena on social media, such as echo chambers and information cocoons. AI algorithms create mind traps for netizens, encouraging them to read only content that resonates with their existing views on life, politics, and other topics.
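To make the mechanism concrete, here is a minimal sketch of similarity-based recommendation, the kind of matching such systems commonly rely on. This is a hypothetical illustration, not the algorithm examined in the study: the vector representation, the `recommend` function, and the top-k selection are all assumptions made for the example.

```python
import numpy as np

def recommend(user_profile: np.ndarray, items: np.ndarray, k: int = 5) -> np.ndarray:
    """Return the indices of the k items most similar to the user's profile.

    Hypothetical sketch: users and items are vectors over the same topic
    dimensions, and the recommender ranks items by cosine similarity. The
    closer a profile drifts toward one topic, the more the top-k list is
    drawn from that topic alone.
    """
    norms = np.linalg.norm(items, axis=1) * np.linalg.norm(user_profile)
    scores = items @ user_profile / np.where(norms == 0, 1.0, norms)
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(0)
items = rng.random((100, 8))       # 100 items over 8 topic dimensions
profile = np.eye(8)[0]             # a user interested in topic 0 only
print(recommend(profile, items))   # indices of the items closest to topic 0
```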

This is stated in a study by a group of scientists led by Professor Yong Li from Tsinghua University (China). Their work was published in the journal Nature Machine Intelligence.

The researchers studied how so-called information cocoons form: environments in which users encounter only opinions, and other users, that echo their own views. This can pose a serious danger to the development of critical thinking.

The first author of the paper, Jinghua Piao, noted that the widespread adoption of AI-driven algorithms poses new challenges, such as reducing exposure to ideologically diverse news, opinions, political views, and friends. He added that such technology isolates people from diverse information and eventually “locks them into one topic or point of view.”

This, according to the researchers, can have far-reaching negative consequences: such cocoons can reinforce prejudice and polarization in society, hinder personal growth, creativity, and innovation, amplify disinformation, and impede efforts to build a more inclusive world.

“The concept of information cocoons is used to describe a widely observed phenomenon, in which people are isolated from a variety of information and eventually become trapped in one topic or point of view,” the scientist emphasized.

At the same time, the researchers argue that it would not be fair to blame either AI algorithms or people specifically, as such information cocoons arise from complex interactions and information exchange between different actors.

Such interactions are divided into four components: similarity-based selection of content, positive feedback, negative feedback, and users' self-exploration.

Scientists believe that persistent information cocoons form when there is an imbalance between positive and negative feedback, as well as when similarity-based selection is continuously reinforced.
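That dynamic can be illustrated with a toy simulation, a rough sketch under assumed update rules rather than the model from the Nature Machine Intelligence paper. A profile over topics is repeatedly sampled to decide what the user sees; likes strengthen the shown topic, dislikes damp it, and the entropy of the final profile measures how diverse the user's information diet remains. The function name, step sizes, and parameters are all illustrative assumptions.

```python
import numpy as np

def simulate_recommendations(steps: int = 2000, n_topics: int = 20,
                             like_rate: float = 0.9,
                             dislike_damping: float = 0.0,
                             explore_rate: float = 0.0,
                             seed: int = 0) -> float:
    """Toy recommend-consume-update loop; returns the final profile entropy.

    Each step the 'algorithm' picks a topic in proportion to the user's
    profile (similarity-based selection), unless the user self-explores and
    a topic is drawn uniformly instead. A like multiplies the topic's weight
    up; a dislike damps it by `dislike_damping`. Low final entropy means the
    profile has narrowed into an information cocoon.
    """
    rng = np.random.default_rng(seed)
    weights = np.ones(n_topics)                      # uniform starting interests
    for _ in range(steps):
        p = weights / weights.sum()
        if rng.random() < explore_rate:
            topic = rng.integers(n_topics)           # self-exploration
        else:
            topic = rng.choice(n_topics, p=p)        # similarity-based pick
        if rng.random() < like_rate:
            weights[topic] *= 1.10                   # positive feedback
        else:
            # negative feedback; the floor keeps weights strictly positive
            weights[topic] = max(weights[topic] * (1.0 - dislike_damping), 1e-12)
    p = weights / weights.sum()
    return float(-(p * np.log(p)).sum())             # Shannon entropy of interests
```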

To avoid this mind trap, the scientists advise users not only to like content they enjoy but also to react to content they dislike, so that the algorithms can form a more balanced picture of their interests. They also urge netizens to engage in self-exploration more often, since viewing new information shapes the algorithm's future recommendations.
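In the toy model above, that advice corresponds to turning on the negative-feedback and exploration parameters. The exact numbers are arbitrary; the contrast between the two runs is the point.

```python
# Likes only, no exploration: interests tend to collapse onto a few topics
# (low entropy in this toy model).
print(simulate_recommendations(dislike_damping=0.0, explore_rate=0.0))

# Disliking mismatches and occasionally exploring tends to keep entropy
# higher, i.e. the simulated profile stays more diverse.
print(simulate_recommendations(dislike_damping=0.6, explore_rate=0.1))
```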

Earlier, OBOZ.UA reported that scientists found the “kryptonite” of artificial intelligence that drives it crazy.
