
Twitter bots played disproportionate role spreading misinformation in 2016: study




A visualization of the spread of an article falsely claiming that 3 million illegal immigrants voted in the 2016 U.S. presidential election. The links show the article's spread through retweets and quoted tweets, in blue, and through replies and mentions, in red. Credit: Filippo Menczer, Indiana University

Analysis of information shared on Twitter during the 2016 U.S. presidential election has found that automated accounts, or "bots," played a disproportionate role in spreading misinformation online.


The study, conducted by Indiana University researchers and published November 20 in the journal Nature Communications, analyzed 14 million messages and 400,000 articles shared on Twitter between May 2016 and March 2017, a period that spans the end of the 2016 presidential primaries and the presidential inauguration on January 20, 2017.

Among the findings: a mere 6 percent of Twitter accounts that the study identified as bots were enough to spread 31 percent of the "low credibility" information on the network. These accounts were also responsible for 34 percent of all articles shared from low-credibility sources.

The study also found that bots played a major role in promoting low-credibility content in the first few seconds before a story goes viral.

The brevity of this time period, 2 to 10 seconds, highlights the challenge of countering the spread of misinformation online. Similar issues are seen in other complex environments such as the stock market, where serious problems can arise in mere moments due to the impact of high-frequency trading.

"This study finds that bots contribute to spreading information online – as well as showing how quickly these messages can be spread," said Filippo Menczer, a teacher at the School of Informatics , Computer Science and IU Engineering, which led the study.

The analysis also revealed that bots amplify a message's volume and visibility until it is more likely to be shared broadly, despite representing only a small fraction of the accounts that spread viral messages.

"People tend to give more trust in messages that come from many people," said co-author Giovanni Luca Ciampaglia, an assistant research scientist with the IU Network Science Institute at the time of the study. "Bots are prey on this trust by making messages seem so popular that real people are being cheated to spread their messages over them."

Information sources labeled as low-credibility in the study were identified based on their appearance on lists produced by independent third-party organizations of outlets that regularly share false or misleading information. These sources, such as websites with misleading names like "USAToday.com.co," include outlets with both right- and left-leaning points of view.

The researchers also identified other tactics for spreading misinformation with Twitter bots. These included amplifying a single tweet, potentially controlled by a human operator, across hundreds of automated retweets; repeating links in recurring posts; and targeting highly influential accounts.

For example, the study cites a case in which a single account mentioned @realDonaldTrump in 19 separate messages about millions of illegal immigrants casting ballots in the presidential election, a false claim that was also a major administration talking point.

The researchers also ran an experiment inside a simulated version of Twitter and found that deleting 10 percent of the accounts in the system, based on their likelihood of being bots, led to a major drop in the number of stories from low-credibility sources in the network.
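To make the shape of such an experiment concrete, here is a minimal, hypothetical sketch in Python. It is not the paper's actual simulation: the graph size, bot scores, seeding choice, and reshare probabilities are all invented for illustration. It models bot-like accounts as eager resharers of a low-credibility story, then removes the 10 percent of accounts with the highest bot scores and compares the story's reach.

```python
# Hypothetical sketch of a bot-removal experiment; all parameters are
# illustrative assumptions, not the study's actual model.
import random
import networkx as nx

random.seed(42)

# Toy follower network with a heavy-tailed degree distribution,
# as is typical of social graphs.
G = nx.barabasi_albert_graph(n=2000, m=3)

# Assumed per-account "bot score": likelihood the account is automated.
bot_score = {node: random.random() for node in G.nodes}

def story_reach(graph, seeds, human_prob=0.05, bot_prob=0.5):
    """Cascade model in which bot-like accounts reshare far more readily.
    Returns the number of accounts that end up sharing the story."""
    shared = set(seeds)
    frontier = list(seeds)
    while frontier:
        node = frontier.pop()
        for neighbor in graph.neighbors(node):
            if neighbor in shared:
                continue
            # Accounts scoring above 0.9 (~10% of nodes) act as bots.
            prob = bot_prob if bot_score[neighbor] > 0.9 else human_prob
            if random.random() < prob:
                shared.add(neighbor)
                frontier.append(neighbor)
    return len(shared)

# Baseline: a story seeded from a few random accounts.
seeds = random.sample(list(G.nodes), 20)
baseline = story_reach(G, seeds)

# Intervention: delete the 10 percent of accounts most likely to be bots.
cutoff = int(0.10 * G.number_of_nodes())
likely_bots = set(sorted(G.nodes, key=bot_score.get, reverse=True)[:cutoff])
G_pruned = G.copy()
G_pruned.remove_nodes_from(likely_bots)

pruned_reach = story_reach(G_pruned, [s for s in seeds if s not in likely_bots])
print(f"Story reach before bot removal: {baseline}; after: {pruned_reach}")
```

Because the removed accounts are exactly the eager resharers, the story's reach drops sharply after pruning, which is the qualitative effect the study reports.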

"This experiment suggests that removing bottlenecks from social networks would significantly reduce the incorrect information on these networks," said Menczer.

The study also suggests steps that companies could take to slow the spread of misinformation on their networks. These include improving algorithms to automatically detect bots and requiring a "human in the loop" to reduce automated messages in the system. For example, users might be required to complete a CAPTCHA to send a message.
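A hypothetical illustration of that "human in the loop" idea follows: before accepting a post, flag accounts that are posting faster than a human plausibly could and demand a CAPTCHA from them. The window size, threshold, and function names here are invented for illustration, not drawn from any platform's actual API.

```python
# Hypothetical posting gate: rate-limit automated-looking accounts by
# requiring a CAPTCHA. Thresholds are assumptions for illustration.
import time
from collections import deque

RATE_WINDOW_SECONDS = 60
MAX_POSTS_PER_WINDOW = 5  # assumed cutoff for human-like posting speed

_recent_posts = {}  # user_id -> deque of recent post timestamps

def looks_automated(user_id, now=None):
    """True if the account exceeded the posting-rate threshold."""
    now = time.time() if now is None else now
    window = _recent_posts.setdefault(user_id, deque())
    while window and now - window[0] > RATE_WINDOW_SECONDS:
        window.popleft()  # drop timestamps outside the sliding window
    return len(window) >= MAX_POSTS_PER_WINDOW

def submit_post(user_id, text, captcha_solved=False):
    """Accept the post, or require a CAPTCHA when posting looks automated."""
    if looks_automated(user_id) and not captcha_solved:
        return "captcha_required"
    _recent_posts.setdefault(user_id, deque()).append(time.time())
    return "accepted"
```

A human posting a few times a minute passes through untouched, while a script firing hundreds of retweets trips the check, which is the asymmetry such a gate relies on.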

Although their analysis focused on Twitter, the study's authors added that other social networks are also vulnerable to manipulation. For example, platforms such as Snapchat and WhatsApp may be difficult to monitor for misinformation because their use of encryption and disappearing messages complicates the ability to study how their users share information.

"As people across the world increasingly turn to social networks as their main source of news and information, the fight against misunderstanding requires an assessment based on the relative impact of the different ways in which he / she; spread out, "said Menczer. "This work confirms that bots play a role in the problem – and suggest their reduction may improve the situation."

To explore election messages currently shared on Twitter, Menczer's research group has also launched a tool to measure "Bot Electioneering Volume." Created by IU Ph.D. students, the program displays the level of bot activity around specific election-related conversations, as well as the topics, user names and hashtags they are currently pushing.


Explore further:
What's trending in fake news? Tools show which stories go viral, and whether "bots" are to blame

More information:
Chengcheng Shao et al, The spread of low-credibility content by social bots, Nature Communications (2018). DOI: 10.1038/s41467-018-06930-7

Journal reference:
Nature Communications

Provided by:
Indiana University
