
Russian trolls and bots significantly intensified the polarization of vaccine messaging on Twitter, fostering discord on the social network, according to researchers who analyzed the content of tweets over a 3-year period.


“Bots and trolls are actively involved in the online public health discourse, skewing discussions about vaccination,” wrote David A. Broniatowski, PhD, of George Washington University, Washington, D.C., and his associates.

“Accounts masquerading as legitimate users create false equivalency, eroding public consensus on vaccination,” creating an environment in which countering vaccine skepticism can actually enable bots to “legitimize the vaccine debate,” the researchers concluded in the American Journal of Public Health (Am J Public Health. doi: 10.2105/AJPH.2018.304567).

“This is vital knowledge for risk communicators, especially considering that neither members of the public nor algorithmic approaches may be able to easily identify bots, trolls, or cyborgs.”

The researchers conducted two content analyses and one qualitative analysis of tweets from July 2014 to September 2017. Their data set included 1% of all tweets during that time period and a sample of tweets containing vaccine-related keywords.
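
A keyword sample of this kind amounts to filtering a tweet stream against a term list. The sketch below illustrates the idea in Python; the keyword list and record layout are illustrative assumptions, not the study's actual search terms.

```python
# Minimal sketch: filter tweet records for vaccine-related keywords.
# The keyword list and record layout are illustrative assumptions,
# not the study's actual search terms.
import re

VACCINE_KEYWORDS = ["vaccine", "vaccination", "vaxx", "immunization", "mmr"]
KEYWORD_RE = re.compile("|".join(VACCINE_KEYWORDS), re.IGNORECASE)

def is_vaccine_related(tweet: dict) -> bool:
    """True if the tweet's text mentions any vaccine-related keyword."""
    return bool(KEYWORD_RE.search(tweet.get("text", "")))

def keyword_sample(tweets):
    """Yield only the vaccine-related tweets from an iterable of tweet dicts."""
    return (t for t in tweets if is_vaccine_related(t))
```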

First, they compared rates of vaccine-related tweets between bots and average users; then, they assessed the attitudes expressed in those tweets across different account types. Their qualitative case study focused on use of the hashtag #VaccinateUS, which was predominantly used by Russian trolls.

The researchers relied on seven publicly available lists to identify which accounts were bots or trolls, then compared tweets from those accounts with randomly selected tweets posted during the same time period.
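
Matching tweets against such lists reduces to a set-membership check on the author's handle. The sketch below assumes each list is a plain text file with one screen name per line, an assumption about format, since the seven published lists differ in structure.

```python
# Minimal sketch: flag tweets whose authors appear on published bot/troll lists.
# Assumes one screen name per line per file (an assumption about format;
# the seven published lists the researchers used differ in structure).

def load_flagged_accounts(paths):
    """Union of screen names across all list files, lowercased for matching."""
    flagged = set()
    for path in paths:
        with open(path, encoding="utf-8") as fh:
            flagged.update(line.strip().lower() for line in fh if line.strip())
    return flagged

def label_author(tweet: dict, flagged: set) -> str:
    """Label a tweet 'flagged' if its author is on any list, else 'average'."""
    author = tweet.get("user", {}).get("screen_name", "").lower()
    return "flagged" if author in flagged else "average"
```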

In their second analysis, the researchers used Botometer, a program created by the Indiana University Network Science Institute (IUNI) and the Center for Complex Networks and Systems Research (CNetS), to categorize accounts as very likely human, very likely bots, or of uncertain provenance.
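
Botometer is also available as a Python client. The sketch below shows how an account's score might be binned into those three categories; the credentials are placeholders, the auth parameter and response field names vary across client versions, and the 0.2/0.8 cutoffs are illustrative assumptions, not the study's thresholds.

```python
# Minimal sketch using the botometer Python client (pip install botometer).
# Credentials are placeholders; auth parameter and response field names vary
# across client/API versions; the 0.2/0.8 cutoffs are illustrative only.
import botometer

twitter_app_auth = {
    "consumer_key": "...",
    "consumer_secret": "...",
    "access_token": "...",
    "access_token_secret": "...",
}
bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key="...",
                          **twitter_app_auth)

def classify_account(screen_name: str) -> str:
    """Bin an account's bot score into three provenance categories."""
    result = bom.check_account(screen_name)
    score = result["scores"]["universal"]  # 0 = human-like, 1 = bot-like
    if score < 0.2:
        return "very likely human"
    if score > 0.8:
        return "very likely bot"
    return "uncertain provenance"
```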

Results revealed that Russian trolls, sophisticated bot accounts, and “content polluters” – those that spread malware and unsolicited content – are more likely than average users to tweet about vaccination. Content polluters tweeted more antivaccine messages, while Russian trolls and sophisticated bots promoted both antivaccine and provaccine messages that amplified the polarization (P less than .001).
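
A rate comparison like this is commonly tested with a chi-squared test on a contingency table of tweet counts. The counts in the sketch below are hypothetical placeholders used only to show the mechanics, not the study's data.

```python
# Minimal sketch: chi-squared test of vaccine-tweet rates by account type.
# The counts are hypothetical placeholders, not the study's data.
from scipy.stats import chi2_contingency

#        [vaccine tweets, other tweets]
table = [[420, 9580],   # flagged accounts (hypothetical)
         [150, 9850]]   # average users (hypothetical)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")  # p < .001 would mirror the report
```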

The higher rate of antivaccine messages from content polluters suggested that antivaccine advocates may have exploited existing bot networks for their messaging.

“These accounts may also use the compelling nature of antivaccine content as clickbait to drive up advertising revenue and expose users to malware,” Dr. Broniatowski and colleagues wrote. “Antivaccine content may increase the risks of infection by both computer and biological viruses.”

The qualitative analysis of tweets using the #VaccinateUS hashtag found that 43% were provaccine, 38% were antivaccine, and the remaining 19% were neutral.

“Whereas most non-neutral vaccine-relevant hashtags were clearly identifiable as either provaccine (#vaccineswork, #vaxwithme) or antivaccine (#Vaxxed, #b1less, #CDCWhistleblower), with limited appropriation by the opposing side, #VaccinateUS is unique in that it appears with very polarized messages on both sides,” the researchers reported.

Tweets using the #VaccinateUS hashtag were also more likely to contain grammatical errors, unnatural word choices, and irregular phrasing – but fewer spelling or punctuation errors – than average vaccine-related tweets.

“The #VaccinateUS messages are also distinctive in that they contain no links to outside content, rare @mentions of other users, and no images (but occasionally use some emojis),” the researchers found.
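
Surface features like these are simple to measure. The patterns in the sketch below are rough illustrations (the emoji character range, in particular, is a simplification), not the study's extraction code.

```python
# Minimal sketch: count the surface features described above (links,
# @mentions, emojis). These regexes are rough illustrations only.
import re

URL_RE     = re.compile(r"https?://\S+")
MENTION_RE = re.compile(r"@\w+")
# Covers common emoji blocks only; a real analysis would use a fuller range.
EMOJI_RE   = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def surface_features(text: str) -> dict:
    """Count links, @mentions, and emojis in a tweet's text."""
    return {
        "links":    len(URL_RE.findall(text)),
        "mentions": len(MENTION_RE.findall(text)),
        "emojis":   len(EMOJI_RE.findall(text)),
    }
```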

Although messages with that hashtag “mirrored” Twitter’s overall vaccine discourse, subtle differences included a greater emphasis on “freedom,” “democracy,” and “constitutional rights” than the “parental choice” focus more common in tweets using other vaccine-related hashtags. The conspiracy theories invoked by #VaccinateUS tweets also focused almost entirely on the U.S. government, whereas other antivaccine tweets drew on a much wider range of conspiracy theories.

Antivaccine content was densest among accounts falling into the middle Botometer category, whose status as human or bot was uncertain.

“Although we speculate that this set of accounts contains more sophisticated bots, trolls, and cyborgs, their provenance is ultimately unknown,” the researchers wrote. “Therefore, beyond attempting to prevent bots from spreading messages over social media, public health practitioners should focus on combating the messages themselves while not feeding the trolls.”

The research was funded by the National Institutes of Health. No conflicts of interest were noted.

SOURCE: Broniatowski DA et al. Am J Public Health. 2018 Aug 23. doi: 10.2105/AJPH.2018.304567.


FROM THE AMERICAN JOURNAL OF PUBLIC HEALTH

Vitals

Key clinical point: Twitter bots and trolls are polluting social media vaccine discussions.

Major finding: Russian trolls and sophisticated bots were more likely than average users to tweet about vaccines and amplified both sides of the debate, while content polluters were more likely to promote antivaccine messages and malware.

Study details: The findings are based on two content analyses and one qualitative case study of vaccine-related Twitter content from July 2014 to September 2017.

Disclosures: The research was funded by the National Institutes of Health. No conflicts of interest were noted.

Source: Broniatowski DA et al. Am J Public Health. 2018 Aug 23. doi: 10.2105/AJPH.2018.304567.
