‘Deepfake challenge’ aims to find tools to fight manipulation… Twitter bots becoming more human-like

‘Deepfake challenge’ aims to find tools to fight manipulation

Washington (AFP) – Technology firms and academics have joined together to launch a “deepfake challenge” to improve tools to detect videos and other media manipulated by artificial intelligence.

The initiative announced Thursday includes $10 million from Facebook and aims to curb what is seen as a major threat to the integrity of online information.

The effort is being supported by Microsoft and the industry-backed Partnership on AI and includes academics from the Massachusetts Institute of Technology, Cornell University, University of Oxford, University of California-Berkeley, University of Maryland and University at Albany.

It represents a broad effort to combat the dissemination of manipulated video or audio as part of a misinformation campaign.

“The goal of the challenge is to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer,” said Facebook chief technical officer Mike Schroepfer.

Twitter Bots Are Becoming More Human-Like: Study

In 2016, they were mostly retweeters on timers. Now they’re gathering intelligence.


Even as humans get better at recognizing bots (social media personas that are software disguised as people), the bots themselves are growing more sophisticated and human-like. A new study by researchers at the University of Southern California tracks how, and suggests ramifications for public opinion and the 2020 election.

The study, published in the journal First Monday, looked at 244,699 Twitter accounts that tweeted about politics or the election in both 2016 and 2018. Using Indiana University’s Botometer tool, the researchers determined that 12.6 percent — about 31,000 accounts — were bots, a percentage that aligns with previous research. 
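For readers who want to try this kind of classification on their own data, the Botometer team publishes a Python client. The sketch below is illustrative only: the credentials and account handles are placeholders, the 0.5 cutoff is an assumed threshold rather than the study's, and the exact response fields depend on the Botometer API version.

```python
# Sketch: estimate the bot fraction of a set of Twitter accounts with Botometer.
# Assumes the `botometer` package (pip install botometer) plus valid Twitter and
# RapidAPI credentials; the 0.5 score threshold is an illustrative choice.
import botometer

twitter_app_auth = {
    "consumer_key": "...",          # placeholder credentials
    "consumer_secret": "...",
    "access_token": "...",
    "access_token_secret": "...",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key="...",             # placeholder RapidAPI key
    **twitter_app_auth,
)

accounts = ["@example_account_1", "@example_account_2"]  # hypothetical handles
bot_count = 0

for screen_name, result in bom.check_accounts_in(accounts):
    # Response layout varies across Botometer versions; `cap` (complete
    # automation probability) is one commonly used summary score.
    score = result.get("cap", {}).get("universal", 0.0)
    if score >= 0.5:                # assumed cutoff for illustration
        bot_count += 1

print(f"{bot_count}/{len(accounts)} accounts flagged as likely bots "
      f"({100 * bot_count / len(accounts):.1f}%)")
```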

A look at the bots’ tweets showed that most of their 2016 activity was, well, bot-like, as in rhythmically mechanical and largely composed of retweets. But in 2018, “bots better aligned with humans’ activity trends, suggesting the hypothesis that some bots have grown more sophisticated.” Moreover, the bots did a lot less retweeting.
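The two behavioral tells mentioned here, a clockwork posting rhythm and a timeline dominated by retweets, are easy to quantify. The following sketch (assuming tweets in the classic Twitter v1.1 JSON format, with the timeline-fetching step left out) computes a retweet ratio and a rhythm score, where a coefficient of variation near zero means near-metronomic posting.

```python
# Sketch: two simple "bot-like" signals computed from a user's timeline.
from datetime import datetime
from statistics import mean, pstdev

def retweet_ratio(tweets):
    """Fraction of tweets that are retweets (v1.1 objects carry 'retweeted_status')."""
    if not tweets:
        return 0.0
    return sum("retweeted_status" in t for t in tweets) / len(tweets)

def rhythm_score(tweets):
    """Coefficient of variation of inter-tweet gaps; near 0 means clockwork posting."""
    times = sorted(
        datetime.strptime(t["created_at"], "%a %b %d %H:%M:%S %z %Y")
        for t in tweets
    )
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return float("inf")  # not enough data to judge rhythm
    return pstdev(gaps) / mean(gaps)
```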

But so did humans, the researchers found: “Human users significantly increased the volume of replies, which denotes a growing propensity of humans in discussing (either positively or negatively) their ideas instead of simply re-sharing content generated by other users.”

Bots are bad at replies, for obvious reasons. To make up for it, the bots shifted toward more interactive posts like polls and questions, seeking information on their followers, according to the study.
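One way to see that shift in data is to bucket each tweet by interaction type and compare an account's 2016 and 2018 profiles. The sketch below again assumes v1.1-style tweet objects; treating a trailing question mark as a “question” post is a crude stand-in for the study's content analysis, not its method.

```python
# Sketch: bucketing tweets by interaction type to track the shift the study
# describes (fewer retweets, more replies and question-style posts).
from collections import Counter

def interaction_type(tweet):
    if "retweeted_status" in tweet:
        return "retweet"
    if tweet.get("in_reply_to_status_id") is not None:
        return "reply"
    if tweet.get("text", "").rstrip().endswith("?"):
        return "question"   # rough proxy for information-seeking posts
    return "original"

def interaction_profile(tweets):
    """Return per-type shares, e.g. {'retweet': 0.62, 'reply': 0.10, ...}."""
    counts = Counter(interaction_type(t) for t in tweets)
    total = sum(counts.values()) or 1
    return {kind: n / total for kind, n in counts.items()}
```

Running `interaction_profile` over an account's 2016 and 2018 timelines and comparing the two dictionaries would surface exactly the trend the researchers report: the retweet share falling while replies and question-style posts rise.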

“Our study further corroborates this idea that there is an arms race between bots and detection algorithms,” wrote lead researcher Emilio Ferrara in a statement. “As social media companies put more efforts to mitigate abuse and stifle automated accounts, bots evolve to mimic human strategies. Advancements in AI enable bots producing more human-like content. We need to devote more efforts to understand how bots evolve and how more sophisticated ones can be detected. With the upcoming 2020 US elections, the integrity of social media discourse is of paramount importance to allow a democratic process free of external influences.”

The research was supported, in part, by the U.S. Air Force’s Office of Scientific Research.
