Twitter will begin using a wider range of signals to rank tweets in conversations and searches, hiding more replies that are likely to be abusive, the company said today. Comments from users who have often been blocked, muted, or reported for abuse will be less visible throughout the service, CEO Jack Dorsey told a group of reporters. “We are making progress as we go,” Dorsey said.
Twitter already ranks tweets in search and in conversations. But until now, it has not taken negative signals into account when ranking them. This has meant that replies could easily be gamed by bad actors, whether they’re spammers hawking cryptocurrencies or bot networks attempting to influence elections.
Twitter will now begin examining a much wider variety of signals when ranking tweets in conversations and in search, Dorsey said. Some of those signals include the number of accounts created by the person tweeting, their IP address, and whether the tweet has led people to block its author. Twitter won’t remove these tweets, it said, but they will now be moved to the “see more replies” section of a conversation, where they are hidden behind an additional tap.
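To make the idea concrete, here is a minimal sketch of how negative signals might be combined into a single visibility score. Everything here is an assumption for illustration: the signal names, the weights, and the scoring function are invented, and this is not Twitter’s actual implementation.

```python
# Hypothetical sketch of signal-based reply ranking, as described above.
# Signal names and weights are illustrative assumptions, not Twitter's code.

def reply_visibility_score(reply):
    """Return a score in roughly [0, 1]; lower scores would push a reply
    behind the 'see more replies' fold rather than delete it."""
    score = 1.0
    # Many accounts created by the same person can suggest spam activity.
    score -= 0.3 * min(reply["accounts_created_by_author"], 5) / 5
    # Blocks and abuse reports triggered by the author are negative signals.
    score -= 0.4 * min(reply["blocks_caused"], 10) / 10
    score -= 0.2 * min(reply["abuse_reports"], 10) / 10
    return score

replies = [
    {"accounts_created_by_author": 1, "blocks_caused": 0, "abuse_reports": 0},
    {"accounts_created_by_author": 5, "blocks_caused": 8, "abuse_reports": 6},
]

# Higher-scoring (less suspicious) replies are shown first; the rest are
# still present, just ranked lower -- nothing is removed.
ranked = sorted(replies, key=reply_visibility_score, reverse=True)
```

Note that content never enters the score, which matches the language- and tone-independence described below.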
A test of the new approach to ranking found that the number of abuse reports generated from conversations declined by 8 percent, the company said. “The spirit of the thing is, we want to take the burden of the work off the people receiving the abuse or harassment,” Dorsey said.
Relying on algorithmic signals could have several advantages for Twitter as it works to reduce abuse on the platform. These signals work without respect to the content of the tweet, sparing Twitter from having to make tricky decisions about the tone or intent of a message. And they work regardless of the language the tweet was written in, allowing the company to roll the changes out globally all at once.
At the same time, decisions made by algorithms can also go disastrously awry, and can be difficult for outsiders to understand. Dorsey said Twitter is conscious of that and would invest in making sure the product communicates how it makes decisions. The company will also consider issuing reports on the enforcement actions it takes across the platform, said Del Harvey, the company’s vice president of trust and safety.
RADICAL CHIC AND MAU-MAUING THE TWITTER SAFETY COUNCIL:
(Yes, Twitter really does have a “Trust and Safety Council.” Yes, it’s as Orwellian as the name implies.)
Paris (AFP) – Facebook pulled or slapped warnings on nearly 30 million posts containing sexual or violent images, terrorist propaganda or hate speech in the first three months of 2018, the social media giant said Tuesday.
In an unprecedented report responding to calls for transparency after the Cambridge Analytica data privacy scandal, Facebook detailed its actions against such content in line with its “community standards”.
Facebook said improved technology using artificial intelligence had helped it act on 3.4 million posts containing graphic violence, nearly three times more than it had in the last quarter of 2017.
In 85.6 percent of the cases, Facebook detected the images before being alerted to them by users, said the report, issued the day after the company said “around 200” apps had been suspended on its platform as part of an investigation into misuse of private user data.
Bogus Accounts: Facebook disabled 583 million fake accounts during Q1 2018, and 694 million the quarter before. The social network removed 98.5 percent of these accounts before they were reported in Q1.
Sexual Stuff: Facebook’s relationship with nudity is tricky. The company restricts sexual content and nudity because some users “may be sensitive to this type of content,” according to its guidelines. There are some allowances, however, including protests and works of art. Still, the company removed roughly 42 million pieces of racy content for the two aforementioned quarters — accounting for less than a tenth of a percent of content viewed on Facebook.
Graphic Violence: Facebook took action on 1.2 million pieces of graphic violence during Q4 2017, and 3.4 million during the first quarter of 2018. The company said the spike is due largely to implementing better tools for finding inappropriate content.