Quote: New York/Berkeley, Calif., February 6, 2018… The Anti-Defamation League’s (ADL) Center for Technology and Society today announced preliminary results from an innovative project that uses artificial intelligence, machine learning, and social science to study what is and what isn’t hate speech online. The project’s goal is to help the tech industry better understand the growing amount of hate online.
The Center for Technology and Society (CTS) has collaborated with the University of California at Berkeley’s D-Lab since April 2017 to develop the Online Hate Index. ADL and the D-Lab have created an algorithm that has begun to learn the difference between hate speech and non-hate speech. The project has completed its first phase, and its early findings are described in a report released today. In a promising early result, the learning model correctly identified hate speech between 78 and 85 percent of the time.
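The quoted release doesn’t describe the model itself, so the following is only a minimal sketch of how a supervised hate-speech classifier of this general kind can be built; the bag-of-words features, logistic regression, and toy comments below are illustrative assumptions, not the ADL/D-Lab method (their training data came from Reddit).

```python
# Sketch of a supervised text classifier: learn hate vs. non-hate labels
# from example comments. Assumed approach (TF-IDF + logistic regression);
# the actual ADL/D-Lab model is not published in this excerpt.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled comments (1 = hateful, 0 = not hateful) -- stand-ins for
# the human-labeled Reddit data the study used.
comments = [
    "we don't want your kind around here",
    "go back to where you came from",
    "great analysis, thanks for sharing",
    "what a beautiful day for a hike",
]
labels = [1, 1, 0, 0]

# The pipeline vectorizes the text, then fits a linear classifier.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(comments, labels)
```

In the real project, a model like this would be evaluated on held-out labeled comments, which is where a figure such as “78 to 85 percent” comes from.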
“For more than 100 years, ADL has been at the forefront of tracking and combating hate in the real world. Now we are applying our expertise to track and tackle bias and bigotry online,” said ADL CEO and National Director Jonathan Greenblatt. “As the threat of cyberhate continues to escalate, ADL’s Center for Technology and Society in Silicon Valley is convening problem solvers and developing solutions to build a more respectful and inclusive internet. The Online Hate Index is only the first of many such projects that we will undertake. U.C. Berkeley has been a terrific partner and we are grateful to Reddit for their data and for demonstrating real leadership in combating intolerance on their platform.”
“This project has tremendous potential to increase our ability to understand the scope and spread of online hate speech,” said Brittan Heller, CTS’s director. “Online communities have been described as our modern public square. In reality though, not everyone has equal access to this public square, and not everyone has the privilege to speak without fear. Hateful and abusive online speech shuts down and excludes the voices of the marginalized and underrepresented from public discourse. The Online Hate Index aims to help us understand and alleviate this, and to ensure that online communities become safer and more inclusive.”
The research yielded several other notable findings, among them that a search for one kind of hate readily surfaces hate of all kinds. In the initial results, several words appeared markedly more often in hate speech than in non-hate speech. The top five words most strongly associated with hate were: Jew, white, hate, women, and black.
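The excerpt doesn’t say how the word–hate association was ranked; a simple frequency-ratio sketch, with toy corpora and add-one smoothing as illustrative assumptions, shows one way such a ranking can be computed:

```python
from collections import Counter

# Toy corpora; the labels are illustrative only -- the study's Reddit data
# and its exact ranking method are not described in this excerpt.
hateful = ["they hate us", "hate them all"]
non_hateful = ["nice weather today", "great game"]

def word_counts(comments):
    """Count lowercase word occurrences across a list of comments."""
    return Counter(w for c in comments for w in c.lower().split())

hate_counts = word_counts(hateful)
other_counts = word_counts(non_hateful)

# Rank words by how much more often they appear in hateful comments;
# add-one smoothing avoids division by zero for words seen in only one class.
ratio = {w: (hate_counts[w] + 1) / (other_counts[w] + 1)
         for w in set(hate_counts) | set(other_counts)}
top = sorted(ratio, key=ratio.get, reverse=True)
```

On a real corpus, the head of `top` would correspond to the kind of list quoted above (Jew, white, hate, women, black).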
The project also found patterns in the construction of hateful language.
Hateful comments typically contained more words, on average, than non-hateful comments.
There were slightly more words in all caps found in hateful comments than in non-hateful ones.
The sentence length in hateful comments was slightly longer than in non-hateful comments.
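The report excerpt lists these stylistic patterns without saying how they were measured. A minimal sketch of extracting such features, assuming simple whitespace and punctuation splitting (the function name and thresholds are my own, not the study’s):

```python
import re

def style_features(comment: str) -> dict:
    """Extract the three stylistic signals mentioned in the findings:
    comment length, all-caps words, and average sentence length."""
    words = comment.split()
    # Split into sentences on runs of ., ! or ?, dropping empty fragments.
    sentences = [s for s in re.split(r"[.!?]+", comment) if s.strip()]
    return {
        "num_words": len(words),
        # Count all-caps words longer than one letter (skips "I", "A").
        "num_allcaps": sum(1 for w in words if w.isupper() and len(w) > 1),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
    }

features = style_features("THIS is terrible. Go away!")
```

Features like these could supplement word-based features in a classifier such as the one the project describes.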
I knew I was on to something when I found Twatter disconcerting: I couldn’t understand why they would impose such a strict character limit, or how users weren’t bothered by it.
Quote:‘The Eleventh Edition is the definitive edition,’ he said. ‘We’re getting the language into its final shape — the shape it’s going to have when nobody speaks anything else. When we’ve finished with it, people like you will have to learn it all over again. You think, I dare say, that our chief job is inventing new words. But not a bit of it! We’re destroying words — scores of them, hundreds of them, every day. We’re cutting the language down to the bone. The Eleventh Edition won’t contain a single word that will become obsolete before the year 2050.’
He bit hungrily into his bread and swallowed a couple of mouthfuls, then continued speaking, with a sort of pedant’s passion. His thin dark face had become animated, his eyes had lost their mocking expression and grown almost dreamy.
‘It’s a beautiful thing, the destruction of words. Of course the great wastage is in the verbs and adjectives, but there are hundreds of nouns that can be got rid of as well. It isn’t only the synonyms; there are also the antonyms. After all, what justification is there for a word which is simply the opposite of some other word? A word contains its opposite in itself. Take “good”, for instance. If you have a word like “good”, what need is there for a word like “bad”? “Ungood” will do just as well — better, because it’s an exact opposite, which the other is not. Or again, if you want a stronger version of “good”, what sense is there in having a whole string of vague useless words like “excellent” and “splendid” and all the rest of them? “Plusgood” covers the meaning, or “doubleplusgood” if you want something stronger still. Of course we use those forms already, but in the final version of Newspeak there’ll be nothing else. In the end the whole notion of goodness and badness will be covered by only six words — in reality, only one word. Don’t you see the beauty of that, Winston? It was B.B.’s idea originally, of course,’ he added as an afterthought.
‘Don’t you see that the whole aim of Newspeak is to narrow the range of thought? In the end we shall make thoughtcrime literally impossible, because there will be no words in which to express it. Every concept that can ever be needed, will be expressed by exactly one word, with its meaning rigidly defined and all its subsidiary meanings rubbed out and forgotten. Already, in the Eleventh Edition, we’re not far from that point. But the process will still be continuing long after you and I are dead. Every year fewer and fewer words, and the range of consciousness always a little smaller. Even now, of course, there’s no reason or excuse for committing thoughtcrime. It’s merely a question of self-discipline, reality-control. But in the end there won’t be any need even for that. The Revolution will be complete when the language is perfect. Newspeak is Ingsoc and Ingsoc is Newspeak,’ he added with a sort of mystical satisfaction. ‘Has it ever occurred to you, Winston, that by the year 2050, at the very latest, not a single human being will be alive who could understand such a conversation as we are having now?’
In other news…
Quote: YouTube is investing $25m (£18.8m) in journalism on its platform, focusing on helping news organisations produce online videos and changing its site to better support trusted news providers.
As well as the investment, which will be partly used to fund a working group to spearhead news product features, the company is changing how its site works to “make authoritative sources readily accessible”.
The service, owned by Google, will heavily promote videos from vetted news sources on the site’s Top News and Breaking News sections “to make it easier to find quality news”, and create new features – initially only in the US – to help distribute local news.
“We believe quality journalism requires sustainable revenue streams and that we have a responsibility to support innovation in products and funding for news,” the company’s chief product officer and chief business officer, Neal Mohan and Robert Kyncl respectively, said in a statement.
“Today, we’re announcing steps we’re taking with the Google News Initiative to support the future of news in online video, and product features we’ve been working on to improve the news experience on YouTube.”
News events, particularly breaking stories, have long been a problem for YouTube. Many times over the last year, conspiracy theories have spread on the site following mass shootings in the US, falsely claiming knowledge of the assailants’ political ties or religion, or alleging the entire event was fake.
Within days of the Las Vegas shooting in October 2017, for instance, search results on the site promised videos suggesting that law enforcement had deceived the public, and that the shooting was a “false flag” attack staged by the government to bring in gun control.
A month later, after another shooting in the US, search results on the site showed videos claiming that the assailant was a far-left terrorist.
In their statement, Mohan and Kyncl acknowledged such problems: “We know there is a lot of work to do, but we’re eager to provide a better experience to users who come to YouTube every day to learn more about what is happening in the world from a diversity of sources.”
Many of their solutions involve temporarily prioritising other sources over videos. “Authoritativeness is essential to viewers, especially during fast-moving, breaking news events, so we’ve been investing in new product features … After a breaking news event, it takes time to verify, produce and publish high-quality videos. Journalists often write articles first to break the news rather than produce videos. That’s why in the coming weeks in the US we will start providing a short preview of news articles in search results on YouTube that link to the full article during the initial hours of a major news event, along with a reminder that breaking and developing news can rapidly change,” they write.
For longer-lasting hoaxes, such as the claim that the moon landings were fake, YouTube will on Tuesday launch a previously announced programme to link to authoritative sources, such as Wikipedia and Encyclopedia Britannica, under videos on those topics.