Google To Use AI, Human Experts To Fight Online Extremism
By Shirley Siluk / CRM Daily
Published: June 19, 2017
Silicon Valley has concluded that it will take a combination of advanced technology and human intelligence to find and curb extremist content online. Google and Google-owned YouTube are the latest companies to embrace this approach, announcing on Sunday that they are taking four new steps to fight terrorist content on the Internet.

The steps the companies plan to take are: ramping up their use of video analysis models and other technology to identify extremist videos; "greatly" increasing the number of experts who flag questionable content on YouTube; toughening their stance on videos that violate content policies; and stepping up collaboration with other tech companies such as Facebook, Microsoft, and Twitter.

'No Place for Terrorist Content'

"There should be no place for terrorist content on our services," Google general counsel Kent Walker wrote Sunday in a blog post that was also published as an opinion piece in the Financial Times. "While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now."

While using technology to identify extremist content "can be challenging," Walker said Google has used video analysis models to help identify more than half of the terrorism-related content it has removed over the past six months. He added the company plans to "apply our most advanced machine learning research to train new 'content classifiers'" for identifying and removing extremist video content.

Walker added that Google also plans to add 50 more independent non-governmental organizations to the 63 groups already working with YouTube's Trusted Flagger program, and will support them with grant funding.

Google will also put new restrictions on videos with "inflammatory religious or supremacist content," placing them behind a warning message and preventing them from being monetized, recommended, commented on, or endorsed by users.

"That means these videos will have less engagement and be harder to find," Walker said. "We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints."

Other Strategies: Targeted Ads, Data Sharing

Walker added that Google also plans to expand its efforts to fight online radicalization, something it already targets through programs such as Creators for Change, which promotes anti-hate voices on YouTube. He said the company is working through its Jigsaw initiative to expand use of the "Redirect Method" across Europe.

That method uses targeted online ads to reach potential recruits to the terrorist organization ISIS, then redirects them to videos aimed at countering radicalization.

"In previous deployments of this system, potential recruits have clicked through on the ads at an unusually high rate, and watched over half a million minutes of video content that debunks terrorist recruiting messages," Walker noted.

In December, YouTube began working with Facebook, Microsoft, and Twitter to share data with the goal of reducing the spread of terrorist content online. Their announcement followed a European Union study that found the companies were failing to meet the voluntary compliance standards on hate speech they had agreed to earlier in 2016. Last week, Facebook also announced a two-pronged approach to fighting terrorist content with both artificial intelligence and human experts.

