Yasmin Green, head of research at Jigsaw
Fake news, terrorist propaganda, hateful comments: the internet is not the nicest place in the world. Jigsaw is trying to change that.
Formerly part of Google, Jigsaw is now a subsidiary of Alphabet. Under the direction of Alphabet chairman Eric Schmidt and Jared Cohen, who founded Google Ideas and now serves as Jigsaw's founder and CEO, the company researches some of the darkest corners of the internet and builds ways to address the problems it finds there.
Yasmin Green, director of research and development at Jigsaw, joined Mashable's Biz Please podcast this week to share what her team has worked on over the last year.
As Mashable business editor Jason Abbruzzese noted, Jigsaw is not just about sitting at your desk, staring at your computer, and looking for tech solutions to the internet’s problems.
Rather, the team visits areas all over the world to speak face-to-face with people involved in these online and real-world conflicts, including ISIS sympathizers and producers of fake news.
Green was among those who spoke with people who had left home and trained as suicide bombers in Iraq.
“So [I asked], ‘If you knew everything you know now, would you still have gone?’ And they say, ‘Yes,’ and I’m like, ‘Why?’ and they say, ‘At that point I was so convinced, I was so brainwashed,’” Green said. “So the takeaway is this is an access to information problem.”
Jigsaw experimented with a program called the “Redirect Method,” which targeted potential ISIS recruits with online ads meant to “debunk ISIS’s narrative” and educate against terrorism. During an eight-week trial, the program reached 300,000 people in English and Arabic, who watched 500,000 minutes of video.
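The mechanic, as described, is to meet would-be recruits at the moment they seek out extremist material and show them counter-messaging instead. Below is a minimal, hypothetical sketch of that keyword-triggered redirect idea; the trigger phrases, themes, and video URLs are invented placeholders for illustration, not Jigsaw's actual targeting rules.

```python
# Hypothetical sketch of keyword-triggered counter-messaging: if a search
# query matches phrases associated with extremist interest, serve an ad
# pointing to a counter-narrative video. All phrases and URLs below are
# invented placeholders, not Jigsaw's real targeting data.
COUNTER_NARRATIVE_ADS = {
    "recruitment": "https://example.com/videos/defector-testimonies",
    "propaganda": "https://example.com/videos/life-under-isis",
}

TRIGGER_PHRASES = {
    "recruitment": ["join isis", "travel to the caliphate"],
    "propaganda": ["isis caliphate utopia", "islamic state victory"],
}

def pick_ad(query: str):
    """Return a counter-narrative ad URL if the query matches a trigger theme."""
    q = query.lower()
    for theme, phrases in TRIGGER_PHRASES.items():
        if any(phrase in q for phrase in phrases):
            return COUNTER_NARRATIVE_ADS[theme]
    return None  # no match: serve ordinary results, no special ad

print(pick_ad("how do I join ISIS from Europe"))
# https://example.com/videos/defector-testimonies
```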
On the issue of online harassment, Green pointed to Jigsaw's current partnership with The New York Times. The two organizations are building a moderation system, powered by artificial intelligence and machine learning, to help websites manage their comment sections.
It’s not perfect, of course, but the team is working to train the system to be as unbiased as possible. For example, the algorithm at one point consistently flagged the word “gay” as hateful, because many of the training comments containing the word had been labeled toxic. To balance that out, the team fed news stories that use the word “gay” into the system.
“The data they were given is based on society, and we know, society is pretty shitty,” Green said. “The first thing is: Are you reflecting your data? And the second is: Do you need to hold yourself to a higher standard than your data?”
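To make the data-bias point concrete, here is a minimal sketch of how a toy toxicity classifier can pick up exactly this kind of skew, and how adding neutral counterexamples corrects it. The model choice, dataset, and sentences are all invented for illustration; this is not Jigsaw's actual system or data.

```python
# Toy demonstration of label bias in a toxicity classifier, and the
# rebalancing fix described above. All example comments are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Biased training set: the word "gay" appears only in toxic comments (label 1).
comments = [
    ("you are a worthless idiot", 1),
    ("gay people are ruining everything", 1),
    ("that gay agenda garbage again", 1),
    ("thanks for the thoughtful article", 0),
    ("great reporting, I learned a lot", 0),
    ("interesting take on the election", 0),
]

def train(examples):
    texts, labels = zip(*examples)
    model = make_pipeline(CountVectorizer(), LogisticRegression())
    model.fit(texts, labels)
    return model

benign = "the gay rights march drew thousands"

biased_model = train(comments)
# Elevated toxicity score, driven almost entirely by the word "gay".
print(biased_model.predict_proba([benign])[0][1])

# The fix: add neutral, news-style sentences that use the word "gay",
# so the classifier stops treating the word itself as a toxic signal.
counterexamples = [
    ("the gay couple celebrated their wedding", 0),
    ("a gay rights bill passed the senate today", 0),
    ("the festival welcomed gay and straight attendees alike", 0),
]
balanced_model = train(comments + counterexamples)
# Noticeably lower score for the same benign sentence.
print(balanced_model.predict_proba([benign])[0][1])
```

The classifier’s weight for “gay” is set by how often the word co-occurs with each label, so neutral examples pull that weight back toward zero, which is the same intuition behind feeding news stories into the system.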