This is a guest post by a friend and colleague of the DAI ICT Team, Ben Dubow. Ben is a partner at Omelas, a firm that brings together data scientists, software engineers, and counterterrorism experts to defeat violent extremism. Omelas debuted at Safe Cities, the annual meeting of the Nordic Council of Ministers, and currently has operations in Europe and the Middle East. It was co-founded by Ben, Evanna Hu, and Bjorn Ihler.

I started my career by conducting threat analyses of suspected jihadists. I’d trawl their online profiles and then use a mix of instinct and experience to decide what to include. It made sense that sharing a post from a Taliban website signaled radicalization. It made sense that following a jihadist preacher signaled the same. It made sense that liking a Facebook page for bacon lovers signaled some apprehension about fundamentalism. But making sense was the extent of the proof we had.

My career later brought me to Google, where one of my first projects was to redesign some of the more unsightly ad templates. I followed every design best practice and wound up with a template that was universally agreed to be prettier than the last. It made sense that my design was an improvement, but that wasn’t proof. So we showed the ads to millions of people online. The ugly ones got the most clicks. Doing what made sense would have cost Google millions.

Demanding proof beyond what makes sense is a major driver of Google’s success. It’s one reason Google is so good at what I was trying to do at the start of my career: predicting human behavior. Google looks at trillions of patterns online to develop the expected value of each person seeing one of its ads. The value it assigns is so accurate that advertisers regularly make four to five times what they spend. That’s a long way from two decades ago, when the standard was showing a commercial to a few million people at a time and hoping for the best.

A Revolution for CVE

We at Omelas firmly believe that this revolution is long overdue in countering violent extremism (CVE). Here, we take the U.S. government definition of CVE, which covers both preventing radicalization and stopping further radicalization toward violent extremism. It is time for the field to develop concrete evaluation metrics and more quantitative conclusions about which CVE-specific activities (that is, counter-messages and counter-narratives) and CVE-relevant activities (for example, economic empowerment and war crime reporting committees) are actually effective for a given target group of beneficiaries.

For example, counter-messaging is one of the most popular CVE approaches. Numerous handbooks on counter-narratives have been written, from [Hedayah](Effective Frameworks for CVE-Hedayah_ICCT Report.pdf) to the Institute for Strategic Dialogue to the Kofi Annan Foundation. The sole focus of the State Department’s Global Engagement Center and the Sawab Center is counter-message campaigns. Yet when pressed about the efficacy of these campaigns, officials and practitioners admit that they do not have enough data to know whether they work, why they do or do not work, or even how to improve them in the next iteration. The few campaigns deemed successful have not been studied comprehensively, since a uniform set of evaluation metrics does not yet exist in the development sector. No one has run even the simplest A/B split test to pinpoint which factors contributed to a campaign’s success. Online sentiment analysis, through natural language processing of the local dialect, can measure the change in mood, tone, and overall content before and after a counter-message campaign. This method is more organic than in-person interviews about how beneficiaries feel about certain extremist groups and individuals, not to mention more cost-effective. Yet it had never been done before we created Omelas.
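The A/B split test mentioned above can be as simple as a two-proportion z-test comparing engagement with two versions of a counter-message. The sketch below uses entirely hypothetical numbers; the function name and figures are illustrative, not drawn from any real campaign.

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-proportion z-test: did variant B engage a larger share of viewers than A?"""
    p_pool = (successes_a + successes_b) / (n_a + n_b)  # pooled engagement rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    return (successes_b / n_b - successes_a / n_a) / se

# Hypothetical campaign: each message variant shown to 10,000 users,
# counting how many engaged (clicked, shared, or replied).
z = two_proportion_z(320, 10_000, 410, 10_000)
print(round(z, 2))  # well above 1.96, the 5%-significance threshold
```

A z-score above roughly 1.96 means the difference between the two variants is unlikely to be chance at the 5 percent level, so the better-performing message can be iterated on with some confidence.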

Moving CVE Forward

Of course, quantitative analysis, ranging from social media network analytics to in-person polls, is neither a panacea nor should it be the lone factor in any organization’s major decision-making processes. Qualitative and anecdotal analysis have their place, since the radicalization process and the combination of push-pull factors differ by individual and by surrounding group dynamics. Rather, based on our experience, our working theory is that a blend of the two yields the most effective results and gives us the information needed for continual improvement.

The CVE field needs to move from relying on gut feelings to relying on data. We have sufficient raw data to catalogue the publicly available online footprints of known extremists, whom they interact with online, and their public posts in jihadi forums and on social media. For the first time, we can also understand the relationship, if any, between what individuals share online, what they say, how they say it, and their likelihood of joining an extremist cause. This knowledge allows us to estimate the probability that one online alias is more radicalized than another, or that two aliases point to the same person in real life. On the flip side, by tracking changes over time in someone’s similarity to known extremists, and finding which signals predict those changes, we can learn scientifically what turns individuals away from extremism and drives the deradicalization process.
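One minimal way to score the kind of similarity described above is cosine similarity over bag-of-words vectors of public posts. The sketch below is a toy illustration with invented text; the function and sample strings are assumptions for demonstration, not Omelas’s actual method, and a real system would use far richer features than word counts.

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between bag-of-words vectors of two texts (0 = disjoint, 1 = identical)."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented example posts: compare two aliases against a known extremist profile.
known = "join the caravan defend the ummah hijrah now"
alias_1 = "defend the ummah hijrah to the caravan"
alias_2 = "weekend bacon brunch recipes and reviews"

print(cosine_similarity(known, alias_1) > cosine_similarity(known, alias_2))  # True
```

Tracked over time, a rising similarity score for one alias could flag radicalization, while a falling score after an intervention is one crude signal of movement in the other direction.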

Predictive analytics has been revolutionary. Online advertising, for all its flaws, has led to an internet with hundreds of millions of domains of free content along with tens of millions of free apps. That has been humanity’s yield from using predictive analytics to sell things. Imagine what the same methods will accomplish when used to stop people from killing each other.