Sunday, July 19, 2015

Washington Post details how tech companies have to deal with propaganda from groups like ISIS


The Washington Post ran a long and detailed front-page story (by Scott Higham and Ellen Nakashima) on Sunday, July 19, 2015, “Confronting the Caliphate: Balancing security and free speech: Facebook, Twitter and YouTube look to mute Islamic State without stifling other voices globally”, link here.  Online the title is more graphic: “Why the Islamic State leaves tech companies torn between free speech and security”, with the introduction “Islamic State’s grisly messages force social media to revisit free speech”. 

Facebook aggressively monitors content for violence, according to the story, and Twitter is catching up.  ABC News reports that there are about 200,000 “ISIS tweets” a day, out of over 50 million “normal” tweets.  

The article discusses whether a technology like Microsoft’s PhotoDNA, now used to screen for known child pornography images (identified by NCMEC), especially by Google across its platforms, could be expanded to cover terrorist content. The technology works mainly with still images, not video. 
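PhotoDNA itself is proprietary and its algorithm is not public, but the general idea of matching variants of a known image can be sketched with a simple "average hash": reduce an image to a tiny grayscale grid, turn it into a bit string, and compare hashes by how many bits differ rather than by exact equality. The sketch below is purely illustrative (the hash scheme and the sample data are my own stand-ins, not PhotoDNA):

```python
# Illustrative "average hash" sketch of perceptual image matching.
# NOT Microsoft's PhotoDNA algorithm -- just the general technique:
# near-duplicate images produce hashes with a small Hamming distance,
# so a slightly altered copy still matches a known-image list.

def average_hash(pixels):
    """pixels: an 8x8 grid of grayscale values (0-255). Returns a 64-bit int
    with one bit per cell: 1 if the cell is brighter than the average."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests a near-duplicate."""
    return bin(h1 ^ h2).count("1")

# A synthetic known "image" and a slightly brightened copy of it.
known = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
variant = [[min(255, p + 10) for p in row] for row in known]

d = hamming_distance(average_hash(known), average_hash(variant))
print(d)  # small distance: the altered copy still matches
```

Because the hash captures overall brightness structure rather than exact pixels, small edits (brightness shifts, recompression) leave the distance small, which is why such systems can flag re-uploads of known material.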
   
But there is also a “meta” issue:  it is difficult to distinguish between images or content posted to incite, and stories and images that report recent historical fact.  The problem is familiar from other extremist movements in the past (even Nazism). 

The article talks about the Internet Archive and its Wayback Machine, which would like to remove propaganda -- but, again, it's also history. 
   
It also poses a quirky problem for “amateur” blogger journalism.  A blogger could feel compelled to cover an incident to make his or her reporting appear complete, but that very fact could tempt (any) enemies into more outrageous acts in order to be “reported”.

On the other hand, enemies complain about specific reporting (such as cartoon images of the Prophet).
   
Most mainstream news outlets do not explicitly show images or videos of the most outrageous or violent ISIS or other radical acts (like the beheadings), just as they don't usually publish religiously offensive cartoons.  I follow the same practice.
      
With the most recent incident in Tennessee, it appears the perpetrator was influenced heavily by travel overseas, personal contacts, and possibly drug and alcohol use, family issues, and perhaps mental illness.  This was not a matter of simply reading a tweet, visiting a hidden website, and then “acting”. 
  
The article notes that much of the most violent material is on the Dark Web and is not indexed now anyway.  Also, communication among “jihadists” is often encrypted.  The “normal” use of platforms like Facebook, Twitter, Blogger, WordPress and the like isn’t necessarily contributing to the problem as much as some media reports suggest.
