just one more geek in a sea of austin techies

February 14, 2018

Facebook's crippled "fake news" feature #SocialMediaGeek

This week Wired ran a very interesting story peeking behind the curtain at Facebook and Mark Zuckerberg.  The story tracks two turbulent years during which Facebook was increasingly leveraged as a tool to deliver intentionally divisive content, and how Russian agents garnered hundreds of millions of likes and shares for fake articles simply by using Facebook's standard advertising features.

One of Facebook's responses has been to partner with a number of fact-checking entities and provide users with a feature to flag Facebook posts as "fake news". The more times a post is flagged as "fake news", the more likely it will get reviewed by a fact-checking partner.

Unfortunately, the "report fake news" feature is a lot less useful than you might think.  Read on to see why...

Facebook thinks "fake" only applies to text
Although the interface may vary somewhat between computer and mobile apps, the essential steps to report a post as "fake news" are:

  1. Click on the "three dots" icon in the upper-right corner of the post.
  2. Select "report this post" (or "give feedback on this post").
  3. Select "It's a false news story" and then "Mark this post as false news".
Simple, eh?

Simple, but not useful.  This feature only applies to text posts.  The "report fake news" feature is not an option for images or videos.  Have you noticed a whole lot of posts lately where the text is baked into an image instead of appearing in an article with an accompanying image?  This could be one reason why.

If the text in an image makes an outrageous claim and does not cite any sources, it's probably false.  If you investigate and determine it's false, you can't report it as such.  Thanks, Facebook.

Facebook's (in)action on fake items
A feature to report fake text-based posts is still better than no feature at all.  So what happens when a post is reported as fake, gets reviewed by one of Facebook's fact-checking partners, and is confirmed as actually being fake?  Not too much.

Facebook will add a "Disputed by <insert partner name here>" tag below the post.  The company claims it will also "place stories lower in News Feed" and "reduce distribution" of pages that repeatedly share false news (see Facebook's descriptions here).

Here's what the "disputed" tag looks like:

So, apparently, even if a post is confirmed to be false by Facebook's fact-checking partners, Facebook will allow the post to continue to be shared and passed around.  

I suppose this approach amounts to an attempt to balance the protection of free speech with protecting the public interest, but the measure falls short of being as helpful as Facebook could be in such cases -- especially when images and videos are excluded from being reported as "fake".

Other Thoughts: "LIKE farming"
Tell your family and friends to stop falling for posts that ask you to "LIKE if you support", "SHARE if you agree", or "comment 'AMEN' to send a blessing".  These posts seem harmless but are actually "like farming" scams. 

In "like farming" scams, someone posts something that a large segment of readers will obviously agree with.  Later, the owner of the post changes the content to something bad such as a link to a virus (actually a link to a web page armed with browser exploits, but just tell your mom it links to a virus). 

Or perhaps the post's Facebook account owner sells the account to a third party that changes ALL the posts from that account to political statements supporting who knows what.  The posts could be changed to items supporting Trump, and now it looks like your mom "LIKES" Trump.  Or, if your mom actually does like Trump, explain that the items could be changed to posts supporting Hillary.  You get the idea.  Any time one of your mom's Facebook friends sees one of those posts, the post will be tagged with a message that your mom LIKES it.
You've probably seen the results of this bait-and-switch approach to Facebook content. A common example:  You see a car ad for Toyota and Facebook tells you your uncle Ray "likes" Toyota but you happen to know uncle Ray would -never- consider anything other than domestic brands.  

How did Ray end up "liking" Toyota?  It could go like this:  
  1. Ray LIKED a "LIKE farming" post that said "LIKE if you think we need to feed and house our veterans before giving Congress another raise".  
  2. Later, Toyota hires an ad agency to create and market ads on Facebook.  Success will be judged by the number of "LIKES" and "SHARES" the ads get.
  3. The ad agency hires a consultant who is a "Facebook expert" to help them get "LIKES" for the new ads the agency is creating. 
  4. The consultant buys a "LIKE farming" account that is already full of LIKED posts including the post your uncle Ray LIKED. The consultant changes the old posts' content to the new Toyota ads. 
  5. After a few weeks, the ad agency reports back to Toyota the high number of LIKES attached to the advertising posts.  All of uncle Ray's Facebook friends think he's gone soft on buying foreign cars.

Who benefited from that "feed and house our veterans" post?  The LIKE farmer and the consultant who was willing to buy "likes".  Meanwhile, Toyota and your uncle Ray got unknowingly scammed.

Other examples of "like farming" scam posts include:  
  • Most people can't find the mistake in this picture. SHARE when you see it.
  • Click LIKE if you would give up Facebook for a year to live in this mansion.
  • You will get a miracle in the next 7 days if you type AMEN in the comments.
  • 98% of people get this math question wrong. Answer in the comments if you know it.
  • Money will come to you in the next 3 days if you click SHARE in the next 7 seconds.

Similar participation scams involve answering quizzes to "see which Disney princess you are", "find out how Southern you are", or "test your rock-and-roll knowledge".

Rule of thumb:  Ask yourself, "Who benefits from this post, and how?"

No one is going to spend time and effort to make up these things for no gain and no credit.  The addition of Facebook "fake news" reporting options is a step in the right direction, but the best way to reduce Facebook fraud is to learn how to recognize it and not participate.
