
Facebook has removed 583 million fake accounts this year

Facebook revealed that the Community Standards Enforcement Report covers violating activity on the platform from October 2017 to March 2018.

The majority of the posts it acted on involved nudity, propaganda, graphic violence, terrorism and hate speech, among other objectionable content.

Facebook disabled 583 million fake accounts in the first three months of this year, one of the company's executives confirmed in a blog post on Tuesday.

Facebook says it disabled almost 1.3 billion fake accounts in the six months through March.

Overall, Facebook estimates that around 3% to 4% of its active accounts during this period were still fake.

Facebook said it removed 2.5 million pieces of content deemed unacceptable hate speech during the first three months of this year, up from 1.6 million during the previous quarter. Nearly 100 percent of the spam and 96 percent of the adult nudity was flagged for takedown by the company's detection technology before any users reported it.

Over the last 18 months, Facebook has significantly stepped up its efforts to identify inappropriate content and protect users, said vice-president of product management Guy Rosen.

The increased transparency comes as the Menlo Park, California, company tries to make amends for a privacy scandal triggered by loose policies that allowed a data-mining firm with ties to President Donald Trump's 2016 campaign to harvest personal information on as many as 87 million users.

The company also estimated that for every 10,000 times people viewed content on the social network, 22 to 27 of those views may have included posts with impermissible graphic violence.

Separately, Facebook has said that Zuckerberg "has no plans to travel to the United Kingdom", Damian Collins, who chairs the UK's Digital, Culture, Media and Sport Committee, said in a statement Tuesday.

"As Mark Zuckerberg said at F8, we have a lot of work still to do to prevent abuse, .It's partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important".

However, the social network admitted its automated tools still struggle to pick up hate speech: only 38% of the more than 2.5 million posts removed were spotted by the firm's technology.

The report, which will come out twice a year, also shows how well Facebook's artificial intelligence systems are learning to flag items that violate the rules before anyone on the site can see them. As Sheera Frenkel has reported, Facebook has been under pressure to remove nudity, violence and hate speech, among other "inflammatory content". "While not always ideal, this combination helps us find and flag potentially violating content at scale before many people see or report it."

Facebook is expected to host summits in India, Singapore, and the U.S.