Stung by criticism of its widely reported role as a platform capable of spreading disinformation and being used by state actors to skew democratic elections, Facebook’s COO Sheryl Sandberg unveiled five new ways the company would be addressing these issues at the annual DLD conference in Munich, staged ahead of the World Economic Forum. She also announced that Facebook would fund a German university to investigate the ethics of AI, and a new partnership with Germany’s Office for Information Security.
Sandberg laid out Facebook’s five-step plan to regain trust:
1. Investing in safety and security
2. Protections against election interference
3. Cracking down on fake accounts and misinformation
4. Making sure people can control the data they share about themselves
5. Increasing transparency
Public backlash mounted last year after Facebook was accused of losing track of its users’ personal data and allowing the now-defunct Cambridge Analytica agency to target advertising at millions of Facebook users without their explicit consent during the US elections.
On safety and security, she said Facebook now employed 30,000 people to check its platform for hate posts and misinformation, five times more than in 2017.
She admitted that in 2016 Facebook’s cybersecurity policies were centered around protecting users’ data from hacking and phishing. However, these were not adequate to deal with how state actors would try to “sow disinformation and dissent into societies.”
Over the last year, she said, Facebook has removed thousands of individual accounts and pages designed to coordinate disinformation campaigns. She said it would be applying all these lessons learned to the EU parliamentary elections this year, as well as working more closely with governments.
Today, she said, Facebook was announcing a new partnership with the German government’s Office for Information Security to help guide policymaking in Germany and across the EU ahead of its parliamentary elections this year.
Sandberg also revealed the sheer scale of the problem. She said Facebook was now cracking down on fake accounts and misinformation, blocking “more than one million Facebook accounts every day, often as they are created.” She did not elaborate on which state actors were involved in this sustained assault on the social network.
She said Facebook was now working with fact-checkers around the world and had tweaked its algorithm to show related articles, allowing users to see both sides of a news story posted on the platform. It was also taking down posts which had the potential to create real-world violence, she said. However, she neglected to mention that Facebook also owns WhatsApp, which has been widely blamed for the spreading of false rumors leading to a spate of murders in India.
She cited independent studies from Stanford University and the Le Monde newspaper which have shown that Facebook user engagement with unreliable sites has declined by half since 2015.
In a subtle attack on critics, she noted that in 2012 Facebook was often attacked because it was a “walled garden,” and that the platform had subsequently bent to demands to open up and allow third-party apps to build on the service, enabling greater sharing, such as for game-play. However, the company was now in a “very different place.” “We did not do a good job managing our platform,” she admitted, acknowledging that this data sharing had led to abuse by bad actors.
She said Facebook had now dramatically cut down on the information about users which apps can access, appointed independent data protection officers, bowed to GDPR rules in the EU and created similar user controls globally.
She said the company was also increasing transparency, allowing other organizations to hold them accountable. “We want you to be able to judge our progress,” she said.
Last year it published its first community standards enforcement report, and Sandberg said this would now become an annual event, given as much status as the company’s annual financial results.
She repeated previous announcements that Facebook would be instituting new standards for advertising transparency, allowing people to see all the adverts a page is running and launching new tools ahead of EU elections in May.
She also announced a new partnership with the Technical University of Munich (TUM) to support the creation of an independent AI ethics research center.
The Institute for Ethics in Artificial Intelligence, which is supported by an initial funding grant from Facebook of $7.5 million over five years, will help advance the growing field of ethical research on new technology and will explore fundamental issues affecting the use and impact of AI.