
Can AI Really Solve Facebook’s Problems?

Despite CEO Mark Zuckerberg’s efforts to reassure Congress that artificial intelligence can help find fake news and protect privacy, lawmakers worry the tech may be “biased”

Mark Zuckerberg testifies before a combined Senate Judiciary and Commerce committee hearing in the Hart Senate Office Building on Capitol Hill April 10, 2018 in Washington, DC.

Congress interrogated Facebook founder and CEO Mark Zuckerberg for two days this week over his company’s privacy policies—and its apparent inability to prevent the misuse of its social media platform by those promoting hatred, terrorism or political propaganda. Throughout Zuckerberg’s apologies for not doing more to protect users’ privacy and curb the spread of false and misleading information on the site, he repeatedly reassured lawmakers that artificial intelligence would soon fix many of Facebook’s problems. Whether he has a strong case depends on how his company specifically plans to use AI—and how quickly the technology matures over the next few years.

Zuckerberg touted several AI successes before two Senate committees (Judiciary, and Commerce, Science and Transportation) as well as the House Committee on Energy and Commerce. Facebook AI algorithms already find and delete 99 percent of terrorist propaganda and recruitment efforts posted by Islamic State in Iraq and Syria (ISIS) and al Qaeda-related Facebook accounts, Zuckerberg testified during Tuesday’s Senate hearing. But the Counter Extremism Project (CEP), a nonprofit nongovernmental organization that monitors and reports on terrorist-group activity, disputed Facebook’s claim the same day. The CEP issued a statement saying it still finds “examples of extremist content and hate speech on Facebook on a regular basis.”

Facebook’s founder frequently reminded Congress that he launched the network from his dorm room in 2004, and he acknowledged several times that his approach to monitoring content has long relied on members reporting misuse. That reactive stance has contributed over the years to the company’s failure to quickly find and remove discriminatory advertisements, hateful content directed at specific groups, and terrorist messages, he said. Nor was Facebook equipped to handle the deluge of misleading news articles posted by Russian groups seeking to influence the 2016 U.S. presidential election. AI is already helping Facebook address some of those problems, according to Zuckerberg, who said that the company used the technology to find and delete “tens of thousands” of accounts seeking to influence voters prior to political elections in France, Germany and elsewhere within the past year.


Zuckerberg acknowledged that software that can automatically identify verbal assaults is more difficult to write, and still five to 10 years away. The main challenges are defining exactly what qualifies as hate speech, and training AI algorithms to identify it across a number of languages. In response to criticism that Facebook provided a forum for inciting violence against Rohingya Muslims in Myanmar (also called Burma) and was slow to remove such content, Zuckerberg said his company was ill-prepared to prevent its platform from being misused. He told senators that, in addition to developing AI software to automatically identify hate speech in the future, the company is hiring dozens more Burmese-speaking staff to monitor malicious activity directed at the Rohingya. He added that until the software is ready, there’s a “higher error rate flagging such content than I’m comfortable with.”

Facebook has about 15,000 people working on security and content review and plans to hire another 5,000 by the end of the year. “This is an arms race,” Zuckerberg said, noting 2018 is an important year for elections in the U.S. and elsewhere—and that Facebook’s adversaries will continue to get better at using the platform to spread misinformation.

“I believe that what [Zuckerberg] says is possible, but since the methods and results have not been published I cannot say for sure whether Facebook can achieve the accuracy required,” says Stuart Russell, a computer science and engineering professor at the University of California, Berkeley. “A combination of AI and crowdsourced flagging, with proper inference methods to estimate flagger reliability, should be able to clean up enough garbage that posting garbage becomes a futile activity.”
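Russell’s suggestion can be illustrated with a toy version of reliability-weighted flagging. The sketch below is a minimal assumption-laden illustration, not Facebook’s unpublished method: the flagger names, reports and the simple alternating update are all made up. It repeatedly takes a reliability-weighted vote on each post and then re-scores each flagger by how often that flagger agrees with the resulting consensus.

```python
# Toy sketch of crowdsourced flagging with estimated flagger reliability.
# All data and the update rule are illustrative assumptions.
from collections import defaultdict

# Hypothetical reports: (flagger_id, post_id, flagged_as_garbage)
reports = [
    ("alice", "post1", True), ("bob", "post1", True), ("carol", "post1", False),
    ("alice", "post2", False), ("bob", "post2", True), ("carol", "post2", False),
    ("alice", "post3", True), ("bob", "post3", True), ("carol", "post3", True),
]

reliability = defaultdict(lambda: 0.5)  # start by trusting everyone equally

for _ in range(10):  # alternate between estimating labels and reliabilities
    # 1) Reliability-weighted vote on each post.
    scores = defaultdict(float)
    totals = defaultdict(float)
    for flagger, post, flagged in reports:
        scores[post] += reliability[flagger] * (1.0 if flagged else 0.0)
        totals[post] += reliability[flagger]
    consensus = {post: scores[post] / totals[post] > 0.5 for post in scores}

    # 2) Re-estimate each flagger's reliability as agreement with the consensus.
    agree = defaultdict(int)
    count = defaultdict(int)
    for flagger, post, flagged in reports:
        agree[flagger] += int(flagged == consensus[post])
        count[flagger] += 1
    reliability = defaultdict(lambda: 0.5,
                              {f: agree[f] / count[f] for f in count})

print(consensus)          # e.g. {'post1': True, 'post2': False, 'post3': True}
print(dict(reliability))  # flaggers who often disagree with consensus lose weight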

AI’s ability to determine whether a Facebook post includes or links to false information requires the software to understand semantics and what different words mean in context, says Dean Pomerleau, an adjunct research scientist at Carnegie Mellon University’s Robotics Institute. “You need common sense to understand what people mean, and that is beyond AI’s current capabilities,” Pomerleau says. A poster’s ability to make subtle changes in text, images or video flagged as “objectionable” further complicates AI’s capacity to help Facebook. Text alterations, photo cropping and video editing defeat the simplistic pattern matching software might otherwise use to track content as it spreads and changes, Pomerleau says. A near-term approach that could work, he adds, would be for Facebook to equip the thousands of people it is hiring to monitor content with artificially intelligent tools that help them find and analyze fake news and other unwanted content.
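A toy example helps show why exact matching breaks down. In the hedged sketch below, the strings, the hashing choice and the similarity threshold are all made-up assumptions rather than any platform’s real pipeline: a flagged post and a lightly edited repost produce completely different fingerprints, while a fuzzier text-similarity check still links them, though it is costlier and says nothing about whether the underlying claim is true.

```python
# Toy illustration: exact fingerprints vs. fuzzy matching on edited text.
# Example strings and threshold are assumptions for illustration only.
import hashlib
from difflib import SequenceMatcher

flagged = "Breaking: candidate X secretly funded by foreign agents!"
reposted = "BREAKING - candidate X secretly funded by foreign agents!!!"

# 1) Simplistic pattern matching: an exact hash changes completely after
#    even a tiny edit, so the altered repost evades the filter.
def fingerprint(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

print(fingerprint(flagged) == fingerprint(reposted))  # False: the edit slips through

# 2) A fuzzier comparison still recognizes the near-duplicate, but it is
#    slower, can misfire on innocent posts, and cannot judge truthfulness.
def normalized(text: str) -> str:
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace())

similarity = SequenceMatcher(None, normalized(flagged), normalized(reposted)).ratio()
print(similarity > 0.9)  # True: the lightly edited copy is caught
```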

Facebook’s AI development is indeed well underway. The company launched the Facebook AI Research (FAIR) lab in 2013 under the leadership of AI luminary Yann LeCun. LeCun, who switched roles earlier this year to become Facebook’s chief AI scientist, focused FAIR’s research on developing predictive abilities that would enable the social media site to make educated guesses about what users want in order to better engage with them. That includes more customized news feeds and targeted advertisements as well as improvements in chatbots—artificially intelligent computer programs designed to provide information, enable online purchases and deliver customer service. The same algorithms designed to improve chatbots’ ability to recognize different languages and comprehend dialogue will be particularly relevant to Facebook’s ability to flag objectionable content worldwide.

A recurring concern in both the Senate and House hearings was whether Facebook can avoid political bias when defining objectionable content. Zuckerberg said Tuesday that the company has a team focused on how to ethically employ AI. Facebook will ultimately have to rely on AI to flag content that might be objectionable or require review, says Florian Schaub, an assistant professor of information at the University of Michigan. “They can’t hire enough humans to monitor every post that goes up on Facebook,” he says. “The big challenge, and the thing that we as a society should be concerned about, is that Facebook becomes the watcher over our morals. In that case, it’s a group of people at Facebook rather than society that decides what’s objectionable.”

Facebook has been in hot water with regulators over revelations that groups linked to Russian military intelligence services used the platform for years leading up to the 2016 election. Facebook discovered 470 accounts and pages contributing to a disinformation campaign run by Russia’s Internet Research Agency (IRA). Over a two-year period the IRA spent about $100,000 on more than 3,000 Facebook ads. The final straw that landed Zuckerberg in front of Senate and House committees, however, was the recent revelation that Cambridge University researcher Aleksandr Kogan turned over data on roughly 87 million Facebook users and their friends, without their permission, to Cambridge Analytica, a political data firm hired by Pres. Donald Trump's 2016 election campaign.

“It seems Facebook is going through a phase of reckoning and now starting to realize how socially impactful their platform is,” Schaub says. “For a long time they felt they were serving a great social function in getting people connected, and now they’re realizing there’s actually a lot of responsibility that comes with that. That seems to be a little bit of a shift, but at the same time this is not the first time we’re hearing Zuckerberg apologize for indiscretions on Facebook.”

The real question is more about Facebook’s will to change than its ability to develop the AI and other technology needed to better protect member privacy and prevent the spread of violent or misleading content, Pomerleau says. “The basic business model is built on sensational content,” he adds. “So it’s not clear that getting rid of all inflammatory stuff is in Facebook’s best interest.”