19 of Facebook's Biggest Scandals

There are certain movements, events, and fads that define each generation. The 1950s had the birth of rock 'n' roll, the 1960s had the civil rights movement, the '80s had Jazzercise, and the era that started in 2003 had social media. Why 2003? That's when Tom Anderson founded Myspace, the Facebook predecessor that allowed users to bling out their pages with all the GIFs, sparkles, pop-ups, and music that their hearts desired. That was quickly eclipsed by the new kid on the block.

Mark Zuckerberg founded "Thefacebook" in 2004, and it was a shockingly massive success. In 2007, The Guardian reported that Zuckerberg had already fielded buyout offers of $2 billion, and things have only gone up from there.

But they've also gone a little sideways, and they've dragged a lot of people down in the process. Facebook (whose parent company is now called Meta) and Zuckerberg have had so many scandals that there's a good chance many users hear the words "Facebook" and "scandal" in the same sentence and wonder which one is being talked about this time. There's been a lot going on over at Facebook, so let's have a refresher on some of the biggest scandals that have ever rocked this seemingly invincible juggernaut of social networking.

19. The Winklevoss twins claim Mark Zuckerberg stole the idea for Facebook

When "The Social Network" hit theaters, it was 2010, Facebook was still a fairly novel idea, and only two years had passed since Facebook founder Mark Zuckerberg settled the OG scandal by making Cameron and Tyler Winklevoss an offer of $45 million in shares in the company, and another $20 million in cash.

The basics of the lawsuit were pretty straightforward: The Winklevoss twins argued that they were the ones who had come up with the whole idea of Facebook, and that Zuckerberg had initially agreed to help them get it off the ground. While he dragged his feet on their ConnectU, they claimed, he turned around and launched Facebook first.

The entire thing was a saga that just never seemed like it was going to end, until it finally did — pretty abruptly, says Wired. And here's the weird thing: It might seem like accusations of theft, double-crossing, and just being a jerk would put a stink on the whole thing, but Wired later observed: "Facebook can afford to buy back its good name."

18. The company hired consultants to run a smear campaign against TikTok

Back in 2022, TikTok went through a bit of a rough patch as far as its public image went. Headlines were filled with clips of kids being pushed into all kinds of dangerous and illegal trends, and TikTok said it was "deeply concerned" about "the stoking of local media reports on alleged trends that have not been found on the platform." One of the biggest stories to circulate was the claim that a so-called "Slap A Teacher" challenge was going viral on TikTok, and according to The Washington Post, there are three important things here. First, it wasn't a viral TikTok thing at all. Second, the rumor first started on Facebook.

And third, it turned out that the stories had all been spread thanks to a Republican consulting firm called Targeted Victory. Who had hired them? Facebook. The idea was a massive smear campaign that would, ideally, not only take a bite out of TikTok's popularity but also take some of the heat off Facebook's own problems, according to emails exchanged about the Facebook-Targeted Victory partnership.

What ended up happening was a series of nationwide stories that were completely unfounded: Most came from anonymous "parents" who were concerned about things they'd heard or seen about TikTok, and it even escalated into appeals for state attorneys general to investigate. Targeted Victory's collaboration with Meta/Facebook stayed out of the conversation, raising some serious questions about the tech giant's reach.

17. Facebook gives some users special treatment with XCheck

In a perfect world, the rules would apply to everyone. It's not a perfect world, though, and Facebook is about as far from perfect as you can get. In theory, Facebook says everyone is on the same platform, whether you're a college student, a college professor, or the college president, and everyone is supposed to be playing by the same rules. But in 2021, The Wall Street Journal published what it had found out about a program called XCheck.

In a nutshell, users who are part of XCheck are "whitelisted," meaning they're exempt from having their posts deleted or even checked by the systems put in place to stop regular people from posting things like nudity. And that's where our example comes in. In 2019, the Brazilian footballer Neymar posted nude pictures of a woman who had accused him of rape. His XCheck status let the photos through, and millions of people saw them before they were taken down.

Facebook's official stance on XCheck was summed up by spokesman Andy Stone, who wrote that it "was designed for an important reason: to create an additional step so we can accurately enforce policies on content that could require more understanding." Meanwhile, regular, non-XCheck users are subjected to what WSJ calls "rough justice," which a lot of people are unhappy about for a lot of reasons. 

16. Facebook tracks your activity even when you aren't using it

There's a good chance that everyone who has a Facebook account has had a freaky experience that raised questions about whether or not the company is doing some hardcore stalking. In short? They are, they've been pretty squirrely about saying so, and no one's happy about it. In 2018, The Guardian reported that a lawsuit had been filed in California alleging that Facebook had designed apps that would allow access to things like text messages, photos, GPS, and even a phone's microphone. Facebook responded that they didn't have to hand over information proving or disproving the allegations because it was tied to confidential business matters, a response that did the opposite of reassuring people.

Fast forward through two years of people wondering just what information Facebook had on them, and what it was listening to. When The Washington Post reported on the unveiling of the new "Off-Facebook Activity" tracker, it was confirmation of what everyone had been saying for years, all packaged up in a way that made it look like Facebook was really concerned about your privacy.

The tracker came with confirmation that even when the Facebook app is closed on a phone, activity in other apps, from stores' rewards programs to which news articles are opened and read, was reporting right back to Facebook. And here's the really terrifying thing: Even turning your phone off doesn't stop the reporting, as many stores will upload information about the purchases you make. Big Brother is definitely watching.

15. Facebook censored a historically important image from the Vietnam War

It's one of the most iconic images of the Vietnam War: 9-year-old Kim Phuc running down the road, screaming in agony after she and her family were accidentally hit with napalm. In 2016, a Norwegian author named Tom Egeland shared the photo on Facebook, and not only was it censored for violating the nudity policy, but Egeland's account got hit with a 24-hour ban. According to The Washington Post, the ensuing outrage spiraled through Norway, with even the prime minister and Norwegian print media outlets speaking out against the censorship, and Kim Phuc herself condemned the decision.

Facebook initially doubled down and defended the censorship, but the outrage just kept growing to the point where they finally said that the "historical importance" of the image "outweighs the value of protecting the community by removal."

It was a huge deal, and not just because of Facebook's hit-or-miss enforcement of censorship. At the time, 44% of Americans used Facebook as a major news source, and the removal of the photo showed just how much the powers-that-be could shape what people did and didn't see. When Norway's biggest print newspaper published an open letter about it, the paper described Zuckerberg as the "world's most powerful editor," and that's food for thought.

14. Facebook accidentally released the personal information of 6 million users

File this one under "W" for "Whoops!" In 2013, Facebook announced they had just realized there was a massive bug in their system that had been kicking around for about a year, and had accidentally released the personal data of about 6 million people out into the nether regions of the internet — and onto the hard drives of other users.

According to CNET, Facebook's own white hat hackers (the good guys, who try to find exploits in systems before they become problems) discovered the bug in the Download Your Information tool. When users downloaded their own information, the tool downloaded information on all their contacts, too. And there was a massive problem: If those contacts had phone numbers and addresses in the system that were set as private, it downloaded all that private information as well.

Facebook was quick to say (via Reuters) that while the bug had been downloading private information all over the place for a year, they plugged the hole within 24 hours, and found no evidence it had "been exploited maliciously."

13. They've been heavily fined for not adhering to EU data protection laws

Data protection laws are almost ridiculously complicated. In a nutshell, the laws put in place by the European Union are stricter than those in the United States, which can vary wildly between individual states. In May of 2023, the European Data Protection Board cracked down on Meta and Facebook in a big way with the largest fine in its history.

The fine was for exporting European data to the United States, and the total? A whopping $1.3 billion. At the heart of the matter was Meta's insistence that transferring data from users in the EU to the U.S. was necessary to allow the targeted ads that creep out so many people. But because the U.S. has less-strict data protection laws, European judges ruled that the transfer was putting that data at risk. And it was a huge deal: Meta even threatened to pull out of Europe altogether, leading to EU lawmaker Axel Voss's gloves-off response (via The Verge): "Meta cannot just blackmail the EU into giving up its data protection standards. Leaving the EU would be their loss."

It's not the first time Facebook has run afoul of EU laws, either. In 2022, Reuters reported that the company had been fined around $64 million by France because data protection and privacy watchdogs found it was too difficult for users to find and decline cookies. Facebook was given three months to comply with French guidelines or risk further fines to the tune of about $105,000 per day.

12. Facebook was a source of election misinformation via Russia

Few American elections were as divisive as the 2016 presidential election, and piling on top of the chaos was Facebook. Sort of. It wasn't until late 2017 that NBC reported Facebook had submitted evidence to the Senate Judiciary Committee showing that around 126 million people had gotten Russian-backed campaign information delivered straight to their news feeds. At least 120 Russian-run Facebook pages shared tens of thousands of posts, which users then helped go viral, to the point where even Facebook didn't know how far the content had spread or how many people had actually seen it.

It wasn't a good look. Estimates suggest that about a third of all Americans saw campaign information that was definitely from Russia but didn't have an easily identifiable origin. (In all fairness, other social media sites, like Twitter, were also hit.) Facebook downplayed the impact it may have had, with Facebook attorney Colin Stretch explaining that in reality, it meant only 1 in 23,000 posts was from Russia. Was there an impact on elections? The answer was dissatisfying, and essentially a "probably not... but we can't really rule it out."

Still, most people weren't easily swayed, because what about next time? And the time after? As pointed out by George Washington University media and technology professor Dave Karpf, the real problem is that yes, it's proof that other countries can and will try to influence American elections, and Facebook is a soft way in.

11. They were accused of discriminatory advertising

Can advertisements discriminate? Of course they can, and in 2022, Facebook parent Meta settled with the Department of Justice over accusations that it had used algorithms to push housing ads based on criteria (including ethnicity and gender) that violated the Fair Housing Act. The lawsuit, said CNBC, led to a settlement in which Meta would shell out $115,054. That wasn't the end of it, though, and a few months later, researchers from Northeastern's Khoury College of Computer Sciences released findings that other ads were similarly targeted.

Study author Alan Mislove explained that the people pictured in ads were directly related to whom the ads were shown. Black men and women, for example, appeared more frequently in the news feeds of Black users. The ad features a young woman? That was going to show up in the news feeds of older men. The researchers and a global human rights group called Global Witness found similar trends among job ads, with women more likely to see ads for fields like teaching and child care, while men were shown ads for mechanic jobs.

A Meta spokesperson told CNN that "we do not allow advertisers to target these ads based on gender," but a class action lawsuit over precisely those claims was given the go-ahead in a Quebec court. That lawsuit was kick-started by a CBC investigation into so-called "micro-targeted" ads, and it was estimated that fines could run into the millions of dollars.

10. Facebook's undisclosed psychological experiment

In 2012, Facebook decided to do a little experiment. They took 689,003 people and mucked about with their news feeds to remove what The Guardian described as "emotional words," all to see how it impacted things like likes, shares, and interactions. Then, they monitored what people posted and did after the keywords were manipulated, to see if there were identifiable patterns.

It gets worse: This emotional manipulation was done without the knowledge or consent of the test subjects. University of Maryland law professor James Grimmelmann explained the multi-faceted problem: Not only did Facebook bypass one of the hallmarks of ethical research, the informed consent of subjects, but the study was expressly designed (and confirmed) to change people's emotional and mental states, an idea Grimmelmann called downright awful: "This is bad, even for Facebook."

Facebook, of course, started by saying that people accepted the terms of use, and consent was buried in there somewhere. When outrage continued, then-COO Sheryl Sandberg released a ... well, it was a statement: "This was part of ongoing research companies do to test different products, and that was what it was; it was poorly communicated ... We never meant to upset you." At the heart of it was advertising: Facebook wanted to know if ads with positive and negative words would cause people to post similar sentiments. When the news broke, people responded with words like "creepy" and "terrifying," and some went right to "evil."

9. They were accused of specifically targeting children

The impact that media has on children and young teens has been argued for decades, and in 2023, Facebook parent company Meta was the target of a lawsuit that spanned 33 states and accused them of using tools that were specifically implemented to target younger users. Add in states that chose to file their own similar lawsuits and not join in the larger suit, and that number rose to 41 states and Washington, DC.

According to The New York Times, the lawsuit included the accusation that "Meta has harnessed powerful and unprecedented technologies to entice, engage, and ultimately ensnare youth and teens. Its motive is profit." Those technologies included things like constant push alerts and updates, along with algorithms designed to target specific users with collected and stored data, then keep them scrolling through content.

A big part of the complaint, says the Associated Press, is the claim that Facebook and other Meta properties have less-than-effective age gates in place: Even though children under 13 are supposedly banned from setting up their own accounts, it's been proven repeatedly that the ban is pretty easy to get around. That's kick-started numerous investigations into the impact social media has on children, and there are some pretty powerful precedents in place, including the ruling in a British case that found Instagram was, indeed, partially responsible for the suicide of a 14-year-old who had repeatedly viewed suicide-related content on the platform.

8. Facebook downplays its negative effects on young people

In March 2021, Mark Zuckerberg made this comment in front of Congress when he was asked about Facebook's findings on the relationship between mental health and social media: "The research that we've seen is that using social apps to connect with other people can have positive mental health benefits." Adam Mosseri, boss of the Facebook-owned Instagram, has said similar things, but according to The Wall Street Journal, the findings kept behind closed doors were very different.

Instagram in particular — filled as it is with photos of scantily-clad, perfect bodies on beaches and in bikinis — was found to have the potential to devastate teen users. A March 2020 presentation explained that 32% of teenage girls surveyed said that looking at Instagram made them feel worse about their own appearance. The presentation also said, "Teens blame Instagram for increases in the rate of anxiety and depression. This reaction was unprompted and consistent across all groups."

That's become a big deal, and it's led to Congress demanding Facebook hand over their research. They haven't. It's also led to people like San Diego State University psychology professor Jean Twenge likening the connection between Facebook and teen mental health to the one between smoking and cancer, making this not so much a scandal as a scandal-in-progress.

7. The Cambridge Analytica scandal

The Cambridge Analytica scandal is one of the biggest in Facebook's dubious history, and here's what went down ... in a nutshell, because it's a doozy. According to The Guardian, it started when the companies Cambridge Analytica and Global Science Research teamed up on a Facebook app called thisisyourdigitallife. It was marketed as a personality test and collected all kinds of data from all kinds of people who took it.

The first problem was that it also collected all kinds of data from all kinds of people who were friends with the people who took it. It then came out, in part thanks to a whistleblower, that Cambridge Analytica's leadership included Steve Bannon, who just happened to be buddy-buddy with Donald Trump. The same whistleblower revealed that the company then set out to use all that personal data to build a program that would target voters with political advertisements specially selected to have the most impact and guide them toward making the "right" choice.

It wasn't just U.S. elections that were affected, either. Investigations claimed there was a distinct possibility that interference from the secret data collection and skewed ads had impacted the Brexit vote as well. For their part, Facebook had a response: It was all fake news. Then, in 2020, the BBC reported that the company was being sued for illegally harvesting the data of a whopping 87 million people.

6. Facebook's $5 billion fine

Facebook's Cambridge Analytica scandal was the scandal that just kept on giving, and things were still going on in 2019, a year after it blew up into international headlines. That, says Forbes, was when the Federal Trade Commission (FTC) announced it was hitting Facebook with a whopping $5 billion fine for privacy violations. It was the largest the agency had ever issued, and it also came with instructions on how to make sure something like this didn't happen again. Outlined in the settlement were guidelines explaining how Facebook would be responsible, and held accountable, for privacy concerns in the future, including a more hands-on approach by the FTC.

Five billion is a ton of money; in fact, if it were paid out in $100 bills, it would weigh around 50 tons (100,000 pounds). The really shocking thing didn't come out until 2021, when Politico reported that shareholders were none too pleased about a massive WTF moment. It turned out that the FTC had originally said the fine would be around $106 million, but it allowed Facebook to overpay, up to that magic $5 billion number, in exchange for a promise that Mark Zuckerberg and Facebook COO Sheryl Sandberg would not be held personally liable.

One shareholder put it like this: "The Board has never provided a serious check on Zuckerberg's unfettered authority. Instead, it has enabled him, defended him, and paid billions of dollars from Facebook's corporate coffers to make his problems go away."

5. Facebook is not concerned about hate, allegedly

Facebook had been called out repeatedly for allowing things like misinformation and hate speech, and in 2021, a whistleblower came forward with tens of thousands of pages of documents that she said proved the higher-ups at Facebook didn't actually care how much vitriol was being spewed; they were only looking at the bottom line. Her name was Frances Haugen, and she told "60 Minutes": "The thing I saw at Facebook over and over again was there were conflicts of interest between what was good for the public and what was good for Facebook. And Facebook, over and over again, chose to optimize for its own interests, like making more money."

Haugen was headhunted by Facebook in 2019 and worked in the Civic Integrity program. The name of the program is self-explanatory, and so is the fact that Facebook got rid of it immediately after the 2020 election because there hadn't been any rioting (...yet). Haugen says it was a red flag that Facebook just wanted to be free to do whatever it wanted, and that included keeping the algorithm that decides what shows up on news feeds.

Like guys with Confederate flags? Worry about chemtrails? Think 5G is the real problem? Then that's what's going to show up in your news feed: Even though the content might be damaging, the algorithm still serves it to people it knows will stay on Facebook longer and keep clicking on ads, all of which increases revenue.

4. Facebook's slow response to human trafficking

Facebook has a really, really dark side, and it's one that came out in 2021. That's when The Wall Street Journal published a massive exposé based on tons of documents they'd received, showing that a lot of people were using Facebook for unsavory purposes. Worse? Facebook seemed to know about it, and just didn't care.

The stories are horrible, and they start with a Mexican cartel that was reportedly using Facebook for everything from recruiting new members to hiring hit men. Look to the Middle East, and there's a disturbing trend of human traffickers using the platform to reach women who are then held as sex workers or slaves. Investigators even found pages for illegally selling organs, and bizarrely, a lot of this activity is in plain sight. That Mexican cartel? It's technically labeled as one of the "Dangerous Individuals and Organizations" Facebook says they're on the lookout for, but investigators found multiple pages that not only went by the cartel's name, but showed pictures of blood, beheadings, and guns to ... advertise?

That extended to the Facebook-owned Instagram, where photos included one of a bag of severed hands. Facebook declined to comment on specifics, but did say it had spent millions of hours taking down potentially damaging or violent content. The problem? Even though more than 90% of users are outside of the U.S., just 13% of Facebook's moderation time is spent looking at those users.

3. They were accused of not doing enough to stop child exploitation

No one wants to think about the fact that this is a world where child sex trafficking and exploitation are a thing, but here we are. In 2023, The Guardian spoke with Tina Frundt, the founder of a Washington, DC-based center set up to help child victims, and she said that since they opened their doors in 2008, one thing had remained clear: Facebook and Instagram were the go-to platforms for predators looking to recruit, exploit, and sell children.

Survivor stories are awful, and often involve being contacted via direct messages, befriended, and convinced to agree to an in-person meeting. Then, those same platforms are used to advertise. Frundt explained, "When I was trafficked long ago, I was advertised in the classified sections of freesheet newspapers. Now my youth here are trafficked on Instagram. It's exactly the same business model, but you just don't have to pay to place an ad."

Meta/Facebook has issued statements saying that they're doing all they can to remove predators from their platforms, but none of this is news. In 2021, TechCrunch reported that more than 90% of sex trafficking involving children in Kenya involved Facebook, and in 2020, a group called the Tech Transparency Project released findings (via The Guardian) that hundreds of children had been trafficked on Facebook over the prior six years, and Facebook's tech had caught only 9% of the cases.

2. Planning the Capitol riots on Facebook

The Capitol riots of January 6, 2021 were the moment that the rest of the world stopped, looked at America, and asked, "What the heck are you guys even doing?" 

Every riot needs a way to organize, and even as the chaos died down, fingers pointed to Facebook. NBC News reported that Facebook had been used as a major platform to plan attacks on the U.S. Capitol, and a nonprofit organization called the Tech Transparency Project flagged a bunch of pages that specifically called out January 6th as go-time. They found that Facebook content in the previous month included things like groups calling citizens to arms, posts in Nazi-style fonts, and appeals for vigilantes who were ready to "Occupy Congress."

Then-Facebook COO Sheryl Sandberg adamantly said that it definitely wasn't Facebook's fault, and that the company had removed somewhere around 350,000 questionable profiles for inciting violence. That's great, but CNBC says they still missed some big ones. On January 5th, for example, the Black Conservatives Fund told its 80,000 followers that it was time to move on the Capitol.

The conversation was still going on months later, with Facebook saying it had put 35,000 employees on security teams in the months leading up to the election. Forbes said that internal memos warned employees to be ready for whistleblower accusations that the work had stopped too soon and that Facebook had contributed to the rioting, and a whistleblower went on to say exactly that.

1. Accusations that Facebook is a tool to incite genocide

In 2020, the BBC reported on a crisis that had been going on in Myanmar for a long time. Here's the gist: The government views the Rohingya Muslims, one of Myanmar's ethnic minorities, as illegal immigrants, even though they say they've been there for generations. Persecution led to a mass exodus that turned violent, and thousands of people ended up dead in what the UN said was carried out with "genocidal intent."

Here's where Facebook comes in. For a long time, Myanmar had little to no internet access; it was blocked by the military government, and it wasn't until 2011 that telecommunications firms were allowed in. When they were, they brought Facebook with them, often pre-loaded onto devices. It quickly became a news source for many, so when "news" stories started being shared about atrocities supposedly committed by the Rohingya Muslims, a lot of people saw them.

And a lot of people got very, very angry. According to The New York Times, Facebook became one of the largest sources of anti-Rohingya hate and propaganda of the conflict, and even when Facebook noticed — and removed many of the accounts that were spreading the misinformation — more just popped up in their place. Some of the military-run accounts purported to be entertainment or beauty sites, and amassed millions of followers before the propaganda machine kicked into high gear. Facebook was ultimately called out for allowing themselves to become a platform for inciting genocide.
