Why This Matters
Misinformation spreads faster than fact-checkers can respond, and it has become a problem that social media platforms are struggling to contain. Major platforms have begun implementing policies to curb the spread of false information, and TikTok and Instagram are no exception.
Is anything coming out of this?
In the last post I touched on how I think TikTok and Instagram are dealing with the misinformation that’s spreading rapidly on their platforms. Here I’ll go into more detail, with examples, and offer my thoughts on what works and what doesn’t.
TikTok’s Approach to Misinformation
TikTok has taken a pretty aggressive approach when it comes to misinformation. According to its policy on combating misinformation (https://www.tiktok.com/safety/en/policies-and-engagement/combating-misinformation?from=search), the platform uses a mix of AI technology and third-party fact-checkers to identify misleading content.
What TikTok Does:
- Removes harmful misinformation
- Adds warning labels to disputed content
- Limits how far misleading videos can spread
- Redirects users to trusted sources (like WHO during COVID-19)
A good example of this was during the COVID-19 pandemic, when TikTok actively removed videos promoting fake cures and added links to verified health information from the WHO.
Where TikTok Falls Short:
Even with these efforts, misinformation still goes viral. Why?
Because TikTok’s algorithm is built around engagement. If something is entertaining, shocking, or emotional, it spreads. Fast.
From my own experience, I’ve seen videos that are clearly misleading rack up thousands of views before TikTok steps in. By that point, the damage is already done.
Instagram’s Strategy for Combating False Information
For those unaware, Facebook and WhatsApp have taken the blunter route of blocking content rated false by fact-checkers from appearing in users’ feeds and news streams entirely. Instagram (also a Meta property), by contrast, takes a more subtle, selective approach.
Learn more here:
https://about.instagram.com/blog/announcements/combatting-misinformation-on-instagram
What Instagram Does:
- Adds warning labels to false content
- Reduces how often flagged posts appear
- Requires users to click through warnings
- Penalizes repeat offenders
During elections, Instagram also directs users to verified information hubs and limits political misinformation.
What Works Well:
One thing Instagram does better is adding friction. If something is flagged, you have to actively choose to view it. That pause can make people think twice.
Where It Falls Short:
The biggest issue is inconsistency. Some posts get flagged quickly, while others slip through completely.
Also, misinformation spreads easily through:
- Stories
- DMs
- Private shares
These areas are harder to monitor, which creates loopholes.
Do These Policies Actually Work?
Kind of, but not enough!
Both platforms are clearly trying:
- Fact-checking systems ✔️
- Warning labels ✔️
- Content removal ✔️
But the problem is deeper than that.
Research on the misinformation effect (as summarized by the American Psychological Association) shows that emotionally charged content spreads faster than factual information.
And what do TikTok and Instagram reward? Engagement.
So even with moderation, the system itself still allows misinformation to thrive.
What’s Missing?
Here’s where both platforms are lacking:
1. Proactive Prevention
Most moderation happens after content goes viral.
2. Transparency
Users don’t really know:
- Why something was flagged
- Why something else wasn’t
3. Media Literacy
Platforms aren’t doing enough to teach users how to:
- Spot fake information
- Verify sources
- Think critically
How These Platforms Can Improve
If TikTok and Instagram really want to make a difference, here’s what they need to do:
✅ 1. Catch Misinformation Earlier
Invest more in AI + human moderation to stop harmful content before it trends.
✅ 2. Be More Transparent
Explain moderation decisions clearly so users understand what’s happening.
✅ 3. Teach Users
Add built-in tools or prompts that help users learn how to fact-check content.
✅ 4. Fix the Algorithm Problem
Reduce the reach of unverified viral content, even before it’s flagged.
Final Thoughts
TikTok and Instagram are definitely making progress, but they’re not solving the problem yet.
From both research and personal experience, it’s clear that misinformation isn’t just about bad content. It’s about how platforms are designed.
As long as engagement is the priority, misinformation will always find a way to spread.
If these platforms want to be truly responsible, they need to rethink not just what gets posted—but what gets promoted.