BOYCOTT
CARELESS AI

We only use AI from the lab that leads on safety. Everyone else gets boycotted until they catch up on the FLI AI Safety Index.

1 Person Pledged
1 Organization Pledged

Our Manifesto

AI is transforming our world at unprecedented speed. The decisions made by a handful of companies today will shape the future for all of humanity. Yet most of these companies are racing to deploy ever-more-powerful systems with inadequate safety measures.

We believe that consumers have power. Where we spend our time, attention, and money sends a signal. Right now, that signal is muddled—people use whichever AI is most convenient, regardless of safety practices.

We're changing that.

The Future of Life Institute publishes the AI Safety Index, an independent assessment of how well AI labs are addressing safety. We use this as our guide: we exclusively use products from whichever lab leads the index, and boycott everyone else until they catch up.

This isn't about punishing bad actors—it's about rewarding leadership. When safety becomes a competitive advantage, companies will race to the top instead of cutting corners to ship faster.

One person switching AI tools won't change the industry. But thousands of us, making the same deliberate choice and telling others why? That creates pressure. That gets noticed in boardrooms. That shifts incentives.

Join us. Use only the leader. Boycott the rest.

Currently Boycotted

These labs aren't leading on safety. We don't use their products.

OpenAI: C+ (Score: 2.31)
Google DeepMind: C (Score: 2.08)
xAI: D (Score: 1.17)
Z.ai: D (Score: 1.12)
Meta: D (Score: 1.10)
DeepSeek: D (Score: 1.02)
Alibaba: D- (Score: 0.98)

The AI Safety Scorecard

Based on the Future of Life Institute's AI Safety Index (Winter 2025). We only use the leader until others catch up.

Official FLI AI Safety Index Scorecard - Winter 2025 (View Full Report)
Safety Leader (we use this)
Behind (we don't use these)
Anthropic (Safety Leader)
Overall: C+ (2.67)
Risk Assessment: B | Current Harms: C+ | Safety Frameworks: C+ | Existential Safety: D | Governance: B- | Information Sharing: A-

OpenAI (Behind)
Overall: C+ (2.31)
Risk Assessment: B | Current Harms: C- | Safety Frameworks: C+ | Existential Safety: D | Governance: C+ | Information Sharing: B

Google DeepMind (Behind)
Overall: C (2.08)
Risk Assessment: C+ | Current Harms: C | Safety Frameworks: C+ | Existential Safety: D | Governance: C- | Information Sharing: C

xAI (Behind)
Overall: D (1.17)
Risk Assessment: D | Current Harms: F | Safety Frameworks: D+ | Existential Safety: F | Governance: D | Information Sharing: C

Z.ai (Behind)
Overall: D (1.12)
Risk Assessment: D+ | Current Harms: D | Safety Frameworks: D- | Existential Safety: F | Governance: D | Information Sharing: C-

Meta (Behind)
Overall: D (1.10)
Risk Assessment: D | Current Harms: D+ | Safety Frameworks: D+ | Existential Safety: F | Governance: D | Information Sharing: D-

DeepSeek (Behind)
Overall: D (1.02)
Risk Assessment: D | Current Harms: D+ | Safety Frameworks: F | Existential Safety: F | Governance: D | Information Sharing: C-

Alibaba Cloud (Behind)
Overall: D- (0.98)
Risk Assessment: D | Current Harms: D+ | Safety Frameworks: F | Existential Safety: F | Governance: D+ | Information Sharing: D+

No company is doing well. We want every lab to take its safety responsibilities seriously.

Join the Movement

Add your name or your organization to the public list of those committed to only using AI from the safety leader—and boycotting the rest.

Take the Pledge

Your name or organization will be displayed publicly

By signing, I pledge to:

  1. Only use AI from whichever lab leads the FLI AI Safety Index
  2. Boycott all other labs until they take the lead on safety
  3. Advocate for stronger AI safety standards and regulation

Our Supporters

0 people have joined the boycott. Will you join them?

Frequently Asked Questions

Everything you need to know about the boycott

What is the AI Safety Index?

The AI Safety Index is a comprehensive assessment by the Future of Life Institute (FLI) that evaluates how well AI companies are addressing safety concerns. It grades companies across six domains: Risk Assessment, Current Harms, Safety Frameworks, Existential Safety, Governance & Accountability, and Information Sharing. The Winter 2025 report is their most recent evaluation.
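For readers curious how the letter grades relate to the numeric scores on the scorecard above, here is a minimal Python sketch. It assumes the standard US GPA mapping (A = 4.0, with plus/minus steps of roughly 0.33) and a plain unweighted average of the six domain grades; the FLI's actual aggregation is done by its expert review panel and may weight or round differently, so this only roughly approximates the published overall scores.

# Standard US GPA mapping (an assumption for illustration, not FLI's published methodology).
GPA = {
    "A": 4.0, "A-": 3.67, "B+": 3.33, "B": 3.0, "B-": 2.67,
    "C+": 2.33, "C": 2.0, "C-": 1.67, "D+": 1.33, "D": 1.0,
    "D-": 0.67, "F": 0.0,
}

# Anthropic's six domain grades from the Winter 2025 scorecard above.
anthropic_domains = ["B", "C+", "C+", "D", "B-", "A-"]

# Simple unweighted average on the GPA scale.
average = sum(GPA[g] for g in anthropic_domains) / len(anthropic_domains)
print(f"Unweighted average: {average:.2f}")  # prints 2.50; FLI's published overall score is 2.67

The gap between this naive 2.50 and the published 2.67 is a reminder to treat the official FLI report, not this approximation, as the source of truth for the scores.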

What if I think a different company is the safety leader?

That's completely valid. If you have good reason to believe a different company is the most responsible on AI safety, you're welcome to boycott every company except that one. The goal is to create market incentives for safety leadership—if your own research leads you to a different conclusion about who's leading, acting on that conviction still supports the broader mission of rewarding safety-conscious AI development.

Are there exceptions to the boycott?

AI safety research is a natural exception: if you're studying model behavior, testing for vulnerabilities, or developing safety techniques, using multiple models serves the same goal we're all working toward. Beyond that, we trust your judgment. This is about shifting consumer behavior to reward safety leadership, not creating a rigid set of rules.

Why use only the leader instead of just boycotting the worst performers?

We believe in rewarding leadership, not just punishing poor performers. By exclusively using products from whichever lab leads the safety index, we create a strong market incentive for all companies to compete on safety. This approach sets a high bar and creates a race to the top.

What happens if a different lab takes the lead?

The FLI releases updated scorecards periodically. If a different lab surpasses the current leader's safety score in a future assessment, we update our recommendation to use that lab instead. This is the goal—we want to reward whichever company leads on safety, creating healthy competition.

How can I help beyond taking the pledge?

After taking the pledge, share this website on social media with #BoycottCarelessAI. Talk to friends and colleagues about AI safety. When asked about AI tools, explain why you only use the safety leader. Every conversation helps normalize making AI safety a factor in consumer decisions.