Mashinii

Anthropic.

ANTHROPIC.P | Computer programming activities

Anthropic is an American artificial intelligence (AI) safety and research company founded in 2021 by former senior members of OpenAI. The company is dedicated to building large-scale AI systems that are reliable, interpretable, and steerable, with a core focus on AI safety.

Ethical Profile

Mixed.

Anthropic reached a massive $1.5 billion settlement in September 2025 to resolve a class-action lawsuit alleging the company used millions of pirated books to train its AI models. While the firm operates as a Public Benefit Corporation with a "Long-Term Benefit Trust" to ensure board independence, it was designated a national security supply chain risk by the U.S. Department of Defense in March 2026 following a dispute over military use restrictions. Despite these legal challenges, Anthropic maintains industry-leading safety standards, including ISO/IEC 42001 certification and a 99.78% harmlessness rate in multi-language testing. Environmentally, the company lacks a formal net-zero target but has committed to covering 100% of grid upgrade costs for its data center partners.

Value Scores

Each score is on a scale from -100 to +100.

Better Health for All: N/A (not applicable to this business)
Fair Money & Economic Opportunity: N/A (not applicable to this business)
Fair Pay & Worker Respect: 0
Fair Trade & Ethical Sourcing: 0
Honest & Fair Business: -20
Kind to Animals: N/A (not applicable to this business)
No War, No Weapons: 0
Planet-Friendly Business: -20
Respect for Cultures & Communities: 0
Safe & Smart Tech: +40
Zero Waste & Sustainable Products: N/A (not applicable to this business)

Better Health for All

Not Applicable

Value not applicable to this business. Anthropic is a software company focused on AI safety and large language models; its core business does not involve the development of medical treatments, healthcare services, or public health initiatives.

Fair Money & Economic Opportunity

Not Applicable

Value not applicable to this business. Anthropic is an AI research and software company; its core business does not involve lending, insuring, moving, or storing money, making the Fair Money & Economic Opportunity value inapplicable.

Fair Pay & Worker Respect

0

As a software and AI research company, Anthropic's core business model does not inherently advance or harm fair pay and worker respect; these outcomes are determined by the company's internal HR policies and operational practices rather than the nature of the product itself. Based on the provided evidence, Anthropic maintains a clean record regarding labor-related legal issues. Multiple articles (Articles 1, 2, 3, 4, and 5) explicitly state there are no mentions of regulatory actions, labor violations, fines, or discrimination findings in the available documentation.[6][7][8][9] This supports a tier of 10 for labor_violation_incidents, representing zero modern-slavery or discrimination findings over the assessed period. While the company provides qualitative information regarding its 'crowd worker wellness standards' and its commitment to partnering only with platforms that provide 'fair and ethical compensation' and 'safe workplace practices' (Articles 10, 11, 12, and 13), these are primarily policy statements and contractual requirements rather than quantitative performance data.[14][15][16][17] The articles explicitly note the absence of specific metrics for living wage coverage, CEO pay ratios, collective bargaining, safety incident rates, or turnover.[18][19][20][21] Consequently, other KPIs are omitted due to a lack of substantive quantitative evidence or third-party verification of the outcomes of these policies.

Fair Trade & Ethical Sourcing

0

As a software company, Anthropic's primary supply chain involves data centers and hardware infrastructure, which are subject to risks regarding conflict minerals and labor practices in the electronics sector. While the core business is digital, the company's reliance on physical compute infrastructure makes ethical sourcing of hardware a relevant, though not inherent, aspect of its operations. Based on the provided evidence articles, Anthropic is an AI safety and research company (ISIC 6201) focused on software development and large language models.[1] The articles discuss AI safety protocols, 'Responsible Scaling Policies,' and geopolitical disputes regarding the use of its technology in military contexts.[2][3][4] There is no evidence in the provided texts that Anthropic procures or trades physical commodities, high-risk minerals, or agricultural products typically associated with fair-trade certifications or physical supply-chain labor audits.[5][6] The articles explicitly state that information regarding fair trade certifications, supplier audit frequencies, forced/child labor incidents, and supplier diversity spend is not mentioned.[7][8][9][10][11] Because the company's primary business is computer programming and AI research, and the provided articles contain no data on physical supply chain management or procurement of at-risk materials, all KPIs are scored at Tier 0 (N/A): the organization does not appear to operate in a sector or manner that involves the procurement of physical commodities or the management of a traditional manufacturing supply chain requiring these specific welfare and sourcing audits.[12][13]

Honest & Fair Business

-20

As a software company, Anthropic's core business is neutral regarding 'Honest & Fair Business' by default. While its 'Constitutional AI' framework and PBC status suggest a commitment to transparency and ethical alignment, these are behavioral attributes rather than inherent features of the ISIC 6201 classification. Anthropic's score is primarily driven by a massive $1.5 billion settlement reached in September 2025 to resolve a class-action lawsuit.[1] The suit alleged the company used millions of pirated books from unauthorized websites to train its AI models, a significant breach of ethical business practices and intellectual property rights.[2] This billion-dollar civil settlement triggers the lowest tiers for regulatory fines and complaint resolution.[3] Further impacting its controversy profile, the U.S. Department of Defense designated Anthropic a "national security supply chain risk" in March 2026.[4] This designation followed the company's refusal to lift restrictions on its Claude AI tool regarding military use for autonomous weapons, leading to a high-profile legal challenge by Anthropic against the Pentagon.[5] Conversely, Anthropic demonstrates proactive governance and transparency. It operates as a Public Benefit Corporation with a unique "Long-Term Benefit Trust" designed to ensure board independence from financial interests.[6] The company maintains a robust whistleblower policy under its Responsible Scaling Policy (RSP), featuring anonymous reporting channels and escalation paths to the Board.[7][8] Anthropic also subjects its safety and ethical claims to rigorous third-party verification, including ISO/IEC 42001:2023 certification and external evaluations by the U.S. and U.K. AI Safety Institutes.[9][10] It publishes extensive transparency reports, including those required by the EU Digital Services Act, and maintains a public Transparency Hub.[11][12]

Kind to Animals

Not Applicable

Value not applicable to this business. As a pure-software and AI research company, Anthropic's core business operations do not involve animal products, testing, or agricultural activities, making the value of 'Kind to Animals' inapplicable.

No War, No Weapons

0

As a general-purpose AI research company, Anthropic's core business is not inherently tied to the defense or weapons sector; however, because its LLMs can be integrated into dual-use applications, the company's impact on this value depends on its specific deployment policies and military-related partnerships. Anthropic's relationship with defense and national security is characterized by a significant tension between its commercial engagement and its safety-first mission. Regarding **revenue_arms_contracts**, the company held a $200 million contract with the Pentagon (July 2025) for intelligence analysis, battle simulation, and cyber operations.[1][2] While this represents a small fraction of the revenue of major defense primes, it constitutes a direct defense contract, placing it in the -30 tier. In terms of **dual_use_technology**, Anthropic developed 'Claude Gov' specifically for national security customers and deployed models on classified networks via partners like Palantir.[3][4] These systems are evaluated on a case-by-case basis, but the core technology is inherently dual-use, warranting a -50 tier. Anthropic distinguishes itself through its **ethical_red_lines_coded** and **ai_military_safeguards**. The company has codified two 'red lines': a ban on mass domestic surveillance and a ban on fully autonomous weapons (lethal systems without a human in the loop).[5][6] CEO Dario Amodei publicly refused Department of Defense (DoD) requests to remove these safeguards in February 2026, leading to a high-profile conflict in which the U.S. government designated Anthropic a 'supply chain risk' and ordered federal agencies to cease using its technology.[7][8][9] This refusal to allow military use for lethal autonomous systems aligns with the -10 tier for both KPIs (external/policy-driven prohibition of specific military uses). Finally, for **surveillance_transparency**, Anthropic has published non-confidential use cases and maintained an ethics-driven stance against domestic surveillance despite federal pressure, justifying a tier of 50.[10][11] The company also demonstrated conflict-related divestment by forgoing 'several hundred million dollars' in revenue to cut off access to firms linked to the Chinese military.[12]

Planet-Friendly Business

-20

As a software-based AI research company, Anthropic's core business does not inherently harm or advance environmental stewardship, though its high-compute operations carry an indirect carbon footprint typical of the tech sector. Anthropic is an AI research company that operates as a Public Benefit Corporation.[1] Based on the provided evidence, the company's environmental performance is characterized by high-level commitments and emerging initiatives, but lacks granular quantitative disclosure.[2][3][4] Regarding emissions, the company is classified as a pure services provider in a very low carbon intensity industry (Computer Services), justifying a tier of 0 for scope_1_2_3_emissions.[5] However, it has not set a formal net-zero target year or validated science-based targets (SBTi), leading to negative scores for those KPIs.[6][7] While Anthropic claims to conduct annual carbon footprint analyses and purchase offsets to maintain a "net-zero climate impact," it does not disclose the specific quality or certification (e.g., Gold Standard) of these offsets, resulting in a tier of -80 for carbon_offset_quality_rating.[8][9] On the positive side, Anthropic has launched several climate-positive initiatives.[10] It has committed to covering 100% of grid upgrade costs for its data centers, investing in curtailment systems for demand management, and facilitating new power generation capacity.[11] These actions align with the -40 tier for climate_positive_initiatives (funding frontier R&D/partnerships).[12] Additionally, the company has established emerging climate justice initiatives by pledging to absorb demand-driven price effects to protect American ratepayers from energy cost increases caused by AI data centers, and by working with local leaders to generate construction and permanent roles, matching the -40 tier for climate justice.[13] Other KPIs, such as water use, renewable energy percentages, and supply chain transparency, were omitted due to a lack of specific quantitative data in the provided articles.[14][15][16]

Respect for Cultures & Communities

0

As a software-based AI company, Anthropic's core business does not inherently harm or advance community rights; however, it remains in scope due to the potential for its hardware-intensive supply chain (data centers and server infrastructure) to impact communities through mineral extraction and energy consumption. Anthropic's performance regarding 'Respect for Cultures & Communities' is characterized by a lack of operational grievances but significant gaps in formal human rights governance and specific cultural risks. Regarding cultural heritage, evidence suggests an isolated incident in which the use of Claude in military intelligence (via Palantir's Maven platform) may have contributed to the targeting of an Iranian school due to out-of-date mapping data.[1] While this is an operational error rather than intentional destruction, it represents a failure in heritage protection within a high-stakes deployment context.[2] On community grievance resolution, the company scores poorly (-100) because it has not yet adopted a comprehensive human rights policy or a functional community grievance mechanism as outlined by the UN Guiding Principles on Business and Human Rights.[3][4] External observers have noted that Anthropic lacks a formal process for affected communities to seek remediation, particularly concerning the use of its models in conflict zones.[5][6] In terms of cultural inclusivity, Anthropic has integrated principles into its 'Constitutional AI' to respect non-Western perspectives and universal human rights.[7] However, critics in New Zealand argue that these 'universal' frameworks are insufficient for indigenous data sovereignty (e.g., Māori data), noting a dissonance between Silicon Valley safety definitions and indigenous cultural safety.[8] Most other KPIs (FPIC, displacement, water rights, supply chain harm) are tiered at 0, as there are no documented incidents or specific community interfaces reported in the provided evidence, which is typical for a software-focused entity without direct extractive operations.[9][10][11][12]

Safe & Smart Tech

40

Anthropic's core business is built around the research and implementation of AI safety, specifically through its 'Constitutional AI' framework designed to ensure models are reliable, interpretable, and steerable, which directly aligns with the goals of responsible and safe technology. Anthropic demonstrates world-class performance in Safe & Smart Tech, characterized by its 'Constitutional AI' methodology and its pioneering 'Responsible Scaling Policy' (RSP).[1][2] As a top global target in the AI sector, Anthropic employs advanced security measures including ASL-3 standards, egress bandwidth controls, and isolated cloud execution environments.[3] Its vulnerability management is exemplary, featuring a robust bug bounty program via HackerOne and a commitment to acknowledge disclosures within three days.[4] In AI ethics, the company sets global standards through its RSP v3.0, which mandates tiered safety levels (ASL-2 to ASL-4) and includes a Responsible Scaling Officer reporting to an independent Trust.[5][6] Anthropic conducts frequent, rigorous audits using autonomous 'investigator agents' and Sparse Autoencoders to identify and mitigate biases, achieving a 99.78% harmlessness rate in multi-language testing.[7][8] Transparency is a core strength: the company publishes detailed 'Risk Reports' every 3-6 months and 'System Cards' for model deployments.[9][10] It provides industry-leading user data controls, including opt-out mechanisms for training and 'zero data retention' options for enterprise clients.[11][12] Compliance is comprehensive, with SOC 2 Type II, ISO 27001, and ISO 42001 (AI Management) certifications, alongside HIPAA-ready configurations.[13][14] While it acknowledges the difficulty of unilaterally achieving 'SL5' security against nation-state actors, its proactive advocacy for a federal transparency framework and its 'Constitutional AI' approach position it as a transformative leader in ethical AI development.[15][16]

Zero Waste & Sustainable Products

Not Applicable

Value not applicable to this business. Anthropic is a pure-software company providing AI models and services; as it does not manufacture physical products, the value of Zero Waste & Sustainable Products is not applicable.

Common Questions

Is Anthropic ethical?

Anthropic (ANTHROPIC.P) received a "Mixed" ethics rating from Mashinii. Anthropic reached a massive $1.5 billion settlement in September 2025 to resolve a class-action lawsuit alleging the company used millions of pirated books to train its AI models. While the firm operates as a Public Benefit Corporation with a "Long-Term Benefit Trust" to ensure board independence, it was designated a national security supply chain risk by the U.S. Department of Defense in March 2026 following a dispute over military use restrictions. Despite these legal challenges, Anthropic maintains industry-leading safety standards, including ISO/IEC 42001 certification and a 99.78% harmlessness rate in multi-language testing. Environmentally, the company lacks a formal net-zero target but has committed to covering 100% of grid upgrade costs for its data center partners.

What is Anthropic most controversial for?

Anthropic scores lowest on Planet-Friendly Business (-20) and Honest & Fair Business (-20), based on court records, regulatory actions, and investigative journalism. These are the dimensions where the strongest negative evidence is documented.

How does Anthropic score across ethical dimensions?

Anthropic scores positively on Safe & Smart Tech (+40) and negatively on Planet-Friendly Business (-20) and Honest & Fair Business (-20). Each dimension is scored on a -100 to +100 scale using documented evidence rather than corporate self-reports.

How does Mashinii score Anthropic?

We score Anthropic across 11 ethical dimensions — including human rights, environmental damage, corruption, and labour practices — using court filings, regulatory actions, investigative journalism, and NGO reports. Our data is adversarial: it comes from sources companies cannot edit or suppress, not from corporate ESG disclosures. Each claim is cited. Read the full scoring manual for details.

AI-generated analysis based on publicly available data. Not financial advice. Ratings are expressions of opinion derived from automated models and may contain inaccuracies. See our Risk Disclosure for full details.