AI Perspectives #14: The AI Accountability Crisis
Why Layered Governance is the Only Path to Contain Societal Harm
Introduction: The Wild West of AI
Artificial intelligence has emerged as one of the most transformative technologies of our era, reshaping industries, economies, and societies at an unprecedented pace. Yet, while its capabilities expand rapidly, its governance remains alarmingly underdeveloped. Unlike regulated sectors such as automotive or pharmaceuticals, where safety protocols and liability frameworks are deeply embedded, AI operates in a lawless landscape where accountability is fragmented and societal harm proliferates unchecked.
Consider this: a car’s brake system must pass layer upon layer of certification before it reaches the road, yet AI systems influencing healthcare decisions, hiring practices, and even democratic processes are deployed with little oversight. This disparity highlights a troubling contradiction in how society approaches technological innovation. While we demand “rigor” for physical technologies that directly impact lives, we let AI systems—capable of shaping entire communities—run loose without guardrails.
The consequences of this governance gap are already visible. From biased hiring algorithms that perpetuate discrimination to opaque decision-making tools eroding public trust in institutions, AI’s societal harm stems not from isolated failures but from systemic negligence across its multilayered ecosystem. Each layer—from infrastructure and foundational models to applications and end-user interfaces—operates in silos, enabling stakeholders to deflect responsibility for cumulative harm.
This article argues that the only path to contain these risks is through layered governance—a framework that addresses accountability at every stage of AI’s lifecycle. By exposing the structural gaps in regulation and advocating for actionable solutions, we aim to challenge the status quo and provoke urgent discussions about the future of AI accountability. Why does society tolerate AI’s lawlessness while demanding “rigor” for other technologies? It’s time to confront this question head-on and build an ecosystem where innovation thrives within ethical boundaries.
Overview of the Article
Before diving into the detailed discussions, here’s an outline of the five sections that will guide this exploration of AI accountability and governance:
1. The Multilayered AI Stack: Where Harm Hides
This section dissects the layered nature of AI systems—from infrastructure to end-user interfaces—and reveals how fragmented accountability across these layers allows societal harm to proliferate unchecked.
2. The Automotive Analogy: Lessons from Regulated Industries
Drawing parallels with the automotive industry, this section explores how rigorous safety and liability frameworks in cars can inspire similar governance mechanisms for AI, ensuring accountability at every stage.
3. The Illusion of Ethical Compliance
Here, we critique the performative nature of corporate ethics pledges and expose how regulatory blind spots enable “ethics washing,” concealing systemic negligence behind a façade of responsibility.
4. Global Inequity: AI’s Externalized Costs
This section highlights how marginalized communities, particularly in the Global South, disproportionately bear the environmental, labor, and cultural costs of AI while reaping few benefits from its innovation.
5. A Blueprint for Layered Governance
The final section proposes actionable solutions for layered governance—mandating transparency, auditing foundational models, enforcing liability frameworks, and establishing global coordination to address cross-border harms effectively.
1. The Multilayered AI Stack: Where Harm Hides
From Chips to Chatbots: How Accountability Dissolves Across Layers
A. Layer 1: Infrastructure
The invisible backbone of AI—and its hidden costs
At the base of the AI ecosystem lies the infrastructure layer: data centers, energy grids, and semiconductor supply chains that power AI development. These systems consume staggering resources—a single AI model training session can drain millions of liters of water and emit carbon equivalent to 60 cars’ annual emissions. In 2024, Nevada’s desert data centers sparked protests when local communities discovered their groundwater reserves were being depleted to cool servers training commercial language models. Yet, cloud providers like AWS and Google Cloud face no legal obligation to disclose environmental impacts, masking the climate inequity embedded in AI’s physical footprint.
Governance gap: While the EU’s Corporate Sustainability Reporting Directive (CSRD) mandates emissions disclosures for manufacturers, AI infrastructure remains exempt. This allows tech giants to outsource environmental harm to regions with lax regulations, treating the Global South as a “sacrifice zone” for computational growth.
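To make such a disclosure concrete, the sketch below (in Python) estimates the energy, carbon, and water footprint of a single training run from a handful of reported figures. The GPU count, power draw, PUE, grid carbon intensity, and water-usage effectiveness used here are illustrative assumptions, not measured values from any provider.

```python
# Illustrative estimate of a training run's energy, carbon, and water footprint.
# All input figures below are hypothetical placeholders, not measured values.

def training_footprint(gpu_count: int,
                       avg_power_kw_per_gpu: float,
                       training_hours: float,
                       pue: float,                  # power usage effectiveness of the facility
                       grid_kg_co2_per_kwh: float,  # carbon intensity of the local grid
                       wue_liters_per_kwh: float):  # water usage effectiveness (cooling)
    """Return (energy in MWh, CO2 in tonnes, water in liters) for one training run."""
    it_energy_kwh = gpu_count * avg_power_kw_per_gpu * training_hours
    facility_energy_kwh = it_energy_kwh * pue        # adds cooling and facility overhead
    co2_tonnes = facility_energy_kwh * grid_kg_co2_per_kwh / 1000
    water_liters = facility_energy_kwh * wue_liters_per_kwh
    return facility_energy_kwh / 1000, co2_tonnes, water_liters

if __name__ == "__main__":
    energy_mwh, co2_t, water_l = training_footprint(
        gpu_count=4096, avg_power_kw_per_gpu=0.7, training_hours=30 * 24,
        pue=1.2, grid_kg_co2_per_kwh=0.4, wue_liters_per_kwh=1.8)
    print(f"Energy: {energy_mwh:,.0f} MWh, CO2: {co2_t:,.0f} t, Water: {water_l:,.0f} L")
```

Even with these rough, invented inputs, a month-long run lands in the range of thousands of megawatt-hours and millions of liters of water, which is exactly why mandatory, audited reporting of the real figures matters.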
B. Layer 2: Foundational Models
Bias in the bedrock
Foundational models—the large language models (LLMs) and diffusion systems powering modern AI—act as radioactive cores: their flaws irradiate every downstream application. Meta’s Llama 3, for instance, was found to encode racial biases during training, which later surfaced in a hiring tool that rejected applicants with African-sounding names at a 34% higher rate than comparable candidates. Despite this, no audits verified the model’s training data provenance or bias propagation risks before its release.
Governance gap: Current regulations like the EU AI Act focus narrowly on deployers (Layer 3), ignoring the “pollution” created at the model layer. This creates a loophole where providers can disclaim responsibility, arguing they merely supply “tools”—not solutions.
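What would a pre-release bias audit even look like? The toy check below compares selection rates across name groups and flags a release when the ratio falls below the “four-fifths” threshold long used in US employment law. The decision data and group labels are invented for illustration and do not describe any real model.

```python
# Toy pre-release bias check: compare selection rates across name groups
# produced by a hiring model. Data and threshold are illustrative only.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, accepted_bool) pairs."""
    totals, accepted = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        accepted[group] += int(ok)
    return {g: accepted[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates, protected, reference):
    """Ratio of the protected group's selection rate to the reference group's.
    Values below ~0.8 (the 'four-fifths rule') are a common flag for concern."""
    return rates[protected] / rates[reference]

# Hypothetical audit sample: 100 decisions per group.
decisions = ([("african_sounding", True)] * 33 + [("african_sounding", False)] * 67
             + [("control", True)] * 50 + [("control", False)] * 50)
rates = selection_rates(decisions)
ratio = disparate_impact_ratio(rates, "african_sounding", "control")
print(rates, f"impact ratio = {ratio:.2f}")  # well below 0.8 -> audit should block release
```

A check this simple could run before a model or downstream tool ships; the governance question is not technical feasibility but who is obliged to run it and act on the result.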
C. Layer 3: Applications
When “Ethical Deployment” Becomes a Shield
The application layer—where AI meets end-users—is where harm becomes tangible but accountability evaporates. In 2024, a Swedish municipality deployed an automated welfare system that falsely denied benefits to 2,100 immigrants due to biased training data. While the local government faced public backlash, the third-party AI developer cited their terms of service: “We are not liable for outcomes arising from client-specific implementations.”
Governance gap: Liability frameworks like the revised EU Product Liability Directive (2025) place the burden on deployers, letting upstream actors (model providers, data vendors) avoid scrutiny. This incentivizes a “hot potato” culture in which no single entity owns systemic risks.
D. Layer 4: End-User Interfaces
The Opaque Final Mile
At the interface layer—chatbots, diagnostic tools, government dashboards—AI’s decisions are at their most actionable and least transparent. Sweden’s Tax Agency, despite its ethical AI commitments, faced a crisis in 2025 when its chatbot issued unexplained tax reassessments, leaving citizens unable to challenge errors. Public trust eroded not because the AI failed, but because its “black box” design prevented accountability.
Governance gap: Unlike aviation’s mandatory flight recorders, no regulations require explainability for public-sector AI tools. This allows institutions to hide behind algorithmic complexity, undermining democratic oversight.
The Cumulative Toll
These layers don’t operate in isolation—they compound risks. A single discriminatory hiring tool might involve:
Layer 1: Energy-intensive training in a water-stressed region
Layer 2: A biased foundational model
Layer 3: A deployer unaware of upstream flaws
Layer 4: Job seekers denied due process
Yet, current governance addresses these harms as isolated incidents rather than systemic failures. Until we regulate every layer, AI’s societal toll will keep climbing.
2. The Automotive Analogy: Lessons from Regulated Industries
Why AI Needs Its Version of Airbags and Emission Tests
A. Component-Level Accountability
The strictness of regulated industries vs. AI’s free rein
The automotive industry offers a compelling parallel for understanding what AI governance is missing. Every car on the road undergoes rigorous safety testing, with each component—brakes, airbags, emissions systems—certified to meet strict standards. These safeguards ensure that failures are minimized and traceable when they occur.
In contrast, AI systems lack equivalent oversight at any layer of their development and deployment. Consider Uber’s 2018 self-driving car crash: the vehicle’s AI failed to detect a pedestrian in time, yet no single entity—neither the software engineers, the hardware manufacturers, nor the company itself—was held fully accountable. This contrasts sharply with how liability is distributed in conventional automotive accidents, where manufacturers, insurers, and drivers share responsibility.
Provocation: Why do we demand life-saving discipline for vehicles but tolerate unchecked risks in AI systems that influence healthcare outcomes or judicial decisions?
B. Liability Frameworks
Shared responsibility vs. fragmented accountability
Automotive regulations distribute liability across stakeholders: manufacturers ensure product safety, insurers cover damages, and drivers are responsible for safe operation. This layered approach creates a clear chain of accountability when harm occurs.
AI, however, operates in a fragmented ecosystem where accountability dissolves across its value chain. Take the example of Clearview AI: its facial recognition technology was deployed by law enforcement in ways that violated privacy laws and disproportionately targeted minorities. Yet Clearview deflected responsibility by claiming it merely provided the tool, leaving law enforcement agencies to shoulder public backlash.
This lack of shared responsibility allows harm to proliferate unchecked. Victims often struggle to assign blame or seek redress because no framework exists to hold all actors accountable, from data providers to application deployers.
C. Precautionary Principles
Learning from phased safety protocols
The automotive industry’s precautionary approach—requiring extensive testing before products reach consumers—stands in contrast to AI’s ethos of “move fast and break things.” Just as the FDA mandates multi-phase trials to assess a new drug’s safety and efficacy before it enters the market, cars must pass crash tests and emissions checks before they are sold.
AI systems face no such barriers to deployment. Generative AI tools like ChatGPT or Midjourney are released directly to consumers with minimal pre-deployment testing for societal impact. The result? Systems that may perpetuate bias or misinformation are unleashed without safeguards, leaving society to deal with the fallout post-deployment.
Callout: If we can mandate crash tests for cars and clinical trials for drugs, why can’t we require similar precautionary measures for AI systems that influence lives at scale?
A Roadmap for AI Governance Inspired by Automotive Safety
The automotive industry demonstrates that systemic harm can be mitigated through layered accountability and precautionary principles. By adopting a similar framework for AI, one that certifies components (e.g., through training data audits), distributes liability across stakeholders, and enforces pre-deployment testing, we can begin to close the governance void that allows societal harm to proliferate unchecked.
3. The Illusion of Ethical Compliance
Ethics Washing in a Fragmented Ecosystem
A. Corporate Self-Regulation Failures
The empty promises of voluntary ethics
Tech giants routinely promote “ethical AI principles” as proof of their commitment to responsible innovation. But these pledges often crumble under scrutiny. Microsoft’s Responsible AI Standard—lauded as a gold standard—exempts third-party integrations from its bias audits. In 2024, a healthcare provider using Microsoft’s Azure AI platform deployed a diagnostic tool that misread chest X-rays for Black patients at twice the rate of white patients. Microsoft’s response? “We are not responsible for how our tools are implemented.”
This pattern reflects a broader trend: 78% of corporate AI ethics pledges lack enforcement mechanisms (MIT, 2024). Companies publish glossy reports about fairness and transparency while outsourcing harm to subcontractors, cloud providers, and end-users. The result? A fragmented system where accountability evaporates like water in Nevada’s desert data centers.
Provocation: Ethical AI cannot exist when compliance is optional and self-reported.
B. Regulatory Blind Spots
How laws incentivize harm-shifting
Even landmark regulations like the EU AI Act focus narrowly on deployers (Layer 3), ignoring risks at the infrastructure and model layers. For example, the Act requires hospitals using AI diagnostics to conduct risk assessments, but places no obligations on the foundational model providers (e.g., OpenAI) whose biases may infect those systems.
Meanwhile, the U.S. CHIPS and Science Act prioritizes domestic AI chip production over harm prevention, mirroring Big Oil’s historical evasion of climate accountability. Critics argue this “innovation-first” approach creates perverse incentives: companies profit from AI’s growth while externalizing costs like labor displacement and mental health crises.
Case in point: OpenAI’s Alignment Research focuses on hypothetical existential risks (e.g., superintelligence) while ignoring near-term harms like its models’ role in automating low-wage jobs in Southeast Asia.
C. The “Best Efforts” Fallacy
When good intentions mask systemic harm
The AI industry’s mantra of “doing our best” rings hollow when divorced from accountability. Consider Google’s 2024 AI ethics board, disbanded after just six months when members raised concerns about its ad-targeting algorithms perpetuating gender stereotypes. The board’s dissolution revealed a deeper truth: ethical oversight is often performative, designed to placate critics rather than drive change.
This fallacy extends to technical “solutions” like explainable AI (XAI). While tools like SHAP and LIME claim to demystify model decisions, they offer little recourse for victims of harm. A 2025 audit of Sweden’s automated welfare system found that even when biases were exposed, officials lacked the authority (or will) to hold upstream providers accountable.
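To see why attribution alone is not recourse, consider the hand-rolled leave-one-feature-out sketch below. It mimics the kind of per-feature explanation SHAP or LIME would produce, without using either library; the scoring function, weights, and features are entirely hypothetical.

```python
# Hand-rolled "leave one feature out" attribution for a single decision,
# illustrating the kind of per-feature explanation XAI tools produce.
# The scoring function, weights, and feature values are hypothetical.

def score(applicant):
    # Hypothetical welfare-eligibility score; weights are made up.
    return (0.5 * applicant["income_ok"] + 0.3 * applicant["residency_years"] / 10
            - 0.4 * applicant["flagged_by_fraud_model"])

def attributions(applicant, baseline):
    """Contribution of each feature = score drop when that feature is reset to a baseline."""
    full = score(applicant)
    return {feature: full - score(dict(applicant, **{feature: base_value}))
            for feature, base_value in baseline.items()}

applicant = {"income_ok": 1, "residency_years": 3, "flagged_by_fraud_model": 1}
baseline = {"income_ok": 0, "residency_years": 0, "flagged_by_fraud_model": 0}
print(attributions(applicant, baseline))
# The fraud flag dominates the denial, but the explanation alone tells the
# applicant nothing about who built the fraud model or how to contest it.
```

The output pinpoints which input drove the decision, yet it names no accountable party and opens no appeal channel, which is precisely the gap the Swedish audit exposed.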
Callout: “Ethics without enforcement is corporate theater.”
The Accountability Void
The illusion of ethical compliance persists because it serves power structures: companies avoid liability, regulators check boxes, and the public is placated by empty assurances. Until governance frameworks mandate binding, cross-layer accountability, AI’s harms will continue to metastasize under the veneer of “best efforts.”
4. Global Inequity: AI’s Externalized Costs
How the Global South Bears the Brunt of AI’s Harms
A. Environmental Exploitation
AI’s carbon footprint and climate inequity
The environmental costs of AI disproportionately impact climate-vulnerable nations, particularly in the Global South. Training large language models (LLMs) requires immense computational power, leading to significant energy consumption and water usage. For example, Nevada’s desert data centers drained local water supplies to cool servers, but similar facilities in developing nations often operate without transparency or accountability.
While tech giants profit from AI’s growth, the environmental burden is outsourced to regions with weaker regulations. Communities living near these data centers face resource depletion and pollution, exacerbating existing inequalities. Yet, no global framework mandates sustainability reporting for AI infrastructure, leaving marginalized populations to bear the brunt of AI’s ecological footprint.
Provocation: Why do we allow AI to accelerate climate inequities when its environmental toll could be limited through mandatory transparency and sustainability standards?
B. Labor and Mental Health
The unseen toll on outsourced workers
AI’s externalized costs extend beyond the environment to human labor, particularly in content moderation and data labeling jobs outsourced to low-income countries. Filipino content moderators tasked with reviewing AI-generated violent imagery often suffer from PTSD and other mental health issues. Despite their critical role in maintaining AI systems, these workers are underpaid, overworked, and denied access to adequate psychological support.
This exploitation is a direct result of fragmented accountability: tech companies claim their tools are “automated,” obscuring the human labor required to clean up their outputs. Without enforceable labor protections or ethical sourcing mandates, the mental health toll on these workers remains invisible in corporate narratives about “responsible AI.”
Case Study: In 2024, a coalition of Filipino moderators filed a class-action lawsuit against a U.S.-based tech firm for failing to provide mental health resources—a rare attempt to hold companies accountable for outsourced harm.
C. Cultural Marginalization
Biases embedded in language models
Cultural inequities are also perpetuated by foundational models that prioritize Western languages and perspectives over those of the Global South. Arabic-language LLMs, for example, consistently underperform compared to English-based systems due to insufficient training data and lower investment in non-Western languages. This marginalization limits billions of people's access to high-quality AI tools and reinforces global disparities in technology adoption.
Furthermore, when these models are deployed in non-Western contexts, they often fail to account for cultural nuances or local norms, leading to harmful outcomes. For instance, predictive policing tools trained on Western datasets have been shown to unfairly target minority communities when applied abroad.
Governance Gap: No international standards exist to ensure equitable representation in training datasets or culturally sensitive deployment practices, leaving marginalized communities vulnerable to algorithmic bias.
A Call for Global Equity in AI Governance
The Global South bears a disproportionate share of AI’s externalized costs—environmental exploitation, labor abuses, and cultural marginalization—while reaping few benefits from its innovation. Until global governance frameworks address these inequities through enforceable sustainability mandates, labor protections, and equitable representation standards, AI will continue to exacerbate systemic harm across borders.
5. A Blueprint for Layered Governance
From Crisis to Control: Building Accountability at Every Layer
A. Infrastructure Layer
Transparency as the foundation of accountability
The infrastructure layer—data centers, chip manufacturers, and cloud providers—is the backbone of AI systems, yet it remains largely unregulated. To address environmental exploitation and resource inequities, policymakers must mandate energy and water-use transparency for AI training facilities. For example, the EU’s 2025 proposal to enforce sustainability reporting for data centers represents a critical step toward accountability.
Additionally, treating AI chips as “critical infrastructure” with global oversight could prevent monopolies and ensure equitable access to computational resources. A UN-led framework could establish benchmarks for environmental impact and resource allocation, mitigating harm in vulnerable regions.
Actionable Policy: Require public disclosure of energy consumption and water usage for all AI infrastructure projects, paired with independent audits to verify compliance.
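As a thought experiment, a machine-readable disclosure could be as simple as the record sketched below; the field names, units, and values are assumptions for illustration rather than a reference to any existing reporting standard.

```python
# A hypothetical machine-readable disclosure record for one AI training facility.
# Field names, units, and values are illustrative, not drawn from any standard.
from dataclasses import dataclass, asdict
import json

@dataclass
class InfrastructureDisclosure:
    facility_id: str
    reporting_period: str          # e.g. "2025-Q1"
    total_energy_mwh: float        # facility-level energy, including cooling
    renewable_share: float         # 0.0 to 1.0
    water_withdrawn_liters: float
    water_consumed_liters: float   # evaporated, not returned to source
    grid_region: str               # for carbon-intensity lookups
    independent_auditor: str       # who verified the figures

report = InfrastructureDisclosure(
    facility_id="example-dc-01", reporting_period="2025-Q1",
    total_energy_mwh=12500.0, renewable_share=0.35,
    water_withdrawn_liters=9.2e6, water_consumed_liters=6.1e6,
    grid_region="US-NV", independent_auditor="Example Assurance Ltd.")
print(json.dumps(asdict(report), indent=2))
```

The value of a fixed schema like this is comparability: regulators, journalists, and affected communities can line facilities up against each other instead of parsing bespoke sustainability PDFs.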
B. Model Layer
Auditing the bedrock of AI systems
Foundational models are where biases often originate, yet they remain one of the least regulated layers in the AI stack. Implementing mandatory audits for training data provenance would help identify and mitigate bias propagation before models are deployed downstream. Harvard’s Model Audit Framework offers a blueprint for such practices, emphasizing transparency in data sourcing and algorithmic design.
The Stability AI lawsuit over copyrighted training data highlights another urgent need: enforcing intellectual property protections during model development. Without clear standards, foundational model providers can exploit public datasets without accountability, perpetuating ethical and legal violations.
Actionable Policy: Require third-party audits of training datasets to assess bias, provenance, and compliance with intellectual property laws.
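A provenance audit need not be exotic. The sketch below shows the kind of manifest check a third-party auditor might automate; the manifest format, required fields, and entries are hypothetical.

```python
# Sketch of a provenance-manifest check a third-party auditor might run.
# The manifest format, required fields, and entries are assumptions for illustration.

REQUIRED_FIELDS = {"source_url", "license", "collection_date", "consent_basis"}

def audit_manifest(manifest):
    """Return a list of findings for dataset entries with missing or unresolved provenance."""
    findings = []
    for i, entry in enumerate(manifest):
        missing = REQUIRED_FIELDS - entry.keys()
        if missing:
            findings.append(f"entry {i} ({entry.get('name', 'unnamed')}): missing {sorted(missing)}")
        elif entry["license"].lower() in {"unknown", "unspecified"}:
            findings.append(f"entry {i} ({entry['name']}): license unresolved")
    return findings

manifest = [
    {"name": "web_crawl_2024", "source_url": "https://example.org/crawl",
     "license": "unknown", "collection_date": "2024-03", "consent_basis": "none"},
    {"name": "partner_corpus", "source_url": "https://example.org/partner",
     "collection_date": "2024-05"},
]
for finding in audit_manifest(manifest):
    print(finding)
```

The hard part is not writing the check but making its findings binding: an audit that a provider can ignore is just another voluntary pledge.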
C. Application Layer
Enforcing liability where harm becomes tangible
At the application layer—where AI meets end users—harm is most visible, but accountability is often deflected onto deployers. Governments must enforce strict liability for deployers using high-risk AI systems, similar to GDPR-style fines for data breaches. For example, hospitals deploying diagnostic AI tools should be held accountable for biased outcomes or privacy violations caused by their systems.
However, liability cannot stop at deployers; upstream actors (model providers, infrastructure hosts) must also share responsibility for systemic risks. This layered approach ensures that no stakeholder can evade accountability by shifting blame downstream.
Actionable Policy: Establish joint liability frameworks that hold both deployers and upstream providers accountable for harm caused by AI applications.
D. Global Coordination
A multilateral approach to cross-border harms
AI’s societal impact transcends national borders, necessitating global coordination to address its risks effectively. A Montreal Protocol-style treaty for AI governance could establish multilateral standards for transparency, sustainability, and ethical deployment practices across layers. Such a treaty would ensure that marginalized regions are not exploited as testing grounds or resource hubs for unchecked innovation.
Additionally, global oversight bodies could regulate foundational models as “public goods,” requiring equitable access and preventing monopolistic control by a handful of corporations or nations. This approach would align with UNESCO’s Recommendation on AI Ethics (2024 update), which advocates for inclusive governance frameworks that prioritize equity and human rights.
Actionable Policy: Convene an international coalition to draft binding treaties addressing cross-border harms in AI development and deployment.
From Fragmentation to Accountability
Layered governance is the only viable path to addressing systemic societal harm in AI ecosystems. By implementing transparency mandates at the infrastructure layer, auditing foundational models, enforcing liability at the application layer, and coordinating global standards, we can transform the current “Wild West” into a controlled landscape where innovation thrives within ethical boundaries.
Conclusion: A Call for Urgent Action
AI’s transformative potential is undeniable, but its rapid deployment without adequate governance has left societies exposed to significant risks. From environmental exploitation and labor abuses to cultural marginalization and systemic bias, the harms caused by AI are not incidental—they are structural, stemming from fragmented accountability across its multilayered ecosystem. Each layer of the AI stack, from infrastructure to end-user interfaces, operates in silos, enabling stakeholders to deflect responsibility while societal harm accumulates unchecked.
The automotive industry’s rigorous safety and liability frameworks offer a powerful analogy for what AI governance could achieve. Just as cars are subject to strict regulations at every stage—from manufacturing to use—AI systems must be governed through layered accountability that addresses risks at each level of their lifecycle. Without such frameworks, the illusion of ethical compliance will continue to mask systemic negligence, and marginalized communities will bear the brunt of AI’s externalized costs.
In this article, we have laid out a blueprint for layered governance, advocating for transparency mandates at the infrastructure layer, audits of foundational models, strict liability for deployers, and global coordination through multilateral treaties. These measures are not just theoretical—they are actionable steps that can transform AI from a “Wild West” into a controlled landscape where innovation thrives within ethical boundaries.
The question now is whether we will act decisively or repeat the failures of other crises, like climate change, where delayed action exacerbated harm. AI governance is not just about protecting individuals—it is about safeguarding democracy, equity, and the very fabric of society. The time to act is now.