AI Perspectives #6: The Case Against Ethical AI and Responsible AI
The Dangerous Illusion of "Ethical" and "Responsible" AI
Introduction
The rise of artificial intelligence (AI) has ushered in a new era of technological advancement, promising to reshape industries, redefine social structures, and fundamentally alter the way we live and work. As AI systems become increasingly sophisticated and autonomous, their impact on society is becoming ever more profound, raising complex ethical, legal, and social questions that demand careful consideration.
In the midst of this technological revolution, the concept of AI Governance has emerged as a critical framework for navigating the challenges and opportunities presented by AI. AI Governance seeks to establish guidelines, principles, and mechanisms to ensure that AI is developed and deployed in a manner that aligns with human values, protects fundamental rights, and promotes the well-being of society.
However, the current discourse on AI Governance is often dominated by terms like "Ethical AI" and "Responsible AI," which, while well-intentioned, can be misleading and even counterproductive. These labels tend to oversimplify complex ethical considerations, create a false sense of control, and potentially divert attention from the need for robust governance frameworks.
In this special issue of AI Perspectives, I aim to challenge the prevailing narratives around AI ethics and governance, offering a more nuanced and pragmatic perspective. Drawing on my experience as the Director-General of the Swedish AI Association (AICenter), I will explore the limitations of "Ethical AI" and "Responsible AI," expose the potential for "ethics washing" and misdirection, and advocate for a more concrete and action-oriented approach to AI Governance.
This issue will touch upon the following key themes:
The Illusion of "Ethical AI": Unpacking the limitations of this term and its potential to mislead.
The "Responsibility" Paradox: Examining the ambiguity of "Responsible AI" and the challenges of attributing responsibility in complex AI systems.
AI Governance – A Pragmatic Approach: Advocating for a liberal framework that prioritizes freedom and innovation while implementing safeguards against potential harm.
Building an AI Future: Aiming for a resilient, adaptive AI ecosystem in which technology drives progress while safeguarding our shared values, empowering society, and enhancing Sweden’s global competitiveness.
This issue is not just a critique of current approaches but also a call to action. It urges individuals, organizations, and policymakers to move beyond simplistic labels and engage in a more nuanced and proactive dialogue about AI Governance. By embracing a pragmatic and human-centered approach, we can harness the transformative potential of AI while safeguarding against its risks and ensuring a future where AI truly benefits all of humanity.
Join me on this journey as we explore the complexities of AI Governance and work together to shape a future where AI empowers, connects, and strengthens our society.
PART 1: The Illusion of "Ethical AI"
Unpacking the Limitations
The term "Ethical AI" has become a dominant narrative in discussions surrounding artificial intelligence. Policymakers, corporations, and advocacy groups have embraced the phrase as a guiding principle for responsible AI development. However, a closer examination reveals that "Ethical AI" is, in many ways, an illusion—an ambiguous and often misleading term that oversimplifies the profound challenges of AI governance and accountability.
At its core, "Ethical AI" implies a set of universally accepted moral guidelines that can be applied to AI development and deployment. Yet, ethical considerations are deeply contextual, varying across cultures, legal systems, and societal structures. What is considered ethical in one country may be viewed as problematic in another. AI systems do not operate in a vacuum; they interact with social, economic, and political realities that shape their impact.
Moreover, AI ethics is not a static or easily definable field. Ethical dilemmas in AI evolve as the technology itself advances. A rigid ethical framework risks becoming obsolete or inadequate in addressing the ever-changing nature of AI and its societal implications. Thus, instead of offering clear solutions, "Ethical AI" often functions as an appealing yet ultimately hollow label—one that disguises the complexities of AI governance under the comforting illusion of moral clarity.
The Seductive Appeal of "Ethics Washing"
One of the most troubling aspects of the "Ethical AI" discourse is its frequent use as a tool for "ethics washing"—a practice where organizations and corporations adopt the rhetoric of ethical responsibility without making substantive changes to their AI systems or business models. Companies often publish AI ethics statements, form advisory boards, or launch public initiatives to signal their commitment to ethical principles. Yet, these efforts frequently lack enforceability, transparency, or meaningful oversight.
The danger of ethics washing is that it can create a false sense of security. When companies declare that their AI systems are "ethical," the public and policymakers may assume that the necessary safeguards are in place. This assumption can lead to complacency, as regulators delay meaningful intervention and consumers remain unaware of the risks associated with AI-driven decision-making.
Furthermore, ethics washing enables corporations to shift the burden of ethical scrutiny onto vague, self-imposed guidelines rather than adhering to concrete regulatory measures. A company may claim to prioritize fairness and transparency in its AI systems while simultaneously deploying algorithms that perpetuate bias or exploit user data. Without rigorous enforcement mechanisms, ethics remains little more than a marketing strategy—an illusion of responsibility that masks the underlying realities of AI's societal impact.
Oversimplification and False Dichotomies
Another fundamental issue with "Ethical AI" is its tendency to oversimplify complex ethical dilemmas into binary choices: ethical versus unethical, fair versus unfair, transparent versus opaque. This black-and-white framing fails to capture the nuanced trade-offs that AI developers and policymakers must navigate.
For example, consider the challenge of balancing transparency with security. AI decision-making processes should be explainable to encourage trust and accountability. However, certain AI applications—such as cybersecurity systems—require a level of opacity to prevent adversaries from manipulating them. Labeling an AI system as "unethical" simply because it lacks full transparency ignores the legitimate reasons behind design choices.
Similarly, AI bias is often discussed in absolute terms, with the goal of eliminating all forms of bias from AI systems. While preventing harmful biases is essential, complete bias removal is an unrealistic standard. AI systems are trained on historical data, which inherently reflects societal inequalities. The more pragmatic approach is to focus on bias reduction and ongoing monitoring rather than chasing an unattainable ideal of total impartiality.
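To make "ongoing monitoring" concrete, here is a minimal sketch of one such check: computing the demographic parity gap (the difference in approval rates between groups) over a model's logged decisions and flagging it when it exceeds a tolerance. The decision records, group labels, and tolerance below are illustrative assumptions, and demographic parity is only one of several fairness metrics a monitoring pipeline might track.

```python
from collections import defaultdict

# Hypothetical decision records: (group, approved) pairs, e.g. from a
# loan-approval model's audit log. Groups and values are placeholders,
# not data from any real system.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

TOLERANCE = 0.10  # maximum acceptable gap in approval rates (assumed policy choice)

def approval_rates(records):
    """Approval rate per group, computed from (group, approved) records."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
    for group, approved in records:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approvals / total for g, (approvals, total) in counts.items()}

def demographic_parity_gap(records):
    """Largest pairwise difference in approval rates across groups."""
    rates = approval_rates(records)
    return max(rates.values()) - min(rates.values())

gap = demographic_parity_gap(decisions)
print(f"Approval-rate gap: {gap:.2f}")
if gap > TOLERANCE:
    print("Gap exceeds tolerance: flag for review and possible retraining.")
```

A check like this does not eliminate bias; it makes the system's behavior measurable over time, which is precisely the shift from an unattainable ideal to continuous oversight.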
By framing AI ethics as a rigid dichotomy, we risk missing the subtleties that define real-world applications of AI. Ethical considerations in AI are not about choosing between good and evil; they involve making difficult decisions about trade-offs, competing interests, and long-term consequences. The illusion of "Ethical AI" encourages simplistic solutions to problems that demand deep, context-sensitive engagement.
The Need for a More Pragmatic Approach
Instead of relying on the ambiguous concept of "Ethical AI," we must adopt a more pragmatic approach—one grounded in enforceable governance frameworks, concrete accountability measures, and a commitment to ongoing evaluation.
A pragmatic AI governance model acknowledges that ethical considerations are fluid and context-dependent. It does not rely on broad ethical declarations but instead focuses on measurable outcomes: Are AI systems demonstrably reducing harm? Are they subject to oversight by independent bodies? Do they have mechanisms for redress when they fail?
Furthermore, AI governance should emphasize transparency in decision-making processes. This does not mean forcing all AI models to be fully interpretable but ensuring that when AI impacts individuals' lives, those affected have a right to understand how decisions are made and to challenge unfair outcomes.
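As one illustration of what such decision-level transparency could look like in practice, the sketch below explains an individual outcome from a simple linear scoring model by reporting each input's contribution to the score. The feature names, weights, and threshold are hypothetical; opaque models would need dedicated explanation methods, but the principle of giving the affected person an inspectable account of the decision is the same.

```python
# Hypothetical weights and threshold for a simple linear scoring model;
# these are illustrative values, not any real credit policy.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.6, "years_employed": 0.2}
THRESHOLD = 0.5  # scores at or above this are approved (assumed policy)

def explain_decision(applicant: dict) -> dict:
    """Return the decision plus each feature's contribution to the score."""
    contributions = {
        feature: WEIGHTS[feature] * applicant[feature] for feature in WEIGHTS
    }
    score = sum(contributions.values())
    return {
        "approved": score >= THRESHOLD,
        "score": round(score, 3),
        # Sorted so the applicant sees the most influential factors first.
        "contributions": dict(
            sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
        ),
    }

print(explain_decision({"income": 1.2, "debt_ratio": 0.8, "years_employed": 0.5}))
```

An explanation record of this kind also gives the affected individual something concrete to contest, which is what distinguishes a right to redress from a transparency slogan.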
Lastly, meaningful AI governance requires regulatory action rather than voluntary ethical commitments. Governments and international bodies must establish clear guidelines that dictate what AI systems can and cannot do, just as we have laws governing financial markets, environmental protection, and data privacy. AI should not be exempt from the same level of scrutiny and accountability that applies to other transformative technologies.
Beyond the Illusion
The discourse surrounding "Ethical AI" has been instrumental in bringing attention to the ethical challenges posed by artificial intelligence. However, simply declaring AI ethical is not enough. The term itself is too malleable, too prone to misuse, and too detached from the structural realities of AI governance.
If we truly want AI to serve humanity's interests, we must move beyond the illusion of "Ethical AI" and commit to a more substantive, enforceable approach—one that prioritizes accountability, transparency, and governance over vague ethical commitments. AI is too powerful and too pervasive to be governed by well-intentioned rhetoric alone. It is time to demand more than just ethics; it is time to demand real action.
PART 2: The "Responsibility" Paradox
The Ambiguity of "Responsible AI"
The concept of "Responsible AI" has gained traction as an alternative to "Ethical AI," positioning itself as a more pragmatic and action-oriented approach. However, the term itself remains ambiguous, leaving room for broad interpretation and inconsistent application. Unlike clearly defined regulatory frameworks, "responsibility" implies a moral or voluntary commitment, which can be selectively enforced or even manipulated for strategic advantage.
One of the key challenges with "Responsible AI" is its flexibility. On the surface, it suggests an obligation to develop AI systems that align with societal values and human rights. However, who determines what constitutes responsibility? Is it the AI developers, the corporations deploying these systems, or policymakers? Without clear accountability mechanisms, the notion of responsibility becomes a moving target, allowing organizations to shape its meaning in ways that serve their interests rather than the public good.
Furthermore, "Responsible AI" is often presented as a solution to AI-related risks without addressing the systemic challenges that create those risks in the first place. Instead of setting enforceable standards, it relies on voluntary compliance, which can result in performative gestures rather than meaningful change.
The Pitfalls of Responsibilization
Closely related to the ambiguity of "Responsible AI" is the phenomenon of responsibilization. This term, borrowed from governance studies, describes a shift in responsibility from institutions or governing bodies to individuals or private actors. In the AI space, responsibilization manifests when the onus of ethical AI development is placed on individual engineers, developers, or end-users rather than on corporations, governments, or regulatory bodies.
The dangers of this shift are evident in various ways:
Deflecting Regulatory Oversight: By framing AI ethics as a matter of individual responsibility, companies can sidestep formal regulations and avoid legally binding obligations.
Burdening Individuals with Ethical Dilemmas: Developers are often expected to incorporate ethical considerations into their work, yet they may lack the training, authority, or structural support to make impactful decisions.
Perpetuating Inequality: When responsibility is distributed unevenly, marginalized groups are often the most affected. The lack of formal accountability mechanisms means that those with the least power may bear the brunt of AI failures.
Creating an Illusion of Control: Organizations may emphasize internal AI ethics committees or voluntary guidelines as evidence of responsibility while continuing to prioritize profit over ethical considerations.
Responsibilization does not create real accountability; instead, it disperses responsibility in ways that make it difficult to pinpoint who should be held accountable when AI systems cause harm.
Shifting Responsibility vs. Shared Accountability
A fundamental question arises: If "Responsible AI" is flawed, what should replace it? The answer lies in shifting from an individualistic approach to one based on shared accountability. Instead of placing responsibility solely on AI developers or corporations, AI governance should establish enforceable mechanisms that distribute accountability across multiple levels:
Regulatory Frameworks: Governments must enact and enforce laws that clearly define the responsibilities of AI developers, deployers, and users.
Corporate Responsibility Beyond PR: Companies should be held accountable through independent audits, transparency mandates, and enforceable guidelines rather than relying on self-regulation.
Public Participation: AI governance must include diverse stakeholders, ensuring that affected communities have a voice in shaping AI policies and practices.
International Cooperation: As AI systems transcend national borders, accountability must be addressed at a global level through treaties and cross-border regulatory efforts.
Shared accountability ensures that AI governance does not become a game of passing responsibility. It shifts the focus from individual actors making ethical choices to systemic mechanisms that align AI development with societal needs.
From Responsibility to Accountability
The paradox of "Responsible AI" lies in its ability to appear proactive while failing to deliver concrete accountability. Its ambiguity allows for ethics washing, its reliance on responsibilization shifts the burden away from institutions, and its lack of enforcement makes it ineffective in ensuring AI aligns with human values.
A more meaningful approach requires shifting from vague notions of responsibility to concrete mechanisms of shared accountability. By implementing enforceable regulations, encouraging corporate transparency, engaging the public, and promoting global cooperation, AI governance can move beyond symbolic responsibility and toward real-world impact.
PART 3: AI Governance – A Pragmatic Approach
Embracing a Liberal Framework
The conversation around AI governance is often polarized between calls for stringent regulation and the push for unchecked innovation. However, a pragmatic approach rooted in a liberal governance framework offers an alternative—one that balances freedom with responsibility while maximizing societal benefits.
A liberal framework for AI governance does not prescribe what AI developers and organizations must do but instead focuses on what they must not do. This approach aligns with classical liberal principles, emphasizing individual and corporate autonomy while ensuring mechanisms are in place to prevent harm. Instead of preemptively constraining AI development, this model encourages an environment of open exploration, intervening only where necessary to reduce risks.
Liberal governance in AI is not about laissez-faire deregulation but about crafting a set of well-defined boundaries within which innovation can thrive. It ensures that while AI remains a powerful tool for progress, its applications do not undermine fundamental human rights, safety, or social cohesion. This model also promotes self-regulation and industry-led initiatives, allowing AI stakeholders to take an active role in ethical considerations without being suppressed by excessive government intervention.
Maximizing Freedom, Minimizing Harm
A pragmatic governance model does not seek to eliminate all risks associated with AI—an impossible task—but to manage them effectively. The fundamental principle is simple: allow AI the freedom to evolve but set clear limits on harmful applications.
For example, AI-driven medical diagnosis tools should be permitted and encouraged, given their potential to revolutionize healthcare. However, an AI system that discriminates against certain demographic groups in loan approvals should be scrutinized and corrected. Similarly, AI-generated deepfakes used for satire and artistic expression should be allowed, while those designed to spread misinformation or manipulate elections should be restricted.
The key to maximizing freedom while minimizing harm lies in dynamic oversight rather than static regulation. Traditional laws, which often take years to develop and implement, struggle to keep pace with AI’s rapid advancements. Instead, AI governance should rely on flexible guidelines, continuous assessment, and agile intervention strategies. Regulatory sandboxes, for instance, provide a controlled environment where AI applications can be tested for potential risks before widespread deployment.
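The decision logic of a sandbox can be sketched very simply: an application reports monitored risk metrics for each trial period, and the outcome is graduation, continued observation, or a halt, rather than a one-time approval. Everything in this sketch (the metrics, the limits, the three-way outcome) is an illustrative assumption about how a sandbox operator might encode continuous assessment and agile intervention, not a description of any existing regime.

```python
from dataclasses import dataclass

@dataclass
class SandboxReport:
    """Metrics an AI application might report during a supervised trial."""
    error_rate: float       # fraction of audited decisions found incorrect
    complaint_rate: float   # user complaints per 1,000 decisions
    incidents: int          # harm incidents logged during the trial period

# Assumed risk limits a sandbox operator might set; values are illustrative.
LIMITS = {"error_rate": 0.05, "complaint_rate": 2.0, "incidents": 0}

def assess(report: SandboxReport) -> str:
    """Agile intervention: graduate, keep testing, or halt based on evidence."""
    if report.incidents > LIMITS["incidents"]:
        return "halt"      # demonstrable harm: pull the application immediately
    if (report.error_rate > LIMITS["error_rate"]
            or report.complaint_rate > LIMITS["complaint_rate"]):
        return "extend"    # risk signals: stay in the sandbox under observation
    return "graduate"      # within limits: cleared for wider deployment

print(assess(SandboxReport(error_rate=0.02, complaint_rate=0.5, incidents=0)))  # graduate
print(assess(SandboxReport(error_rate=0.09, complaint_rate=0.5, incidents=0)))  # extend
```

The point of the sketch is the shape of the mechanism: intervention is triggered by observed evidence rather than by a static rulebook written years in advance.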
The Role of Negative Liberty
The liberal governance model is closely tied to the concept of negative liberty—the freedom from external constraints rather than the provision of specific entitlements. This principle ensures that AI developers are not burdened with excessive compliance requirements that slow down innovation but are held accountable for preventing harm.
Negative liberty in AI governance means:
Regulating AI based on harm prevention rather than prescriptive compliance. Instead of requiring AI developers to seek approval for every new application, governance should focus on restricting AI technologies only when they demonstrably pose a threat.
Ensuring that AI regulations do not create monopolies or reinforce existing power structures. Overly complex regulations often favor large corporations that can afford compliance costs while suffocating smaller startups and independent researchers. By focusing on negative liberty, governance remains accessible and innovation-friendly.
Prioritizing personal and organizational autonomy in AI development. Developers and businesses should have the freedom to explore AI’s potential, provided they do not infringe on the rights and well-being of others.
This approach does not mean governance is passive—it means that intervention is targeted, proportionate, and justified by real-world evidence. The goal is not to dictate AI’s trajectory but to ensure that its evolution aligns with democratic values and human rights.
A Path Forward
A pragmatic approach to AI governance based on a liberal framework offers a sustainable way to harness AI’s potential without falling into the traps of overregulation or negligence. By focusing on harm prevention, promoting negative liberty, and enabling dynamic oversight, this model provides the flexibility necessary to adapt to AI’s rapid advancements while safeguarding the public interest.
PART 4: Conclusion – Shaping a Future Where AI Empowers
Having explored the limitations of “Ethical AI” and “Responsible AI,” and established AI Governance as a pragmatic, flexible, and forward-thinking framework, we now arrive at a pivotal moment: the opportunity to shape a future where AI empowers society rather than undermines it. This concluding section is a call to action—a rallying cry for the Swedish AI community, policymakers, industry leaders, and citizens alike to come together and build an AI-enabled future that is resilient, adaptive, and beneficial to all.
The AICenter’s Commitment to Society
At the heart of this vision is the AICenter, which stands not merely as a proponent of technological progress, but as a dedicated steward of society. Our commitment is to enable Sweden to harness the transformative power of AI while simultaneously safeguarding against its potential harms. The AICenter’s approach is deeply rooted in a liberal governance framework that values negative liberty—maximizing freedom for innovation while implementing targeted safeguards to prevent harm.
Our work is built on clear, actionable principles:
Empowering the People: We strive to ensure that every Swedish citizen has the opportunity to engage with AI—through education, dialogue, and direct involvement in shaping policies. This is not about endorsing technology for its own sake, but about equipping society to utilize its benefits without falling victim to unforeseen disruptions.
Balancing Innovation with Protection: Rather than stifling innovation with blanket regulations, we advocate for a nuanced, adaptive approach. By focusing on what AI systems should not do, we can create flexible boundaries that protect public interests—be it in preventing job displacement, mitigating misinformation, or safeguarding digital rights—without hampering the creative and economic potential of AI.
Creating Resilience and Adaptability: The AICenter envisions a future where the rapid evolution of AI is met with equally dynamic governance. Continuous learning, regular policy updates, and proactive engagement with global best practices ensure that our governance frameworks remain relevant as new challenges emerge.
A Call to Collective Engagement
The transformative power of AI is both immense and undeniable. However, its benefits will only be fully realized when society actively participates in shaping its trajectory. The challenge is not only technological—it is fundamentally social and political. For AI to serve as a positive force, we must bridge the gap between innovation and society. This requires:
Inclusive Public Dialogue: Every voice matters. The diverse perspectives of our citizens must be integrated into policy-making. This is why initiatives like AI Shift exist—to gather, reflect, and amplify the collective will of the people.
Collaborative Policymaking: Effective AI governance calls for joint efforts between government, industry, academia, and civil society. By working together, we can craft policies that are robust yet flexible, responsive yet forward-looking.
Global and Local Alignment: While AI is a global phenomenon, its impact is profoundly local. Sweden’s leadership in AI governance hinges on our ability to align international best practices with the unique needs and values of our society.
We must move beyond abstract discussions of “Ethical AI” that often serve more as a veneer than as a catalyst for real change. Instead, we need a governance framework that is tangible, enforceable, and grounded in the lived realities of our communities. This is the future the AICenter envisions—a future where AI is an engine of progress that drives workforce development, supports sustainable economic growth, and fortifies our democratic institutions.
Building a Future of Empowerment and Resilience
The future of AI is not predetermined. It will be shaped by the decisions we make today. With the collective effort of all stakeholders, we have the power to direct AI’s evolution so that it:
Enhances Economic Opportunities: By integrating AI into our industrial processes and business strategies, we can ensure that technological advancements lead to job creation, improved productivity, and sustainable growth.
Strengthens Social Cohesion: With robust safeguards against the misuse of AI, such as preventing the spread of misinformation or ensuring unbiased decision-making, we can protect our society from potential harm.
Promotes Global Leadership: By pioneering a liberal and adaptive governance model, Sweden can set a global example for how nations can harness AI for the public good, ensuring that innovation is matched by accountability.
The AICenter is at the forefront of this movement, working tirelessly to build a future where AI not only drives technological progress but also strengthens the very fabric of society. Our initiatives—spanning public engagement, international collaboration, and dynamic policy development—are all steps toward a future where every citizen is empowered and every risk is addressed.
Final Reflections
The path forward is clear. Our work is not merely about regulating technology; it is about shaping a future where AI serves as a tool for collective empowerment. We have the opportunity to build an ecosystem that is resilient, adaptable, and fundamentally aligned with the needs of the people.
I call upon every stakeholder—government leaders, industry innovators, academic experts, and concerned citizens—to join us in this crucial endeavor. Together, we can create a future where AI is not feared as a disruptive force but embraced as a powerful ally in the pursuit of a more prosperous, equitable, and resilient society.
The time to act is now. Let us harness the transformative potential of AI to build a future where technology empowers humanity, and where Sweden leads the world in responsible, pragmatic, and forward-thinking AI governance.