AI Ethical Approaches – Putting AI Ethics into Practice
As AI systems become more integrated into our lives, a crucial question arises: how do we ensure they align with our values and ethics?
While ethical AI guidelines have been widely discussed, critics argue that they often remain abstract ideas, making little real difference to fairness, accountability, or responsible deployment. It’s like having a beautiful blueprint for a house but no clear instructions on how to lay the foundation – a significant gap between theory and practice.
Several thoughtful approaches are closing this gap by translating ethical ideals into AI development practice. Let’s explore three prominent ones: the embedded ethics approach, the ethically aligned approach, and Value Sensitive Design (VSD). Understanding these methods will help us build AI systems whose real-world behavior actually reflects the principles meant to govern them.
Current AI Ethics Approaches
Embedded Ethics: Ethics in the Trenches
Imagine a team of engineers developing an AI-powered medical diagnosis tool. The embedded ethics approach suggests integrating an ethicist within the team, similar to adding a skilled carpenter to a construction crew. This ethicist addresses ethical questions as they arise, such as: “Could this algorithm unintentionally discriminate against certain patient groups?” or “How do we keep patient data secure throughout this process?”
McLennan et al. (2020, 2022) advocate for ethicists to be “at the workbench,” actively engaging with AI development teams as design decisions are made. Similarly, Fiske et al. (2020) emphasize continuous ethical scrutiny to prevent issues from being identified too late in the AI pipeline.
- Theory-Practice Relation: The ethicist bridges the gap by applying ethical principles directly to the design and development of concrete AI systems.
- Strengths: Highly adaptable, and fosters ethical awareness and a sense of accountability within development teams.
- Shortcomings: Effectiveness depends on the individual ethicist’s expertise and integration within the team. Without overarching ethical benchmarks, outcomes may vary from project to project.
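The question “Could this algorithm unintentionally discriminate against certain patient groups?” is one an embedded ethicist can make concrete by asking the team to audit model performance per group. As a minimal sketch (the function name, group labels, and toy data below are all hypothetical, not from any cited work):

```python
# Hypothetical subgroup audit: compare a diagnostic model's accuracy
# across patient groups. Group labels "A"/"B" and all data are toy values.

def accuracy_by_group(predictions, labels, groups):
    """Return {group: accuracy} for each patient group."""
    totals, correct = {}, {}
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] = totals.get(group, 0) + 1
        if pred == label:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

# Toy predictions and ground truth: 1 = positive diagnosis, 0 = negative.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
labels = [1, 0, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = accuracy_by_group(preds, labels, groups)
# A large gap between the best- and worst-served group is a red flag
# worth raising with the team before deployment.
gap = max(rates.values()) - min(rates.values())
print(rates, gap)
```

A check like this does not settle what counts as discrimination, but it turns the ethicist’s question into something the team can measure and discuss.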
Ethically Aligned Approaches: Guiding Principles in Action
Consider a city council drafting guidelines for facial recognition technology. An ethically aligned approach would start from principles such as fairness, transparency, and accountability and use them to guide regulation, ensuring deployed systems align with societal values.
The IEEE’s Ethically Aligned Design (2019) and Morley et al. (2020) propose frameworks that integrate ethical principles into every stage of AI development, with the goal of moving beyond abstract principles to structured, actionable guidance.
- Theory-Practice Relation: Ethical principles serve as a roadmap for selecting tools and methodologies at each stage of development.
- Strengths: Provides a widely accepted foundation, ensuring consistency across AI projects.
- Shortcomings: Prioritizing among principles is difficult when they conflict, for instance when fairness must be balanced against security. Without deeper justification and conflict-resolution strategies, decisions risk becoming arbitrary.
Value Sensitive Design (VSD): Designing with Human Values in Mind
Imagine a team developing a smart home assistant. Using VSD, the team engages stakeholders to surface the values at stake, such as privacy, security, and trust, and designs the assistant to respect user autonomy and transparency.
Friedman et al. (2013) emphasize eliciting and integrating stakeholder values throughout development, making those values central to design.
- Theory-Practice Relation: Stakeholder values directly shape system features, so ethical considerations are built in rather than bolted on.
- Strengths: Encourages user-centered design and proactive ethical reflection.
- Shortcomings: VSD sometimes lacks a clear connection to broader societal norms and legal structures, making accountability hard to enforce beyond the immediate stakeholders.
Building a Stronger Bridge: A More Human-Centered Meta-Framework
To overcome the limitations of these individual approaches, a more comprehensive applied AI ethics framework is needed. This meta-framework highlights three crucial dimensions:
1. Embracing Feelings: The Dimension of Affects and Emotions
Ethical AI isn’t just about logic: emotions and lived experiences matter. Consider AI systems in elder care or mental health; overlooking their emotional impact could leave users feeling lonely, anxious, or dependent. Ethical guidelines must include emotional considerations if human-centered design is to be more than a slogan.
Example: Developers designing robot companions for elderly individuals should consider the psychological impact, ensuring AI systems foster dignity and independence.
2. Justifying Ethical AI Guidelines: The Dimension of Normative Background Theories
Stating ethical guidelines isn’t enough; we must clarify their foundations. For example, what does “fairness” mean in loan applications: equal opportunities or equal outcomes? Establishing clear theoretical foundations helps resolve conflicts between competing principles.
Example: Addressing algorithmic bias in loan approvals requires choosing and justifying a fairness criterion, such as equal opportunity versus demographic parity, since different criteria can point to different decisions.
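The contrast between the two criteria can be made precise. As a hypothetical sketch (function names, groups “X”/“Y”, and the toy loan data are all invented for illustration): demographic parity compares approval rates across groups, while equal opportunity compares approval rates among qualified applicants only.

```python
# Illustrative contrast between two fairness criteria for loan approvals.
# All data and group labels below are hypothetical toy values.

def demographic_parity(approved, groups):
    """Approval rate per group: P(approved | group)."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(approved[i] for i in idx) / len(idx)
    return rates

def equal_opportunity(approved, qualified, groups):
    """True-positive rate per group: P(approved | qualified, group)."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups)
               if grp == g and qualified[i]]
        rates[g] = sum(approved[i] for i in idx) / len(idx)
    return rates

# Toy decisions: 1 = approved / qualified, 0 = not.
approved  = [1, 1, 0, 0, 1, 0, 0, 0]
qualified = [1, 1, 0, 0, 1, 0, 0, 0]
groups    = ["X", "X", "X", "X", "Y", "Y", "Y", "Y"]

# Every qualified applicant is approved, so equal opportunity is satisfied,
# yet approval rates differ because the groups' base rates differ:
# demographic parity fails on the same decisions.
print(demographic_parity(approved, groups))
print(equal_opportunity(approved, qualified, groups))
```

The example shows why the theoretical choice matters: the very same set of decisions passes one fairness test and fails the other, so a developer cannot “be fair” without first committing to a definition.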
3. Governance: The Role of Laws and Regulations in AI Accountability
Ethical guidelines exist within legal and political frameworks. AI governance must ensure accountability and prevent misuse through regulatory mechanisms, not goodwill alone.
Example: Autonomous vehicles raise ethical and legal challenges. Who is responsible in an accident? How should regulations balance innovation and public safety? Governance structures must ensure AI accountability while fostering ethical AI development.
Towards a More Human Future for AI
A truly human-centered applied AI ethics requires integrating emotions, well-justified guidelines, and robust governance mechanisms. Together, these dimensions strengthen fairness, accountability, and societal well-being.
As the demand for AI ethics professionals grows, organizations must prioritize accountability and ethical guidelines in workforce training. AI CERTs offer an AI+ Ethics Certification, equipping professionals with the skills to develop responsible, human-centered AI systems.