Why Explainability Matters

  • User understanding: Gives affected people reasons they can grasp and act on.
  • Review and appeal: Supports contestability with clear factors, limits, and routes to human review.
  • Regulatory assurance: Documents rationale so decisions stand up to internal and external scrutiny.
  • Trust through clarity: Improves adoption by replacing guesswork with accurate, audience-appropriate explanations.

When Explainability Is Missed

A customer asked Air Canada’s website chatbot about bereavement fares and received incorrect guidance suggesting that a refund could be claimed after purchase. The traveler relied on that answer and was later refused the refund. In 2024, a tribunal ruled that the airline must honor the chatbot’s statements. Poor explanations and weak oversight of automated support led directly to user harm and legal liability.

Explainability Inter-Driver Relationship List

The following summarizes the 14 explainability-related inter-driver relationships. The full set of 105 relationships can be viewed here.

Note: When displaying drivers Ds vs. Dt, the convention is to list the first driver alphabetically as Ds.

Each entry gives the driver pair, the relationship type (Reinforcing or Tensioned), an explanation, and an example.

Inter-Pillar Relationships

Pillar: Operational Integrity

  • Explainability vs. Governance (Reinforcing): Explainability enhances governance by providing the insights needed for informed oversight and decision-making (Bullock et al., 2024). Example: in regulatory contexts, clear AI explanations help policymakers ensure compliance and adapt regulations effectively (Bullock et al., 2024).
  • Explainability vs. Robustness (Tensioned): High explainability may require simpler models, potentially reducing their robustness (Rudin, 2019). Example: credit scoring models simplified for explainability may perform poorly under non-standard conditions (Rudin, 2019).
  • Explainability vs. Interpretability (Reinforcing): Explainability aids interpretability by clarifying complex model outputs for user understanding (Hamon et al., 2020). Example: in financial AI, explainable models improve decision insight, making model outputs actionable (Hamon et al., 2020).
  • Explainability vs. Security (Tensioned): Explainability can expose security vulnerabilities by revealing how an AI system operates (Hamon et al., 2020). Example: detailed explanations of security systems can help adversaries identify exploitable weaknesses (Hamon et al., 2020).
  • Explainability vs. Safety (Reinforcing): Explainability enhances safety by making AI decision processes transparent and aiding risk assessment (Dubber et al., 2020). Example: explainable models in autonomous vehicles help identify decision-making flaws, promoting operational safety (Dubber et al., 2020).

Cross-Pillar Relationships

Pillar: Ethical Safeguards vs. Operational Integrity

  • Explainability vs. Fairness (Reinforcing): Explainability helps ensure fairness by elucidating biases, enabling more equitable AI systems (Ferrara, 2024). Example: in credit scoring, explainable models help identify discrimination, promoting fairer lending practices (Ferrara, 2024).
  • Explainability vs. Inclusiveness (Reinforcing): Explainability promotes inclusiveness by making AI decisions understandable, encouraging equitable stakeholder participation (Shams et al., 2023). Example: explainable AI models help identify underrepresented groups’ needs, supporting inclusive design in public policy (Shams et al., 2023).
  • Bias Mitigation vs. Explainability (Tensioned): Bias mitigation can obscure model operations, conflicting with the transparency needed for explainability (Rudin, 2019). Example: in high-stakes justice applications, improving model explainability can compromise bias-mitigation efforts (Busuioc, 2021).
  • Accountability vs. Explainability (Reinforcing): Accountability promotes explainability by requiring justifications for AI decisions, fostering transparency and informed oversight (Busuioc, 2021). Example: clear explanations in credit scoring ensure accountability and regulatory compliance, enhancing stakeholder trust (Cheong, 2024).
  • Explainability vs. Privacy (Tensioned): Explainability can jeopardize privacy by revealing sensitive algorithmic details (Solove, 2025). Example: disclosing algorithm logic in healthcare AI might infringe patient data privacy (Solove, 2025).

Pillar: Operational Integrity vs. Societal Empowerment

  • Explainability vs. Sustainability (Reinforcing): Explainability aids sustainable AI practices by ensuring accountable development and deployment, promoting ethical standards (Schmidpeter & Altenburger, 2023). Example: AI systems that explain their carbon footprints can align sustainability goals with operational transparency (Hamon et al., 2020).
  • Explainability vs. Human Oversight (Reinforcing): Explainability enhances human oversight by providing clear model outputs, aiding decision-making accuracy (UNESCO, 2022). Example: in healthcare, explainable AI systems allow practitioners to verify treatment recommendations, ensuring oversight (UNESCO, 2022).
  • Explainability vs. Transparency (Reinforcing): Both explainability and transparency build trust by making AI systems’ inner workings and decisions understandable, which is essential for accountability (Cheong, 2024). Example: in healthcare AI, the two drivers together yield accessible patient diagnosis explanations and transparent model algorithms (Ananny & Crawford, 2018).
  • Explainability vs. Trustworthiness (Reinforcing): Explainability enhances trustworthiness by providing clarity on AI decisions, reinforcing confidence in system operations (Toreini et al., 2019). Example: in financial AI, clear loan-decision explanations increase consumer trust in automated evaluations (Lipton, 2016).