Figure 1: RAISEF's three pillars (societal empowerment/SHOTT highlighted).

Societal Empowerment's 4 Drivers

SHOTT: Sustainability, Human Oversight, Transparency, Trustworthiness

Sustainability

Sustainability looks at the environmental and organizational footprint of AI over time. It encourages efficient data and compute choices, awareness of energy and carbon impacts, and lifecycle decisions that balance performance with resource use. Sustainability also includes maintainability and end-of-life planning so systems can be updated, reused, or retired responsibly. The goal is long-term value with fewer hidden costs. Teams are explicit about trade-offs, monitor key indicators, and favor designs that remain serviceable and affordable as scale and context change.

Human Oversight

Human oversight ensures accountable people stay in the loop during design, deployment, and operations. It defines when humans review or override system outputs, what evidence they expect, and how they escalate issues. Oversight is effective when roles and decision rights are clear, tools surface the right context, and time is reserved for review rather than left as an afterthought. It focuses on real authority and timely intervention so the system supports human judgment rather than replacing it blindly.

Transparency

Transparency provides clear and accurate information about what the system is, what data it relies on, and how it behaves in typical and unusual conditions. It uses concise documentation for general audiences and deeper technical materials for specialists. Release notes, known limitations, and change logs are easy to find. Good transparency avoids vague promises and gives people what they need to use the system responsibly, to evaluate suitability, and to challenge outcomes when necessary.

Trustworthiness

Trustworthiness is the outcome of consistent and verifiable conduct. It grows when commitments are clear, risks are disclosed, and the system performs as claimed across settings. Practices such as reliable operations, honest communication, responsive issue handling, and measurable improvement build credibility over time. Trustworthiness is not a slogan. It is earned through repeatable behavior that aligns user experience, technical evidence, and organizational accountability.

Societal Empowerment Inter-Driver Relationship List

The following table summarizes the 50 societal-empowerment-related inter-driver relationships. The full set of 105 relationships can be viewed here:

Note: When displaying a driver pair Ds vs. Dt, the convention is to list the alphabetically first driver as Ds.

Intra-Pillar Relationships
Pillar: Societal Empowerment

| Drivers | Relationship | Explanation | Example |
| --- | --- | --- | --- |
| Human Oversight vs. Sustainability | Reinforcing | Human oversight supports sustainable AI, ensuring ethical standards are met and reducing environmental impacts (Dubber et al., 2020) | AI projects evaluated with human oversight consider sustainability impacts, aligning environmental goals with tech innovations (Rohde et al., 2023) |
| Human Oversight vs. Transparency | Reinforcing | Human oversight and transparency collectively foster accountability, enhancing ethical governance in AI systems (UNESCO, 2022) | In AI-driven medical diagnostics, both drivers ensure user trust and effective oversight (Ananny & Crawford, 2018) |
| Human Oversight vs. Trustworthiness | Reinforcing | Human oversight enhances AI trustworthiness by ensuring ethical adherence and aligning AI actions with human values (Dubber et al., 2020) | Continuous human monitoring in secure systems ensures AI actions align with trust standards, boosting user confidence (Lu et al., 2024) |
| Sustainability vs. Transparency | Neutral | Sustainability and transparency influence AI's lifecycle but don't directly conflict or reinforce each other, promoting governance synergy (van Wynsberghe, 2021) | Deploying sustainable AI while maintaining transparency in energy sourcing exemplifies balanced governance goals in AI systems (van Wynsberghe, 2021) |
| Sustainability vs. Trustworthiness | Reinforcing | Sustainability and trustworthiness together enhance long-term responsible AI deployment, creating societal and environmental benefits (van Wynsberghe, 2021) | Implementing energy-efficient AI models can increase trust, aligning with corporate sustainability goals (Accenture, 2024) |
| Transparency vs. Trustworthiness | Reinforcing | Transparency enhances trustworthiness by clarifying AI operations, fostering informed user relationships (Floridi et al., 2018) | Transparent AI applications provide clear justifications for decisions, leading to higher user trust (Floridi et al., 2018) |
Cross-Pillar Relationships
Pillar: Ethical Safeguards vs. Societal Empowerment

| Drivers | Relationship | Explanation | Example |
| --- | --- | --- | --- |
| Accountability vs. Human Oversight | Reinforcing | Accountability necessitates human oversight for ensuring responsible AI operations, requiring active human involvement and supervision (Leslie, 2019) | AI systems in healthcare employ human oversight for accountable decision-making, preventing potential adverse outcomes (Novelli et al., 2024) |
| Accountability vs. Sustainability | Reinforcing | Accountability aligns with sustainability by ensuring responsible practices that support ecological integrity and social justice (van Wynsberghe, 2021) | Implementing accountable AI practices reduces carbon footprint while enhancing brand trust through sustainable operations (van Wynsberghe, 2021) |
| Accountability vs. Transparency | Reinforcing | Transparency supports accountability by enabling oversight and verification of AI systems' behavior (Dubber et al., 2020) | In algorithmic finance, transparency enables detailed audits for accountability, curbing unethical financial practices (Dubber et al., 2020) |
| Accountability vs. Trustworthiness | Reinforcing | Accountability builds trustworthiness by enhancing transparency and integrity in AI operations (Schmidpeter & Altenburger, 2023) | AI systems with clear accountability domains are generally more trusted in healthcare settings (Busuioc, 2021) |
| Bias Mitigation vs. Human Oversight | Reinforcing | Human oversight supports bias mitigation by ensuring continual auditing to detect and address biases (Ferrara, 2024) | In hiring AI, human oversight helps identify biases in training data, enhancing fairness (Ferrara, 2024) |
| Bias Mitigation vs. Sustainability | Reinforcing | Bias mitigation supports sustainability by fostering fair access to AI benefits, reducing societal imbalances (Rohde et al., 2023) | Ensuring equitable data distribution in AI reduces systemic biases, contributing to sustainable growth (Rohde et al., 2023) |
| Bias Mitigation vs. Transparency | Reinforcing | Bias mitigation relies on transparency to ensure fair AI systems by revealing discriminatory patterns (Ferrara, 2024) | Transparent algorithms in recruitment help identify bias in decision processes, ensuring fair practices (Ferrara, 2024) |
| Bias Mitigation vs. Trustworthiness | Reinforcing | Bias mitigation fosters trustworthiness by addressing discrimination, thereby improving user confidence in AI systems (Ferrara, 2024) | In lending AI, bias audits enhance algorithm reliability, fostering trust among users and stakeholders (Ferrara, 2024) |
| Fairness vs. Human Oversight | Reinforcing | Human oversight supports fairness by ensuring AI decisions reflect equitable practices grounded in human judgment (Voeneky et al., 2022) | For recruitment AI, human oversight calibrates fairness, reviewing bias mitigation strategies before final implementation (Bateni et al., 2022) |
| Fairness vs. Sustainability | Reinforcing | Fairness supports sustainability by advocating equitable resource distribution, essential for sustainable AI solutions (Schmidpeter & Altenburger, 2023) | AI systems that ensure fair access to renewable energy underscore this synergy (van Wynsberghe, 2021) |
| Fairness vs. Transparency | Reinforcing | Transparency in AI increases fairness by allowing for the identification and correction of biases (Ferrara, 2024) | Transparent hiring algorithms enable fairness by revealing discriminatory patterns in recruitment practices (Lu et al., 2024) |
| Fairness vs. Trustworthiness | Reinforcing | Fairness enhances trustworthiness by promoting equal treatment and diminishing bias, thus fostering confidence in AI systems (Cheong, 2024) | Mortgage AI with fair credit evaluations strengthens trustworthiness, ensuring non-discriminatory decisions for applicants (Dubber et al., 2020) |
| Human Oversight vs. Inclusiveness | Reinforcing | Human oversight promotes inclusiveness by ensuring diverse perspectives shape AI ethics and implementation (Dubber et al., 2020) | Human oversight in AI enhances inclusiveness by involving diverse stakeholder consultations during system development (Zowghi & Da Rimini, 2024) |
| Human Oversight vs. Privacy | Tensioned | Human oversight might collide with privacy, requiring access to sensitive data for supervision (Solove, 2025) | AI deployment often requires human oversight that conflicts with privacy norms when evaluating algorithms on sensitive data (Dubber et al., 2020) |
| Inclusiveness vs. Sustainability | Reinforcing | Inclusive AI development inherently supports sustainable goals by considering diverse needs and reducing inequalities (van Wynsberghe, 2021) | AI initiatives promoting inclusivity often align with sustainability, as seen in projects that address accessibility in green technologies (van Wynsberghe, 2021) |
| Inclusiveness vs. Transparency | Reinforcing | Both inclusiveness and transparency promote equitable access and understanding in AI, enhancing collaborative growth (Buijsman, 2024) | Diverse teams enhance transparency tools in AI systems, ensuring fair representation and increased public understanding (Buijsman, 2024) |
| Inclusiveness vs. Trustworthiness | Tensioned | Trust-building measures, like rigorous security checks, can marginalize less-privileged stakeholders (Bullock et al., 2024) | Expensive trust audits in AI systems may exclude smaller organizations from participation (Dubber et al., 2020) |
| Privacy vs. Sustainability | Tensioned | Privacy demands limit data availability, hindering AI's potential to achieve sustainability goals (van Wynsberghe, 2021) | Strict privacy laws restrict data collection necessary for AI to optimize urban energy use (Bullock et al., 2024) |
| Privacy vs. Transparency | Tensioned | High transparency can inadvertently compromise user privacy (Cheong, 2024) | Algorithm registries disclose data sources but risk exposing personal data (Buijsman, 2024) |
| Privacy vs. Trustworthiness | Reinforcing | Privacy measures bolster trustworthiness by safeguarding data against misuse, fostering user confidence (Lu et al., 2024) | Adopting privacy-centric AI practices enhances trust by ensuring user data isn't exploited deceptively (Lu et al., 2024) |
Pillar: Operational Integrity vs. Societal Empowerment

| Drivers | Relationship | Explanation | Example |
| --- | --- | --- | --- |
| Explainability vs. Human Oversight | Reinforcing | Explainability enhances human oversight by providing clear model outputs, aiding in decision-making accuracy (UNESCO, 2022) | In healthcare, explainable AI systems allow practitioners to verify treatment recommendations, ensuring oversight (UNESCO, 2022) |
| Explainability vs. Sustainability | Reinforcing | Explainability aids sustainable AI practices by ensuring accountable development and deployment, promoting ethical standards (Schmidpeter & Altenburger, 2023) | AI systems explaining carbon footprints can align sustainability goals with operational transparency (Hamon et al., 2020) |
| Explainability vs. Transparency | Reinforcing | Both explainability and transparency enhance trust by making AI systems' inner workings and decisions understandable, which is essential for accountability (Cheong, 2024) | In healthcare AI, both drive accessible patient diagnosis explanations and transparent model algorithms (Ananny & Crawford, 2018) |
| Explainability vs. Trustworthiness | Reinforcing | Explainability enhances trustworthiness by providing clarity on AI decisions, reinforcing confidence in system operations (Toreini et al., 2019) | In financial AI, clear loan decision explanations increase consumer trust in automated evaluations (Lipton, 2016) |
| Governance vs. Human Oversight | Reinforcing | Governance frameworks guide human oversight, ensuring responsible decision-making and effective AI system regulation (Bullock et al., 2024) | Regulations require human oversight for AI use in healthcare, ensuring ethical decisions aligned with governance mandates (Yeung et al., 2019) |
| Governance vs. Sustainability | Reinforcing | Governance establishes guidelines supporting sustainable AI practices, ensuring long-term societal and environmental benefits (Schmidpeter & Altenburger, 2023) | Sustainability standards mandated by governance frameworks ensure energy-efficient AI development and deployment practices (van Wynsberghe, 2021) |
| Governance vs. Transparency | Reinforcing | Governance frameworks enhance transparency, mandating disclosure and open practices to ensure accountability in AI systems (Bullock et al., 2024) | Governance laws requiring transparent AI audits bolster accountability, fostering public trust in government-aligned AI use (Batool et al., 2023) |
| Governance vs. Trustworthiness | Reinforcing | Governance frameworks bolster trustworthiness by implementing mechanisms ensuring AI systems adhere to ethical principles (Gillis et al., 2024) | Trustworthiness in AI is strengthened by governance-mandated transparency and accountability standards (Bullock et al., 2024) |
| Human Oversight vs. Interpretability | Reinforcing | Human oversight bolsters interpretability by guiding transparency in AI processes, ensuring systems remain clear to users (Hamon et al., 2020) | Interpretable algorithms in medical AI gain user trust through human-supervised transparency during their development (Doshi-Velez & Kim, 2017) |
| Human Oversight vs. Robustness | Reinforcing | Human oversight strengthens robustness by mitigating risks through active monitoring and intervention (Tocchetti et al., 2022) | Human oversight ensures robust system behavior during AI deployment in high-stakes environments like aviation (High-Level Expert Group on Artificial Intelligence, 2020) |
| Human Oversight vs. Safety | Reinforcing | Human oversight improves safety by providing necessary monitoring and intervention capabilities in AI operations (Bullock et al., 2024) | In aviation, human oversight actively ensures safety by intervening during unexpected autonomous system failures (Williams & Yampolskiy, 2024) |
| Human Oversight vs. Security | Reinforcing | Human oversight enhances security by providing checks against unauthorized access and misuse in AI systems (Lu et al., 2024) | Security protocols are strengthened by human oversight to monitor potential AI system breaches (Dubber et al., 2020) |
| Interpretability vs. Sustainability | Neutral | Interpretability and sustainability operate independently, focusing on different AI aspects (van Wynsberghe, 2021) | An AI model could be interpretable but unsustainable due to high computational demands (van Wynsberghe, 2021) |
| Interpretability vs. Transparency | Reinforcing | Interpretability enhances transparency by providing insights into AI mechanisms, fortifying user understanding (Lipton, 2016) | Transparent models boost public trust, as stakeholders clearly understand how AI decisions are made (Lipton, 2016) |
| Interpretability vs. Trustworthiness | Reinforcing | Interpretability boosts trustworthiness by enhancing users' understanding, encouraging confidence in AI systems (Rudin, 2019) | Understanding AI predictions in healthcare improves trust in medical diagnostics (Rudin, 2019) |
| Robustness vs. Sustainability | Tensioned | Minimizing energy consumption could compromise robustness under variable conditions (Carayannis & Grigoroudis, 2023) | Energy-efficient machine learning models may struggle with edge-case data (Braiek & Khomh, 2024) |
| Robustness vs. Transparency | Reinforcing | Robustness enhances transparency by providing consistent operations, reducing opaque behaviors (Hamon et al., 2020) | Greater AI robustness minimizes erratic outcomes, facilitating clearer system transparency (Hamon et al., 2020) |
| Robustness vs. Trustworthiness | Reinforcing | Robustness directly contributes to the trustworthiness of AI by enhancing operational reliability under diverse conditions (Braiek & Khomh, 2024) | AI models with robust architectures improve trust by reliably handling environmental changes without function loss (Braiek & Khomh, 2024) |
| Safety vs. Sustainability | Reinforcing | Safety measures contribute to responsible lifecycle management, essential for sustainability in AI projects (van Wynsberghe, 2021) | Applying safety protocols in AI reduces environmental risks, contributing to sustainable management practices (van Wynsberghe, 2021) |
| Safety vs. Transparency | Reinforcing | Transparency reinforces safety by enabling effective detection and mitigation of risks (Leslie, 2019) | Clear documentation of AI processes ensures safety, enabling effective oversight and risk management (Leslie, 2019) |
| Safety vs. Trustworthiness | Reinforcing | Safety measures enhance AI systems' trustworthiness by ensuring reliability and robust risk management (Leslie, 2019) | Safety protocols in autonomous vehicles improve trustworthiness, ensuring public confidence and acceptance of the technology (Leslie, 2019) |
| Security vs. Sustainability | Neutral | Security and sustainability address different areas, with minimal direct overlap in AI system design (van Wynsberghe, 2021) | An AI system could be secure without considering sustainability impacts like energy use (van Wynsberghe, 2021) |
| Security vs. Transparency | Tensioned | Security needs might impede transparency efforts, as disclosure could expose vulnerabilities (Bullock et al., 2024) | When AI transparency compromises security, it can lead to potential breaches, hindering open communication (Bullock et al., 2024) |
| Security vs. Trustworthiness | Reinforcing | Security underpins trustworthiness by safeguarding AI from breaches, thus enhancing reliability (Lu et al., 2024) | Secure AI systems, protected against data breaches, inherently build user trust (Lu et al., 2024) |