Inter-Pillar Relationships

Pillar: Societal Empowerment

| Relationship | Type | Description | Example |
| --- | --- | --- | --- |
| Human Oversight vs. Sustainability | Reinforcing | Human oversight supports sustainable AI, ensuring ethical standards are met while reducing environmental impacts (Dubber et al., 2020) | AI projects evaluated with human oversight consider sustainability impacts, aligning environmental goals with technological innovation (Rohde et al., 2023) |
| Human Oversight vs. Transparency | Reinforcing | Human oversight and transparency collectively foster accountability, enhancing ethical governance of AI systems (UNESCO, 2022) | In AI-driven medical diagnostics, both drivers ensure user trust and effective oversight (Ananny & Crawford, 2018) |
| Human Oversight vs. Trustworthiness | Reinforcing | Human oversight enhances AI trustworthiness by ensuring ethical adherence and aligning AI actions with human values (Dubber et al., 2020) | Continuous human monitoring in secure systems ensures AI actions align with trust standards, boosting user confidence (Lu et al., 2024) |
                            
Cross-Pillar Relationships

Pillar: Ethical Safeguards vs. Societal Empowerment

| Relationship | Type | Description | Example |
| --- | --- | --- | --- |
| Fairness vs. Human Oversight | Reinforcing | Human oversight supports fairness by ensuring AI decisions reflect equitable practices grounded in human judgment (Voeneky et al., 2022) | In recruitment AI, human oversight calibrates fairness by reviewing bias mitigation strategies before final implementation (Bateni et al., 2022) |
| Human Oversight vs. Inclusiveness | Reinforcing | Human oversight promotes inclusiveness by ensuring diverse perspectives shape AI ethics and implementation (Dubber et al., 2020) | Human oversight in AI enhances inclusiveness through diverse stakeholder consultations during system development (Zowghi & Da Rimini, 2024) |
| Bias Mitigation vs. Human Oversight | Reinforcing | Human oversight supports bias mitigation by ensuring continual auditing to detect and address biases (Ferrara, 2024) | In hiring AI, human oversight helps identify biases in training data, enhancing fairness (Ferrara, 2024) |
| Accountability vs. Human Oversight | Reinforcing | Accountability necessitates human oversight to ensure responsible AI operations, requiring active human involvement and supervision (Leslie, 2019) | AI systems in healthcare employ human oversight for accountable decision-making, preventing potential adverse outcomes (Novelli et al., 2024) |
| Human Oversight vs. Privacy | Tensioned | Human oversight may conflict with privacy, since supervision requires access to sensitive data (Solove, 2025) | AI deployment often requires human oversight that conflicts with privacy norms when evaluating algorithms that process sensitive data (Dubber et al., 2020) |
    
Pillar: Operational Integrity vs. Societal Empowerment

| Relationship | Type | Description | Example |
| --- | --- | --- | --- |
| Governance vs. Human Oversight | Reinforcing | Governance frameworks guide human oversight, ensuring responsible decision-making and effective regulation of AI systems (Bullock et al., 2024) | Regulations require human oversight of AI use in healthcare, ensuring ethical decisions aligned with governance mandates (Yeung et al., 2019) |
| Human Oversight vs. Robustness | Reinforcing | Human oversight strengthens robustness by mitigating risks through active monitoring and intervention (Tocchetti et al., 2022) | Human oversight ensures robust system behavior during AI deployment in high-stakes environments such as aviation (High-Level Expert Group on Artificial Intelligence, 2020) |
| Human Oversight vs. Interpretability | Reinforcing | Human oversight bolsters interpretability by guiding transparency in AI processes, ensuring systems remain clear to users (Hamon et al., 2020) | Interpretable algorithms in medical AI gain user trust through human-supervised transparency during development (Doshi-Velez & Kim, 2017) |
| Explainability vs. Human Oversight | Reinforcing | Explainability enhances human oversight by providing clear model outputs, aiding decision-making accuracy (UNESCO, 2022) | In healthcare, explainable AI systems allow practitioners to verify treatment recommendations, ensuring oversight (UNESCO, 2022) |
| Human Oversight vs. Security | Reinforcing | Human oversight enhances security by providing checks against unauthorized access and misuse of AI systems (Lu et al., 2024) | Security protocols are strengthened by human oversight that monitors for potential AI system breaches (Dubber et al., 2020) |
| Human Oversight vs. Safety | Reinforcing | Human oversight improves safety by providing necessary monitoring and intervention capabilities in AI operations (Bullock et al., 2024) | In aviation, human oversight actively ensures safety through intervention during unexpected autonomous system failures (Williams & Yampolskiy, 2024) |