Purpose

This study investigates how artificial intelligence (AI) integrates into school leadership by examining organisational benefits and ethical challenges. As AI permeates educational administration, school leaders must navigate the associated risks and opportunities, particularly around data privacy, fairness, and accountability.

Design/methodology/approach

A PRISMA-aligned structured literature review was conducted on publications from 2019 to 2025. Searches were performed in Scopus, Web of Science, ERIC, and Google Scholar, focussing on K–12 school leadership, with selective higher education sources included only for transferable governance mechanisms (e.g. policy, procurement, documentation/explainability, and auditability). Studies were screened for leadership relevance and ethical-legal engagement. Findings were synthesised using reflexive thematic analysis and conceptual mapping, yielding a final corpus of 50 publications.

Findings

AI affords benefits for school leadership, including administrative efficiency, decision support, and, where data governance is sound, more equitable resource allocation. However, adoption introduces ethical-legal challenges: key concerns include algorithmic bias, opacity in decision-making, and diffuse accountability. Many systems lack robust oversight, clear roles, and targeted training for ethical implementation.

Practical implications

School leaders should embed AI in distributed leadership, mandate explainability and audits in procurement, invest in privacy/data literacy, and align analytics with instructional priorities to secure equity, lawful processing, and reviewable accountability. A one-page governance map (Leaders' Governance Guide) is provided, mapping use cases to risks, safeguards, and an equity note.

Originality/value

The review links benefits and risks to accountability and legal implications, proposing a leadership governance frame to support equitable, transparent AI in schools.

Rapid technological advances are reshaping education, particularly how school leaders respond to digital innovation (Kafa, 2025). At the heart of this transformation lies artificial intelligence (AI), a branch of computer science focused on systems that replicate human-like capabilities and learn from data to perform tasks with a degree of autonomy (Peltier et al., 2024). AI tools such as machine learning, natural language processing, and predictive analytics enable large-scale processing, pattern recognition, and automation (Anastasiou, 2025).

In this article, school leadership refers to setting direction, developing people, and redesigning the organisation to improve learning (Leithwood et al., 2004). Emphasis is placed on instructional leadership, with a focus on leaders' deliberate use of evidence to support teaching and learning (Hallinger, 2005), and on distributed leadership, where practice is shared across roles and contexts (Harris, 2008; Spillane, 2006). In AI-enabled contexts, this includes selecting and governing tools, ensuring lawful and ethical data practices, building staff capacity, and aligning analytics with pedagogical goals.

In schools, AI is increasingly embedded in leadership and administration through intelligent dashboards, predictive models, and automated workflows that support monitoring, resource management, and timely intervention (Kesim et al., 2025; Koukaras et al., 2025). This shift supports more strategic, evidence-informed leadership, and earlier identification of at-risk students while streamlining routine processes (Dai et al., 2025; Sain et al., 2024; Sposato, 2025).

Yet the evidence base on AI for school leadership is fragmented (Kafa and Eteokleous, 2024): many reviews prioritise classroom applications or higher education, separate benefits from risks, or discuss policy without connecting it to everyday leadership routines and accountability. An integrative account is needed that synthesises organisational benefits (e.g. administrative efficiency, decision support, resource management, and equity) with ethical challenges (e.g. privacy, surveillance, bias, transparency), links these to accountability and legal implications for responsibility, oversight, and compliance, and uses leadership theory to clarify when AI augments rather than substitutes leadership.

To address this gap, a PRISMA-based structured review of 50 publications (2019–2025) examined the following research question: What are the organisational benefits and ethical implications of integrating AI into school leadership, and how can leaders navigate these opportunities and risks responsibly? The review provides an evidence-informed foundation for policy and practice by linking concrete applications to governance mechanisms (e.g. procurement, documentation/explainability, human oversight, and audits) grounded in equity, transparency, and ethical stewardship.

This study employs a structured literature review to examine how AI intersects with school leadership, organisational benefits, and ethical-legal challenges. The process followed PRISMA guidance to ensure transparency, replicability, and methodological rigour (Page et al., 2021). The completed PRISMA 2020 checklist is provided in Appendix 1. The publication window (2019–2025) captures the post-2019 acceleration of AI-in-education research and the generative-AI inflection after 2022, while retaining the most recent policy and governance developments through 2025. To avoid under-representing foundational pre-2019 work on learning analytics and algorithmic accountability, targeted scoping searches and backward citation chasing were conducted. These sources served as contextual anchors and were excluded from the PRISMA-coded corpus.

A Boolean strategy combined four concept blocks with AND: (AI) AND (school leadership) AND (administration/governance) AND (ethics/risks). Full database-specific search strings, search dates, operationalised inclusion/exclusion criteria (with examples), and screening workflow details are provided in Appendix 2. Searches covered Scopus, Web of Science, ERIC, and Google Scholar, limited to peer-reviewed journal articles and scholarly books/chapters in English (2019–2025). The review centred on K–12 leadership and administration, using selective higher education governance sources only for sector-agnostic mechanisms (e.g. procurement, lawful processing, documentation/explainability, auditability, and oversight) transferable to school leadership; these sources informed governance frameworks rather than evidence of K–12 impacts. The search yielded 154 records. After deduplication and title/abstract screening, full texts were assessed against inclusion criteria (education context; leadership/administrative dimension; explicit engagement with AI tools, impacts, or governance). Exclusions removed purely technical or engineering-focused items (e.g. model development/performance without an explicit leadership, administrative, or governance application), classroom-only work without leadership relevance, and conceptual commentaries lacking a substantive analytical contribution (e.g. no clear framework, governance implications, or engagement with evidence/policy). Conceptual and policy-analytical work was not excluded per se. The final corpus comprised 50 studies (Appendix 3).
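The record counts reported above reconcile by simple arithmetic. The following sketch (illustrative only; the category labels paraphrase the exclusion reasons given in the text) checks that the reported figures are internally consistent:

```python
# Sanity check: the screening counts reported in the Methods should
# reconcile to the final corpus of 50 studies.
identified = 154                      # records retrieved across the four databases
duplicates = 27                       # removed before screening
assessed = identified - duplicates    # records assessed for eligibility

excluded = {
    "AI without leadership focus": 28,
    "technical or engineering studies": 26,
    "conceptual essays without analytical contribution": 23,
}
included = assessed - sum(excluded.values())

print(assessed, included)  # 127 50
```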

Two researchers independently screened titles/abstracts and assessed full texts against the eligibility criteria. Inter-rater reliability was calculated using Cohen's kappa: title/abstract screening κ = 0.79 (88% agreement); full-text eligibility κ = 0.75 (87% agreement). Disagreements were resolved through discussion and consensus, with decisions recorded in the audit trail. Peer debriefing took place in structured meetings during analysis and synthesis to review memos, refine code definitions, and agree theme boundaries.
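For readers unfamiliar with the statistic, Cohen's kappa corrects raw percentage agreement for the agreement two raters would reach by chance. A minimal sketch, using hypothetical screening counts rather than the review's actual decisions:

```python
# Illustrative only: Cohen's kappa for two raters making binary
# include/exclude screening calls. Counts below are hypothetical,
# not the review's data.

def cohens_kappa(both_include, both_exclude, only_a, only_b):
    """Kappa from a 2x2 agreement table for two raters."""
    n = both_include + both_exclude + only_a + only_b
    observed = (both_include + both_exclude) / n          # p_o: raw agreement
    # Marginal probability of each rater voting "include"
    a_inc = (both_include + only_a) / n
    b_inc = (both_include + only_b) / n
    expected = a_inc * b_inc + (1 - a_inc) * (1 - b_inc)  # p_e: chance agreement
    return (observed - expected) / (1 - expected)

# Hypothetical title/abstract screening counts
kappa = cohens_kappa(both_include=40, both_exclude=72, only_a=8, only_b=7)
print(round(kappa, 2))  # 0.75
```

With these hypothetical counts, raw agreement is about 88% yet kappa is roughly 0.75, illustrating how the chance correction lowers the headline agreement figure.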

Two authors independently coded the included publications using a hybrid deductive–inductive codebook aligned with the review questions. Coding proceeded iteratively, using analytic memos and peer debriefing to refine code definitions; disagreements were resolved through discussion until consensus was reached. Supporting materials are available via the Open Science Framework (OSF; DOI: 10.17605/OSF.IO/TEW74), including Appendix 2 (search documentation) and Appendix 3 (study characteristics), alongside a condensed mini codebook (OSF supplement).

Analysis used reflexive thematic analysis with a hybrid deductive–inductive approach (Braun and Clarke, 2019). Deductive codes reflected the review questions (e.g. organisational benefits, ethical challenges, accountability/legal implications), while inductive codes captured emergent patterns (e.g. procurement controls, meaningful human oversight, data-provenance practices). Two iterative coding rounds and analytic memos produced a codebook; an audit trail and peer debriefing enhanced credibility. A design-sensitive critical appraisal grouped studies by design family (empirical vs conceptual/policy-analytical) and by empirical approach where applicable, to weigh evidence in the synthesis, guided by JBI and MMAT criteria for empirical designs (Appendix 2). Consistent with reflexive thematic analysis, agreement statistics were used to evaluate screening decisions only; the credibility of interpretive coding was supported through the audit trail and peer debriefing. During extraction, publications were classified as empirical or conceptual/policy-analytical to support cautious interpretation and to avoid treating conceptual claims as impact evidence (Appendix 3).

Findings reflect English-language coverage, selected databases, heterogeneity across designs and contexts, and rapid AI change (risk of obsolescence). Several included studies are conceptual or policy-analytical, limiting causal inference. These risks were mitigated through multi-database searching, explicit inclusion/exclusion criteria, PRISMA-aligned reporting, and a transparent coding protocol. Grey literature (e.g. vendor reports and non-peer-reviewed outputs) was not systematically included to prioritise traceable peer-reviewed sources aligned with the review questions. Nevertheless, residual bias cannot be entirely ruled out (Braun and Clarke, 2019; Page et al., 2021).

Figure 1 presents the study selection process and Table 1 summarises the synthesis.

Figure 1
PRISMA flow diagram ("Identification of studies via databases"). Identification: records identified from databases (n = 154); duplicates removed before screening (n = 27). Screening: records assessed for eligibility (n = 127; Google Scholar 52, Scopus 34, Web of Science 28, ERIC 13); records excluded (n = 77: AI without leadership focus 28, technical or engineering studies 26, conceptual essays without empirical basis 23). Included: studies included in review (n = 50).

Table 1

Main findings (a)

References | Main finding
Abulibdeh et al. (2025), Diallo and Tudose (2024), Kafa (2025), Kesim et al. (2025), Khairullah et al. (2025), Sain et al. (2024)  AI is associated with improved administrative efficiency 
Labadze et al. (2023), Deep et al. (2024)  AI is widely reported to assist with grading, reporting, and parent/guardian communication 
Adams and Thompson (2025), Berkovich (2025), Dai et al. (2025), Gasevic et al. (2019), Ghamrawi et al. (2025), Koukaras et al. (2025), Sposato (2025)  AI supports evidence-informed decision-making and strategic planning 
Arar et al. (2025), Gasevic et al. (2019), Ghamrawi et al. (2025)  AI can enable more participatory, evidence-based leadership (e.g. distributed leadership across units with clear roles and transparent practices) 
Koukaras et al. (2025), Marino and Vasquez (2024), Pietsch and Mah (2025), Sain et al. (2024)  AI may enhance equitable resource allocation when coupled with sound data governance and monitoring 
Anastasiou (2025), Berkovich and Eyal (2025), Bixler and Ceballos (2025), Igbokwe (2023), Kim and Wargo (2025), Koukaras et al. (2025), Sain et al. (2024), Sposato (2025)  AI can improve HRM and help anticipate school-level needs (e.g. staffing signals, workload) 
Chiu (2024), Fu and Weng (2024), Ghimire and Edwards (2024), Khairullah et al. (2025), Kovacevic et al. (2025), Ocen et al. (2025), O'Daffer et al. (2025), Oncioiu and Bularca (2025), Sain et al. (2024), Xue et al. (2025)  AI's processing of sensitive data may heighten privacy and surveillance risks 
Chiu (2024), Dogan and Arslan (2025), Ghimire and Edwards (2024), Jin et al. (2025), Kafa (2025), Kelley and Wenzel (2025), Kovacevic et al. (2025), Oncioiu and Bularca (2025)  School leaders commonly lack AI-related training and understanding (including ethics, security, and data privacy); targeted capacity-building is needed 
Colonna (2024), Koukaras et al. (2025), Wang (2024)  Regulatory compliance under jurisdiction-specific regimes (e.g. GDPR in the EU/EEA, FERPA in the US) is hampered by AI system complexity (automation, opacity, multi-step decisions) 
Bixler and Ceballos (2025), Fu and Weng (2024), Polat et al. (2025)  AI systems can perpetuate algorithmic bias and discrimination, especially with low-quality or unrepresentative data 
Bollaert (2025), Ilieva et al. (2025), Khosravi et al. (2022), Polat et al. (2025), Türkmen (2025), Wang (2024)  Limited transparency/explainability (XAI) constrains informed decision-making and challenge processes 
Colonna (2024), Jin et al. (2025), Li et al. (2025), Wang (2024), Wu et al. (2024)  Accountability is diffuse across actors (vendors, IT, teachers, school leaders), leaving responsibilities unclear 
Ally and Mishra (2024), An et al. (2025), Arar et al. (2025), Chan (2023), Chiu (2024), Dabis and Csáki (2024), Dogan and Arslan (2025), García-López and Trujillo-Liñán (2025), Kovacevic et al. (2025), Li et al. (2025), Oncioiu and Bularca (2025), Pinho et al. (2025)  Robust AI governance frameworks and oversight mechanisms are required (policies, DPIAs, role clarity, human oversight) 
Colonna (2024), Dabis and Csáki (2024), Jin et al. (2025), Xue et al. (2025)  Legal obligations regarding AI in education remain ambiguous and uneven across jurisdictions; risk-based local policies are needed 
Colonna (2024), García-López and Trujillo-Liñán (2025), Gasevic et al. (2019), Ilieva et al. (2025), Li et al. (2025), Ocen et al. (2025), Sposato (2025), Wu et al. (2024)  AI deployments carry non-compliance risk (privacy/anti-discrimination) if safeguards are weak 
Adams and Thompson (2025), Ally and Mishra (2024), An et al. (2025), Chan (2023), Dogan and Arslan (2025), Pinho et al. (2025), Richardson et al. (2025)  School leaders should require and oversee regular AI audits and adherence to ethical compliance standards (privacy/security, bias/fairness, impact) 
Note(s): (a) The table lists only authors, year, and main findings. Other details (e.g. country, design, methodology) were deemed unnecessary for this synthesis, as the purpose was to highlight the thematic evidence rather than the study characteristics. Study characteristics are provided in Appendix 3

The structured literature review revealed three major areas of impact. First, AI is associated with organisational benefits, including administrative efficiency and support for data-informed decision-making and strategic planning; with sound governance and monitoring, it may also support more equitable resource allocation. Reported benefits extend to grading/reporting and parent/guardian communication and, in emerging evidence, to Human Resource Management (HRM) and anticipation of school-level needs. Second, the literature highlights ethical concerns around sensitive data processing, surveillance, algorithmic bias, and limited transparency/explainability; these risks may yield unfair treatment, reinforce stereotypes, and erode student autonomy, and capacity gaps in privacy and consent management can amplify them. Third, accountability and legal arrangements remain underdeveloped: responsibility is diffuse across vendors, IT staff, teachers, and school leaders; compliance is hampered by automation, opacity, and multi-step decision flows; and obligations vary across jurisdictions. These conditions increase non-compliance risks where safeguards are weak and underscore the need for regular audits and Data Protection Impact Assessments (DPIAs), clearly assigned roles and responsibilities, and meaningful human oversight. Findings are synthesised into a Leaders' Governance Guide (Appendix 4), mapping common school AI uses to risks, safeguards, and equity considerations.

Throughout, AI is used as an umbrella term; however, distinctions are maintained between generative AI (e.g. chatbots for drafting/summarising), predictive/learning analytics and decision-support systems, and platform-embedded AI in learning management and monitoring systems, given differences in risk profiles and governance requirements.

AI is reshaping school leadership through tools that improve administrative efficiency, support data-informed decision-making, and enhance resource management and equity. The following sections examine how these developments expand school leaders' strategic and operational capacities.

AI is increasingly used to automate time-consuming administrative tasks (e.g. attendance tracking, timetabling, report generation), thereby improving administrative workflows and freeing leadership time for instructional and strategic work (Abulibdeh et al., 2025; Kafa, 2025; Kesim et al., 2025; Khairullah et al., 2025; Richardson et al., 2025; Sain et al., 2024). Chatbots and virtual assistants handle routine queries and notifications, while AI-powered scheduling tools minimise conflicts and streamline timetabling, reducing administrative workload and scheduling errors (Diallo and Tudose, 2024; Kesim et al., 2025; Labadze et al., 2023; Sain et al., 2024). Automated grading further lightens teacher workload and accelerates feedback for repetitive or objective tasks (Deep et al., 2024). In parallel, AI improves day-to-day communications by automating parent/guardian notifications, coordinating meetings, and assisting with report drafting, enhancing response times and institutional transparency (Labadze et al., 2023; Richardson et al., 2025).

AI can support strategic decision-making through predictive analytics and real-time dashboards, enabling leaders to anticipate budget requirements, identify students at risk, and plan staffing more effectively (Dai et al., 2025; Koukaras et al., 2025). Related work examines how these tools are operationalised in leadership practice and governance routines (Adams and Thompson, 2025; An et al., 2025; Berkovich, 2025; Chan, 2023; Marino and Vasquez, 2024). Evidence from learning analytics adoption suggests that effective decision support requires organisational governance and leadership coordination for implementation (Gasevic et al., 2019). Scenario modelling can compare policy options and forecast outcomes, improving responsiveness and long-term planning (Dai et al., 2025; Sposato, 2025). Rather than replacing professional judgement, AI acts as a decision-making partner that augments reflection and supports evidence-based leadership; its value is strengthened when distributed leadership across units establishes clear roles and transparent practices in the use of decision-support tools (Arar et al., 2025; Ghamrawi et al., 2025).

AI can enable more strategic and, with appropriate data governance, more equitable resource allocation by identifying gaps in access to interventions, activities, and learning technologies (Koukaras et al., 2025; Marino and Vasquez, 2024; Pietsch and Mah, 2025; Sain et al., 2024). It can support forecasting of enrolment trends, staffing needs, and infrastructure demands for timely planning (Koukaras et al., 2025; Sain et al., 2024). Importantly, efficiency gains should serve equity; with robust oversight, AI can surface disparities and inform more inclusive leadership practices (Pietsch and Mah, 2025). Additionally, AI is used in HRM to support recruitment and workload balancing (Anastasiou, 2025; Igbokwe, 2023; Kim and Wargo, 2025) and to align professional development with institutional priorities (Berkovich and Eyal, 2025; Bixler and Ceballos, 2025; Sposato, 2025).

AI adoption in school leadership raises ethical concerns that warrant scrutiny. Core challenges centre on data privacy and surveillance, algorithmic bias, and limited transparency and explainability. Governance needs vary by AI type: low-stakes productivity uses (e.g. drafting communications) typically require lighter-touch controls than predictive analytics or monitoring systems, which may shape high-stakes decisions about support, discipline, or resource allocation. The following sections examine these domains and their practical implications for school leadership practice.

AI systems depend on extensive datasets (academic, behavioural, biometric, and affective signals) collected via learning analytics, smart-campus sensors, and remote proctoring (Koukaras et al., 2025; Ocen et al., 2025; Sain et al., 2024; Xue et al., 2025). When third-party vendors manage storage and processing, privacy risks escalate due to opaque data flows, secondary data use, and limited contractual transparency (Khairullah et al., 2025). School leaders often lack the expertise to assess these risks adequately and communicate them clearly to stakeholders, hindering informed consent (Chiu, 2024; Ghimire and Edwards, 2024; Kelley and Wenzel, 2025). This gap is compounded by limited organisational privacy and legal literacy as well as fragmented guidance (Kovacevic et al., 2025; Oncioiu and Bularca, 2025). AI-driven personalisation can slide into profiling, raising concerns about fairness, student autonomy, and surveillance cultures in schools (Fu and Weng, 2024; Polat et al., 2025). School-based online surveillance involves widespread automated flagging and limited human review, increasing risks for privacy, misclassification, and inequitable impacts (O'Daffer et al., 2025).

AI systems are only as equitable as their training data and design choices allow. The literature indicates that AI can reproduce or amplify social bias by disproportionately flagging marginalised students as “at risk”, shaping discipline and resource allocation (Fu and Weng, 2024; Polat et al., 2025). Critical scholarship shows that data-driven systems can entrench inequality through opaque scoring, proxies, and feedback loops, positioning bias and accountability as governance – not merely technical – issues (Diakopoulos, 2016; Noble, 2018; O'Neil, 2016; Pasquale, 2015). In education, foundational learning analytics scholarship foregrounds privacy, consent, and data governance for responsible analytics use in institutional decision-making (Slade and Prinsloo, 2013; Pardo and Siemens, 2014; Williamson, 2017). Bias also reflects how learners are conceptualised: reducing students to data profiles can strip context (e.g. socio-emotional development, family background) and entrench inequity (Arar et al., 2025; Polat et al., 2025). Data quality and documentation are pivotal; unrepresentative datasets and weak provenance tracking increase discriminatory outcomes and error propagation (Bixler and Ceballos, 2025). Opaque models can mask biased logic and impede scrutiny, increasing risks of misuse and over-reliance (Bixler and Ceballos, 2025).

Transparency underpins ethical and accountable AI use in schools, encompassing technical explainability and institutional openness (Bollaert, 2025). This includes the capacity to interpret, communicate, and justify algorithmic decisions (Ilieva et al., 2025; Polat et al., 2025). Education-focused Explainable AI (XAI) reviews emphasise that interpretable, actionable explanations are prerequisites for trust and meaningful human oversight in risk-prediction and decision-support tools (Khosravi et al., 2022; Türkmen, 2025). In practice, many systems operate as “black boxes”, limiting educators' ability to scrutinise logic, contest outcomes, or provide reasoned feedback to families. Explainability is especially critical in high-stakes contexts (e.g. assessment or discipline), where decisions must be auditable and intelligible to educators, parents, and stakeholders (Ilieva et al., 2025; Wang, 2024). To address this, adopting XAI practices (e.g. clear documentation, model/decision records, interpretable outputs) can mitigate opacity and support fairer, reviewable decisions (Pinho et al., 2025; Jin et al., 2025).

The integration of AI into school leadership raises important questions concerning accountability and legal compliance. This section focuses on two key areas: responsibility and oversight, along with broader legal and policy dimensions associated with AI use in educational settings.

AI integration complicates accountability: responsibility may diffuse across vendors, school leaders, and teachers, creating a “problem of many hands” and weakening oversight unless roles are clearly assigned (Li et al., 2025; Wu et al., 2024). Opacity and multi-step decision flows can obscure who is answerable, even under data protection regimes (Colonna, 2024; Wang, 2024). To retain human judgement, the literature recommends ethics committees, algorithmic audits, and targeted training for leaders and staff (Adams and Thompson, 2025; Dogan and Arslan, 2025; Pinho et al., 2025; Richardson et al., 2025). Distributed leadership across units and clear escalation routes can strengthen day-to-day oversight of AI-supported decisions (Arar et al., 2025; Ghamrawi et al., 2025; Wu et al., 2024). Risk-based regulation increasingly emphasises meaningful human oversight for higher-stakes deployments; where education uses are “high-risk” (e.g. under the EU AI Act), stronger documentation and decision safeguards may be required (Jin et al., 2025; Li et al., 2025).

Legal protections for AI in schools remain uneven and continue to evolve. While frameworks such as the General Data Protection Regulation (GDPR; EU/EEA data protection law) and the Family Educational Rights and Privacy Act (FERPA; US student-records privacy law for schools receiving US Department of Education funds) set important baselines, their application to AI is often ambiguous and unevenly enforced (Colonna, 2024). Current policy debates emphasise compliance by design, embedding DPIAs, auditability, and procurement clauses to safeguard student rights and define appeal processes (Ally and Mishra, 2024; Jin et al., 2025; Wu et al., 2024). Recent reviews call for collaborative regulation to secure privacy, fairness, and accountability (García-López and Trujillo-Liñán, 2025). Monitoring and analytics tools should have a clear lawful basis and meet proportionality and anti-discrimination standards; otherwise, they may breach privacy or equality law (Colonna, 2024; García-López and Trujillo-Liñán, 2025; Ilieva et al., 2025; Sposato, 2025). In this landscape, school leaders act as legal stewards by establishing role clarity, human oversight, and risk-based local policies aligned with evolving regulations (Jin et al., 2025).

Viewed through instructional and distributed leadership lenses, this review suggests that AI is reshaping school leadership in promising ways that are contingent on governance and capacity. Building on the Results, the Discussion examines how leadership practice mediates the conditions under which AI augments (rather than substitutes) instructional and organisational work, with role clarity and ongoing oversight as central mechanisms (Colonna, 2024; Li et al., 2025).

First, administrative efficiency can act as a precursor to instructional focus. From instructional and distributed leadership perspectives, administrative efficiency is less an end than a mechanism that reallocates leaders' time, cognitive capacity, and responsibility from routine coordination to instructional and relational work. Leadership-for-learning research suggests that influence on teaching and learning is largely indirect, operating through routines, collaboration, and shared responsibility rather than managerial control alone (Hallinger and Heck, 2010). When embedded in mediated leadership arrangements, AI-enabled efficiencies can act as enablers: reducing administrative friction can create protected space for pedagogical engagement, professional dialogue, and instructional coherence, rather than merely accelerating managerial routines.

Crucially, evidence from high-trust school contexts suggests that attention reallocation is relationally conditioned. When administrative demands ease and responsibilities are redistributed with clarity, oversight, and trust, principals are better able to sustain instructional leadership and reinforce shared accountability and collective agency (Keravnos et al., 2025). Thus, AI aligns with distributed leadership not by substituting leadership work but by reconfiguring how it is enacted across people, roles, and routines, consistent with disciplined distribution tied to instructional purpose (Harris, 2008). When automation is coupled with clear role design, human judgement, and trust-based delegation, it can reduce bottlenecks and deepen instructional focus; as a standalone technical solution, its benefits are likely to remain fragile, uneven, and short-lived.

Second, decision support is a socio-technical practice rather than a technical fix and is most effective when treated as human judgement supported by interpretable evidence. From a distributed perspective, decisions are enacted through interactions among people, routines, and artefacts, so AI tools shape judgement only when embedded in shared interpretive work rather than substituting for it (Spillane, 2006). The leadership task is less about adopting decision-support tools than designing routines for sensemaking, contestability, and documentation so that AI augments professional judgement rather than displacing it.

Socio-technical scholarship shows that algorithmic outputs acquire meaning through organisational interpretation and governance, not through technical accuracy alone (Williamson and Eynon, 2020). In schools, AI-supported evidence must remain intelligible, explainable, and contestable to teachers and families, consistent with instructional leadership's emphasis on transparent reasoning and pedagogical justification. Where interpretive routines are weak, decision support can narrow professional discretion and shift authority away from educators; where they are intentionally designed, AI can strengthen collective sensemaking and instructional coherence rather than undermine professional agency.

Third, resource management and equity come to the fore. Equity-oriented analytics are not inherent to data-driven systems; they depend on governance choices about what is measured, how proxies are constructed and interpreted, and how allocation decisions are reviewed over time. Data-governance scholarship shows that without transparent criteria and explicit safeguards, efficiency-oriented analytics can reproduce or amplify inequities by normalising historical patterns and embedding them in routine decisions (O'Neil, 2016; Williamson, 2017). From a leadership perspective, the task is not simply to optimise allocation but to make the normative assumptions of analytics visible, contestable, and subject to professional and ethical review.

Where equity goals are explicit, rationales are documented, and review mechanisms are embedded in everyday practice, analytics can support defensible and inclusive allocation decisions, especially in scarcity contexts (Eubanks, 2018; Selwyn, 2022). In such cases, AI-enabled resource management is less a neutral optimisation tool than a governance instrument requiring ongoing leadership judgement, distributed oversight, and alignment with educational values. Without these conditions, efficiency logics can displace equity; with them, these tools can support transparent, accountable, and pedagogically defensible resource use.

Importantly, evidence is not uniform across contexts. Efficiency and decision-support gains depend on leadership capacity, data quality, vendor arrangements, and regulatory clarity, but may be offset by trade-offs such as increased surveillance exposure, additional workload (e.g. monitoring, documentation, and contestation), or amplified inequities where governance and oversight are weak. This variability suggests that benefits depend on implementation conditions rather than technology alone.

Ethical risks are likewise structured around data privacy and surveillance. As shown in the Results, privacy and surveillance concerns rise as monitoring and analytics expand data flows across organisational and vendor boundaries; the Discussion therefore foregrounds the leadership work required to make these systems governable (e.g. lawful basis, proportionality, role clarity, and ongoing oversight). Capacity gaps in privacy, consent, and risk communication can undermine informed participation and increase reliance on vendor assurances (Chiu, 2024; Ghimire and Edwards, 2024; Kelley and Wenzel, 2025).

Algorithmic bias further underscores the need for socio-technical judgement. Unrepresentative data, weak provenance, and proxy variables can institutionalise disparities unless interpretive routines and safeguards are embedded in everyday practice (Fu and Weng, 2024; Wang, 2024). Documentation and data quality therefore become leadership concerns, not merely technical ones. Instructional leadership provides a counterweight: teams need shared criteria for interpreting alerts and safeguards to prevent biased outputs from hardening into routine practice.

Transparency and explainability are governance mechanisms that keep AI-supported decisions reviewable, contestable, and communicable within the school community (Colonna, 2024; Wu et al., 2024). XAI-oriented practices (e.g. decision records and interpretable outputs) serve less technical curiosity than the need to keep decisions open to scrutiny and justification (Bollaert, 2025; Ilieva et al., 2025; Wang, 2024).

Questions of responsibility and oversight crystallise the role of distributed leadership. Responsibility can become diffuse across vendors, school leaders, and teachers – the classic “problem of many hands” – unless roles and escalation routes are explicit (Li et al., 2025). Distributed leadership across units clarifies responsibilities and escalation routes, improving coordination of policy, tools, and support (Arar et al., 2025; Ghamrawi et al., 2025). Complementary measures such as ethics committees, algorithmic audits, and targeted AI training help maintain human judgement and traceability in everyday decisions (Adams and Thompson, 2025; Pinho et al., 2025).

Finally, legal and policy dimensions frame the boundaries within which leadership operates. Leadership often unfolds amid evolving, ambiguous compliance expectations, making compliance by design (e.g. procurement controls, documentation, auditability, and contestability routes) a practical leadership function rather than a purely legal exercise. For school leaders, this means aligning day-to-day instructional uses with procurement discipline, documented decision processes, and clear routes for contestation and redress.

Across these strands, a consistent picture emerges: AI is not a substitute for leadership but an amplifier of leadership quality. Where schools codify distributed routines, invest in leader capacity for privacy and data literacy, and insist on interpretable decision support that serves pedagogy, the benefits described here become attainable. The practical task for school leaders is to orchestrate people, processes, and data to ensure that automation sustains collective judgement and inclusivity rather than sidelining them.

The integration of AI into school leadership offers opportunities but also significant responsibilities. AI can enhance administrative efficiency, communication, and strategic decision-making, but it introduces ethical, legal, and operational complexity. Addressing these challenges requires coordinated roles among school leaders, policymakers, and authorities.

In resource-constrained schools, a practical sequence is to: set a minimum governance baseline (e.g. data inventory, lawful basis, vendor checks, role clarity), prioritise low-risk workload relief (e.g. routine communications, administrative automation), pilot limited decision support with interpretable outputs and safeguards, and then scale to higher-stakes analytics (e.g. targeting interventions or resource allocation) with documentation, review routines, and periodic audits.

To operationalise this sequence, tools include a one-page procurement checklist (e.g. purpose, lawful basis, data minimisation/retention, vendor due diligence), a RACI (Responsible, Accountable, Consulted, Informed) role map across vendor–district–school, DPIA triggers for higher-stakes or sensitive-data uses, a human-oversight protocol (e.g. override/appeal, documentation requirements), and a simple audit cadence (e.g. bias checks, drift monitoring, incident logging).
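To make these tools concrete, the role map and DPIA triggers above can be expressed as simple, auditable records. The following is a minimal, purely illustrative sketch: the role names, use-case fields, and trigger rule are hypothetical examples, not instruments drawn from the reviewed studies, and a district would substitute its own categories.

```python
# Illustrative sketch only: encoding a RACI role map and DPIA trigger rule
# as plain data structures a district could adapt for its own records.
# All names and fields are hypothetical.

RACI_MAP = {
    # One entry per governance activity, mapped across vendor-district-school.
    "model_maintenance": {
        "responsible": "vendor", "accountable": "district",
        "consulted": "school leadership", "informed": "families",
    },
    "decision_override": {
        "responsible": "principal", "accountable": "district",
        "consulted": "teacher", "informed": "families",
    },
}

def needs_dpia(use_case: dict) -> bool:
    """DPIA trigger: higher-stakes or sensitive-data uses require an assessment."""
    return bool(use_case.get("high_stakes") or use_case.get("sensitive_data"))

# Example use case: targeting interventions counts as higher-stakes.
attendance_analytics = {"name": "attendance analytics",
                        "high_stakes": True, "sensitive_data": False}
print(needs_dpia(attendance_analytics))
```

The point of such records is not automation but traceability: each AI use case carries an explicit owner, escalation route, and documented trigger for review, which supports the audit cadence described above.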

At the district level, superintendents can standardise procurement, contracts, and audit schedules, while principals can operationalise day-to-day oversight, escalation routes, and stakeholder communication. Appendix 4 provides a one-page Leaders' Governance Guide, mapping common school and district AI use cases to risks, safeguards (e.g. explainability documentation, procurement clauses, and logging/review), and an equity note.

To enact these measures, educational leaders must cultivate technical competence and ethical awareness: understanding how tools work, judging relevance, and aligning use with human-centred values. Locally, governance mechanisms such as ethics committees, data review panels, and algorithmic audits can guide responsible implementation and ongoing oversight.

Meaningful stakeholder engagement is essential for trust and alignment with community values. Transparent dialogue with teachers, students, and families about purposes, limitations, and risks supports informed participation and shared understanding. Leaders should ensure that AI advances equity by addressing bias and adopting inclusive data practices, including clear communication of rights and safeguards.

At the policy level, attention must extend beyond infrastructure to governance by design: meaningful human oversight protocols, regular algorithmic audits, and procurement controls that require documentation, transparency, and routes for challenge and redress. Policy frameworks should promote explainability and embed ethical evaluation across deployment, ensuring that AI serves pedagogical goals while upholding rights and institutional integrity.

High-quality leadership development is pivotal. Ongoing professional learning for principals and leadership teams should integrate privacy and consent management, data literacy, algorithmic fairness and explainability, and contract oversight, supported by coaching and communities of practice. Programmes should be job-embedded and role-aligned, with assessment of practice and coaching to support transfer into school routines.

Governments play a crucial role in setting the regulatory and infrastructural foundations for AI in education. Priorities include secure digital ecosystems, national data-governance standards, and the integration of AI into broader education strategies. Dedicated public oversight bodies should audit school-based AI, enforce compliance, and safeguard student rights.

As AI becomes embedded in schooling, research should prioritise ethical, inclusive, and context-sensitive implementation. Longitudinal mixed-methods studies should trace how AI reshapes leadership routines, decision quality, and equity outcomes, and move beyond performance metrics to monitor unintended effects (e.g. surveillance, algorithmic bias, opacity, and reduced student autonomy). Comparative research across regulatory contexts should test which governance arrangements most reliably deliver transparent, reviewable decisions.

Interdisciplinary co-design is needed to produce pedagogically sound and ethically robust tools. Educators, data scientists, and legal scholars should co-design systems with documentation, explainability, and routes for challenge and redress, while scrutinising data quality and provenance. National AI literacy strategies should extend beyond staff to students, families, and communities. Finally, research should evaluate job-embedded, sequenced, and coached leadership development models for measurable gains in data literacy, risk communication, and instructional use of evidence, ensuring that efficiency gains translate into inclusive learning.

AI is reshaping how schools operate, make decisions, and engage communities. This review shows that well-governed AI can reduce administrative load, provide timely decision support, and, when paired with sound data practices, enable more strategic and potentially equitable allocation of people and resources. By automating routine tasks and surfacing actionable patterns, AI can help school leaders refocus on pedagogy and target interventions where they are most needed.

These opportunities are contingent on governance by design, sound data practices, and meaningful human oversight. Risks cluster around privacy and surveillance, algorithmic bias, and limited transparency, all of which can undermine fairness and trust if left unchecked. Accountability also becomes diffuse across vendors and school-level actors. Accordingly, leadership work must include embedded governance: clear role definitions and escalation routes, meaningful human oversight, rigorous documentation and explainability, routine audits and procurement controls, and lawful, proportionate use of data. Equally, leadership development is pivotal. Principals and leadership teams need sustained training in privacy and consent management, data literacy, algorithmic fairness, risk communication, and contract oversight, ensuring professional judgement remains central.

Supportive policies and systems are enabling conditions. Governments should provide secure digital ecosystems, national data-governance standards, and public oversight capable of auditing school-based AI and safeguarding rights. Ultimately, the success of AI in school leadership will turn less on the tools themselves than on the vision, judgement, and ethical commitment of those who deploy them: AI should complement, not displace, the relational, contextual, and moral dimensions of educational leadership.

The supplementary material for this article can be found online.

Abulibdeh, A., Baya Chatti, C., Alkhereibi, A. and El Menshawy, S. (2025), "A scoping review of the strategic integration of artificial intelligence in higher education: transforming university excellence themes and strategic planning in the digital era", European Journal of Education, Vol. 60 No. 1, e12908.
Adams, D. and Thompson, P. (2025), "Transforming school leadership with artificial intelligence: applications, implications, and future directions", Leadership and Policy in Schools, Vol. 24 No. 1, pp. 77-89.
Ally, M. and Mishra, S. (2024), "Policies for artificial intelligence in higher education: a call for action", Canadian Journal of Learning and Technology, Vol. 50 No. 3, pp. 1-12.
An, Y., Yu, J.H. and James, S. (2025), "Investigating the higher education institutions' guidelines and policies regarding the use of generative AI in teaching, learning, research, and administration", International Journal of Educational Technology in Higher Education, Vol. 22 No. 1, p. 10.
Anastasiou, S. (2025), "Integrating human resource management and artificial intelligence in educational leadership: pathways toward transformational change", Academic Journal of Interdisciplinary Studies, Vol. 14 No. 3, p. 7.
Arar, K., Tlili, A., Schunka, L., Salha, S. and Saiti, A. (2025), "Reimagining educational leadership and management through artificial intelligence: an integrative systematic review", Leadership and Policy in Schools, Vol. 24 No. 1, pp. 4-26.
Berkovich, I. (2025), "The rise of AI-assisted instructional leadership: empirical survey of generative AI integration in school leadership and management work", Frontiers in Education, Vol. 10, 1643023.
Berkovich, I. and Eyal, O. (2025), "Support for generative artificial intelligence as a predictor of middle leaders' generative artificial intelligence self-efficacy, valuing, and integration in school leadership work", Educational Management Administration and Leadership, 17411432251361251, ahead-of-print.
Bixler, K. and Ceballos, M. (2025), "Principals leading AI in schools for instructional leadership: a conceptual model for principal AI use", Leadership and Policy in Schools, Vol. 24 No. 1, pp. 137-154.
Bollaert, L. (2025), "Artificial intelligence: objective or tool in the 21st-century higher education strategy and leadership?", Education Sciences, Vol. 15 No. 6, p. 774.
Braun, V. and Clarke, V. (2019), "Reflecting on reflexive thematic analysis", Qualitative Research in Sport, Exercise and Health, Vol. 11 No. 4, pp. 589-597.
Chan, C.K.Y. (2023), "A comprehensive AI policy education framework for university teaching and learning", International Journal of Educational Technology in Higher Education, Vol. 20 No. 1, p. 38.
Chiu, T.K.F. (2024), "The impact of generative AI (GenAI) on practices, policies and research direction in education: a case of ChatGPT and Midjourney", Interactive Learning Environments, Vol. 32 No. 10, pp. 6187-6203.
Colonna, L. (2024), "Teachers in the loop? An analysis of automatic assessment systems under Article 22 GDPR", International Data Privacy Law, Vol. 14 No. 1, pp. 3-18.
Dabis, A. and Csáki, C. (2024), "AI and ethics: investigating the first policy responses of higher education institutions to the challenge of generative AI", Humanities and Social Sciences Communications, Vol. 11, p. 1006.
Dai, R., Thomas, M.K.E. and Rawolle, S. (2025), "The roles of AI and educational leaders in AI-assisted administrative decision-making: a proposed framework for symbiotic collaboration", Australian Educational Researcher, Vol. 52 No. 2, pp. 1471-1487.
Deep, S., Athimoolam, K. and Enoch, T. (2024), "Optimizing administrative efficiency and student engagement in education: the impact of AI", International Journal of Current Science Research and Review, Vol. 7 No. 10, pp. 7792-7804.
Diakopoulos, N. (2016), "Accountability in algorithmic decision making", Communications of the ACM, Vol. 59 No. 2, pp. 56-62.
Diallo, F.P. and Tudose, C. (2024), "Optimizing the scheduling of teaching activities in a faculty", Applied Sciences, Vol. 14 No. 20, 9554.
Dogan, M. and Arslan, H. (2025), "The role of artificial intelligence in school leadership", Revista de Pedagogie Digitala, Vol. 4 No. 1, pp. 23-30.
Eubanks, V. (2018), Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, St. Martin's Press, New York, NY.
Fu, Y. and Weng, Z. (2024), "Navigating the ethical terrain of AI in education: a systematic review on framing responsible human-centered AI practices", Computers and Education: Artificial Intelligence, Vol. 7, 100306.
García-López, I.M. and Trujillo-Liñán, L. (2025), "Ethical and regulatory challenges of generative AI in education: a systematic review", Frontiers in Education, Vol. 10, 1565938.
Gasevic, D., Tsai, Y.-S., Dawson, S. and Pardo, A. (2019), "How do we start? An approach to learning analytics adoption in higher education", International Journal of Information and Learning Technology, Vol. 36 No. 4, pp. 342-353.
Ghamrawi, N., Shal, T. and Ghamrawi, N.A.R. (2025), "Effective school leadership enactment of GAI: a 5C's framework for integration", Frontiers in Education, Vol. 10, 1561414.
Ghimire, A. and Edwards, J. (2024), "From guidelines to governance: a study of AI policies in education", in Olney, A.M., Chounta, I.-A., Liu, Z., Santos, O.C. and Bittencourt, I.I. (Eds), Artificial Intelligence in Education. Posters and Late Breaking Results, Workshops and Tutorials, Industry and Innovation Tracks, Practitioners, Doctoral Consortium and Blue Sky (AIED 2024), Communications in Computer and Information Science, Vol. 2151, Springer, Cham, pp. 299-307.
Hallinger, P. (2005), "Instructional leadership and the school principal: a passing fancy that refuses to fade away", Leadership and Policy in Schools, Vol. 4 No. 3, pp. 221-239.
Hallinger, P. and Heck, R.H. (2010), "Leadership for learning: does collaborative leadership make a difference?", Educational Management Administration and Leadership, Vol. 38 No. 6, pp. 654-678.
Harris, A. (2008), "Distributed leadership: according to the evidence", Journal of Educational Administration, Vol. 46 No. 2, pp. 172-188.
Igbokwe, I.C. (2023), "Application of artificial intelligence (AI) in educational management", International Journal of Scientific and Research Publications, Vol. 13 No. 3, pp. 300-307.
Ilieva, G., Yankova, T., Ruseva, M. and Kabaivanov, S. (2025), "A framework for generative AI-driven assessment in higher education", Information, Vol. 16 No. 6, p. 472.
Jin, Y., Yan, L., Echeverria, V., Gasevic, D. and Martinez-Maldonado, R. (2025), "Generative AI in higher education: a global perspective of institutional adoption policies and guidelines", Computers and Education: Artificial Intelligence, Vol. 8, 100348.
Kafa, A. (2025), "Exploring integration aspects of school leadership in the context of digitalization and artificial intelligence", International Journal of Educational Management, Vol. 39 No. 8, pp. 98-115.
Kafa, A. and Eteokleous, N. (2024), The Power of Technology in School Leadership during COVID-19: Insights from the Field, Springer International Publishing, Cham.
Kelley, M. and Wenzel, T. (2025), "Advancing artificial intelligence literacy in teacher education through professional partnership inquiry", Education Sciences, Vol. 15 No. 6, p. 659.
Keravnos, N., Lipsou, E. and Pavlakis, M. (2025), "Building high faculty trust through leadership integration in Cypriot primary schools: the role of transformational, instructional and distributed styles", Educational Management Administration and Leadership, 17411432251398349, ahead-of-print.
Kesim, E., Atmaca, T. and Turan, S. (2025), "Reshaping school cultures: AI's influence on organizational dynamics and leadership behaviors", Leadership and Policy in Schools, Vol. 24 No. 1, pp. 117-136.
Khairullah, S.A., Harris, S., Hadi, H.J., Sandhu, R.A., Ahmad, N. and Alshara, M.A. (2025), "Implementing artificial intelligence in academic and administrative processes through responsible strategic leadership in higher education institutions", Frontiers in Education, Vol. 10, 1548104.
Khosravi, H., Buckingham Shum, S., Chen, G., Conati, C., Tsai, Y.-S., Kay, J., Knight, S., Martinez-Maldonado, R., Sadiq, S. and Gasevic, D. (2022), "Explainable artificial intelligence in education", Computers and Education: Artificial Intelligence, Vol. 3, 100074.
Kim, J. and Wargo, E. (2025), "Empowering educational leaders for AI integration in rural STEM education: challenges and strategies", Frontiers in Education, Vol. 10, 1567698.
Koukaras, C., Hatzikraniotis, E., Mitsiaki, M., Koukaras, P., Tjortjis, C. and Stavrinides, S.G. (2025), "Revolutionising educational management with AI and wireless networks: a framework for smart resource allocation and decision-making", Applied Sciences, Vol. 15 No. 10, p. 5293.
Kovacevic, M., Dagen, T. and Rajter, M. (2025), "Leading AI-driven student engagement: the role of digital leadership in higher education", Education Sciences, Vol. 15 No. 6, p. 775.
Labadze, L., Grigolia, M. and Machaidze, L. (2023), "Role of AI chatbots in education: systematic literature review", International Journal of Educational Technology in Higher Education, Vol. 20 No. 1, p. 56.
Leithwood, K., Louis, K.S., Anderson, S. and Wahlstrom, K. (2004), How Leadership Influences Student Learning: Review of Research, The Wallace Foundation, New York, NY, available at: https://www.wallacefoundation.org/knowledge-center/Documents/How-Leadership-Influences-Student-Learning.pdf (accessed 18 August 2025).
Li, X., Turner, D.A. and Liu, B. (2025), "AI as sub-symbolic systems: understanding the role of AI in higher education governance", Education Sciences, Vol. 15 No. 7, p. 866.
Marino, M.T. and Vasquez, E.I. (2024), "Special education administrators' use of artificial intelligence (AI) to synthesize data", Journal of Special Education Leadership, Vol. 37 No. 2, pp. 62-76.
Noble, S.U. (2018), Algorithms of Oppression: How Search Engines Reinforce Racism, New York University Press, New York, NY.
Ocen, S., Elasu, J., Aarakit, S.M. and Olupot, C. (2025), "Artificial intelligence in higher education institutions: review of innovations, opportunities and challenges", Frontiers in Education, Vol. 10, 1530247.
Oncioiu, I. and Bularca, A.R. (2025), "Artificial intelligence governance in higher education: the role of knowledge-based strategies in fostering legal awareness and ethical artificial intelligence literacy", Societies, Vol. 15 No. 6, p. 144.
O'Daffer, A., Liu, W. and Bloss, C.S. (2025), "School-based online surveillance of youth: systematic search and content analysis of surveillance company websites", Journal of Medical Internet Research, Vol. 27, e71998.
O'Neil, C. (2016), Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Crown, New York, NY.
Page, M.J., McKenzie, J.E., Bossuyt, P.M., Boutron, I., Hoffmann, T.C., Mulrow, C.D., Shamseer, L., Tetzlaff, J.M., Akl, E.A., Brennan, S.E., Chou, R., Glanville, J., Grimshaw, J.M., Hróbjartsson, A., Lalu, M.M., Li, T., Loder, E.W., Mayo-Wilson, E., McDonald, S., McGuinness, L.A., Stewart, L.A., Thomas, J., Tricco, A.C., Welch, V.A., Whiting, P. and Moher, D. (2021), "The PRISMA 2020 statement: an updated guideline for reporting systematic reviews", BMJ, Vol. 372, n71.
Pardo, A. and Siemens, G. (2014), "Ethical and privacy principles for learning analytics", British Journal of Educational Technology, Vol. 45 No. 3, pp. 438-450.
Pasquale, F.A. (2015), The Black Box Society: The Secret Algorithms that Control Money and Information, Harvard University Press, Cambridge, MA.
Peltier, J.W., Dahl, A.J. and Schibrowsky, J.A. (2024), "Artificial intelligence in interactive marketing: a conceptual framework and research agenda", Journal of Research in Interactive Marketing, Vol. 18 No. 1, pp. 54-90.
Pietsch, M. and Mah, D.-K. (2025), "Leading the AI transformation in schools: it starts with a digital mindset", Educational Technology Research and Development, Vol. 73 No. 2, pp. 1043-1069.
Pinho, I., Costa, A.P. and Pinho, C. (2025), "Generative AI governance model in educational research", Frontiers in Education, Vol. 10, 1594343.
Polat, M., Karataş, İ.H. and Varol, N. (2025), "Ethical artificial intelligence (AI) in educational leadership: literature review and bibliometric analysis", Leadership and Policy in Schools, Vol. 24 No. 1, pp. 46-76.
Richardson, J.W., Vedder, B.C., Roberts, A.B. and McLeod, S. (2025), "What's the chatter about AI and school leaders?", Leadership and Policy in Schools, Vol. 24 No. 1, pp. 103-116.
Sain, Z.H., Sain, S.H. and Serban, R. (2024), "Implementing artificial intelligence in educational management systems: a comprehensive study of opportunities and challenges", Asian Journal of Managerial Science, Vol. 13 No. 1, pp. 23-31.
Selwyn, N. (2022), Education and Technology: Key Issues and Debates, 3rd ed., Bloomsbury Academic, London.
Slade, S. and Prinsloo, P. (2013), "Learning analytics: ethical issues and dilemmas", American Behavioral Scientist, Vol. 57 No. 10, pp. 1510-1529.
Spillane, J.P. (2006), Distributed Leadership, Jossey-Bass, San Francisco, CA.
Sposato, M. (2025), "Artificial intelligence in educational leadership: a comprehensive taxonomy and future directions", International Journal of Educational Technology in Higher Education, Vol. 22 No. 1, p. 20.
Türkmen, G. (2025), "The review of studies on explainable artificial intelligence in educational research", Journal of Educational Computing Research, Vol. 63 No. 2, pp. 277-310.
Wang, Y. (2024), "Algorithmic decisions in education governance: implications and challenges", Discover Education, Vol. 3 No. 1, p. 229.
Williamson, B. (2017), Big Data in Education: The Digital Future of Learning, Policy and Practice, SAGE Publications, London.
Williamson, B. and Eynon, R. (2020), "Historical threads, missing links, and future directions in AI in education", Learning, Media and Technology, Vol. 45 No. 3, pp. 223-235.
Wu, C., Zhang, H. and Carroll, J.M. (2024), "AI governance in higher education: case studies of guidance at Big Ten universities", Future Internet, Vol. 16 No. 10, p. 354.
Xue, Y., Chinapah, V. and Zhu, C. (2025), "A comparative analysis of AI privacy concerns in higher education: news coverage in China and Western countries", Education Sciences, Vol. 15 No. 6, p. 650.
Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence are available on the Creative Commons website.
