This paper explores the underexamined human dimension of artificial intelligence in education (AIED) within open universities (OUs) in developing Asia, focusing on students’ critical AI literacy and how their insights may help shape more humanistic approaches that integrate ethical and sociopolitical concerns.
Using the method of empathy-based stories (MEBS), responses were gathered from 44 postgraduate students at an OU in developing Asia, then thematically analysed and interpreted through critical discourses on technology and education.
While most respondents demonstrate some degree of critical AI literacy, this is largely limited to a foundational level marked by curious scepticism. The study underscores the value of student input not merely as feedback but as epistemic contribution to humanistic AIED. It highlights students’ shared concern about preserving what they see as essential human qualities amid accelerating AI integration.
This paper contributes to the limited research on the human dimension of AIED in OUs in developing Asia by centring the student body as key stakeholders and advocating a more humanistic approach. It also offers a novel methodological lens through MEBS to provide fresh insight into student engagement with critical AI literacy.
1. Introduction: centring the human in artificial intelligence in education (AIED)
As conventional universities increasingly adopt educational technologies and practices once pioneered by open universities (OUs), the latter’s first-mover advantage has diminished. OUs are understandably eager to reclaim their innovative edge. Many have turned to artificial intelligence (AI), viewing it as uniquely capable of transforming education, the economy, and society. Here AI refers to “a range of technologies, from an algorithm or app to machine learning and neural networks, that perform cognitive tasks usually associated with human minds, particularly learning and problem-solving” (Baker et al., 2019).
No less enthusiastic in pursuing AI in education (AIED) are OUs in “developing Asia”, a contingent discursive construct used in this paper to describe countries within Asia undergoing economic growth and industrialisation, often marked by expanding digital infrastructure, constrained educational resources and uneven access to higher education. Despite their enthusiasm, progress has been limited, if research output is any indication. Most studies remain confined to reviews (e.g. Firat, 2023), surveys (e.g. Hidayat and Kahar, 2023; Rafiq and Ahmad, 2025), or limited proof-of-concept experiments (e.g. Subramaniam, 2023). They primarily emphasise technological implementation (i.e. “how to create AIED tools and systems”), while giving little to no attention to the human dimension (i.e. “how to ensure individuals and communities, especially those at the margins of power, are protected from AIED’s potential misapplications”).
Often used interchangeably with “social” or “ethical,” the “human” dimension of AIED is frequently overlooked or downplayed by OUs in developing Asia, as well as across higher education more broadly. The situation is gradually changing but it remains the case, as Holmes (2024) notes, that even when acknowledged, this dimension is often treated “almost as an afterthought, once ‘sexier’ topics (e.g. machine learning and large language models) have been studied” (p. 6). This tendency is similarly observed by Zawacki-Richter et al. (2019, p. 21), who, in a systematic review of AIED research, underlined a “dramatic lack of critical reflection of […] the ethical implications as well as risks of implementing AI applications in higher education.”
This paper examines the under-researched human dimension of AIED in the context of OUs in developing Asia. Advocating for humanistic AIED, it argues that these institutions, by investing in the human dimension, would be better positioned to understand AI holistically, not merely as a computational tool but also, as McQuillan (2022, p. 2) puts it, “a form of knowledge production, a paradigm for social organization, and a political project.” It further asserts that OUs in developing Asia prioritising humanistic AIED are also more likely to be able to anticipate and mitigate harms associated with AIED. This is crucial because the unintended consequences of technologies like AI tend to disproportionately affect vulnerable populations and resource-lean institutions (Mohamed et al., 2020). When attentive to the human dimension of AIED, these institutions will be better positioned to avoid costly, mission-jeopardising missteps or investments. In short, by prioritising the human dimension of AIED, OUs in developing Asia stand to develop the requisite “critical AI literacy” to align the AIED technologies they seek to deploy with broader values of care and the common good.
Two challenges, however, impede the alignment of AIED ambitions with an ethics of care, a humanistic philosophical framework emphasising empathy and responsibility. First is the question of whether the majority within these institutions – researchers, educators, and leaders operating within an engineering mindset – can be persuaded by the minority of critical AI literacy proponents to prioritise the human dimension alongside technical aspects. This poses a significant challenge, as AIED research remains largely shaped by computer science and STEM disciplines, where ethical concerns are often treated as solvable technical glitches or dismissed as uninformed critiques. The second, and arguably more difficult, challenge is translating institutional investment in the human dimension of AIED into actual policy and practice informed by critical AI literacy. These challenges are formidable, yet they may be mitigated by a key stakeholder whose input is rarely accounted for in institutional considerations of AIED: the OU student body.
Among all stakeholders, students are likely to be most directly impacted by AIED. Yet their voices are rarely solicited or given due consideration in institutional decision-making. To address this gap and to begin confronting the dual challenges outlined above, we engaged a cohort of OU students to gather their views on critical AI literacy. Three research questions guided this inquiry. First, how prevalent is critical AI literacy among OU students, understood at minimum as curious scepticism towards AI? Second, what social narratives shape their critical or uncritical orientations, and what do these narratives reveal about their perceptions, reasoning, expectations, and values? Third, how might the insights gathered serve as constructive input for humanistic AIED planning, particularly given the complex demands of online, distance, and blended learning environments that characterise OUs?
To gather these insights, we employed the qualitative method of empathy-based stories (MEBS). MEBS invites respondents to reflect on fictional but plausible scenarios, offering emotionally engaged, narrative-driven responses. These responses were analysed through textual methods and interpreted using critical discourses on technology and education to address the three guiding research questions.
The following discussion is structured as follows. We begin by unpacking the concept of critical AI literacy and contrasting it with functional AI literacy. We then explain our methodological approach before presenting findings, each subsection addressing one of the research questions. The paper concludes with a reflection on why AIED must not be reduced to a technical matter, but approached through an interdisciplinary, humanistic lens marked by both curiosity and critique.
2. Conceptualising “critical AI literacy”
In developing Asia, critical AI literacy remains largely absent from AIED discourse and is even more markedly lacking within the specific context of OUs. One contributing factor is the region’s strong optimism about AI, which often prioritises adoption over critique. According to the latest AI Index Report (Stanford Institute for Human-Centered Artificial Intelligence, 2025), countries such as China, Indonesia, and Thailand report high levels of public confidence in AI’s benefits, yet lag in ethical governance and participatory frameworks. AI education in the region similarly emphasises technical skills over reflective or humanistic engagement, leaving limited space for the development of critical AI literacy in institutional settings such as OUs (Van, 2025; Wong, 2025; Dharmaraj, 2025). By contrast, as an emerging field of inquiry, critical AI literacy is rapidly gaining traction within Western academe, with scholars across disciplines having proposed various formulations. Although there is “not yet a consistent definition of AI literacy” in the literature (Bausili and O’Hara, n.d.), a shared thread has emerged, emphasising critical engagement with AI technologies rather than mere technical proficiency.
Synthesising key contributions (e.g. Goodlad and Conrad, 2024; DeVasto and Palmer, 2024; Hauck et al., 2025), this paper contingently defines critical AI literacy as the ability to critically engage with artificial intelligence technologies by understanding their affordances and limitations, interrogating their social, ethical, and political implications, and making informed, reflective decisions about their use, particularly in education. This definition incorporates foundational technical awareness, ethical reasoning, an understanding of power and bias in algorithmic systems, and the cultivation of human agency in the face of increasingly automated tools. Crucially, it positions users (including students, educators, citizens) not as passive recipients of AI outputs but as informed actors capable of questioning and shaping the role of AI in society (Goodlad and Stoerger, 2023; Hauck et al., 2025).
Unlike functional AI literacy, which focuses on operating AI tools or understanding how they work, critical AI literacy draws on traditions of critical digital literacy to interrogate how power, bias and inequality are embedded in digital systems. The Open University UK framework, for example, situates critical AI literacy within a broader ethos of equity, diversity, inclusion and accessibility, noting its role in addressing epistemic injustices and in equipping learners to navigate AI in ways that do not reproduce existing hierarchies (Hauck et al., 2025). Goodlad and Stoerger (2023) further stress the urgency of critical AI literacy in light of AI’s documented harms, such as surveillance, labour exploitation, misinformation, and environmental degradation.
For the purposes of this study, this paper further conceptualises critical AI literacy as a continuum comprising three indicative levels: bare, basic, and advanced. Blending into one another rather than forming rigid tiers, these levels serve as a heuristic device to parse and render intelligible the student-respondents’ orientations towards AI as expressed in the collected data. At the bare minimum, critical AI literacy can be understood as a mindset marked by curious scepticism, one that “isn’t siloed into a pro or against faction” (Watkins, 2024) but instead engages AI with the same circumspection applied to any emerging, complex phenomenon. From the bare level, critical AI literacy progresses towards a basic level, characterised by an increasingly informed and active awareness of AI’s affordances and limitations. This includes developing a thoughtful awareness of how generative AI systems operate, why their outputs should be approached with caution rather than treated as authoritative, and what broader social, cognitive and environmental effects they may carry (Bausili and O’Hara, n.d.). At the advanced level, critical AI literacy involves a scholarly capacity to interrogate AI as a sociotechnical and political discourse. Here, it aligns with the criticality of educational technology studies, an interdisciplinary field concerned not only with improving edtech like AIED, but also with promoting accountability, transparency, and equity (Decuypere and Williamson, 2023; Castañeda and Williamson, 2021; Macgilchrist, 2021).
Having conceptually unpacked critical AI literacy, the next section outlines the research method used to explore OU students’ critical AI orientation.
3. Method and methodology
3.1 Method of empathy-based stories (MEBS)
This study employed MEBS to explore OU students’ critical AI orientation. MEBS is a method for data collection and interpretation grounded in the methodology of social constructionism, which holds that knowledge is not objective or fixed but shaped through subjective, culturally embedded processes (Wallin, 2022). Our understandings of the world and ourselves are expressed through narratives, or structured sequences of events shaped by personal and social context. While these stories afford individual agency, they are also shaped by dominant societal narratives (Spector-Mersel, 2010). Critically, self-narration influences action: “If we narrate ourselves as active agents, we will conduct ourselves in the ‘real world’ very differently than if we base our life stories on victimhood” (2010, p. 208).
MEBS gathers data through written responses to fictional prompts or “frame stories,” encouraging respondents to use empathy and imagination. These narratives capture their “perceptions, expectations, and values regarding a specific phenomenon” (Wallin et al., 2019, p. 3). Rather than recording objective facts, the method foregrounds how individuals construct meaning in specific social contexts.
MEBS was selected for three reasons. First, fictional scenarios situated AI-related issues within realistic contexts, helping respondents engage more reflectively and emotionally. Second, unlike personal interviews, MEBS minimised direct interaction between researcher and respondent, reducing social pressure and researcher influence. Third, framing ethical issues through fiction allowed respondents to explore ideas freely while maintaining distance from sensitive topics. MEBS narratives were generally concise and focused, making them well suited to extracting respondents’ positions. The method also helped address the over-representation of quantitative research and the corresponding under-representation of narrative methods in open, distance and digital education (ODDE) research. By introducing MEBS, the study sought to open up a post-positivist space in a largely positivist field (Lim et al., 2024).
Like any method, MEBS has limitations. The minimal interaction between researcher and respondent can be a drawback, especially when responses are brief or ambiguous. However, as Wallin (2022, p. 55) observes, even brief responses can reveal important insights. Another challenge lies in the subjective nature of the data, which may be seen as a weakness by positivist scholars, but is considered a strength within post-positivist paradigms that critique empiricism’s claims to neutrality. Moreover, relying on a single qualitative method with a modest sample of 44 respondents from one OU may constrain the empirical depth of the findings, potentially limiting their robustness and applicability across the diverse contexts of OUs in developing Asia. To address this, we are now embarking on a second phase of the study, comparing OUs in three countries, to broaden the scope and validate themes.
In summary, MEBS offered a reflective means of gathering insight into critical AI orientations amongst OU students. While not without drawbacks, the method enabled this study to elicit rich, individualised responses and promote alternative research approaches in the study of AIED and ODDE. Within this exploratory scope, MEBS remains well-suited for eliciting reflective, student-centred perspectives, with future research poised to build on these findings by triangulating with complementary methods.
3.2 Respondent selection
Given the study’s focus on adult learners and their nuanced perspectives on AIED, OU students from Open University Malaysia (OUM), all working adults, were selected as respondents for the MEBS frame stories. We focused on postgraduates, anticipating that their work and life experience would offer richer insights than those of undergraduates. Our selection was further narrowed to postgraduate students enrolled in the Master of Counselling programme. This programme’s on-site requirement, mandated by the Board of Counsellors, facilitated in-person MEBS data collection within a one-hour session, making these students ideal respondents. A total of 45 OU students provided informed consent to participate. For MEBS analysis, 15–20 responses per frame story were targeted; beyond this range, “the stories started to resemble each other” (Wallin et al., 2019).
3.3 Procedure
Guided by the three research questions of this study and standard MEBS protocols, two frame stories were developed based on a hypothetical friend’s orientation towards AI and AIED. In preparation, informed consent forms were produced. Arrangements were made to meet two postgraduate classes during a weekend lunch break to introduce the research team, explain the project and data collection process, and invite participation.
On 3 March 2024, the MEBS activity was conducted during a scheduled weekend class session. Forty-five postgraduate students were randomly divided into two groups, each assigned a different frame story. Participants received an information sheet containing a URL and QR code that linked to a Google Form. They accessed the form via personal devices, read their assigned frame story, and submitted written responses within a suggested 20-minute timeframe. All responses were then compiled into a single document and numbered for analysis.
3.4 The two MEBS frame stories
The two frame stories, designed to be concise to avoid distracting or misleading respondents, centre on a hypothetical friend’s orientation towards AI and AIED. Both stories are nearly identical, differing only in one key aspect: in Frame Story 1, Linda fully embraces AI without a hint of critical AI literacy, while in Frame Story 2, she demonstrates curious scepticism, reflecting critical AI literacy. The frame stories are reproduced below in full:
Frame Story 1
Linda is pursuing her degree in an open university at a time when artificial intelligence (AI) is becoming a hot topic and anticipated by many to potentially transform the future of teaching and learning, and even life itself.
Based on what she has learnt about AI, Linda believes that everyone should be open to AI, and even embrace AI without reservation because AI can only bring overwhelming good.
She believes that it is unproductive, and even anti-technology, to be sceptical about AI and to ask questions about the potential negative impact of AI on teaching and learning, and on life in general.
You are Linda's course mate and good friend.
What do you think of Linda's orientation towards AI? To what extent would you agree or disagree with her? Why? What honest advice would you give her if she shares her views with you?
Frame Story 2
Sara is pursuing her degree in an open university at a time when artificial intelligence (AI) is becoming a hot topic and expected by many to potentially transform the future of teaching and learning, and even life itself.
Based on what she has learnt about AI, Sara believes that everyone should be open to AI, but they should also approach AI with caution because AI can potentially bring harmful effects as well as positive effects.
She believes that it is good, essential even, to be sceptical about AI and to ask questions about the potential negative impact of AI on teaching and learning, and on life in general.
You are Sara’s course mate and good friend.
What do you think of Sara’s orientation towards AI? To what extent would you agree or disagree with her? Why? What honest advice would you give her if she shares her views with you?
3.5 Data analysis
Thematic textual analysis was used to identify patterns and themes in the MEBS responses, guided by our research questions (Särkelä and Suoranta, 2020). The process began with repeated readings to gain familiarity, followed by the generation of initial codes capturing key ideas. These were reviewed and refined, then grouped into overarching themes by identifying recurring patterns. Themes were subsequently re-evaluated to ensure they were grounded in the data and consistent with the research questions.
3.6 Researcher positionality and bias mitigation
To mitigate bias and enhance rigour, we adopted several strategies. Internal peer review among the three authors involved regular discussions, collaborative coding, and critical reflection on interpretations. This helped challenge individual assumptions and deepen our understanding of the data. We documented our analysis and included illustrative examples. Emphasising contextual sensitivity within the OU setting, we prioritised methodological transparency throughout. These steps were taken to reduce subjectivity and strengthen the trustworthiness of our findings.
4. Findings and discussion
4.1 Preliminary review: effective vs expressed stance
Following initial data screening, one response from Frame Story 1 was excluded due to incoherence, resulting in 44 analysable responses (22 per frame story). A preliminary review assessed each response’s alignment with the AI orientations presented by Linda (pro-AI) and Sara (open but questioning). This review focused on identifying the effective stance of each respondent (the overall sentiment conveyed), which was then compared to their explicitly expressed position.
Response L15 serves as an illustrative example of this discrepancy. While the respondent explicitly stated, “I disagree with Linda,” the subsequent content echoed Linda’s pro-AI views, emphasising AI’s transformative potential in education without addressing any associated concerns. This exemplifies how a respondent’s effective stance, derived from the totality of their response, may diverge from their initial expressed position.
Based on this effective alignment, responses were categorised as “Pro-Linda/Sara,” “Anti-Linda/Sara,” or “On-Fence” (neutral/ambivalent):
The data revealed a significant pattern: respondents generally rejected Linda’s unreserved endorsement of AI, while overwhelmingly favouring Sara’s open and critical perspective. This contrast in AI orientation will be further explored in the subsequent discussion.
Figure 1. Respondent alignment with Linda’s AI orientation in Frame Story 1 (donut chart with pictographs): Pro-Linda 5/22 (22.7%); Anti-Linda 8/22 (36.4%); On-Fence 9/22 (40.9%). Source: Figure by authors.
Figure 2. Respondent alignment with Sara’s AI orientation in Frame Story 2 (donut chart with pictographs): Pro-Sara 20/22 (90.9%); Anti-Sara 0/22 (0%); On-Fence 2/22 (9.1%). Source: Figure by authors.
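As a transparency aid, the stance percentages reported in this subsection can be recomputed directly from the categorised counts. The sketch below is our illustrative cross-check, not part of the study’s analysis pipeline; the category labels simply mirror those used in the text.

```python
# Recompute each stance category's share of responses (n = 22 per frame story).
# Counts are taken from the reported figures; this is an illustrative check only.
def stance_percentages(counts):
    """Return each category's percentage of the total, rounded to 1 d.p."""
    total = sum(counts.values())
    return {k: round(100 * v / total, 1) for k, v in counts.items()}

linda = {"Pro-Linda": 5, "Anti-Linda": 8, "On-Fence": 9}   # Frame Story 1
sara = {"Pro-Sara": 20, "Anti-Sara": 0, "On-Fence": 2}     # Frame Story 2

print(stance_percentages(linda))  # {'Pro-Linda': 22.7, 'Anti-Linda': 36.4, 'On-Fence': 40.9}
print(stance_percentages(sara))   # {'Pro-Sara': 90.9, 'Anti-Sara': 0.0, 'On-Fence': 9.1}
```

The recomputed shares match the percentages reported for both frame stories.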
4.2 Findings and discussion in relation to the first research question
This section addresses the first research question: How prevalent is critical AI literacy among the sampled OU students? To reiterate, this study conceptualises critical AI literacy as a three-stage continuum: bare (curious scepticism), basic (informed awareness and ethical articulation), and advanced (sociopolitical critique). This analysis examines the distribution of these levels within the student sample.
4.2.1 Prevalence of critical AI literacy
Analysis of all 44 MEBS responses revealed only bare and basic levels of critical AI literacy, with no instances of the advanced level. Overall, 79.55% (35 out of 44) demonstrated some form of critical AI literacy, dominated by bare literacy (65.9%, or 29 responses) over basic (13.6%, or 6 responses). This pattern held across both frame stories: in Frame Story 1, 54.5% showed bare and 13.6% basic literacy; in Frame Story 2, the proportions were 77.3 and 13.6% respectively (see Figures 3–5).
Figure 3. Overall prevalence of critical AI literacy (CAIL) in responses to Frame Stories 1 and 2 (Linda and Sara) (density plot with pictographs): Basic CAIL 6/44 (13.6%); Bare CAIL 29/44 (65.9%); Zero CAIL 9/44 (20.5%). Source: Figure by authors.
Figure 4. Prevalence of critical AI literacy (CAIL) in responses to Frame Story 1 (Linda) (density plot with pictographs): Basic CAIL 3/22 (13.6%); Bare CAIL 12/22 (54.5%); Zero CAIL 7/22 (31.8%). Source: Figure by authors.
The density plot has a bell-curve-like distribution showing three overlapping, colored areas representing three categories: “BASIC,” “BARE,” and “ZERO.” The vertical axis ranges from 0 to 25 in increments of 5 units. Each curve has a label at the top: “BASIC” above the peak of the left curve, “BARE” at the peak of the central curve, and “ZERO” above the right curve. The top peak values of the three bell curves are: BASIC: 3. BARE: 17. ZERO: 2. Three circular pictographs are shown below the graph. From left to right: The first pictograph on the left shows a human icon inside a circle with a small filled arc. A horizontal bar to the right of the circle shows the bar filled to 13.6 percent. Below the bar, the next line reads “BASIC CAIL,” and the last line has the text “3 over 22.” The second pictograph in the center shows a human icon inside a circle with an arc filled a little more than halfway. A horizontal bar to the right of the circle shows the bar filled to 54.5 percent. Below the bar, the next line reads “BARE CAIL,” and the last line has the text “17 over 22.” The third pictograph on the right shows a human icon inside a circle with an arc fill to about one-fourth. A horizontal bar to the right of the circle shows the bar filled to 31.8 percent. Below the bar, the next line reads “ZERO CAIL,” and the last line has the text “2 over 22.”Prevalence of critical AI literacy in responses to Frame Story 2 (Sara). Source: Figure by authors
Responses demonstrating bare critical AI literacy often expressed general scepticism, such as, “Scepticism [towards AI] is important considering that AI is still relatively new …” (Respondent S6), or used analogies like, “AI is like instant noodles – convenient and efficient – but it lacks the depth …” (Respondent S16). These responses, while sceptical, lacked the informed and detailed engagement found in basic critical AI literacy.
Responses reflecting basic critical AI literacy extended beyond mere scepticism to critique AI’s institutional implications, addressing inequalities and socio-cultural contexts. For example, Respondent L14 criticised the assumption of AI’s neutrality, advocating for ethical frameworks and scrutiny of ownership and commercialisation. Similarly, Respondent S13 raised specific concerns about data validity, intellectual property, and privacy, demonstrating a nuanced understanding of AI’s risks.
In essence, while most students exhibited some critical awareness, the prevalence of bare critical AI literacy underscores the need for institutional support to foster the more informed and critical engagement shown in basic critical AI literacy.
4.3 Findings and discussion in relation to the second research question
To address the second research question – what social narratives and student perceptions, reasoning, expectations and values are revealed by their MEBS responses? – this section explores four key themes: hedged disagreement and social desirability bias; folk wisdom and bare critical AI literacy; student ethics and agency in AIED; and a rethinking of the human in the age of AI.
4.3.1 Hedged disagreement and social desirability bias as factors behind divergent responses to AI orientations
To understand the contrasting responses to Linda’s and Sara’s opposing AI orientations, we must first examine the distribution of these responses. Notably, Frame Story 1, featuring Linda’s pro-AI stance, yielded a relatively even distribution: 22.7% (5/22) pro-Linda, 36.4% (8/22) anti-Linda, and 40.9% (9/22) on-fence (see Figure 1). Conversely, Frame Story 2, portraying Sara’s open and questioning approach, exhibited a striking consensus, with 90.9% (20/22) aligning with Sara, no respondents expressing opposition, and only 9.1% (2/22) remaining on-fence (see Figure 2).
Given the absence of significant differences between the respondent groups, this divergence is unexpected: the response patterns should, logically, have been more similar across both frame stories. However, the data reveal a clear disparity. A plausible explanation lies in the respondents' comfort level in expressing agreement versus disagreement. The overwhelming support for Sara likely reflects genuine, uninhibited agreement. In contrast, respondents inclined to disagree with Linda may have hedged their responses to avoid appearing disagreeable, accounting for the high number of on-fence positions. This reflects social desirability bias (Edwards, 1957), where agreement is more easily expressed than disagreement. The on-fence responses to Linda may therefore not indicate neutrality but rather a veiled form of disagreement. Consequently, the actual anti-Linda sentiment may be higher than initially observed, potentially reaching 77.3% (17/22) if on-fence responses are read as disguised opposition. This adjusted interpretation aligns more closely with the overall prevalence of critical AI literacy (79.55%), indicating a deeper critical engagement among respondents than initially perceived.
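The adjustment above is straightforward arithmetic; for transparency, the calculations can be reproduced from the counts in Figures 1 and 2. The following sketch (in Python, with variable names of our own choosing, not part of the study's instruments) makes them explicit:

```python
# Response distribution for Frame Story 1 (Linda), n = 22
pro, anti, on_fence = 5, 8, 9

# Observed anti-Linda share
observed_anti = anti / 22                # 8/22, approx. 36.4%

# Adjusted share if on-fence responses are read as veiled disagreement
adjusted_anti = (anti + on_fence) / 22   # 17/22, approx. 77.3%

# Overall prevalence of critical AI literacy across both frame stories:
# bare + basic responses out of all 44 (12 + 3 for Linda, 17 + 3 for Sara)
overall_cail = (12 + 3 + 17 + 3) / 44    # 35/44, approx. 79.55%

print(f"{adjusted_anti:.1%}, {overall_cail:.2%}")  # prints "77.3%, 79.55%"
```

The closeness of the adjusted anti-Linda share (77.3%) to the overall prevalence of critical AI literacy (79.55%) is what motivates the reinterpretation offered above.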
4.3.2 “Folk wisdom” as a driver of bare critical AI literacy
Beyond the influence of hedged disagreement and social desirability bias, a notable characteristic of responses demonstrating bare critical AI literacy is their reliance on “folk wisdom,” or commonsense and everyday reasoning, to assess Linda’s and Sara’s AI orientations. This contrasts sharply with responses exhibiting basic critical AI literacy, all six of which demonstrate a foundation in acquired critical knowledge of AI. In the former case, respondents frequently employed familiar aphorisms to articulate their perspectives, such as “There are always two sides,” “Everything has its pros and cons,” “Too much of anything is bad,” and “Less is more.”
Best (2021, p. 23) astutely observes that folk wisdom is by nature highly flexible: it often simplifies complex issues, can be marshalled to support virtually any argument, and may discourage critical engagement with the claims presented. This observation arguably applies to the MEBS responses in question. While the "pros and cons" approach to AI, as adopted by some respondents, may indeed hinder deeper investigation and confine understanding to a commonsense level, it is also instrumental in the manifestation of bare critical AI literacy within these responses. Without folk wisdom, many responses currently classified as demonstrating bare critical AI literacy (see Figure 3) would have to be reclassified as showing zero critical AI literacy, the least desirable outcome for any response. Therefore, despite its apparent simplicity, folk wisdom may have a role to play in the development of AI-related policies and practices within OUs.
The everyday reasoning that characterises folk wisdom also predictably shapes how OU student–respondents with bare critical AI literacy perceive the role of AI in teaching and learning. Lacking the scholarly knowledge that typically underpins basic critical AI literacy, these respondents tend to draw upon popular knowledge and personal experiences, filtered through the lens of folk wisdom, to formulate their stance on AI’s educational applications. This generally predisposes them to an openness towards the potential usefulness of AI, implicitly understood as generative AI such as ChatGPT.
4.3.3 Rethinking the “human” in the age of machines
Many respondents expressed concern about preserving what they deemed innately “human” in the age of machines. Prompted by neither frame story, the respondents’ call for the protection of what is human – qualities that AI was claimed to be unable to replicate – stemmed from a collective anxiety about what might be irretrievably lost if humanity became overdependent on AI. The composite picture that emerged from the respondents’ exploration of what it meant to be human was one imbued with irony. On the one hand, there was insistence that “AI definitely cannot compete with humans in many ways” (Respondent S8), even if it “can potentially outperform humans in certain areas” (Respondent S10). AI cannot, for instance, match “the power of the human brain” which was capable of “generating thoughts and ideas that were based on lived experience” (Respondent L10). “Even at its peak”, it was furthermore claimed, “AI would never be able to fulfil the human need for love, belonging, and emotional connection” (Respondent S6).
Yet, on the other hand, despite this claimed irreplaceable uniqueness, humanity was perceived to remain under threat as AI technologies advanced, at risk of losing what defined it, including "an intuitive sense for art, the ability to navigate complex social dynamics without technological mediation, and problem-solving skills that relied on creativity and emotional intelligence" (Respondent L16). The implicit, unanswered question arising from this ironic tension is: what ultimately constitutes human value when the boundary between organic and synthetic intelligence starts to blur?
It would be tempting to downplay or dismiss the respondents' anxiety about human vulnerability in the face of AI as excessive worry, resistance to technological change, or an over-romanticisation of human uniqueness. To do so, however, would be to overlook a legitimate concern rooted in the human dimension of AI, as set out in the opening section of this paper. Even if the respondents' concerns for the human in the age of machines arose naively from everyday reasoning, rather than from critical engagement with advanced scholarship, they remained salient, not least because what they raised coincided with an emerging area of inquiry in critical AI studies, where "rethinking the human" has become "part of a broader recalibration" (P. Prinsloo, personal communication, September 3, 2024). As Prinsloo puts it, "It is as if, in our engagement with the 'machine', we are rediscovering the wonders of human thinking, capacities for empathy and awe." For Vallor (2024), the task and responsibility of rediscovering what it is to be human is urgent, not because AI is an external threat to humanity, as commonly assumed, but because humanity is losing touch with the rich potential of the human experience as a result of taking AI as the future rather than as a mirror of its past. By our own acquiescence, "AI holds us frozen in place, fascinated by endless permutations of a reflected past that only the magic of marketing can disguise as the future" (Vallor, 2024, p. 6).
In this, Vallor echoes the prescient concerns of Respondent S17, who wrote, as we saw earlier, that “AI simply feeds us existing knowledge, preventing us from expanding our minds and truly discovering ourselves, as we are influenced by its output.”
4.4 Findings and discussion in relation to the third research question
This section discusses the third research question: How can insights from OU students on critical AI literacy inform humanistic AIED planning?
A word on generalisability before we proceed. Although this study was conducted within a single OU in developing Asia, many of the challenges and characteristics observed, such as resource limitations, distance and online learning modes, and increasing interest in and adoption of AIED, are shared by OUs across the region. Despite these commonalities, it is important to recognise the considerable diversity across developing Asia that influences AIED implementation. Countries like Malaysia, India, Thailand, Indonesia, and the Philippines present distinct educational landscapes with varying technological infrastructure, cultural attitudes towards AI, regulatory frameworks, and resource availability. These contextual variations suggest humanistic AIED approaches must be calibrated to specific regional realities rather than applied universally. The specific findings of this study may not be directly generalisable, but the conceptual framework developed, particularly concerning the importance of critical AI literacy and student engagement, nonetheless offers valuable insights for other OUs. Furthermore, the students' concern about preserving the human element in the age of AI is likely to be shared in other OUs, too. This study, therefore, serves as a crucial starting point for further research and discussion on the human dimension of AIED in these contexts. Future studies should not only explore the applicability of these findings in diverse OU settings and examine contextual variations, but also consider incorporating mixed-methods approaches to provide a more comprehensive understanding and strengthen the generalisability of findings.
4.4.1 Recognising the emergent capacity of OU students
Regardless of whether researchers, teachers, and leaders in OUs across developing Asia are themselves adopting or applying critical AI literacy to inform their research, practice, and policy-making concerning AIED, a majority of the sampled OU students are already actively acquiring and using it. This MEBS study reveals that the students are using it to reflect on and navigate the challenges and implications of AI in connection with their daily realities, largely without institutional intervention or facilitation. For them, critical AI literacy is not a distant or abstract concept but a practical and personally relevant competency they are developing independently through their individual entanglement with AI. Their responses reflect genuine engagement, even if their level of critical AI literacy is not yet advanced. These findings position students not merely as recipients of AIED but as epistemic contributors whose perspectives can meaningfully inform the development of more humanistic, context-sensitive approaches to AIED.
4.4.2 Deepening institutional engagement with the OU student body and other internal stakeholders
Given the existing levels of critical AI literacy among the OU students sampled in our MEBS study, OUs in developing Asia now face a crucial choice: maintain the status quo or actively engage their students for meaningful insights. By choosing engagement, OUs can gain a deeper understanding and appreciation of their students’ evolving views on AI, positioning themselves to more effectively support the continued advancement of their critical AI literacy. Collaboration can help safeguard the human dimension in AI integration. This partnership could promote the creation of institutional AI frameworks that prioritise ethical considerations, human connection, and equitable access to AI-enhanced education.
If OUs in developing Asia choose to engage their student body in these and similar ways, they should also work simultaneously to ensure that their researchers, teachers, and leaders develop critical AI literacy as well. The goal should be for this group of stakeholders to not only match their students’ growing competency in critical AI literacy but to surpass it by achieving the advanced level previously outlined, enabling them to provide informed, forward-thinking guidance and leadership. As mentioned earlier, this will be a formidable challenge, given the dominance of AIED research by computer science and STEM researchers and the general inclination of technology proponents to reduce AI to a purely technical matter and frame potential misapplications of AIED as mere technical problems to be resolved.
4.4.3 Drawing on a readily available academic resource
Amidst the challenge, a constructive way forward lies in drawing on a resource that is readily available but often underutilised or even overlooked in most, if not all, OUs in developing Asia: internal academics from the humanities and social sciences, such as English, psychology, history, political science, and sociology. Their disciplinary training equips them with a firm grounding to critically engage with the ethical, social, and political dimensions of AI, complementing the more technical approaches. Precisely because the field of ODDE is not always accommodative of discursive scholarship that diverges from the empirical sciences, OUs in developing Asia seeking to advance humanistic AIED should do more than offer verbal support. They should also actively encourage and fund contributions from these academics through sustainable, institutionally embedded schemes.
4.4.4 Establishing an interdisciplinary centre for humanistic AIED
OUs in developing Asia may also consider establishing a dedicated interdisciplinary centre within their institution to promote humanistic AIED. One such example is OUM’s recently established Centre for Digital Education Futures (CENDEF). This centre could function as an intellectual hub for local and international scholars to collaborate on humanistic ODDE and AI in ODDE projects. Centre initiatives would be critically cognisant of the vulnerability of developing countries to AI colonialism and the unique constraints faced by leanly-resourced Asian OUs. These constraints include avoiding costly, ineffective technological investments and maintaining the imperative of breaking the iron triangle of access, cost, and quality in widening higher education access.
Such a centre would also be valuable in organising dialogue sessions between internal and external computing experts, critical scholars of ODDE and AI in ODDE, OU leaders, the student body, and other stakeholders. These sessions would facilitate a deeper understanding of each other’s nuanced perspectives, aiming to achieve consensus and align the OU’s AI aspirations with an ethics of care.
4.4.5 Closing notes: suggestions, not prescriptions
Having outlined several possible directions for humanistic AIED, it is important to pause and clarify how these suggestions should be interpreted and used. The suggestions offered are neither exhaustive nor intended for direct adoption by all OUs in the region; they serve rather as ideational launchpads for further exploration and adaptation to individual OU circumstances and needs. While these findings have limitations and may not apply universally, they nonetheless provide sufficient grounds to encourage OUs in developing Asia to adopt a more critical and reflective approach to their AI aspirations, and to temper those aspirations, if necessary.
5. Conclusion
This paper began with the premise that the human dimension is largely neglected in AIED research and practice, and that OUs in developing Asia, although leanly-resourced, are in fact well-positioned to fill this gap and benefit significantly from it. It argued that the foregoing is especially true given that AIED is a generational investment, and that insights gained from investing in humanistic AIED can help these institutions anticipate and mitigate potential adverse consequences. Another term for what OUs in developing Asia stand to acquire from investing in the human dimension of AIED is the requisite “critical AI literacy” to align the AIED technologies they seek to deploy with broader values of care and the common good.
Having thus cleared the ground, the paper argued that, although the engineering mindset dominates AIED and makes aligning AIED with humanistic values a formidable task, this challenge can potentially be mitigated by engaging the OU student body. It posited that engagement with and input from OU students as key stakeholders may help persuade the OUs to orientate towards a more humancentric approach to AIED. The paper then detailed an experiment using MEBS to solicit student insights on critical AI literacy in response to the three guiding research questions.
Amidst the “intense hyperbole and exaggerated claims” (Selwyn, 2022, p. 621) being made about AI and oversold to education audiences, a key lesson from this MEBS study is that, while there is much at stake and much that may be perilous in the OUs’ pursuit of AIED, there is also much that the minority proponents of critical AI literacy can do to nudge the AIED trajectory of their respective OUs into closer alignment with undervalued humanistic values. And so, with this paper, we hope to plant interventionist seeds among our OU peers across developing Asia: those who have felt gatekept from AIED conversations, laboured in the shadow of AI technicists pursuing “what is technically possible (rather than what is socially desirable)” (Selwyn, 2022, p. 625), and become increasingly concerned about AIED being promoted as a panacea for educational shortcomings rather than soberly apprehended as “a site of competing values, interests, agenda, and ideologies” (Selwyn, 2022, p. 625).

