Purpose

Academic cheating, particularly in unsupervised digital environments, is a persistent challenge. This study examines the effectiveness of interdisciplinary and transdisciplinary teaching approaches combined with peer assessment in mitigating academic dishonesty.

Design/methodology/approach

Over one semester, an experimental group (N = 111) applied the proposed methods, while a control group (N = 55) followed traditional practices. Peer assessment was integrated after midterm assignments to evaluate dishonesty levels and work replicability using a reliable scale. Surveys compared dishonest behaviors between the two groups.

Findings

ANOVA results reveal that (1) academic dishonesty was significantly lower in the experimental group; (2) peer assessment showed that experimental group students demonstrated greater creativity, synthesis, and citation accuracy; and (3) statistical analysis confirmed significant differences between the groups (Sig. < 0.05).

Practical implications

This study offers valuable insights for educators and higher education institutions to enhance academic integrity in digital learning through innovative teaching and assessment strategies, supporting policy improvements in digital education.

Social implications

A significant implication is the need to innovate teaching and assessment practices by integrating interdisciplinary and transdisciplinary teaching and peer assessment solutions.

Originality/value

This research presents a novel experimental approach to reducing academic dishonesty in unsupervised digital environments, offering practical, resource-efficient solutions that promote transparency, self-monitoring and mutual accountability.

Academic cheating has persisted globally, evolving into increasingly sophisticated forms despite efforts by researchers and educators. It remains a challenge across all education levels, from elementary to postgraduate studies.

In the digital age, while technology enhances education, it simultaneously facilitates academic dishonesty (Turner and Uludag, 2013). Cheating in higher education is prevalent worldwide (Tierney and Sabharwal, 2017) and can be categorized into two types: individual and group cheating. Individual cheating includes using unauthorized materials, accessing devices, or viewing answers prior to exams (Holden et al., 2020). Group cheating involves collaboration, such as impersonation, remote computer control, or technological communication (Holden et al., 2020; Vegendla and Sindre, 2019). Impersonation may take forms like hiring someone to take an exam, voice modulation, facial impersonation, or remote control (Vegendla and Sindre, 2019).

To address academic dishonesty, educators have implemented solutions grouped into three categories. The first focuses on increasing penalties for cheating and reducing motivation through better regulations and learning environments (Srikanth and Asmatulu, 2014; Moten et al., 2013). Enhancing instruction quality to deepen students’ understanding also limits cheating (Amigud). The second employs technological tools, such as biometric authentication and online proctoring, to prevent cheating during exams (Vegendla and Sindre, 2019). The third emphasizes innovative assessment designs, such as inference-based and open-ended questions, as well as peer assessment, particularly effective in MOOCs where supervision is minimal (Nguyen et al., 2020; Hoang et al., 2020, 2022).

Despite these measures, academic cheating continues to evolve alongside advancements in digital education technology. Promoting transparency, fairness, accountability, and integrity in students' learning activities remains critical.

This study introduces a novel approach integrating peer, interdisciplinary, and transdisciplinary assessments to combat academic dishonesty in digital environments. As the first to experiment with these three approaches in combination, the study addresses the following research questions:

RQ1.

Can interdisciplinary, transdisciplinary, and peer assessments effectively reduce academic cheating?

RQ2.

What recommendations can be drawn for educators and educational practices?

Experimental results reveal promising outcomes, offering insights into mitigating academic cheating in the digital education landscape. The findings present an innovative assessment framework that contributes to reducing cheating while providing practical recommendations for improving teaching and assessment practices.

Interdisciplinary (ID) and transdisciplinary (TD) approaches are central to fostering collaboration across multiple academic fields (Stock and Burton, 2011; Nicolescu, 2012). ID focuses on integrating knowledge from different disciplines while respecting the principles of each, addressing shared issues (Oliveira et al., 2019; Tolk et al., 2021). In contrast, TD goes beyond these boundaries, creating innovative frameworks that synthesize knowledge to tackle complex, often societal, challenges (Klein, 2018). While some research emphasizes ID’s integration of knowledge (Collin, 2009), others highlight the collaborative nature and shared goals of TD (Cribb, 2017), particularly its focus on societal relevance and solving critical issues (Pohl, 2011). The study considers these two approaches as complementary in educational settings. At the university level, teaching using ID and TD often involves collaborative tasks, group discussions, and gathering student outputs, utilizing tools like Google Meet or Padlet. The assessment practices incorporate peer evaluations, self-assessments, and teacher evaluations, aiming to foster integrated learning outcomes through these interdisciplinary approaches.

Academic cheating refers to actions that intentionally violate academic integrity, with behaviors such as plagiarism, copying, and falsifying work (Chiang et al., 2022). Academic integrity, the ethical framework guiding behavior in educational contexts, is critical to maintaining educational standards and credibility (Tauginienė et al., 2019). Cheating undermines education quality, impedes skill development, and hinders the achievement of educational goals (Mulisa and Ebessa, 2021). Furthermore, research has linked academic dishonesty to unethical behaviors later in life, including in the workplace (Krou et al., 2021).

A significant cause of academic dishonesty is the perceived greater benefit of cheating compared to the risks involved during assessments (Lancaster and Clarke, 2017). Cheating can stem from various factors: teacher-related issues (Maeda, 2019), institutional policies that are lenient or inadequately enforced (Moten et al., 2013), and internal motivations such as personal traits and competition (Moten et al., 2013). In a study by Choi (2019), 28 forms of cheating were identified, with copying assignments and plagiarism being the most prevalent. Digital plagiarism has become an increasing concern, especially as students view it as an acceptable shortcut, even when they acknowledge the importance of maintaining academic integrity (Blau and Eshet-Alkalai, 2017). The emergence of AI tools like ChatGPT has further complicated this issue, as students can now effortlessly generate assignment content (Oravec, 2023). Factors influencing cheating also include perceptions of peer behavior and individual traits, which can significantly impact academic integrity (Hendy et al., 2021).

To combat academic dishonesty in the digital age, researchers and educators have explored various prevention strategies. Researchers emphasize the increasing reliance on AI technologies to monitor student behavior and deter cheating. Meanwhile, Heriyati and Ekasari (2020) argue that while ethical theories are important, they are insufficient as regulatory tools. They suggest creating environments that discourage dishonesty and limit opportunities for cheating. Raising awareness about academic integrity has been suggested as a sustainable approach to preventing cheating, as it promotes ethical judgment and fairness among students (Prashar; Yazici et al., 2011). Research has shown that facilitating student discussions about academic integrity can reduce dishonest behaviors.

Technological solutions have also been proposed. For example, blockchain technology, online proctoring, and plagiarism detection systems are gaining traction for their potential to curb cheating (Souza-Daw and Ross, 2021). However, these methods are generally more focused on deterrence rather than fostering intrinsic qualities like curiosity and ethical engagement in learning. Techniques such as IP address tracking, browser tab locking, and biometric identification have also been implemented to monitor cheating in real time (Moten et al., 2013).

Despite the focus on punitive measures, studies have shown that early intervention in traditional classroom settings can prevent academic dishonesty (Hendy et al., 2021). Students are often deterred from cheating to avoid severe penalties (Miller et al., 2011), and stricter penalties have been shown to improve attitudes towards academic integrity (Chirikov). However, punitive approaches are typically effective only in the short term, necessitating broader educational interventions to reshape perceptions about cheating (Blau and Eshet-Alkalai, 2017).

Peer assessment, as defined by Hoang et al. (2022), involves students evaluating each other’s work and is an important strategy in assessment for learning. It fosters collaborative learning and improves outcomes. Morris et al. (2023) emphasized that peer feedback, particularly in seminar series, enhances self-directed learning for pre-service teachers. However, these teachers may need specific training to provide effective feedback. Faulconer and Griffith (2025) found a link between learner interaction, discussion records, and better learning outcomes, which informs online course design. Additionally, Attiogbe et al. (2023) showed that integrating feedback mechanisms into online courses boosts learning results. Tongtummachat et al. (2024) also demonstrated that formative and summative assessments significantly improve teaching and learning quality.

These studies underline the importance of peer assessment in enhancing education. However, there is a research gap in integrating instructional solutions into formative assessments to reduce academic dishonesty, improve transparency, and raise training quality. The increasing prevalence of AI tools, such as ChatGPT, has introduced new challenges for academic integrity, particularly in unsupervised learning environments. Real-world, interdisciplinary, and transdisciplinary tasks can foster creativity and broader thinking among students, providing a potential solution to academic dishonesty. Early research shows that combining ID, TD, and peer assessment strategies can effectively reduce academic cheating, particularly in digital contexts. Preliminary findings suggest that these integrated approaches positively impact teaching, learning, and assessment practices in contemporary education.

This study employed an empirical research method, including a survey and quantitative evaluation conducted after the experimental process. The focus was on evaluating students with similar backgrounds, learning conditions, and entry-level knowledge to analyze changes in their attitudes and performances when using the proposed assessment approach with identical learning content.

The sample consisted of 166 second-year students majoring in elementary education and elementary technology education, divided into three classes. Two classes (111 students, average age 19.47) formed the experimental group, while one class (55 students, average age 19.53) served as the control group, as described in Table 1. The students in all classes had consistent academic quality based on entrance exam results.

Table 1

Demographic profile of the students

Demographic variables | Control group (N = 55) | Experimental group (N = 111)
Gender: Male | 7 | 12
Gender: Female | 48 | 99
Mean age | 19.53 | 19.47
First-year study result (mean score) | 3.45 | 3.44

Source(s): The authors

The primary education discipline was selected for its interdisciplinary nature, integrating mathematics, physics, literature, history, geography, natural sciences, and social sciences, making it suitable for the experimental design. Students were chosen from the same discipline, academic year, and institution to ensure similar academic levels and backgrounds, resulting in a maximum sample size of 166.

Both groups participated in a 15-week course on the Foundations of Natural Sciences in elementary education (July–November 2023). This course was chosen for its interdisciplinary content, including topics such as human health, plants and animals, fungi, bacteria, substances, energy, organisms, the environment, and populations, providing a rich foundation for teaching and assessment activities through interdisciplinary, transdisciplinary, and peer assessment approaches.

This study followed the experimental approach established by Hoang et al. (2022) to design and implement the assessment process, which consisted of the five stages shown in Figure 1.

  • (1)

    Designing and Teaching with Interdisciplinary (ID) and Transdisciplinary (TD) Approaches

Figure 1

Experimental process


Teachers developed teaching content based on ID and TD methodologies, synthesizing information from various disciplines and structuring lessons to align with the subject matter.

  • (2)

    Organizing group assignments

Students collaborated on group assignments incorporating ID and TD approaches in an unsupervised setting, using diverse information sources. Groups evaluated peer work to encourage collaborative learning.

  • (3)

    Organizing midterm individual assignments

Teachers assigned ID- and TD-based individual tasks to promote independent work and facilitate both self-assessments and peer evaluations. Teachers also participated in evaluating these assignments and peer assessment results.

  • (3.1)

    Control Group (CG): Students engaged in traditional teaching and assessment methods, including multiple-choice, essays, practical tasks, and projects, from weeks 1–15. Group assignments were conducted in the 4th week, followed by a midterm individual essay assignment in the 8th week. The course grade was calculated as:

  • (3.2)

    Experimental Group (EG): Students participated in ID- and TD-oriented teaching and assessment activities from weeks 1–15. In the 4th week, students completed a group assignment (choosing 1 out of 12 interdisciplinary tasks). In the 8th week, they completed a midterm individual essay (choosing 1 out of 6 tasks) with integrated ID and TD approaches. The course grade was calculated as:

At the semester’s end, all students took a final exam with a weighting factor of 50%, which is outside this study’s scope.

  • (4)

    Peer assessment process

Following the 8th-week midterm essays, both groups underwent a week-long peer assessment as part of formative evaluation. Students critically reviewed their peers’ essays, providing feedback and identifying academic dishonesty, while using external resources to improve evaluation quality.

Using the peer assessment method by Hoang et al. (2022, 2020), each student anonymously assessed three peers' essays. This process fostered self-reflection and enhanced academic integrity awareness. For the experimental group, instructors reviewed peer assessments to assign reward or penalty scores factored into students’ grades. The control group, however, was unaware of the peer assessment process beforehand, leading to differing attitudes toward evaluation.

  • (5)

    Survey Stage

In the 15th week, before the final exam, students completed a Google Forms survey. The survey collected data on their self-awareness of learning tasks, assessment methods, and peer assessment experiences, particularly regarding academic integrity. To ensure objectivity, terms explicitly linked to academic misconduct were avoided, encouraging honest and unbiased feedback.

This research highlights the integration of ID and TD approaches in teaching and assessment to promote academic integrity and minimize dishonesty through collaboration and reflection.

To evaluate the proposed solution, a 5-point Likert scale with 19 criteria was developed, based on academic misconduct behaviors from McClung and Schneider (2015) and Tauginienė et al. (2019). The criteria, shown in Table 2, were divided into two groups: C1.X (X = 1–11) assessed perceptions of assessment tasks designed with ID and TD approaches, while C2.Y (Y = 1–8) evaluated classmates’ assignments after peer assessment.

Table 2

Scale for evaluating students’ perceptions of teaching methods, assessment, and academic dishonesty (Very Low, Low, Moderate, High, Very High)

Code | Criterion
Subscale 1
C1.2 | Evaluate the level of information copying for the assignments
C1.3 | Evaluate the level of using AI (Chatbox, ChatGPT, etc.) to complete the assignments
C1.5 | Evaluate the level of copying materials without citation to complete the assignments
C1.6 | Evaluate the ability to copy materials without the need for much critical thinking to complete the assignments
C1.7 | Evaluate the level of rote memorization to complete the assignments
C1.8 | Evaluate the level of readily available answers on the internet, AI (Chatbox, ChatGPT, etc.), and other sources to complete the assignments
C1.9 | Evaluate the level of support provided by using keywords in Google to complete the assignments
C2.2 | Evaluate the level of copying behavior among the peers that you have assessed
C2.7 | Evaluate the extent to which your peers may rely on AI tools to complete the assignments
Subscale 2
C1.1 | Evaluate the level of understanding of oneself in academic integrity/ethics
C1.4 | Evaluate the level of synthesizing information from various sources to complete the assignments
C1.10 | Evaluate the ability to use critical thinking skills (analysis, comparison, generalization, abstraction, etc.) to complete the assignments
C1.11 | Evaluate the level of creativity of oneself and your peers when completing assignments
C2.1 | Evaluate the reliability of the data in the assignments of the peers that you have assessed
C2.3 | Evaluate the level of copying with proper source citation of the peers that you have assessed
C2.4 | Evaluate the accuracy of the content copied by the peers that you have assessed
C2.5 | Evaluate the ability to analyze and synthesize in order to complete assignments of the peers that you have assessed
C2.6 | Evaluate the creativity in completing assignments of the peers that you have assessed
C2.8 | Evaluate the ability to integrate multiple disciplines to solve assigned problems of the peers that you have assessed

Source(s): The authors

To enhance reliability, the criteria were randomly arranged with varying value orientations to prevent careless responses. The scale was split into two subscales for analysis: in Subscale 1, higher values indicate lower impact; in Subscale 2, higher values reflect greater impact, ensuring consistency in results (Table 2).

Reliability testing using IBM SPSS Statistics 26, with data from 166 students, confirmed high reliability. Cronbach’s alpha scores were 0.872 for Subscale 1 (Table 3) and 0.893 for Subscale 2 (Table 4), with all items meeting reliability conditions (Tables 5 and 6).

Table 3

Reliability statistics of subscale 1

Cronbach’s alpha | N of items
0.872 | 9

Source(s): The authors

Table 4

Reliability statistics of subscale 2

Cronbach’s alpha | N of items
0.893 | 10

Source(s): The authors

Table 5

Item-total statistics of subscale 1

Item | Scale mean if item deleted | Scale variance if item deleted | Corrected item-total correlation | Cronbach’s alpha if item deleted
C1.2 | 21.25 | 34.384 | 0.550 | 0.863
C1.3 | 21.28 | 33.147 | 0.593 | 0.859
C1.5 | 21.52 | 31.742 | 0.664 | 0.853
C1.6 | 21.76 | 31.699 | 0.735 | 0.846
C1.7 | 21.07 | 33.299 | 0.646 | 0.855
C1.8 | 21.25 | 32.030 | 0.739 | 0.846
C1.9 | 20.92 | 34.393 | 0.525 | 0.865
C2.2 | 20.90 | 34.245 | 0.623 | 0.857
C2.7 | 20.95 | 35.530 | 0.414 | 0.875

Source(s): The authors

Table 6

Item-total statistics of subscale 2

Item | Scale mean if item deleted | Scale variance if item deleted | Corrected item-total correlation | Cronbach’s alpha if item deleted
C1.1 | 27.88 | 35.767 | 0.590 | 0.887
C1.4 | 28.06 | 36.808 | 0.598 | 0.885
C1.10 | 28.22 | 33.629 | 0.746 | 0.875
C1.11 | 28.29 | 34.280 | 0.749 | 0.874
C2.1 | 28.23 | 37.002 | 0.647 | 0.882
C2.3 | 28.34 | 38.419 | 0.493 | 0.892
C2.4 | 28.32 | 37.261 | 0.606 | 0.885
C2.5 | 28.03 | 37.932 | 0.654 | 0.883
C2.6 | 28.27 | 36.257 | 0.653 | 0.881
C2.8 | 28.30 | 37.567 | 0.647 | 0.883

Source(s): The authors
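The Cronbach’s alpha values in Tables 3–6 were produced with SPSS, but the statistic is simple enough to verify independently. Below is a minimal Python sketch (NumPy only); the simulated Likert responses are illustrative stand-ins, not the study’s survey data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total score))
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Illustrative only: 166 simulated respondents on 9 correlated 5-point items,
# mimicking the shape of Subscale 1 (not the actual responses).
rng = np.random.default_rng(0)
trait = rng.normal(3, 1, size=(166, 1))                        # shared tendency
scores = np.clip(np.rint(trait + rng.normal(0, 0.8, size=(166, 9))), 1, 5)
print(f"alpha = {cronbach_alpha(scores):.3f}")
```

An item-level diagnostic like the "alpha if item deleted" column in Tables 5 and 6 follows by recomputing alpha with each column removed in turn.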

Exploratory Factor Analysis (EFA) validated the scale structure. The KMO value of 0.902 and significant Bartlett’s test (p < 0.05) supported the data’s suitability for analysis (Table 7). These results confirm the scale’s reliability and applicability to the study.

Table 7

KMO and Bartlett’s test

Kaiser-Meyer-Olkin measure of sampling adequacy | 0.902
Bartlett’s test of sphericity: Approx. chi-square | 1606.573
Bartlett’s test of sphericity: df | 171
Bartlett’s test of sphericity: Sig. | 0.000

Source(s): The authors
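The KMO measure and Bartlett’s test of sphericity in Table 7 follow standard closed-form computations, sketched below in Python (NumPy/SciPy). The simulated one-factor data are an assumption for illustration only; the study’s values (KMO = 0.902, chi-square = 1606.573, df = 171) came from SPSS on the real 19-item responses:

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(data: np.ndarray):
    """Bartlett's test of sphericity: H0 is that the correlation matrix is identity."""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return chi2, df, stats.chi2.sf(chi2, df)

def kmo(data: np.ndarray) -> float:
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    R = np.corrcoef(data, rowvar=False)
    inv_R = np.linalg.inv(R)
    scale = np.sqrt(np.outer(np.diag(inv_R), np.diag(inv_R)))
    partial = -inv_R / scale                      # anti-image (partial) correlations
    off = ~np.eye(R.shape[0], dtype=bool)
    r2 = (R[off] ** 2).sum()
    p2 = (partial[off] ** 2).sum()
    return r2 / (r2 + p2)

# Illustrative one-factor data: 166 cases, 6 indicators sharing a latent trait.
rng = np.random.default_rng(1)
latent = rng.normal(size=(166, 1))
data = latent + 0.6 * rng.normal(size=(166, 6))
chi2, df, p = bartlett_sphericity(data)
print(f"KMO = {kmo(data):.3f}, chi2 = {chi2:.1f}, df = {df}, p = {p:.3g}")
```

Note that df = p(p − 1)/2, which for the study’s 19 items gives 19 × 18 / 2 = 171, matching Table 7.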

Addressing ethical issues in AI use within education—particularly transparency, fairness, and accountability in digital learning—is both meaningful and timely. The findings demonstrate that the proposed solution provides clear advantages over traditional teaching and assessment practices.

To evaluate effectiveness, a one-way ANOVA compared the CG and EG across various criteria. Variance homogeneity was tested using the Levene statistic (Table 8). If Sig. < 0.05, indicating unequal variances, the Welch test (Table 9) was applied; otherwise, the F test (Table 10) was used.

Table 8

Test of homogeneity of variances (C1.1)

Levene statistic | df1 | df2 | Sig.
6.535 | 1 | 164 | 0.011

Source(s): The authors

Table 9

Robust tests of equality of means (C1.1)

Test | Statistic | df1 | df2 | Sig.
Welch | 28.567 | 1 | 128.219 | 0.000

Source(s): The authors

Table 10

ANOVA test (C1.1)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 25.324 | 1 | 25.324 | 25.066 | 0.000
Within groups | 165.688 | 164 | 1.010
Total | 191.012 | 165

Source(s): The authors
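The decision rule described above (Levene’s test first, then the Welch test if variances are unequal, otherwise the classic F test) can be sketched in Python with SciPy. With only two groups, Welch’s ANOVA reduces to Welch’s t-test (F = t²). The simulated 5-point scores below are hypothetical stand-ins matching the group sizes, not the study’s data; note also that SPSS’s Levene test centers on the mean, whereas SciPy defaults to the median:

```python
import numpy as np
from scipy import stats

def compare_groups(cg: np.ndarray, eg: np.ndarray, alpha: float = 0.05) -> dict:
    """Levene's test first; Welch if variances differ, classic one-way ANOVA otherwise."""
    lev_stat, lev_p = stats.levene(cg, eg, center="mean")  # SPSS-style (mean-centered)
    if lev_p < alpha:
        t, p = stats.ttest_ind(cg, eg, equal_var=False)    # Welch's t-test
        return {"test": "Welch", "stat": t ** 2, "p": p, "levene_p": lev_p}
    f, p = stats.f_oneway(cg, eg)
    return {"test": "F", "stat": f, "p": p, "levene_p": lev_p}

# Illustrative only: simulated 5-point responses with the study's group sizes.
rng = np.random.default_rng(2)
cg = np.clip(np.rint(rng.normal(2.9, 0.9, 55)), 1, 5)    # control group, N = 55
eg = np.clip(np.rint(rng.normal(3.7, 1.1, 111)), 1, 5)   # experimental group, N = 111
result = compare_groups(cg, eg)
print(result)
```

For two groups the equal-variance F test and the pooled t-test are also equivalent (F = t²), which makes the branch choice a matter of variance homogeneity rather than of test family.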

For criterion C1.1, the Levene test showed Sig. = 0.011 < 0.05, so the Welch test was applied (Table 9), yielding Sig. = 0.000 < 0.05. This indicates a significant improvement in the EG’s understanding of academic integrity and ethics compared to the CG. Similarly, for criterion C1.2, with Sig. = 0.490 > 0.05 in the Levene test (Table 11), the F test was used (Table 12), resulting in Sig. = 0.000 < 0.05. This demonstrates a significant reduction in information copying among the EG.

Table 11

Test of homogeneity of variances (C1.2)

Levene statistic | df1 | df2 | Sig.
0.479 | 1 | 164 | 0.490

Source(s): The authors

Table 12

ANOVA test (C1.2)

Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 16.369 | 1 | 16.369 | 19.289 | 0.000
Within groups | 139.179 | 164 | 0.849
Total | 155.548 | 165

Source(s): The authors

The results for all 19 items are summarized in Table 13, including the mean, standard deviation, and relevant statistics for both the CG and EG. A homogeneity test (Table 14) identified five items (C1.1, C1.3, C1.4, C1.6, C2.8) with Sig. < 0.05, indicating unequal variances. For these items, the Welch test (Table 15) was applied, and all showed significant differences between CG and EG (Sig. < 0.05).

Table 13

Descriptives

Item | Group | N | Mean | Std. deviation | Std. error | 95% CI lower | 95% CI upper | Min | Max
C1.1 | CG | 55 | 2.89 | 0.875 | 0.118 | 2.65 | 3.13 | 1 | 5
C1.1 | EG | 111 | 3.72 | 1.063 | 0.101 | 3.52 | 3.92 | 1 | 5
C1.1 | Total | 166 | 3.45 | 1.076 | 0.084 | 3.28 | 3.61 | 1 | 5
C1.2 | CG | 55 | 3.05 | 1.008 | 0.136 | 2.78 | 3.33 | 1 | 5
C1.2 | EG | 111 | 2.39 | 0.876 | 0.083 | 2.22 | 2.55 | 1 | 5
C1.2 | Total | 166 | 2.61 | 0.971 | 0.075 | 2.46 | 2.76 | 1 | 5
C1.3 | CG | 55 | 3.02 | 1.009 | 0.136 | 2.75 | 3.29 | 1 | 5
C1.3 | EG | 111 | 2.37 | 1.035 | 0.098 | 2.17 | 2.56 | 1 | 5
C1.3 | Total | 166 | 2.58 | 1.068 | 0.083 | 2.42 | 2.75 | 1 | 5
C1.4 | CG | 55 | 3.00 | 0.839 | 0.113 | 2.77 | 3.23 | 2 | 5
C1.4 | EG | 111 | 3.40 | 0.966 | 0.092 | 3.21 | 3.58 | 1 | 5
C1.4 | Total | 166 | 3.27 | 0.942 | 0.073 | 3.12 | 3.41 | 1 | 5
C1.5 | CG | 55 | 3.05 | 1.096 | 0.148 | 2.76 | 3.35 | 1 | 5
C1.5 | EG | 111 | 1.98 | 0.991 | 0.094 | 1.80 | 2.17 | 1 | 4
C1.5 | Total | 166 | 2.34 | 1.142 | 0.089 | 2.16 | 2.51 | 1 | 5
C1.6 | CG | 55 | 2.87 | 1.139 | 0.154 | 2.56 | 3.18 | 1 | 5
C1.6 | EG | 111 | 1.72 | 0.777 | 0.074 | 1.57 | 1.87 | 1 | 4
C1.6 | Total | 166 | 2.10 | 1.060 | 0.082 | 1.94 | 2.26 | 1 | 5
C1.7 | CG | 55 | 3.60 | 0.830 | 0.112 | 3.38 | 3.82 | 1 | 5
C1.7 | EG | 111 | 2.40 | 0.789 | 0.075 | 2.25 | 2.54 | 1 | 5
C1.7 | Total | 166 | 2.80 | 0.982 | 0.076 | 2.64 | 2.95 | 1 | 5
C1.8 | CG | 55 | 3.27 | 0.891 | 0.120 | 3.03 | 3.51 | 1 | 5
C1.8 | EG | 111 | 2.29 | 0.918 | 0.087 | 2.12 | 2.46 | 1 | 5
C1.8 | Total | 166 | 2.61 | 1.019 | 0.079 | 2.46 | 2.77 | 1 | 5
C1.9 | CG | 55 | 3.40 | 0.807 | 0.109 | 3.18 | 3.62 | 1 | 5
C1.9 | EG | 111 | 2.72 | 1.020 | 0.097 | 2.53 | 2.91 | 1 | 5
C1.9 | Total | 166 | 2.95 | 1.005 | 0.078 | 2.79 | 3.10 | 1 | 5
C1.10 | CG | 55 | 2.71 | 1.165 | 0.157 | 2.39 | 3.02 | 1 | 5
C1.10 | EG | 111 | 3.30 | 1.041 | 0.099 | 3.10 | 3.49 | 1 | 5
C1.10 | Total | 166 | 3.10 | 1.115 | 0.087 | 2.93 | 3.27 | 1 | 5
C1.11 | CG | 55 | 2.71 | 0.956 | 0.129 | 2.45 | 2.97 | 1 | 5
C1.11 | EG | 111 | 3.20 | 1.052 | 0.100 | 3.00 | 3.40 | 1 | 5
C1.11 | Total | 166 | 3.04 | 1.044 | 0.081 | 2.88 | 3.20 | 1 | 5
C2.1 | CG | 55 | 2.80 | 0.951 | 0.128 | 2.54 | 3.06 | 1 | 5
C2.1 | EG | 111 | 3.24 | 0.777 | 0.074 | 3.10 | 3.39 | 1 | 5
C2.1 | Total | 166 | 3.10 | 0.861 | 0.067 | 2.96 | 3.23 | 1 | 5
C2.2 | CG | 55 | 3.49 | 0.717 | 0.097 | 3.30 | 3.68 | 2 | 5
C2.2 | EG | 111 | 2.69 | 0.861 | 0.082 | 2.53 | 2.86 | 1 | 5
C2.2 | Total | 166 | 2.96 | 0.897 | 0.070 | 2.82 | 3.10 | 1 | 5
C2.3 | CG | 55 | 2.65 | 0.799 | 0.108 | 2.44 | 2.87 | 1 | 5
C2.3 | EG | 111 | 3.15 | 0.865 | 0.082 | 2.99 | 3.32 | 1 | 5
C2.3 | Total | 166 | 2.99 | 0.874 | 0.068 | 2.85 | 3.12 | 1 | 5
C2.4 | CG | 55 | 2.58 | 0.896 | 0.121 | 2.34 | 2.82 | 1 | 5
C2.4 | EG | 111 | 3.22 | 0.791 | 0.075 | 3.07 | 3.36 | 1 | 5
C2.4 | Total | 166 | 3.01 | 0.877 | 0.068 | 2.87 | 3.14 | 1 | 5
C2.5 | CG | 55 | 2.96 | 0.744 | 0.100 | 2.76 | 3.16 | 2 | 5
C2.5 | EG | 111 | 3.46 | 0.698 | 0.066 | 3.33 | 3.59 | 2 | 5
C2.5 | Total | 166 | 3.30 | 0.749 | 0.058 | 3.18 | 3.41 | 2 | 5
C2.6 | CG | 55 | 2.69 | 1.016 | 0.137 | 2.42 | 2.97 | 1 | 5
C2.6 | EG | 111 | 3.24 | 0.844 | 0.080 | 3.08 | 3.40 | 1 | 5
C2.6 | Total | 166 | 3.06 | 0.939 | 0.073 | 2.92 | 3.20 | 1 | 5
C2.7 | CG | 55 | 3.29 | 0.994 | 0.134 | 3.02 | 3.56 | 1 | 5
C2.7 | EG | 111 | 2.73 | 0.981 | 0.093 | 2.55 | 2.91 | 1 | 5
C2.7 | Total | 166 | 2.92 | 1.017 | 0.079 | 2.76 | 3.07 | 1 | 5
C2.8 | CG | 55 | 2.67 | 0.883 | 0.119 | 2.43 | 2.91 | 1 | 5
C2.8 | EG | 111 | 3.21 | 0.689 | 0.065 | 3.08 | 3.34 | 2 | 5
C2.8 | Total | 166 | 3.03 | 0.797 | 0.062 | 2.91 | 3.15 | 1 | 5

Source(s): The authors

Table 14

Test of homogeneity of variances

Item | Levene statistic | df1 | df2 | Sig.
C1.1 | 6.535 | 1 | 164 | 0.011
C1.2 | 0.479 | 1 | 164 | 0.490
C1.3 | 5.260 | 1 | 164 | 0.023
C1.4 | 9.295 | 1 | 164 | 0.003
C1.5 | 0.087 | 1 | 164 | 0.768
C1.6 | 4.296 | 1 | 164 | 0.040
C1.7 | 0.127 | 1 | 164 | 0.722
C1.8 | 0.811 | 1 | 164 | 0.369
C1.9 | 3.570 | 1 | 164 | 0.061
C1.10 | 0.852 | 1 | 164 | 0.357
C1.11 | 0.465 | 1 | 164 | 0.496
C2.1 | 1.221 | 1 | 164 | 0.271
C2.2 | 1.121 | 1 | 164 | 0.291
C2.3 | 0.011 | 1 | 164 | 0.915
C2.4 | 2.041 | 1 | 164 | 0.155
C2.5 | 2.982 | 1 | 164 | 0.086
C2.6 | 2.480 | 1 | 164 | 0.117
C2.7 | 0.047 | 1 | 164 | 0.829
C2.8 | 6.902 | 1 | 164 | 0.009

Source(s): The authors

Table 15

Robust tests of equality of means

Item | Test | Statistic | df1 | df2 | Sig.
C1.1 | Welch | 28.567 | 1 | 128.219 | 0.000
C1.2 | Welch | 17.544 | 1 | 95.408 | 0.000
C1.3 | Welch | 14.946 | 1 | 110.275 | 0.000
C1.4 | Welch | 7.414 | 1 | 122.299 | 0.007
C1.5 | Welch | 37.505 | 1 | 98.658 | 0.000
C1.6 | Welch | 45.701 | 1 | 79.648 | 0.000
C1.7 | Welch | 79.859 | 1 | 103.096 | 0.000
C1.8 | Welch | 43.959 | 1 | 110.704 | 0.000
C1.9 | Welch | 21.748 | 1 | 132.454 | 0.000
C1.10 | Welch | 10.041 | 1 | 97.638 | 0.002
C1.11 | Welch | 9.001 | 1 | 117.444 | 0.003
C2.1 | Welch | 8.983 | 1 | 90.796 | 0.004
C2.2 | Welch | 39.663 | 1 | 126.983 | 0.000
C2.3 | Welch | 13.553 | 1 | 115.880 | 0.000
C2.4 | Welch | 19.882 | 1 | 96.645 | 0.000
C2.5 | Welch | 16.997 | 1 | 101.786 | 0.000
C2.6 | Welch | 12.109 | 1 | 91.997 | 0.001
C2.7 | Welch | 11.824 | 1 | 106.543 | 0.001
C2.8 | Welch | 15.478 | 1 | 87.589 | 0.000

Source(s): The authors

For the remaining 14 items, where Sig. > 0.05 in the homogeneity test, ANOVA was conducted (Table 16). Across the Welch and ANOVA results (Table 17), Sig. < 0.05 for all 19 items, confirming significant differences between the groups on every criterion; C1.5, C1.8, C1.9, C2.2, and C2.7 demonstrated particularly notable differences, alongside the remaining items.

Table 16

ANOVA test

Item | Source | Sum of squares | df | Mean square | F | Sig.
C1.1 | Between groups | 25.324 | 1 | 25.324 | 25.066 | 0.000
C1.1 | Within groups | 165.688 | 164 | 1.010
C1.1 | Total | 191.012 | 165
C1.2 | Between groups | 16.369 | 1 | 16.369 | 19.289 | 0.000
C1.2 | Within groups | 139.179 | 164 | 0.849
C1.2 | Total | 155.548 | 165
C1.3 | Between groups | 15.482 | 1 | 15.482 | 14.690 | 0.000
C1.3 | Within groups | 172.838 | 164 | 1.054
C1.3 | Total | 188.319 | 165
C1.4 | Between groups | 5.779 | 1 | 5.779 | 6.743 | 0.010
C1.4 | Within groups | 140.559 | 164 | 0.857
C1.4 | Total | 146.337 | 165
C1.5 | Between groups | 42.308 | 1 | 42.308 | 40.153 | 0.000
C1.5 | Within groups | 172.800 | 164 | 1.054
C1.5 | Total | 215.108 | 165
C1.6 | Between groups | 48.808 | 1 | 48.808 | 58.662 | 0.000
C1.6 | Within groups | 136.451 | 164 | 0.832
C1.6 | Total | 185.259 | 165
C1.7 | Between groups | 53.278 | 1 | 53.278 | 82.618 | 0.000
C1.7 | Within groups | 105.759 | 164 | 0.645
C1.7 | Total | 159.036 | 165
C1.8 | Between groups | 35.641 | 1 | 35.641 | 43.080 | 0.000
C1.8 | Within groups | 135.684 | 164 | 0.827
C1.8 | Total | 171.325 | 165
C1.9 | Between groups | 16.970 | 1 | 16.970 | 18.610 | 0.000
C1.9 | Within groups | 149.542 | 164 | 0.912
C1.9 | Total | 166.512 | 165
C1.10 | Between groups | 12.724 | 1 | 12.724 | 10.839 | 0.001
C1.10 | Within groups | 192.535 | 164 | 1.174
C1.10 | Total | 205.259 | 165
C1.11 | Between groups | 8.798 | 1 | 8.798 | 8.439 | 0.004
C1.11 | Within groups | 170.985 | 164 | 1.043
C1.11 | Total | 179.783 | 165
C2.1 | Between groups | 7.225 | 1 | 7.225 | 10.283 | 0.002
C2.1 | Within groups | 115.232 | 164 | 0.703
C2.1 | Total | 122.458 | 165
C2.2 | Between groups | 23.374 | 1 | 23.374 | 35.061 | 0.000
C2.2 | Within groups | 109.331 | 164 | 0.667
C2.2 | Total | 132.705 | 165
C2.3 | Between groups | 9.143 | 1 | 9.143 | 12.834 | 0.000
C2.3 | Within groups | 116.833 | 164 | 0.712
C2.3 | Total | 125.976 | 165
C2.4 | Between groups | 14.801 | 1 | 14.801 | 21.636 | 0.000
C2.4 | Within groups | 112.193 | 164 | 0.684
C2.4 | Total | 126.994 | 165
C2.5 | Between groups | 9.041 | 1 | 9.041 | 17.759 | 0.000
C2.5 | Within groups | 83.495 | 164 | 0.509
C2.5 | Total | 92.536 | 165
C2.6 | Between groups | 11.220 | 1 | 11.220 | 13.713 | 0.000
C2.6 | Within groups | 134.178 | 164 | 0.818
C2.6 | Total | 145.398 | 165
C2.7 | Between groups | 11.582 | 1 | 11.582 | 11.928 | 0.001
C2.7 | Within groups | 159.237 | 164 | 0.971
C2.7 | Total | 170.819 | 165
C2.8 | Between groups | 10.506 | 1 | 10.506 | 18.263 | 0.000
C2.8 | Within groups | 94.343 | 164 | 0.575
C2.8 | Total | 104.849 | 165

Source(s): The authors

Table 17

Summary of Sig. Levene, Sig. Welch, and Sig. ANOVA test results

Items | Sig. Levene | Sig. Welch | Sig. ANOVA
C1.1 | 0.011 | 0.000 | 0.000
C1.2 | 0.490 | 0.000 | 0.000
C1.3 | 0.023 | 0.000 | 0.000
C1.4 | 0.003 | 0.007 | 0.010
C1.5 | 0.768 | 0.000 | 0.000
C1.6 | 0.040 | 0.000 | 0.000
C1.7 | 0.722 | 0.000 | 0.000
C1.8 | 0.369 | 0.000 | 0.000
C1.9 | 0.061 | 0.000 | 0.000
C1.10 | 0.357 | 0.002 | 0.001
C1.11 | 0.496 | 0.003 | 0.004
C2.1 | 0.271 | 0.004 | 0.002
C2.2 | 0.291 | 0.000 | 0.000
C2.3 | 0.915 | 0.000 | 0.000
C2.4 | 0.155 | 0.000 | 0.000
C2.5 | 0.086 | 0.000 | 0.000
C2.6 | 0.117 | 0.001 | 0.000
C2.7 | 0.829 | 0.001 | 0.001
C2.8 | 0.009 | 0.000 | 0.000

Source(s): The authors

We analyzed group performance by subscale, drawing on the descriptive statistics in Table 13. The survey’s two subscales assessed different aspects of performance.

For subscale 1, several criteria (C1.2, C1.3, C1.5, C1.6, C1.7, C1.8, C1.9, C2.2, C2.7) had higher mean scores in the control group compared to the experimental group. This indicates that students in the control group relied more heavily on information copying, AI tools, rote memorization, and easily accessible internet answers. They also exhibited higher levels of peer copying and dependence on Google searches for completing assignments. These behaviors were more prevalent during traditional formative assessment activities. In contrast, students in the experimental group engaged in interdisciplinary learning tasks requiring creativity, integrative thinking, and application of non-readily available information, showcasing innovative teaching and assessment methods.

For subscale 2, the experimental group outperformed the control group in criteria (C1.1, C1.4, C1.10, C1.11, C2.1, C2.3, C2.4, C2.5, C2.6, C2.8). These results highlight the experimental group’s stronger understanding of academic integrity, enhanced creativity, proper source citation, and superior critical thinking and analytical skills. Students also demonstrated better proficiency in integrating interdisciplinary knowledge and synthesizing information to complete assignments.

These findings suggest that interdisciplinary and transdisciplinary task designs promoted higher-order thinking, analysis, and creativity while requiring students to synthesize diverse resources. Peer assessment activities further reinforced these competencies, allowing students to identify academic integrity violations and improve their awareness. This highlights the role of teaching and assessment practices in shaping students’ behaviors, showing that academic dishonesty is influenced by environmental factors, not solely intrinsic motivations.

In conclusion, the experimental group demonstrated superior performance in academic integrity, creativity, and critical thinking. They excelled in analyzing, synthesizing, and citing materials properly while adopting innovative approaches to assignments, reflecting higher academic competence and ethical behavior.

This study explored the effectiveness of ID and TD teaching combined with peer assessment in reducing academic cheating in digital learning environments. The results revealed that students in the CG, following traditional methods, engaged in more cheating behaviors compared to the EG.

As shown in Table 13, CG students relied heavily on copying information, using AI tools without citation, and conducting superficial online searches. In contrast, EG students, taught through ID and TD approaches, integrated information, cited sources properly, and demonstrated greater accountability. Peer assessment further revealed that CG students recognized widespread copying behaviors among their peers, which reinforced academic misconduct.

AI tools like ChatGPT have heightened risks of plagiarism, echoing concerns from previous studies (Sullivan et al., 2023; Krou et al., 2021). These tools enable students to complete assignments with minimal effort or critical engagement. However, this study confirms that ID and TD teaching methods are more effective in curbing academic dishonesty by promoting critical thinking and creative problem-solving.

Peer assessment played a crucial role in raising awareness of academic ethics among EG students. Through this process, students evaluated peers’ work for accuracy, reliability, and proper citations, fostering a deeper understanding of academic integrity. Without explicit instruction on ethics, EG students still developed these skills, as indicated by improved outcomes in critical thinking, creativity, and interdisciplinary problem-solving.
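
The peer-rating aggregation described above can be sketched as follows. The rubric criteria, the 1–5 scale, and the equal weighting are illustrative assumptions for exposition, not the study's actual instrument:

```python
# A minimal sketch (hypothetical rubric, not the study's instrument) of
# aggregating peer ratings on the three qualities students evaluated:
# accuracy, reliability, and citation quality.
CRITERIA = ("accuracy", "reliability", "citations")

def peer_score(ratings):
    """Average each criterion over all peer raters, then average criteria.

    `ratings` is a list of dicts, one per rater, mapping criterion -> 1..5.
    """
    per_criterion = {
        c: sum(r[c] for r in ratings) / len(ratings) for c in CRITERIA
    }
    overall = sum(per_criterion.values()) / len(CRITERIA)
    return per_criterion, overall

# Three hypothetical peer raters scoring one submission
ratings = [
    {"accuracy": 4, "reliability": 5, "citations": 3},
    {"accuracy": 5, "reliability": 4, "citations": 4},
    {"accuracy": 4, "reliability": 4, "citations": 5},
]
per_criterion, overall = peer_score(ratings)
print(per_criterion, round(overall, 2))
```

Averaging within each criterion before combining keeps one unusually harsh or lenient rater from dominating any single quality, which matters when peer ratings are themselves being checked for accuracy.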

The statistical results in Tables 13–17 show that EG students achieved significantly better academic integrity outcomes, with fewer instances of plagiarism and more thorough citation practices. Pre-survey guidance on evaluation criteria further supported accurate peer assessment results, enhancing students' understanding of academic integrity through practice.
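
As a hedged illustration of the kind of between-group test behind these results, the following pure-Python sketch computes a one-way ANOVA F statistic; the Likert-style scores are invented for illustration and are not the study's data:

```python
# Hypothetical illustration of a one-way ANOVA comparing mean dishonesty
# scores between an experimental group (EG) and a control group (CG).
def one_way_anova(*groups):
    """Return (F, df_between, df_within) for a one-way ANOVA."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: group means around the grand mean
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
    )
    # Within-group sum of squares: scores around their own group mean
    ss_within = sum(
        sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups
    )
    df_between, df_within = k - 1, n - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

# Toy 5-point scores (lower = less reported dishonesty)
eg = [2, 2, 3, 1, 2, 2, 3, 2, 1, 2]
cg = [4, 3, 4, 5, 3, 4, 4, 3, 5, 4]
f, dfb, dfw = one_way_anova(eg, cg)
print(f"F({dfb}, {dfw}) = {f:.2f}")
```

A large F (between-group variance far exceeding within-group variance) corresponds to the Sig. < 0.05 differences the study reports.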

These findings demonstrate that combining ID and TD methods with peer assessment effectively reduced academic dishonesty in unsupervised online environments. They align with previous studies (Hoang et al., 2020, 2022; Hsu et al., 2020) and highlight the potential of innovative teaching strategies to improve learning outcomes. This study provides strong evidence supporting research question 1 (RQ1).

Notably, this is one of the first experimental studies to examine the combined impact of ID, TD, and peer assessment on preventing cheating in digital learning contexts. Unlike prior research that focused on designing challenging assessments or emphasizing digital literacy (Nguyen et al., 2020; Dianova and Schultz, 2023), this study demonstrates the effectiveness of these approaches in practice, offering insights that address research question 2 (RQ2).

However, successful implementation of ID and TD methods requires educators to possess a solid understanding of these approaches. Training pre-service teachers and providing ongoing professional development for in-service educators are essential for ensuring effective application, as suggested by Dianova and Schultz (2023). Future studies should extend this research to other disciplines and universities to strengthen its findings and provide broader recommendations for educators.

This study assessed the impact of teaching and formative assessment methods on academic integrity. The experimental group, incorporating ID, TD, and peer assessment, showed better understanding of academic integrity, such as proper citation and creative application of interdisciplinary knowledge, compared to the control group using traditional methods. The latter exhibited higher levels of academic misconduct, such as plagiarism and misuse of available digital tools.

The study provides insights for educators and policymakers to adopt ID, TD, and peer assessment methods to enhance teaching and assessment practices. Training for both educators and students in these methods is essential to improve assessment competency, particularly in the digital learning context. While the research focused on elementary education students at a teacher training university, limiting its generalizability, future studies should include diverse fields and institutions to evaluate the broader applicability of these findings.

A significant implication is the need to innovate teaching and assessment practices by integrating interdisciplinary solutions. Establishing conducive learning environments and educational policies can advance higher education by promoting practical application, accountability, and equity in digital settings. Formative assessment methods emphasizing active engagement, interaction, and peer feedback encourage self-directed learning. However, successful implementation requires lecturers to invest substantial time and effort in developing necessary competencies for these approaches.

This study is the first to systematically explore integrating ID, TD, and peer assessment to address academic dishonesty in digital higher education. While traditional measures like raising awareness and enforcing penalties remain crucial, this research provides preliminary evidence that these innovative assessment methods can effectively reduce cheating. Further studies should explore additional factors and test these approaches in various unsupervised online environments. In the digital era, preventing academic dishonesty demands a multifaceted approach, equipping educators with diverse strategies to enhance education quality and integrity in digital learning.

This research is funded by Vietnam National Foundation for Science and Technology Development (NAFOSTED) under grant number 503.01–2021.14.

Data availability: Data will be made available on request.

Conflict of interest: The researchers declare no conflict of interest for this paper.

Amigud, A. and Lancaster, T. (2019), "246 reasons to cheat: an analysis of students' reasons for seeking to outsource academic work", Computers and Education, Vol. 134, pp. 98-107.
Attiogbe, E.J.K., Oheneba-Sakyi, Y., Kwapong, O.A.T.F. and Boateng, J. (2023), "Assessing the relationship between feedback strategies and learning improvement from a distance learning perspective", Journal of Research in Innovative Teaching and Learning, Vol. 18 No. 1, pp. 165-186.
Blau, I. and Eshet-Alkalai, Y. (2017), "The ethical dissonance in digital and non-digital learning environments: does technology promote cheating among middle school students?", Computers in Human Behavior, Vol. 73, pp. 629-637.
Chiang, F.K., Zhu, D. and Yu, W. (2022), "A systematic review of academic dishonesty in online learning environments", Journal of Computer Assisted Learning, Vol. 38 No. 4, pp. 907-928.
Chirikov, I., Shmeleva, E. and Loyalka, P. (2020), "The role of faculty in reducing academic dishonesty among engineering students", Studies in Higher Education, Vol. 45 No. 12, pp. 2464-2480.
Choi, J. (2019), "Cheating behaviors and related factors at a Korean dental school", Korean Journal of Medical Education, Vol. 31 No. 3, pp. 239-249.
Collin, A. (2009), "Multidisciplinary, interdisciplinary, and transdisciplinary collaboration: implications for vocational psychology", International Journal for Educational and Vocational Guidance, Vol. 9 No. 2, pp. 101-110.
Cribb, A. (2017), "The challenge of integration", Healthcare in Transition, Vol. XXXVIII No. 1, pp. 129-152.
Dianova, V.G. and Schultz, M.D. (2023), "Discussing ChatGPT's implications for industry and higher education: the case for transdisciplinarity and digital humanities", Industry and Higher Education, Vol. 37 No. 5, pp. 593-600.
Faulconer, E.K. and Griffith, J. (2025), "Unveiling engagement patterns of Yellowdig users: analysis of learning behaviors in an online undergraduate course", Journal of Research in Innovative Teaching and Learning, Vol. ahead-of-print No. ahead-of-print.
Hendy, N.T., Montargot, N. and Papadimitriou, A. (2021), "Cultural differences in academic dishonesty: a social learning perspective", Journal of Academic Ethics, Vol. 19 No. 1, pp. 49-70.
Heriyati, D. and Ekasari, W.F. (2020), "A study on academic dishonesty and moral reasoning", International Journal of Education, Vol. 12 No. 2, pp. 56-62.
Hoang, L.P., Le, P.A., Arch-Int, S. and Arch-Int, N. (2020), "Multidimensional assessment of open-ended questions for enhancing the quality of peer assessment in E-learning environments", in Learning and Performance Assessment: Concepts, Methodologies, Tools, and Applications, IGI Global, Hershey, pp. 147-173.
Hoang, L.P., Le, H.T., Tran, H.V., Phan, T.C., Vo, D.M., Le, P.A., Nguyen, N.T. and Pong-inwong, C. (2022), "Does evaluating peer assessment accuracy and taking it into account in calculating assessor's final score enhance online peer assessment quality?", Education and Information Technologies, Vol. 27, pp. 4007-4035.
Holden, O., Kuhlmeier, V. and Norris, M. (2020), "Academic integrity in online testing: a research review", Frontiers in Education, Vol. 6, pp. 1-13.
Hsu, T.C., Chen, W.L. and Hwang, G.J. (2020), "Impacts of interactions between peer assessment and learning styles on students' mobile learning achievements and motivations in vocational design certification courses", Interactive Learning Environments, Vol. 31 No. 3, pp. 1351-1363.
Klein, J.T. (2018), "Learning in transdisciplinary collaborations: a conceptual vocabulary", in Fam, D., Neuhauser, L. and Gibbs, P. (Eds), Transdisciplinary Theory, Practice and Education, Springer, Cham, pp. 11-23.
Krou, M.R., Fong, C.J. and Hoff, M.A. (2021), "Achievement motivation and academic dishonesty: a meta-analytic investigation", Educational Psychology Review, Vol. 33 No. 2, pp. 427-458.
Lancaster, T. and Clarke, R. (2017), "Rethinking assessment by examination in the age of contract cheating", Plagiarism across Europe and Beyond 2017, Conference Proceedings, pp. 215-228, available at: https://academicintegrity.eu/conference/proceedings/2017/Lancaster_Rethinking.pdf
Maeda, M. (2019), "Exam cheating among Cambodian students: when, how, and why it happens", Compare: A Journal of Comparative and International Education, Vol. 51 No. 3, pp. 1-19.
McClung, E.L. and Schneider, J.K. (2015), "A concept synthesis of academically dishonest behaviors", Journal of Academic Ethics, Vol. 13 No. 1, pp. 1-11.
Miller, A., Shoptaugh, C. and Wooldridge, J. (2011), "Reasons not to cheat, academic-integrity responsibility, and frequency of cheating", The Journal of Experimental Education, Vol. 79 No. 2, pp. 169-184.
Morris, T.H., Schön, M. and Drayson, M.C. (2023), "Reimagining online teacher education: combining self-directed learning with peer feedback for interaction and engagement", Journal of Research in Innovative Teaching and Learning, Vol. ahead-of-print No. ahead-of-print.
Moten, J.M. Jr, Fitterer, A., Brazier, E., Leonard, J., Brown, A. and Texas, A. (2013), "Examining online college cyber cheating methods and prevention measures", Electronic Journal of e-Learning, Vol. 11 No. 2, pp. 139-146, available at: https://files.eric.ed.gov/fulltext/EJ1012879.pdf
Mulisa, F. and Ebessa, A.D. (2021), "The carryover effects of college dishonesty on the professional workplace dishonest behaviors: a systematic review", Cogent Education, Vol. 8 No. 1, pp. 1-23.
Nguyen, J.G., Keuseman, K.J. and Humston, J.J. (2020), "Minimize online cheating for online assessments during COVID-19 pandemic", Journal of Chemical Education, Vol. 97 No. 9, pp. 3429-3435.
Nicolescu, B. (2012), "The need for transdisciplinarity in higher education in a globalized world", Transdisciplinary Journal of Engineering and Science, Vol. 3, pp. 11-18.
Oliveira, T.M., Amaral, L. and Pacheco, R.C.S. (2019), "Multi/inter/transdisciplinary assessment: a systemic framework proposal to evaluate graduate courses and research teams", Research Evaluation, Vol. 28 No. 1, pp. 23-36.
Oravec, J.A. (2023), "Artificial intelligence implications for academic cheating: expanding the dimensions of responsible human-AI collaboration with ChatGPT and Bard", Journal of Interactive Learning Research, Vol. 34 No. 2, pp. 213-237, available at: https://philarchive.org/archive/ORAAII
Pohl, C. (2011), "What is progress in transdisciplinary research?", Futures, Vol. 43 No. 6, pp. 618-626.
Prashar, A., Gupta, P. and Dwivedi, Y.K. (2023), "Plagiarism awareness efforts, students' ethical judgment and behaviors: a longitudinal experiment study on ethical nuances of plagiarism in higher education", Studies in Higher Education, Vol. 49 No. 6, pp. 929-955.
Souza-Daw, T. and Ross, R. (2021), "Fraud in higher education: a system for detection and prevention", Journal of Engineering, Design and Technology, Vol. 21 No. 3, pp. 637-654.
Srikanth, M. and Asmatulu, R. (2014), "Modern cheating techniques, their adverse effects on engineering education and preventions", International Journal of Mechanical Engineering Education, Vol. 42 No. 2, pp. 129-140.
Stock, P. and Burton, R.J.F. (2011), "Defining terms for integrated (multi-inter-trans-disciplinary) sustainability research", Sustainability, Vol. 3 No. 8, pp. 1090-1113.
Sullivan, M., Kelly, A. and McLaughlan, P. (2023), "ChatGPT in higher education: considerations for academic integrity and student learning", Journal of Applied Learning and Teaching, Vol. 6 No. 1, pp. 31-40.
Tauginienė, L., Gaižauskaitė, I., Razi, S., Glendinning, I., Sivasubramaniam, S., Marino, F., Cosentino, M., Anohina-Naumeca, A. and Kravjar, J. (2019), "Enhancing the taxonomies relating to academic integrity and misconduct", Journal of Academic Ethics, Vol. 17 No. 4, pp. 345-361.
Tierney, W.G. and Sabharwal, N.S. (2017), "Academic corruption: culture and trust in Indian higher education", International Journal of Educational Development, Vol. 55, pp. 30-40.
Tolk, A., Harper, A. and Mustafee, N. (2021), "Hybrid models as transdisciplinary research enablers", European Journal of Operational Research, Vol. 291 No. 3, pp. 1075-1090.
Tongtummachat, T., Jaree, A. and Akkarawatkhoosith, N. (2024), "Enhancement of teaching and learning quality through assessment for learning: a case in chemical engineering", Journal of Research in Innovative Teaching and Learning, Vol. ahead-of-print No. ahead-of-print.
Turner, S.W. and Uludag, S. (2013), "Student perceptions of cheating in online and traditional classes", Proceedings - Frontiers in Education Conference, FIE, October 2013, pp. 1131-1137.
Vegendla, A. and Sindre, G. (2019), "Mitigation of cheating in online exams: strengths and limitations of biometric authentication", in Kumar, A. (Ed.), Biometric Authentication in Online Learning Environments, IGI Global Scientific Publishing, pp. 47-68.
Yazici, A., Yazici, S. and Erdem, M.S. (2011), "Faculty and student perceptions on college cheating: evidence from Turkey", Educational Studies, Vol. 37 No. 2, pp. 221-231.
Published in Journal of Research in Innovative Teaching & Learning. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode
