The research aims to unravel the dynamics of academic integrity in the ChatGPT era by examining key predictors: personal best goals (PBG), academic competence and workplace stress. It also investigates whether ChatGPT adoption moderates the relationships between these predictors and academic integrity, offering a nuanced understanding of the tool's impact in academic settings.
Drawing on social cognitive theory, the authors adopt a quantitative approach, using an online survey to examine the effects of PBG, academic competence, workplace stress and ChatGPT adoption on academic integrity, analyzing responses collected through Academic Social Networking Sites.
The study found that PBG was positively related to academic integrity among academic staff, whereas workplace stress had a negative effect; academic competence showed no significant effect. When ChatGPT adoption was introduced into the model as a moderator, the association between PBG and academic integrity became negative, and the moderated effects of academic competence and workplace stress on academic integrity were significant.
The findings suggest that institutions should provide training on the ethical and effective use of artificial intelligence (AI) tools such as ChatGPT so that these tools support rather than undermine academic integrity. Organizations should also invest in stress management initiatives and foster a balanced approach to personal goal setting to mitigate the potential negative effects of ChatGPT adoption on ethical behavior.
This study is both timely and pioneering, addressing a clear gap in the current body of literature. Because few studies have examined the role of ChatGPT in academic settings, it stands out as one of the earliest to investigate how educators and researchers incorporate OpenAI tools into their professional practices.
