Artificial intelligence (AI) now influences many aspects of business and everyday life, making it critical to understand how AI can be built to operate ethically and satisfy stakeholder expectations. Pioneering scholars in economics, law, ethics and philosophy have investigated the ethical ramifications of AI and proposed guiding principles and criteria for future research. Despite this, little research has examined the connection between AI ethics and employee commitment (EC) and competitive advantage (CA). This study explores the relationship between AI ethics and EC and CA, with responsible innovation (RI) and employee motivation (EM) as mediators.
Survey data from 206 respondents were analyzed using the PROCESS macro version 3.4 in SPSS 23 and AMOS 21.0. The findings show that AI ethics matter for boosting EC and the organization's ability to remain competitive, and a significant connection between them was found. Both RI and EM served as mediators, and parallel mediation was also tested.
AI ethics predicts RI and EM, which in turn drive EC and an organization's CA. Both direct and indirect effects were statistically significant.
Based on the results, theoretical and practical implications are discussed. To the authors' knowledge, this is the first study to examine the interplay of AI ethics, EM, EC, CA and RI; neither the conceptual framework nor this combination of variables has been studied previously.
Introduction
Artificial Intelligence (AI), including technologies such as robotic process automation and neural networks, enables real-time decision-making that significantly impacts business operations and human activities. Ensuring the ethical design of AI is essential to meet stakeholder expectations and comply with regulatory requirements. Since 2011, Pariser has warned of the risks posed by search engine algorithms – for example, Google's tendency to show users what they want rather than what they need. Similarly, Attard-Frost et al. (2023) reviewed 47 AI ethics guidelines, highlighting the need for consistency and enforceability in corporate settings. A common parallel is nuclear technology, which, depending on its application, may either power or destroy cities. Similarly, the practice of “ethical hacking” demonstrates how hacking, which is typically viewed as a socially repugnant activity, may be beneficial when used to find and address security flaws. These examples show how, despite its propensity for abuse, AI may be repurposed for innovative socially beneficial applications when ethical norms are followed. Crucially, this emphasizes the necessity of differentiating between innovation spurred by curiosity and the temptation for “quick gains” through unethical means.
Prior research connects employee commitment (EC), motivation, responsible innovation (RI) and competitive advantage (CA) within the context of AI. EC, which helps reduce withdrawal behaviors, reflects loyalty to the employer (Akintayo, 2010; Irefin and Mechanic, 2014). Employee motivation (EM) aligns personal goals with organizational objectives, while RI emphasizes future-oriented ethical stewardship (Stilgoe, 2013; Stilgoe et al., 2020). CA arises from delivering superior value at lower costs, driven by the unique characteristics of resources (Barney, 1995; Wang et al., 2011).
This study examines the relationships between AI ethics, EC and CA, focusing on the mediating roles of RI and EM. It introduces these mediators as key mechanisms for strengthening both commitment and competitive edge. As the first study of its kind among Indian information technology (IT) professionals, it addresses a critical gap by exploring how these dynamics vary across individual traits and workplace environments.
Theoretical framework
Ethics of AI
Boddington (2017) outlines key ethical challenges in AI, including fairness and accountability in decision-making, especially amid rapid technological advancement. Vallverdú and Casacuberta (2009) emphasize concerns around societal disruption and the distribution of responsibility. Principles such as safety and privacy are central to responsible AI development (Kurzweil, 2005; Kurzweil Network, 2017; Ghotbi et al., 2022). Forecasting AI's impact on employment remains complex, with repetitive roles being more vulnerable than creative ones (Nilsson, 1985; Marchant et al., 2014; Bessen, 2016; Wallach and Marchant, 2018). Xue and Pang (2022) propose a governance framework to ensure ethical AI (EAI) implementation. The influence of AI, akin to nuclear energy or hacking, is contingent upon intent and application; what may be detrimental can become beneficial when responsibly directed. This highlights the necessity of proactive governance to prevent innovation from deteriorating into exploitation.
Employee commitment
EC – often viewed as organizational loyalty – helps reduce absenteeism and turnover (Akintayo, 2010; Irefin and Mechanic, 2014). Highly committed employees are more adaptable and contribute to organizational effectiveness by enhancing job satisfaction (Lo et al., 2009).
Employee motivation
Motivation fosters employee engagement, innovation and organizational performance (Nelson and DeBacker, 2008; Berman et al., 2010). It is shaped by individual goals and values, making it essential for managers to understand personal drivers (Kamery, 2004; Burton, 2012). When aligned with intrinsic values, motivation has a stronger and more sustained impact (Deci, 1976).
Responsible innovation (RI)
RI advocates for ethically managing science and innovation to address global challenges such as climate change and resource depletion (Owen et al., 2013). It highlights the importance of anticipating societal impacts and incorporating values like privacy and security early in the innovation process (Stilgoe, 2013; Stilgoe et al., 2020; Halme and Korpela, 2014; Koops, 2015; Hartley et al., 2017; Gonzales-Gemio et al., 2020).
Competitive advantage
CA is derived from resources that are valuable, rare and difficult to replicate (Del Brio et al., 2007; Barney, 1995). It can be achieved through cost leadership or differentiation strategies (Ranko et al., 2008; Wang et al., 2011). EAI supports this advantage by enhancing trust through transparency and accountability (Olatoye et al., 2024). Porter and Linde (1995) focus on productivity as a growth driver, while Barney (1991) highlights the role of unique resources, such as skilled labor, in sustaining long-term competitiveness.
Hypothesis formulation
Ethics of AI and EC
Research on the relationship between AI ethics and EC is limited. Brougham and Haar (2017, 2018) found that increased AI awareness among New Zealand employees correlated with lower organizational commitment and career satisfaction, suggesting concerns over job displacement. However, effectively managing AI ethics by embedding transparency, fairness and accountability in AI systems can alleviate fears and positively impact EC (Bostrom, 2016). When organizations prioritize EAI, employees feel more secure and aligned with the company's values, leading to stronger commitment (Stahl, 2018; Wright and Schultz, 2018; Greenwood and Van Buren, 2010). This paper posits that the ethical design and governance of AI can transform potential dangers into catalysts for trust, motivation and commitment, thereby enhancing CA.
Therefore, this study proposes the following hypothesis:
H1. Ethics of AI positively affect employee commitment.
Ethics of AI and CA
Research indicates that AI ethics contribute to CA. Companies develop AI ethics documents for various reasons, with CA being a key factor (Schiff et al., 2022). As a megatrend, AI aims to replicate human intelligence and provide a competitive edge (Eltweri, 2021). Daly et al. (2019) argue that AI ethics documents can guide trustworthy AI toward sustainability, growth and competitiveness. Additionally, Taçoğlu et al. (2019) suggested that a robust AI strategy enhances event forecasting, further boosting a business's CA. In sum, AI ethics strengthen an organization's competitive edge. The study proposes the following:
H2. Ethics of AI positively affect competitive advantage.
Mediation of EM between EAI and CA
Studies show that EAI enhances EM, with privacy being a major concern: Akbar et al. (2023) found that 82% of respondents saw privacy limits as key to accountability and fairness. Organizational factors such as politics and information gaps can influence these issues, affecting EM in relation to EAI (Krijger, 2022). EM links EAI to CA, although traditional motivation theories (Deci, 1971; Maslow, 1973) largely overlook this connection. AI enhances productivity but also reshapes job roles, presenting both opportunities and challenges for the workforce (Luhana et al., 2023). Because motivated employees contribute more effectively to innovation and performance, EM is modeled in this study as a mediator between EAI and CA.
H3. EM positively mediates the relationship between EAI and competitive advantage.
Mediation of EM between EAI and EC
Stahl (2018), Akbar et al. (2023) and Krijger (2022) highlight that ethical issues in AI negatively impact EM. Therefore, adherence to AI ethics can boost EM. This study examines the relationship between EAI and EC with EM as a mediator. Past research supports a positive link between EC and motivation (Tella, 2007), emphasizing that effective business operations depend on fostering strong EM and commitment. EAI improves motivation by creating a fair and respectful environment, thus indirectly boosting EC. The study proposes strategies to enhance motivation and commitment within this framework. In light of this, the study proposes the following.
H5. EM positively mediates the relationship between EAI and employee commitment.
Mediation of RI between EAI and CA
Buhmann and Fieseler (2021, 2023) introduced a framework for RI in AI, focusing on harm prevention, ethics and governance and linking RI to EAI and CA. RI enhances competitive edge through inclusive participation, as seen in various sectors (Lees and Lees, 2017; Scholten and van der Duin, 2015). Hadj (2020) shows RI's role in connecting corporate social responsibility with CA, while Chesbrough (2003) emphasizes diverse stakeholder input for innovation. Herrmann (2023) and Zhou et al. (2009) call for evolving frameworks incorporating ethical considerations and proactive strategies in AI. As a result, this study employs RI as a mediator between EAI and CA based on earlier investigations.
H6. RI positively mediates the relationship between EAI and competitive advantage.
Mediation of RI between EAI and EC
Buhmann and Fieseler (2021, 2023) proposed a paradigm for RI in AI, focusing on harm prevention, morality and governance. RI emphasizes inclusivity and public value. Research links EAI with RI, highlighting its importance for SMEs seeking competitiveness, legitimacy and compliance (Guston et al., 2014; Brand and Block, 2019). Literature also explores how RI practices affect SME performance by considering factors like EC and relational marketing (Cruz-Cázares et al., 2020). RI practices not only improve innovation outcomes but also influence internal dynamics such as employee morale and commitment (Cruz-Cázares et al., 2020). By integrating EAI with RI, organizations can foster higher EC. In light of the literature, the study proposes the following hypothesis.
H4. RI positively mediates the relationship between EAI and employee commitment.
Conceptual framework
On the basis of the literature and hypotheses mentioned above, the following conceptual framework, as shown in Figure 1, has been developed.
The diagram starts on the left with a box labeled “Ethical AI.” Two arrows lead from this box to two vertically arranged boxes in the center labeled “Employee Motivation” and “Responsible Innovation.” Arrows lead from each central box to two vertically arranged boxes on the far right labeled “Competitive Advantage” and “Employee Commitment,” and two arrows connect “Ethical AI” directly to these far-right boxes. The arrow labels, matching the hypothesis numbering used in the results tables, are as follows: “Ethical AI” to “Employee Commitment” is labeled H1; “Ethical AI” to “Competitive Advantage” is labeled H2; “Employee Motivation” to “Competitive Advantage” is labeled H3; “Responsible Innovation” to “Employee Commitment” is labeled H4; “Employee Motivation” to “Employee Commitment” is labeled H5; “Responsible Innovation” to “Competitive Advantage” is labeled H6.

Figure 1. Conceptual framework of the study. Source: Developed by authors based on a review of existing literature
Research methodology
Sampling and data collection
This study uses a quantitative cross-sectional approach with a descriptive and causal design to examine the relationships between AI ethics, CA and EC, and investigates the parallel mediation of EM and RI among Indian IT employees. The methodology draws on the analogy between ethical hacking and the ethical use of AI: although hacking is socially repugnant, ethical hacking probes systems for flaws to positive ends. Analyzing AI ethics in the workplace likewise requires distinguishing innovation motivated by curiosity from attempts at short-term, exploitative profit. This framing guided the operationalization of constructs and the interpretation of model interactions.
Data were collected through purposive sampling across diverse age groups, positions, genders and educational backgrounds. The distributed questionnaire yielded 224 responses; 18 incomplete surveys were discarded, leaving a final sample of 206 (a usable-response rate of roughly 92%). To reduce common method bias, data were collected in two phases three weeks apart. The final sample of 206 meets the minimum requirements for model analysis recommended by Ding et al. (1995).
Before completing the survey, all respondents received a briefing document outlining key constructs – EAI, CA, EM, EC and RI – based on established literature. Illustrative examples were included, such as scenarios showing where EAI aligns with or opposes CA. The distinction between innovation and its application was explained through real-world analogies (e.g. innovation vs. use in nuclear science or AI). Although not all participants had direct innovation experience, their roles in IT implementation and decision-making ensured adequate conceptual understanding.
In accordance with the institutional research policy, this study did not require formal ethical clearance, as it involved minimal risk to participants. Data were collected through an anonymous online survey of adult respondents, and no identifiable or sensitive personal information was obtained. Participation was entirely voluntary, and informed consent was obtained from all participants prior to data collection.
Scale development
This study uses pre-existing scales from prior literature. AI ethics was measured using 12 items from an updated scale by Jang et al. (2022), with a sample item being, “My organization makes an effort to put AI technology to good use.” EM was assessed using a four-item scale from Shahzadi et al. (2014). A 10-item commitment scale, based on Meyer et al. (1990) and adjusted for the study's needs, was also employed. Additionally, the study used a five-item RI scale adapted from Verburg et al. (2020) and a five-item CA scale from Li et al. (2009). The ethical hacking analogy also informed scale selection, underscoring the need to operationalize concepts such as EAI and RI as constructive rather than exploitative behaviors.
Sample adequacy and factor analysis
The Kaiser–Meyer–Olkin (KMO) test and Bartlett's test of sphericity were used to confirm sampling adequacy, as shown in Table 1: the KMO value of 0.738 exceeds the commonly recommended 0.7 threshold, and Bartlett's test is significant (p < 0.001), indicating the data are suitable for factor analysis. Factor analysis with varimax rotation was conducted, and internal consistency was measured using Cronbach's alpha with a threshold of 0.6. Factor loadings and Cronbach's alpha values are reported in Table 2.
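The internal-consistency check described above can be sketched in a few lines. This is an illustrative pure-Python implementation of Cronbach's alpha, not the SPSS procedure the authors used; the Likert responses below are hypothetical.

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    """
    k = len(scores[0])  # number of items in the scale
    item_vars = [pvariance([row[i] for row in scores]) for i in range(k)]
    total_var = pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical Likert responses (rows = respondents, cols = items)
sample = [
    [4, 4, 5, 4],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [4, 5, 4, 4],
]
alpha = cronbach_alpha(sample)  # ≈ 0.933 for these correlated items
```

Values above the study's 0.6 threshold (as reported in Table 2) indicate acceptable internal consistency.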
KMO and Bartlett's test
| Tests | Values | |
|---|---|---|
| Kaiser–Meyer–Olkin measure of sampling adequacy | 0.738 | |
| Bartlett's test of sphericity | Approx. Chi-Square | 294.946 |
| Df | 10 | |
| Sig. | 0.000 | |
Rotated component matrix
| Variables | Items | Factor loading | ||||
|---|---|---|---|---|---|---|
| | | 1 | 2 | 3 | 4 | 5 |
| Ethics of AI (Cronbach's alpha = 0.906) | EAI1 | 0.586 | ||||
| EAI2 | 0.583 | |||||
| EAI3 | 0.589 | |||||
| EAI4 | 0.646 | |||||
| EAI5 | 0.565 | |||||
| EAI6 | 0.781 | |||||
| EAI7 | 0.744 | |||||
| EAI8 | 0.697 | |||||
| EAI9 | 0.669 | |||||
| EAI10 | 0.698 | |||||
| EAI11 | 0.742 | |||||
| EAI12 | 0.733 | |||||
| Employee motivation (Cronbach's alpha = 0.772) | EM1 | 0.571 | ||||
| EM2 | 0.541 | |||||
| EM3 | 0.557 | |||||
| EM4 | 0.627 | |||||
| Responsible innovation (Cronbach's alpha = 0.907) | RI1 | 0.798 | ||||
| RI2 | 0.813 | |||||
| RI3 | 0.765 | |||||
| RI4 | 0.760 | |||||
| RI5 | 0.689 | |||||
| Competitive advantage (Cronbach's alpha = 0.868) | CA1 | 0.647 | ||||
| CA2 | 0.724 | |||||
| CA3 | 0.582 | |||||
| CA4 | 0.579 | |||||
| CA5 | 0.590 | |||||
| Employee commitment (Cronbach's alpha = 0.884) | EC1 | 0.522 | ||||
| EC2 | 0.570 | |||||
| EC3 | 0.573 | |||||
| EC4 | 0.675 | |||||
| EC5 | 0.689 | |||||
| EC6 | 0.753 | |||||
| EC7 | 0.792 | |||||
| EC8 | 0.822 | |||||
| EC9 | 0.786 | |||||
| EC10 | 0.521 | |||||
Testing common method bias
The study addressed potential common method bias using Harman's single-factor test, which revealed an explained variance of 26.636%, well below the 50% threshold, indicating no significant bias in the findings. Much as ethical hacking probes for hidden vulnerabilities, this check helps ensure that the results are not distorted by single-source bias.
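Harman's test boils down to checking whether a single factor accounts for more than half of the total variance in the item set. A minimal sketch, assuming a small hypothetical response matrix and using power iteration to find the first principal component's variance share (the authors used SPSS for the actual test):

```python
def first_component_variance_share(data, iters=200):
    """Share of total variance captured by the first principal component.

    data: list of respondent rows (rows = respondents, cols = items).
    A share above 0.5 would flag common method bias under Harman's test.
    """
    n, k = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(k)]
    centered = [[row[j] - means[j] for j in range(k)] for row in data]
    # sample covariance matrix (n - 1 denominator)
    cov = [[sum(centered[r][i] * centered[r][j] for r in range(n)) / (n - 1)
            for j in range(k)] for i in range(k)]
    # power iteration for the dominant eigenvector
    v = [1.0] * k
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(k)) for i in range(k)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient gives the dominant eigenvalue (first-factor variance)
    top = sum(v[i] * sum(cov[i][j] * v[j] for j in range(k)) for i in range(k))
    total = sum(cov[i][i] for i in range(k))
    return top / total

# Hypothetical survey responses
sample = [
    [4, 4, 5],
    [3, 3, 3],
    [5, 5, 4],
    [2, 3, 2],
    [4, 5, 4],
]
share = first_component_variance_share(sample)
```

In the study's data the analogous share was 26.636%, comfortably under the 50% cut-off.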
Data analysis
Data were analyzed using AMOS 21.0 and SPSS 23 with PROCESS macro version 3.4, covering construct reliability, correlation and factor analysis (Arbuckle and Wothke, 2003). Scale validity was examined through confirmatory factor analysis (CFA) in AMOS 21.0 with the Validity Master tool (Byrne and Van de Vijver, 2010). Mediation was assessed using the PROCESS macro (Model 4) with 5,000 bootstrap samples at a 95% confidence level (Hayes, 2013). In line with the ethical hacking analogy guiding this study, the analysis was designed not only to confirm statistical robustness but also to surface potential flaws, such as hidden biases or spurious correlations, so that the results support constructive rather than harmful interpretations.
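The bootstrapped indirect effect at the heart of PROCESS Model 4 can be illustrated without SPSS. This is a simplified percentile-bootstrap sketch of a single-mediator model (a-path: M ~ X; b-path: coefficient of M in Y ~ X + M; indirect effect a×b), using synthetic data with a built-in mediation path; it is not the authors' actual analysis.

```python
import random

def slope1(x, y):
    """OLS slope of y on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def slope2(x, m, y):
    """Coefficient of m in the OLS regression y ~ x + m (the b-path)."""
    n = len(y)
    mx, mm, my = sum(x) / n, sum(m) / n, sum(y) / n
    xc = [v - mx for v in x]
    mc = [v - mm for v in m]
    yc = [v - my for v in y]
    sxx = sum(v * v for v in xc)
    smm = sum(v * v for v in mc)
    sxm = sum(a * b for a, b in zip(xc, mc))
    sxy = sum(a * b for a, b in zip(xc, yc))
    smy = sum(a * b for a, b in zip(mc, yc))
    det = sxx * smm - sxm * sxm  # 2x2 normal equations, solved directly
    return (sxx * smy - sxm * sxy) / det

def bootstrap_indirect(x, m, y, reps=5000, seed=1):
    """Percentile-bootstrap 95% CI for the indirect effect a*b."""
    rng = random.Random(seed)
    n = len(x)
    effects = []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        xs = [x[i] for i in idx]
        ms = [m[i] for i in idx]
        ys = [y[i] for i in idx]
        effects.append(slope1(xs, ms) * slope2(xs, ms, ys))
    effects.sort()
    return effects[int(0.025 * reps)], effects[int(0.975 * reps)]

# Synthetic data with a deliberate X -> M -> Y path
rng = random.Random(42)
x = [rng.gauss(0, 1) for _ in range(60)]
m = [v + rng.gauss(0, 0.3) for v in x]   # a-path roughly 1
y = [v + rng.gauss(0, 0.3) for v in m]   # b-path roughly 1
lo, hi = bootstrap_indirect(x, m, y, reps=2000)
# a CI that excludes zero indicates a significant indirect effect
```

This mirrors the decision rule used in Tables 8–10: an indirect effect is "significant" when the bootstrap confidence interval does not straddle zero.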
Demographic profile
The demographic breakdown of respondents based on gender, age, experience, education, job level and annual income is shown in Table 3.
Demographic characteristics of respondents
| Demographic variables | Categories | Percentage |
|---|---|---|
| Gender | Male | 81.6 |
| Female | 18.4 | |
| Age | 22–32 | 91.7 |
| 32–42 | 5.3 | |
| 42–52 | 1.9 | |
| Above 52 | 1.0 | |
| Experience | Less than 5 years | 88.3 |
| 5–10 years | 5.8 | |
| 10–15 years | 3.4 | |
| Above 15 years | 2.4 | |
| Education | Undergraduate | 78.2 |
| Post graduate | 18.4 | |
| Others | 3.4 | |
| Job level | Junior | 54.4 |
| Middle | 34.0 | |
| Senior | 11.7 | |
| Annual income | Up to 5,00,000 | 15.0 |
| 5,00,000–10,00,000 | 29.6 | |
| 10,00,000–20,00,000 | 35.4 | |
| Above 20,00,000 | 19.9 |
Results
Table 4 reports descriptive statistics and the correlations between variables. All inter-variable correlations were positive and statistically significant. These relationships echo the ethical hacking comparison: although positive, they also highlight the need to distinguish innovation that adds sustainable value from the "quick gain" effects of misuse.
Descriptive statistics and inter-correlations among variables
| S. No. | Variables | M | SD | EAI | EM | EC | RI | CA |
|---|---|---|---|---|---|---|---|---|
| 1. | Ethics of AI | 3.86 | 0.72 | 1 | ||||
| 2. | EM | 4.19 | 0.56 | 0.430 | 1 | |||
| 3. | EC | 3.50 | 0.70 | 0.342 | 0.366 | 1 | ||
| 4. | RI | 4.11 | 0.74 | 0.431 | 0.538 | 0.314 | 1 | |
| 5. | CA | 3.73 | 0.75 | 0.429 | 0.354 | 0.428 | 0.596 | 1 |
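The inter-correlations in Table 4 are Pearson product-moment coefficients. A minimal sketch of the calculation, with hypothetical scale scores standing in for the study's data:

```python
def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical averaged scale scores for two constructs
eai_scores = [3.8, 4.1, 3.5, 4.4, 3.2, 4.0]
em_scores = [4.0, 4.3, 3.6, 4.5, 3.4, 4.1]
r = pearson_r(eai_scores, em_scores)  # strongly positive for this toy data
```

In the study itself, the corresponding EAI–EM correlation was 0.430 (Table 4).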
Model fit and validity
Overall model fit was assessed with goodness-of-fit indices from the CFA conducted in AMOS 21 (Schreiber, 2008). Results are summarized in Table 5: CMIN/DF, PGFI and RMSEA meet their cut-off criteria, while GFI, CFI and TLI fall below the recommended values, indicating only partial model fit.
FIT statistics of the model
| Model fit | Model statistics | Cut-off criteria |
|---|---|---|
| CMIN | 2321.543 | |
| DF | 852 | |
| CMIN/DF | 2.725 | ≤3 (Hair et al., 2010) |
| GFI | 0.616 | ≥0.8 (Homburg and Baumgartner, 1995) |
| PGFI | 0.555 | ≥0.5 (Wu et al., 2009) |
| CFI | 0.712 | ≥0.9 (Hair et al., 2010) |
| TLI | 0.694 | ≥0.90 (Byrne, 2013) |
| RMSEA | 0.072 | ≤0.08 (Steiger, 1990) |
Note(s): CMIN = Minimum discrepancy function; DF = degrees of freedom; GFI = goodness-of-fit index; PGFI = parsimony goodness-of-fit index; CFI = comparative fit index; TLI = Tucker-Lewis index and RMSEA = root mean square error of approximation
The study confirmed both convergent and discriminant validity, with composite reliability > 0.6, AVE > 0.5 and the square root of each AVE exceeding that construct's correlations with the other constructs, as shown in Table 6.
Convergent and discriminant validity statistic of variables
| | CR | AVE | MSV | MaxR(H) | CA | EAI | EM | EC | RI |
|---|---|---|---|---|---|---|---|---|---|
| CA | 0.871 | 0.535 | 0.372 | 0.894 | 0.732 | ||||
| EAI | 0.909 | 0.562 | 0.243 | 0.924 | 0.460 | 0.680 | |||
| EM | 0.784 | 0.581 | 0.434 | 0.797 | 0.339 | 0.493 | 0.675 | ||
| EC | 0.879 | 0.555 | 0.286 | 0.909 | 0.535 | 0.361 | 0.336 | 0.596 | |
| RI | 0.908 | 0.665 | 0.394 | 0.916 | 0.610 | 0.493 | 0.628 | 0.308 | 0.816 |
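The CR and AVE figures reported above follow the standard formulas over standardized loadings: AVE is the mean squared loading, and CR is (Σλ)² / ((Σλ)² + Σ(1 − λ²)). As a sketch, the EFA loadings for the RI items from Table 2 are plugged in below; the published CR/AVE values are based on CFA loadings, so these illustrative numbers will not match Table 6 exactly.

```python
def average_variance_extracted(loadings):
    """AVE: mean of the squared standardized loadings."""
    return sum(l * l for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """CR: (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings)
    err = sum(1.0 - l * l for l in loadings)  # error variance per item = 1 - λ²
    return s * s / (s * s + err)

# RI item loadings from the rotated component matrix (Table 2)
ri_loadings = [0.798, 0.813, 0.765, 0.760, 0.689]
ri_ave = average_variance_extracted(ri_loadings)  # ≈ 0.587
ri_cr = composite_reliability(ri_loadings)        # ≈ 0.876
```

Both illustrative values clear the study's thresholds (CR > 0.6, AVE > 0.5), consistent with the validity conclusion drawn from Table 6.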
Results of direct effects
Table 7 displays the direct effects among the five constructs. EAI directly and positively affects EM (β = 0.329, p < 0.001), CA (β = 0.162, p = 0.008), EC (β = 0.212, p = 0.021) and RI (β = 0.474, p < 0.001). However, EM has no significant effect on CA (β = 0.089, p = 0.280) or EC (β = 0.172, p = 0.179), and RI has a positive impact on CA (β = 0.381, p < 0.001) but no significant effect on EC (β = 0.139, p = 0.093). Much as ethical hacking turns a potentially dangerous behavior into one that fortifies systems, these direct effects suggest that ethical governance of AI can steer curiosity and innovation toward beneficial results rather than short-term opportunism.
Results of direct effects
| Relationships | β | SE | CR | p | Decision |
|---|---|---|---|---|---|
| EAI → EC (H1) | 0.212 | 0.092 | 2.311 | 0.021 | Accepted |
| EAI → CA (H2) | 0.162 | 0.061 | 2.665 | 0.008 | Accepted |
| EAI → EM | 0.329 | 0.064 | 5.125 | 0.000 | Accepted |
| EAI → RI | 0.474 | 0.075 | 6.283 | 0.000 | Accepted |
| EM → CA | 0.089 | 0.083 | 1.081 | 0.280 | Not accepted |
| EM → EC | 0.172 | 0.128 | 1.344 | 0.179 | Not accepted |
| RI → CA | 0.381 | 0.071 | 5.374 | 0.000 | Accepted |
| RI → EC | 0.139 | 0.083 | 1.681 | 0.093 | Not accepted |
Results of indirect effects
The study analyzed direct and indirect effects of EAI on EC and CA through mediation analysis; the results are shown in Table 8. With EM as mediator, EAI retained a significant direct effect on EC (β = 0.2193, 95% CI: 0.0846, 0.3541) and CA (β = 0.3356, 95% CI: 0.2139, 0.4972). After accounting for RI mediation, EAI's direct effect on EC (β = 0.2459, 95% CI: 0.1090, 0.3827) and CA (β = 0.2212, 95% CI: 0.0962, 0.3463) also remained significant, supporting hypotheses H1 and H2. EM had a statistically significant mediating effect on the EAI–CA relationship (H3: β = 0.0933, 95% CI: 0.0292, 0.1678) and the EAI–EC relationship (H5: β = 0.1120, 95% CI: 0.0322, 0.2116). RI's mediation was significant for EC (H4: β = 0.0855, 95% CI: 0.0276, 0.1587) and for CA (H6: β = 0.2276, 95% CI: 0.1211, 0.3647). All indirect effects were statistically significant, supporting each hypothesis.
Results of specific indirect effects
| Relationships | H | Effect | Boot SE | Boot LLCI | Boot ULCI | Decision |
|---|---|---|---|---|---|---|
| EAI → EM → CA | H3 | 0.0933 | 0.0355 | 0.0292 | 0.1678 | Accepted |
| EAI → RI → EC | H4 | 0.0855 | 0.0341 | 0.0276 | 0.1587 | Accepted |
| EAI → EM → EC | H5 | 0.1120 | 0.0460 | 0.0322 | 0.2116 | Accepted |
| EAI → RI → CA | H6 | 0.2276 | 0.0625 | 0.1211 | 0.3647 | Accepted |
Results of parallel mediation
When competitive advantage (CA) is dependent variable
As presented in Table 9, the total effect of AI ethics on CA was significant, with RI mediating the relationship, whereas the indirect path through EM was not significant.
Results of parallel mediation when CA is dependent variable
| | Effect | SE | Boot LLCI | Boot ULCI | Significance |
|---|---|---|---|---|---|
| Total Effect | 0.2241 | 0.0667 | 0.1099 | 0.3708 | Significant |
| EAI → RI → CA | 0.2304 | 0.0658 | 0.1158 | 0.3711 | Significant |
| EAI → EM → CA | −0.0063 | 0.0292 | −0.0673 | 0.0487 | Not significant |
When employee commitment (EC) is dependent variable
As shown in Table 10, the total effect of AI ethics on EC was significant, with EM emerging as a significant mediator, while the indirect effect through RI was not significant.
Results of parallel mediation when EC is dependent variable
| | Effect | SE | Boot LLCI | Boot ULCI | Significance |
|---|---|---|---|---|---|
| Total Effect | 0.1378 | 0.0504 | 0.0518 | 0.2462 | Significant |
| EAI → RI → EC | 0.0454 | 0.0322 | −0.0092 | 0.1172 | Not significant |
| EAI → EM → EC | 0.0924 | 0.0459 | 0.0110 | 0.1900 | Significant |
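The indirect effects and bootstrapped confidence intervals reported above follow the percentile-bootstrap logic of the PROCESS macro (Model 4 with two parallel mediators): each indirect effect is the product of the a-path (predictor → mediator) and the b-path (mediator → outcome, controlling for the predictor and the other mediator), and it is judged significant when the bootstrap interval excludes zero. The sketch below is a minimal numpy illustration of that logic, not the authors' code; the variable names (`x`, `m1`, `m2`, `y`) are placeholders.

```python
import numpy as np

def parallel_mediation_boot(x, m1, m2, y, n_boot=2000, seed=0):
    """Percentile-bootstrap CIs for the two indirect effects in a
    parallel mediation model x -> (m1, m2) -> y. Returns, for each
    mediator, (point estimate, Boot LLCI, Boot ULCI)."""
    rng = np.random.default_rng(seed)
    n = len(x)

    def indirect(idx):
        xs, m1s, m2s, ys = x[idx], m1[idx], m2[idx], y[idx]
        ones = np.ones_like(xs)
        # a-paths: each mediator regressed on the predictor
        a1 = np.linalg.lstsq(np.column_stack([ones, xs]), m1s, rcond=None)[0][1]
        a2 = np.linalg.lstsq(np.column_stack([ones, xs]), m2s, rcond=None)[0][1]
        # b-paths: outcome regressed on predictor and both mediators at once
        B = np.linalg.lstsq(np.column_stack([ones, xs, m1s, m2s]), ys, rcond=None)[0]
        return a1 * B[2], a2 * B[3]

    point = indirect(np.arange(n))
    draws = np.array([indirect(rng.integers(0, n, n)) for _ in range(n_boot)])
    lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)
    return {"M1": (point[0], lo[0], hi[0]), "M2": (point[1], lo[1], hi[1])}
```

An interval that excludes zero (as for EAI → RI → CA above) indicates a significant indirect effect; an interval straddling zero (as for EAI → EM → CA) indicates a non-significant one.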
Discussion
This paper examined the relationships between AI ethics, CA and EC among IT employees in India, focusing on the mediating roles of RI and EM. Results showed a significant positive link between AI ethics and EC (H1), suggesting that strong AI ethics enhance loyalty, consistent with prior research (Brougham and Haar, 2017, 2018; Brendel et al., 2021). AI ethics also positively affected CA (H2), indicating that organizations with strong AI ethics gain a competitive edge, aligning with studies by Gottschalg and Zollo (2007), Eltweri (2021), Schiff et al. (2022), Daly et al. (2019) and Taçoğlu et al. (2019). The study highlighted how AI technologies impact meaningful work, emphasizing the need for policies that preserve human dignity amid AI integration (Müller and Bostrom, 2016; Bankins and Formosa, 2023). This is consistent with the ethical hacking analogy: just as ethical hacking probes for weaknesses in order to strengthen systems, embedding AI ethics helps ensure that innovation and curiosity are directed toward long-term resilience rather than opportunistic or exploitative gains.
EM and RI were both found to be significant mediators in the simple mediation analyses. EM mediated the link between AI ethics and CA (H3), showing that motivated employees help translate ethical AI practices into competitiveness, and RI mediated the same link (H6), confirming that strong AI ethics promote responsible innovation and thereby boost competitiveness. Likewise, EM mediated the relationship between AI ethics and EC (H5), and RI mediated that relationship as well (H4), highlighting both mechanisms' roles in fostering loyalty and upholding ethical standards. The parallel mediation analysis, however, qualified these results: when both mediators were entered simultaneously, RI carried the indirect effect on CA while EM did not, whereas EM carried the indirect effect on EC while RI did not. EAI thus works through distinct mechanisms for distinct outcomes, underscoring the multifaceted benefits of EAI practices.
Implications
Theoretical implications
This study advances research by exploring the impact of AI ethics on EC and CA among IT employees in India. It uncovers new relationships, including how AI ethics influence both CA and EC, and highlights the roles of EM and RI in these dynamics. Through two mediation analyses, the study confirms the mediating roles of RI and EM, which had previously been studied only separately. These findings provide novel insights into how AI ethics shape organizational outcomes, contributing to human resource research and paving the way for future studies. By framing AI ethics through the ethical hacking analogy, the work also advances theory, showing how ethical curiosity can promote innovation and strengthen trust, whereas opportunism or misuse undermines both.
Practical implications
This study offers practical insights by integrating EM and RI to illustrate how EAI influences CA and EC in the IT sector. It highlights the significant role of AI ethics in boosting both EC and CA by identifying EAI, EM and RI as key drivers. Ethical considerations such as bias, transparency and privacy are crucial factors influencing these outcomes. Trotta et al. (2023) also emphasized the importance of effective governance and ethical guidelines in leveraging AI for better decision-making and innovation, and advocated stakeholder collaboration to ensure responsible AI development. An organization's ethical approach to AI positively affects its CA, with RI and EM enhancing these effects (Khan et al., 2023). Managers can treat EAI practices much like ethical hacking: applying careful oversight to identify hazards, guard against misuse and ensure that curiosity-driven innovation yields long-term rather than merely short-term gains. The authors recommend strengthening RI, EM and AI ethics to improve EC and competitive edge.
Limitations and future research scope
While the study highlights the significance of EAI, EM, EC, CA and RI, its three-week data collection window limits the ability to infer long-term causal relationships. Future research should adopt longitudinal or quasi-experimental designs to track these dynamics over time. Moreover, the sample, limited to Indian IT professionals, restricts the generalizability of findings. Future studies are encouraged to test this model across different industries, regions and cultural contexts. Exploring moderating variables such as gender, organizational type or technological maturity could also provide deeper insights and broaden the applicability of findings.
Ethical considerations
The study did not require formal ethical clearance as per the policy of the institution, as it involved voluntary participation of adult respondents through an anonymous online questionnaire with no identifiable or sensitive personal data collected. Informed consent was obtained from all participants before data collection.

