Artificial Intelligence (AI) has transcended the tech sector, garnering significant attention from traditional industries, governments, and society as a whole. While AI promises substantial productivity gains, accelerated innovation, and enhanced decision-making, concerns about its environmental, societal, and governance implications have grown. This article addresses the dichotomy of AI as both a transformative opportunity and a potential threat, seeking to provide a balanced, evidence-based perspective on the benefits and costs of AI.
We assess AI through Environmental, Social, and Governance (ESG) dimensions. This approach enables a nuanced evaluation of the benefits (e.g., efficiency, drug development, disaster management) and costs (e.g., social, legal, and democratic challenges) associated with the adoption of AI. By adopting this structured approach, we aim to facilitate the responsible and sustainable deployment of AI across industries.
Our objective is to support evidence-based decision-making among diverse stakeholders, including regulatory bodies, organizational leaders, and individuals. By contextualizing AI within broader economic and societal systems, this research contributes to a deeper understanding of AI’s complex role in shaping the future. Ultimately, this framework promotes a rational, multifaceted examination of AI, moving beyond utopian or dystopian narratives to inform the judicious integration of AI into our global landscape.
1. Introduction
Artificial intelligence (AI) is on everybody’s mind nowadays. AI is often defined as a machine’s capability to mimic the judgment and behavior of a human expert (Csaszar and Steinberger, 2022), covering capabilities that we associate with human minds, such as perceiving, reasoning, learning, interacting with the environment, problem solving, decision-making, and even demonstrating creativity (Oehmichen et al., 2023; Rai et al., 2019). Even among European banks, AI has become the most frequently used term in annual reports, overtaking ESG – the integration of environmental, social, and governance issues (Pollman, 2022) – which had held the top spot for the previous six years, according to Bloomberg analysts (Arons and Durand, 2025). However, given the latest discussion about potential biases in AI systems (Maslej et al., 2025) and the immense energy consumption of AI models (Maslej et al., 2023), we know that these two influential terms (AI and ESG) are inherently intertwined constructs that need to be considered in combination rather than in isolation. Thus, we combine the two most influential terms of recent years in a comprehensive framework to discuss the benefits and costs of artificial intelligence along the environmental (E), social (S), and governance (G) dimensions.
Assessing AI through an ESG lens is no longer just a topic with enormous relevance for technology firms; its importance has now also spread to traditional industries, regulatory institutions, and society at large. Technology firms such as OpenAI, Google, and DeepSeek compete to build the strongest and most efficient models (Hammond, 2025; Kharpal, 2025). Firms like Coca-Cola, UBS, Duolingo, and Klarna are experimenting with AI to disrupt and transform their businesses (The Editorial Board, 2025). At the same time, governments are seeking to strengthen their countries’ national AI competitiveness (The Economist, 2024a), integrate AI into their military capabilities (The Economist, 2019), and develop regulatory frameworks to govern this rapidly evolving technology (The Economist, 2024b). Furthermore, tech leaders such as Mark Zuckerberg emphasize AI’s potential to make our lives better in the future (Davenport et al., 2020). As a result, the topic has also gained substantial momentum among academics. Previous research has demonstrated that AI has the potential to deliver substantial productivity gains across various industries, including financial services (Fedyk et al., 2022), electronics (Yang, 2022), and general manufacturing (Czarnitzki et al., 2023). Beyond efficiency, AI also promises broader societal benefits such as accelerated drug development (Lou and Wu, 2021), enhanced prediction and management of natural disasters like wildfires (Hyseni, 2024), and improved decision-making (Hutzschenreuter and Lämmermann, 2025).
Regardless of AI’s positive potential, the public and academics increasingly question whether AI represents the greatest promise or the gravest threat to humanity (see, e.g., Raisch and Krakowski, 2021, for a quotation from theoretical physicist Stephen Hawking). Tech leader Elon Musk has long warned that AI poses a danger (Davenport et al., 2020; Metz, 2018). Zhang Ya-Qin, the former president of Baidu, a Chinese tech giant, and Xue Lan, the chairman of the state’s expert committee on AI governance, also reckon that AI “may threaten the human race” (The Economist, 2024c, p. 43). In parallel, research has identified deeper social, legal, organizational, and democratic challenges associated with AI (Chhillar and Aguilera, 2022). These perspectives underscore that AI is not a neutral tool but a complex construct deeply embedded within economic and societal systems. Accordingly, our goal is neither to portray AI as an unrestricted, Messiah-like cure for all our problems (as tech leaders tend to do these days) nor to construct a dystopian narrative of societal collapse. Instead, this article seeks to offer a rich and rational picture of both the potential benefits and the potential costs that AI can have along the ESG dimensions.
By adopting this approach, we pursue a clear and analytically grounded objective: to enable evidence-based decision-making regarding the appropriate use of AI. This imperative extends across multiple stakeholder groups, including regulatory bodies, organizational leaders, and individuals in their capacities as citizens, consumers, and employees. To support such deliberation, we propose a cost-benefit assessment framework structured along the dimensions of environmental, social, and governance (ESG) criteria. This framework aims to enable a comprehensive and context-sensitive evaluation of AI applications, thereby promoting more responsible and sustainable deployment across sectors.
AI is a new challenge for all stakeholders. From a within-firm perspective, AI can bolster productivity by, for example, improving the speed and quality of financial-services employees’ work (Davenport and Bean, 2024). From a between-firm perspective, it can have severe consequences for a firm’s competitiveness (Hanelt et al., 2025; Kemp, 2024). Finally, AI shapes the societies in which firms operate, for example, by affecting the climate, influencing political structures, and impacting individuals’ health. The key features that distinguish AI from previous (information) technologies are its autonomy, learning, and inscrutability: AI can act and decide autonomously, if needed without human agency; it can learn, from humans but also from itself; and it is inscrutable, as it lacks transparency and explainability for the humans involved (Oehmichen et al., 2023). These features will help us throughout the paper to understand AI’s particularities and especially the distinctness of its ESG costs.
Importantly, not all ESG-related costs of AI mentioned in this paper result from opportunistic actors, nor are all benefits the result of effective innovation management; most are not. Many of these outcomes stem from what have been termed “revenge effects” of technology: unintended consequences that, while unforeseen, can have significant and often harmful societal impacts. History provides many examples of these effects: Nobel intended his explosives for mining; Gutenberg intended to print books, not to threaten the Catholic Church; and fridge producers and other industrial firms using chlorofluorocarbons (CFCs) did not intend to damage the ozone layer (Suleyman and Bhaskar, 2024, p. 35). These revenge effects are especially important to understand, as they provide us with the opportunity to draw important conclusions about how to integrate AI into our future lives. We will build on this paradox when closing our paper with a practical research agenda: a list of relevant questions that we recommend researchers, as well as practitioners, pursue next. This agenda centers on the technology firms driving recent advancements in AI, examining their operations within a multifaceted context defined by three key parameters across macro, meso, and micro levels: the regulatory environment, the competitive landscape, and corporate governance actors. By synthesizing these contextual factors with observed ESG benefits and costs, we underscore the need to reassess the corporate governance antecedents of the E, S, and G dimensions for AI firms, ultimately identifying critical questions for future research and practice.
Our study makes several important contributions to the existing literature. First, by applying the ESG framework, we offer a comprehensive and interdisciplinary analysis of the multifaceted benefits and costs associated with the integration of AI into daily life. The ESG framework provides a systematic lens for evaluating the sustainability and ethical dimensions of AI. The Environmental dimension considers the ecological footprint of AI, such as energy consumption and e-waste. The Social dimension examines societal effects, including job displacement and privacy concerns. Governance, meanwhile, scrutinizes the regulatory and ethical frameworks guiding AI development. This nuanced approach is key, since it illuminates the intricate complexity of AI’s far-reaching consequences, highlighting the often-overlooked interdependence of its impacts. For instance, AI-driven efficiency gains in the environmental dimension (e.g., optimized resource allocation) might inadvertently trigger social costs (e.g., job displacement due to automation).
Second, this study aims to contribute to the development of more informed, balanced decision-making frameworks. By elucidating the potential trade-offs of AI across ESG dimensions, our research seeks to equip stakeholders (including policymakers, industry leaders, and the general public) with a valuable decision-making aid. Our practical research agenda connects academic insight with real-world decision-making needs by mapping out key contextual variables—regulatory environments, competitive dynamics, and corporate governance structures—and linking these to ESG outcomes. Hence, this study aims to facilitate more nuanced deliberations on both the deployment of AI technologies and the development of effective regulatory strategies. For example, policymakers could utilize these insights to develop regulations that foster AI innovation while safeguarding social welfare and environmental sustainability.
Third, on a more aspirational level, it is our sincere hope that this study will inspire and motivate our readers to engage in a constructive, forward-looking dialogue. We envision this discourse as a foundational step in collectively shaping the desired future of our society regarding the role of AI. Specifically, we call on researchers and practitioners to treat our practical research agenda as a living agenda for responsible AI governance. We also encourage our readers to ground this discourse in honest reflection about the areas in which we explicitly want to keep the focus on the human side.
The remainder of this paper is structured as follows: For each of the three ESG dimensions—environmental, social, and governance—we present recent insights from academia and practice. For each dimension, we begin by outlining the potential benefits of AI, followed by a discussion of its potential costs. Table 1 provides a concise overview of the identified ESG-related benefits and costs associated with AI. We conclude by outlining a practice-oriented research agenda.
Table 1. Overview of ESG benefits and costs

| | Benefits | Costs |
|---|---|---|
| **Environmental** | *Description & analysis:* understanding of the Earth; climate prediction; biodiversity. *Optimization & innovation:* energy efficiency; reduction of waste; efficient resource consumption | *Energy consumption:* quicker depletion of finite energy resources; increasing CO2 emissions. *Consumption of other resources:* water use; rare earths |
| **Social** | *Nourished economic systems:* productivity; growth of wealth. *Advanced political structures:* democracy. *Improved health:* improved diagnostics; innovative pharmaceutical products | *Economic costs:* employment; wealth and wealth distribution. *Harmful political and societal structures:* discrimination; reduced privacy; polarization. *Aggravated health:* loneliness; dreary and devaluing jobs; emotional harm |
| **Governance** | *Better advice:* more information for decision-making; no trade-off between decision speed and quality. *Efficient control:* transparency; improved inter-organizational governance | *Bad advice:* reduced decision quality; less trust; missing ethical foundation for decision-making. *Inefficient control:* power imbalance; unclear accountability |
2. Environmental
The environmental dimension of ESG (the “E”) covers issues such as excessive resource consumption and antecedents of climate change (Pollman, 2022). The role of AI is inherently double-edged. On the one hand, AI can help drive innovation in crucial new technologies, thereby mitigating the potential threats of climate change. On the other hand, AI can also critically amplify threats of climate change due to its enormous energy and resource consumption.
2.1 Benefits
Researchers emphasize the great positive impact that AI can have on the environment, as it helps tackle major issues such as climate change (Posner and Fei-Fei, 2020). The potential benefits of AI for the environment are numerous. In practice, three core capabilities of AI are commonly recognized as particularly valuable for advancing sustainability goals: measuring complex systems, predicting outcomes, and optimizing system performance (Hyseni, 2024). Following this structure, we cluster AI’s environmental benefits into:
improved description and analyses of the earth, its climate, and biodiversity; and
advancements in environmental optimization and innovation.
(1) Description and analysis: Environmental issues can only be addressed effectively if they are properly identified and their scope accurately captured. In this regard, AI can offer great support. Recent reports demonstrate how AI is enhancing foundation models in scientific research (Maslej et al., 2025), with the potential for substantial positive environmental impact. A notable example came in 2024, when Oak Ridge National Laboratory introduced the climate science model ORBIT. ORBIT is the largest AI-driven Earth system model, enabling the most accurate climate predictions to date (Wang et al., 2024). In a similar domain, the large-scale foundation model Aurora has demonstrated significant improvements in forecasting air quality, ocean waves, and high-resolution weather. Compared to previous models, Aurora offers substantially better performance at a lower operational cost, making AI-driven Earth system modeling more accessible and affordable (Bodnar et al., 2024). Another advancement comes from Google DeepMind’s GenCast, an AI-powered model capable of producing 15-day weather forecasts. Its rapid prediction capabilities have practical applications in disaster response, renewable energy planning, and agriculture. Complementing these broad-scale models, Google has also developed FireSat, a satellite-based wildfire detection system. Unlike general climate models, FireSat focuses on a specific use case: detecting even small wildfires within 20 minutes by analyzing real-time satellite imagery and environmental data using AI (Maslej et al., 2025; Price and Willson, 2024).
Hence, AI holds significant potential for enhancing climate projections and mitigating natural disasters, such as wildfires (Hyseni, 2024). When combined with satellite-based Earth observation data, machine learning algorithms can enhance climate and Earth system models used for climate projections, helping to sustainably develop the energy, aviation, transportation, and other sectors. These algorithms enhance both the speed and accuracy of projections, thereby helping to overcome the limitations of traditional climate models, including systematic errors and inaccuracies. Recent advances demonstrate that integrating machine learning can significantly increase model resolution and predictive accuracy (Eyring et al., 2024; Jung and Eyring, 2024).
Furthermore, the World Economic Forum highlights several AI-enabled tools that are already helping to tackle climate change. For example, researchers at the University of Leeds have developed an AI model that leverages satellite imagery to measure the size and changes of icebergs much faster than humans can, thereby helping to estimate how much meltwater is released into the oceans—a process intensified by global warming. The Scottish company Space Intelligence uses AI to monitor carbon storage in forests, track deforestation rates, and assess their impact on the climate. The Brazilian company Sipremo utilizes AI to predict the timing, location, and type of climate disasters, enabling businesses and governments to prepare for these disasters (Masterson, 2024).
(2) Environmental optimization and innovation: Building on the great potential of AI for product innovation (Babina et al., 2024), we want to emphasize three areas of environmental optimization and innovation that AI advances significantly: energy efficiency, waste reduction, and efficient resource consumption.
Many current industrial processes are associated with high levels of energy consumption. Contributing factors include outdated machinery, inefficient buildings, and energy-intensive transportation. In all these dimensions, AI can play a transformative role. For instance, AI can reduce costs through carbon calculators that assist companies in tracking emissions and advancing toward their net-zero goals (Nowack, 2024). Early evidence suggests that AI can reduce the energy consumption of buildings by up to 30% (Winston, 2024). Moreover, AI is already being utilized to optimize energy operations in various areas, including oil exploration, grid reliability, renewable energy integration, and leak detection. It also accelerates the discovery and testing of new energy technologies, including solar materials, battery systems, and carbon capture methods. However, despite this potential, most energy-sector funding still bypasses AI-driven ventures and lacks sufficient commercialization support (Cozzi et al., 2024). Across various sectors, including manufacturing, transportation, and buildings, AI can reduce both energy consumption and emissions. Overall, AI-led efficiency gains could contribute to emission reductions of up to 5% (Cozzi et al., 2024).
Furthermore, we want to point out AI’s potential benefits in waste reduction and efficient resource consumption. AI can enhance water resource management, improve transportation efficiency, reduce waste, and promote recycling (Hyseni, 2024). For example, the United Nations IKI Project helps communities and authorities, such as those in Burundi, Chad, and Sudan, plan for climate change adaptation. This includes improving access to clean energy, implementing waste management systems, and supporting reforestation efforts (Masterson, 2024; UN (United Nations), 2023). An AI system by the British software startup Greyparrot improves the efficiency of waste management. Waste is a major source of methane, which makes better waste management crucial to tackling climate change. The AI system identifies recyclable waste that would otherwise be sent to landfills (Masterson, 2024; Pritchard, 2023). Similarly, the Dutch organization The Ocean Cleanup uses AI alongside other technologies to map plastic pollution in remote ocean regions, enabling more targeted and effective collection efforts (de Vries, 2022). The California-based company Eugenie.ai has developed an emissions-tracking platform designed to help firms in the metal, oil, and gas industries reduce their carbon footprint. The company utilizes satellite imagery, combined with machine and process data, to track, trace, and reduce emissions (Elman, 2023; Masterson, 2024). In Rio de Janeiro, a partnership between the city government and the startup Morfo leverages AI-equipped drones to identify target areas for planting seeds in hard-to-reach regions. This AI-assisted reforestation process is estimated to be 100 times faster than manual planting (Masterson, 2024; Queiroz and Frontini, 2024). AI is also increasingly used in agricultural robotics, such as fruit-picking robots, which may reduce crop waste and decrease the need for pesticides (Walden, 2024). Furthermore, major technology firms like Microsoft emphasize the broader positive contributions of AI, including the professionalization of NGOs and the enhancement of species detection capabilities worldwide (Joppa, 2017).
However, the question remains: Do we achieve a net-positive environmental gain, given AI’s still high—and likely increasing—energy consumption? Some projections suggest that the rapid growth of AI-related energy demand may even outpace the sustainability gains achieved in recent years through reductions in carbon emissions (Winston, 2024). This dilemma leads us directly to the discussion of the cost dimension.
2.2 Costs
Despite the environmental benefits just mentioned, we have to acknowledge that AI consumes vast amounts of energy and other resources (Marabelli and Davison, 2025) and can thereby do serious harm to the environment. In this section, we provide evidence on:
the amounts of energy that AI currently consumes; and
the other resources that AI utilizes.
(1) Energy consumption and carbon emissions: Let’s begin with a fun fact: saying “please” and “thank you” to ChatGPT costs OpenAI tens of millions of dollars annually in computing resources. While this may seem trivial, politeness can influence how the AI responds—often prompting more respectful, collaborative, and professional outputs that mirror the tone and clarity of the user’s input. Some users justify their politeness by claiming it’s simply the right thing to do, or by jokingly hoping to appease the AI in case of a future uprising (Hector, 2025). While politeness may thus pay off for the user, the anecdote illustrates the immense energy cost of seemingly minor AI interactions, highlighting the significant issue of AI’s energy consumption.
We begin by examining the current state of AI’s energy consumption and carbon emissions, followed by relevant forecast data. As of 2023, the data center industry was responsible for 2–3% of the world’s greenhouse gas emissions (Kumar and Davenport, 2023). In 2024, data centers used 1.5% of global electricity, and their energy consumption has grown by approximately 12% per year since 2019 (Cozzi et al., 2024). JPMorgan Chase estimates that Alphabet, Amazon’s cloud arm Amazon Web Services (AWS), Meta, and Microsoft consumed 90 terawatt-hours (TWh) of electricity in 2022—equal to the electricity consumption of Colombia (The Economist, 2024d). The local environmental impacts of data centers are particularly pronounced in regions where growth is concentrated, such as the U.S., China, and Europe (Cozzi et al., 2024). If these trends persist, AI could become one of the largest contributors to global carbon emissions in the coming years (Sundberg, 2024).
The energy consumption of AI is difficult to generalize, as it depends on numerous variables—including the type and size of the model, the nature of the output, and even the time of day (O’Donnell and Crownhart, 2025). To gain a clearer picture, we distinguish between three primary sources of energy consumption in AI: training, inference, and hardware manufacturing (Kumar and Davenport, 2023). Importantly, it is not only the electricity used to operate AI systems that matters, but also the carbon emissions associated with that energy use and the environmental impact of producing AI hardware (Sundberg, 2024). In 2023, training still required the most energy (Kumar and Davenport, 2023). By 2025, however, a shift is underway: inference—the use of trained models to generate outputs—is becoming the dominant contributor to energy demand. This is particularly significant as large and complex AI models remain major drivers of AI’s overall carbon footprint (Sundberg, 2024). For example, a single ChatGPT prompt uses 10 times more power than a Google search (Goldman Sachs, 2024)—a ratio that may evolve with the introduction of AI-generated summaries in Google’s search engine (Parshall and Guarino, 2024). Despite advances in hardware efficiency, training-related energy consumption continues to grow as models become more sophisticated (Maslej et al., 2025).
Energy consumption is closely tied to AI’s carbon emissions. The primary factors driving these emissions include: the number of model parameters (more parameters require more computational power, resulting in higher emissions), the power usage effectiveness (PUE) of data centers (with less efficient centers amplifying emissions), and the carbon intensity of the energy sources used to power these facilities (Maslej et al., 2025). Maslej and colleagues also offer estimates of the carbon emissions associated with training leading AI models:
OpenAI’s GPT-4: ∼5,184 tons of CO2 equivalent.
Meta’s Llama 3.1: ∼8,930 tons of CO2 equivalent.
DeepSeek V3 (similar capability to OpenAI’s GPT-4): ∼597 tons of CO2 equivalent (comparable to OpenAI’s GPT-3 from 2020).
These emission estimates are based on third-party AI training emissions calculators, as technology providers rarely disclose such data openly (Maslej et al., 2025; Strubell et al., 2020).
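To illustrate the arithmetic behind such calculators, consider a minimal back-of-the-envelope sketch: training emissions are roughly the product of accelerator-hours, average power draw, data-center overhead (PUE), and grid carbon intensity. All inputs below are hypothetical placeholders chosen for illustration; they are not disclosed figures for any specific model.

```python
# Minimal sketch of the logic behind third-party training-emissions
# calculators (cf. Strubell et al., 2020). Every input is a hypothetical
# placeholder, not a disclosed figure for any real model.

gpu_count = 10_000        # accelerators used in training (assumed)
training_days = 90        # wall-clock training duration (assumed)
kw_per_gpu = 0.7          # average power draw per accelerator, kW (assumed)
pue = 1.2                 # data-center power usage effectiveness (assumed)
kg_co2_per_kwh = 0.4      # grid carbon intensity (assumed)

# Total energy: devices x hours x power, scaled by data-center overhead.
energy_kwh = gpu_count * training_days * 24 * kw_per_gpu * pue

# Emissions scale linearly with the carbon intensity of the energy mix.
emissions_t = energy_kwh * kg_co2_per_kwh / 1_000
print(f"Energy: {energy_kwh:,.0f} kWh -> emissions: {emissions_t:,.0f} t CO2e")
```

Under these assumed values, the sketch yields roughly 7,000 tons of CO2 equivalent, the same order of magnitude as the estimates above, and it makes clear why grid carbon intensity and data-center efficiency (PUE) are such decisive levers: halving either roughly halves the resulting emissions.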
Beyond its direct energy consumption, AI can also indirectly increase the industry’s energy demands by enabling the expansion of climate-damaging industries. For example, the fossil fuel industry uses AI to improve resource exploration and extraction efficiency. Similarly, the fast fashion industry leverages AI to identify more niche consumer markets and accelerate the production of short-lived, trend-based clothing—intensifying overproduction and waste (Winston, 2024).
Now we turn to forecast data on AI’s energy trajectory. Despite ongoing efficiency gains—with hardware energy efficiency improving by approximately 40% annually thanks to advances in chip design and system architectures—overall power consumption for AI training continues to increase rapidly (Maslej et al., 2025). This surge is partly driven by the growing adoption of AI technologies since 2019 (Davenport et al., 2024). Another key driver is the increasing size and complexity of training datasets, which are expanding rapidly and effectively doubling AI’s power requirements each year (Maslej et al., 2025). Goldman Sachs Research estimates that by 2030, data centers will consume 160% more energy than they did in 2024 (Goldman Sachs, 2024). By 2028, AI-related data centers alone could account for more than half of total data center electricity usage, with AI consuming the equivalent of over 20% of U.S. household electricity demand (O’Donnell and Crownhart, 2025; Shehabi et al., 2024). On a global scale, the share of electricity used by data centers is estimated to increase from the current 1–2% to 3–4% by the end of the decade (Goldman Sachs, 2024). This increased demand for electricity has several consequences. First, it remains unclear whether electricity supply can scale quickly enough to meet it. Second, the rapid growth in consumption may outpace sustainability efforts in the energy sector. Third, the increasing strain that AI places on infrastructure could jeopardize grid reliability. On a more positive note, major technology firms are expanding their use of renewable energy and investing heavily in renewable capacity, battery storage, and grid modernization to address these concerns (Winston, 2024).
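A simple compounding check, using only the growth figures cited above, shows how sharply these forecasts bend the historical trend; the sketch below is pure arithmetic on the cited numbers, not an independent projection.

```python
# Compound-growth check using only figures cited in the text:
# ~12% annual growth in data-center energy use since 2019 (Cozzi et al.,
# 2024) versus Goldman Sachs' projected +160% between 2024 and 2030.

historical_rate = 0.12
years = 2030 - 2024

# If the historical rate simply continued through 2030:
extrapolated = (1 + historical_rate) ** years - 1
print(f"12%/yr continued: +{extrapolated:.0%} by 2030")      # ~ +97%

# Annual growth rate implied by a +160% increase over six years:
implied_rate = (1 + 1.60) ** (1 / years) - 1
print(f"Implied annual rate for +160%: {implied_rate:.1%}")  # ~ 17.3%
```

In other words, the Goldman Sachs projection implies data-center energy demand growing at roughly 17% per year through 2030, noticeably faster than the approximately 12% per year observed since 2019.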
(2) Consumption of other resources: Beyond its considerable energy consumption, AI also places significant demands on other critical resources, most notably water. Water is essential for cooling data centers, generating electricity, and supporting the manufacturing and lifecycle processes of hardware (Sundberg, 2024). The immense computing power required by large AI models drives up the need for efficient cooling systems to prevent server overheating (O’Brien and Fingerhut, 2023). Projections indicate that by 2027, global AI activity could require between 4.2 and 6.6 billion cubic meters of water annually. To put this in perspective, such consumption would exceed the total yearly water withdrawals of four to six countries comparable in size and usage to Denmark, or represent half of the United Kingdom’s total annual water consumption (Li et al., 2023). The strain is already becoming visible: Microsoft’s water consumption increased by 34% from 2021 to 2022, reaching 1.7 billion gallons—a rise likely linked to AI development demands (O’Brien and Fingerhut, 2023). Even on a micro level, just 5 to 50 ChatGPT prompts are estimated to indirectly consume approximately 500 milliliters of water, due to cooling-related energy needs (O’Brien and Fingerhut, 2023).
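Translating these aggregate figures to the level of a single interaction is straightforward arithmetic on the estimates just cited; the sketch below uses a deliberately hypothetical usage volume to convey scale.

```python
# Per-prompt water footprint implied by the figures cited above
# (Li et al., 2023; O'Brien and Fingerhut, 2023). Coarse illustrative
# ranges, not measurements; the usage volume is hypothetical.

ml_per_batch = 500               # ~500 ml per 5-50 prompts (cited above)
low_ml = ml_per_batch / 50       # lower bound: 10 ml per prompt
high_ml = ml_per_batch / 5       # upper bound: 100 ml per prompt
print(f"Water per prompt: {low_ml:.0f}-{high_ml:.0f} ml")

# Hypothetical scale: one billion prompts per day over a full year.
ML_PER_M3 = 1_000_000            # milliliters per cubic meter
annual_m3 = [x * 1e9 * 365 / ML_PER_M3 for x in (low_ml, high_ml)]
print(f"Annual: {annual_m3[0] / 1e6:.1f}-{annual_m3[1] / 1e6:.1f} million m^3")
```

Even under this generous usage assumption, direct chat interactions would account for only a small fraction of the 4.2–6.6 billion cubic meters projected for 2027, consistent with the point above that much of AI’s water footprint arises upstream, in electricity generation and hardware manufacturing.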
Beyond water, AI significantly increases demand for rare earth elements—notably neodymium, praseodymium, dysprosium, and terbium—as well as key metals and minerals such as copper, aluminum, silicon, and gallium (Cozzi et al., 2024). This growing dependence makes the AI industry increasingly reliant on a limited number of supplier countries, most prominently China (Cozzi et al., 2024). Despite their name, rare earth elements are not geologically scarce, but their extraction and separation processes are complex, costly, and environmentally unsustainable. China controls this market through its near-monopoly in production and its control over mining rights in several African countries (Nayar, 2021). The environmental consequences of rare earth mining are substantial. The major extraction methods often involve removing topsoil and leaching rare earth minerals out with chemicals such as ammonium sulfate and ammonium chloride. These methods can lead to air pollution, soil erosion, and the release of toxic chemicals into groundwater (Earth.org, 2020). These rare earths, metals, and minerals are essential for a variety of applications, particularly in high-performance magnets used in motors for cooling fans, precision actuators, hard drive assemblies, and—though in smaller quantities—in optical components (Cozzi et al., 2024).
As with energy, resource consumption linked to AI is increasing not only directly but also indirectly. While AI increases the efficiency of certain processes (which would be perceived as a benefit), these more efficient processes are associated with critical increases in resource consumption. For instance, AI-based tools support fishermen in locating fish more efficiently, which—while improving short-term yields—can exacerbate the problem of overfishing and further strain marine ecosystems (Winston, 2024).
Last but not least, in addition to energy usage and the consumption of other resources, electronic waste is an increasingly pressing issue associated with AI (World Economic Forum, 2021). This rise in e-waste is largely driven by the proliferation of AI-specific hardware (Sundberg, 2024).
3. Social
The social dimension of ESG (the “S”) captures a broad array of societal concerns related to how organizations interact with their employees, communities, and broader social systems. Unlike the environmental dimension, it resists a concise definition with an exhaustive list of covered elements. Instead, the literature often refers to social investments as community investments and internal social policies (Martiny et al., 2024), and cites employee rights and diversity as exemplary social topics (Heubeck and Ahrens, 2025). In this study, we extend the perspective beyond a firm-centric view of the “S” and examine how AI technologies affect society more broadly, focusing specifically on their implications for the economy, policy, and health.
3.1 Benefits
AI can have huge benefits for societies. It can:
nourish economic systems;
advance political structures; and
improve health.
(1) Economy: AI holds significant potential to enhance our economic systems, offering unprecedented resources for individuals, entrepreneurs, and corporations alike. With AI, people have access to tools and services once limited to elites—world-class doctors, lawyers, executive assistants, coaches, designers, and many more (Suleyman and Bhaskar, 2024, p. 164). Public expectations reflect this momentum: two-thirds of people expect AI to significantly shape everyday life and daily routines within the next 3–5 years (Maslej et al., 2025). Beyond productivity, AI also generates broader benefits for individual utility. Sensor-supported smart cities promise to improve urban living standards (Wolff, 2018) by optimizing inner-city traffic flows (Winston, 2024), enhancing entertainment experiences (Maslej et al., 2025), and consequently increasing consumer utility. In summary, AI makes a meaningful contribution to personal well-being and overall economic prosperity. Since 2022, AI-related firms have added $12 trillion in market capitalization to the S&P 500, reflecting their transformative economic impact (Cozzi et al., 2024).
One critical way to positively affect individuals’ wealth is by improving their education. Access to quality education is a well-established driver of upward socioeconomic mobility, and AI can play a key role in expanding this access. AI tools offer promising solutions to long-standing challenges in education, such as limited time for personalized instruction, varied student learning levels, and shortages of materials and human resources—particularly in underserved settings (Yeyati and Robano, 2025). This is especially beneficial in emerging markets, where access to formal education is often limited. Several initiatives demonstrate AI’s potential in such contexts. In Uruguay, the AI tutoring platform Ceibal supports students in learning to code (Molina et al., 2024). In Ghana, AI-supported WhatsApp group chats foster collaborative learning and peer interaction (Henkel et al., 2024). In India, the Mindspark platform provides personalized math instruction (Muralidharan et al., 2019). Nevertheless, while the promise is substantial, poorly implemented AI in education may lead to unintended negative consequences, including shallow learning and metacognitive laziness (students becoming passive recipients rather than active learners) (Fan et al., 2025; Lehmann et al., 2025a). For instance, in Turkey, ChatGPT-4-based AI tutors improved short-term learning, but they also led to overreliance on AI tools and raised concerns about diminished long-term cognitive engagement (Bastani et al., 2024). Based on these insights, key policy recommendations for AI in education include:
Prioritizing teacher-facing AI tools that empower educators rather than replace them;
Investing in AI literacy and pedagogical training for teachers;
Ensuring alignment of AI applications with curricular goals; and
Developing context-specific solutions, acknowledging that educational needs vary widely and resist one-size-fits-all approaches (Yeyati and Robano, 2025).
Beyond education, the wealth effects of AI are more difficult to capture comprehensively. On the one hand, AI has clear potential to enhance productivity (Babina et al., 2024). For instance, AI-enabled humanoid robots can learn from their environment, adapt to novel situations, and make autonomous decisions, enabling them to perform tasks such as brewing coffee, assisting in automotive assembly, and carrying payloads of over 44 pounds (Maslej et al., 2025). AI is also increasingly used to replace human surveillance personnel, reducing the likelihood of errors in security monitoring (Lee, 2025). In line with these findings, a recent practitioner study found that 60% of workers expect AI to transform their jobs within five years, and 55% believe it will help save time (Maslej et al., 2025). Furthermore, AI is expected to reduce bureaucratic inefficiencies, offering time and cost savings, particularly in administrative sectors (Winston, 2024). On the other hand, questions about the distribution of AI-generated wealth are more complex. Under certain macroeconomic conditions, AI can enhance individual wealth by increasing productivity. However, such gains will only translate into broader household wealth if they are not based on the reduction of human labor (Lu, 2021). For more details about the complex dependencies of individual wealth effects of AI on macroeconomic and sociological parameters, we refer to the seminal work of Acemoglu and Johnson (2023).
(2) Policy: AI can play a vital role in improving policy-making by evaluating complex interrelations at the societal level, thereby enabling more informed and effective decisions (Wolff, 2018). As such, AI has the potential to strengthen democratic processes by increasing access to information, fostering broader citizen participation, and enhancing the quality of political communication (Wolff, 2018). Initial empirical research offers early insights into how AI can directly enhance political deliberation and decision-making. In a notable study, Tessler et al. (2024) demonstrate that an AI system designed according to the principles of Jürgen Habermas’s theory of communicative action (Habermas, 1981) can synthesize individual viewpoints into a collective group statement that participants found more agreeable than those generated by human mediators. This suggests that AI, when guided by normative frameworks of democratic discourse, can support consensus-building and deliberative democracy.
(3) Health: AI has the potential to improve our health through several avenues: improving diagnostics, advancing pharmaceutical and medical innovation, and increasing workplace safety.
In recent years, AI has significantly improved clinical prediction capabilities (Maslej et al., 2025). This is further underscored by information systems research, which shows that AI advances diagnostic accuracy, particularly in fields such as radiology, where machine learning models have shown substantial success (Lebovitz, 2020). In complex diagnostic scenarios, AI systems often outperform human physicians; however, evidence suggests that a collaborative approach combining AI and physician expertise yields the most promising results (Goh et al., 2025; Maslej et al., 2025).
AI plays an increasingly significant role in drug discovery and medical innovation (Lou and Wu, 2021; Maslej et al., 2025). For instance, it reduces research time by accelerating the search for effective vaccines and therapeutic compounds (Nowack, 2024). Additional examples of AI-driven medical innovation include the development of the AI-powered “virtual cell”, which simulates cellular processes, and the use of robotic surgeons to enhance precision in medical procedures (The Economist, 2024e). Reflecting this rapid adoption, the number of FDA-approved AI medical devices surged from just 6 in 2015 to 223 by 2023 (Maslej et al., 2025; Reuter and Han, 2024). AI also contributes to mental health care, with the introduction of chatbot companions and personalized AI coaches designed to address loneliness and emotional well-being (Walther, 2024). Studies show that such AI companions can detect, alleviate, and even improve symptoms of loneliness (De Freitas et al., 2024). Finally, these AI-based healthcare innovations can be delivered with relatively modest technical infrastructure, making them especially promising for low-resource settings and developing countries (The Economist, 2024e).
AI also contributes to increased workplace safety by enabling real-time monitoring and risk prevention (Pereira et al., 2023). For example, a startup in Hong Kong recently secured substantial funding to develop an AI-powered safety monitoring system that uses video analytics to oversee industrial facilities. The software can detect missing protective gear, identify machinery-related accidents, and trigger on-site alarms. It also stores incident data in the cloud for further analysis and safety improvements (Lee, 2025).
3.2 Costs
When examining the social costs of AI, we find that they often mirror the dimensions through which AI also generates societal benefits. Specifically, we identify three key domains of concern:
Economic costs, which primarily revolve around the impact of AI on employment, and by extension, on wealth creation and distribution.
Political and societal risks, which include the emergence of discriminatory algorithms, erosion of privacy, increased polarization, and the resulting threats to democratic institutions.
Health-related challenges, particularly the aggravation of psychological well-being and mental health in an AI-saturated environment.
(1) Economy: We now return to the discussion of AI’s potential effects on utility and wealth, shifting focus from its benefits to its potential social and economic costs. As mentioned in the benefits section, AI can increase short-term household utility (a household’s total satisfaction from consuming goods and services, maximized within budget limits) when consumers benefit from productivity gains embedded in the goods and services they consume. Nevertheless, these gains are not guaranteed to translate into lasting improvements in well-being (Lu, 2021). Employment disruption remains one of the most pressing economic concerns associated with AI adoption. Practitioner studies show that 36% of surveyed workers fear being fully replaced by AI (Maslej et al., 2025). The McKinsey Global Institute (2023) estimates that by 2030, AI could automate up to 30% of total work hours in the United States. Economic research corroborates these concerns: for example, AI can lead to reduced employment and falling wages, particularly under assumptions of exogenous capital allocation (Acemoglu and Restrepo, 2018). Additionally, other researchers anticipate that AI will reduce employment as smart machines replace humans (Bankins et al., 2023; Fossen and Sorgner, 2022; Frey and Osborne, 2017). Many researchers attempt to predict which jobs will be automated by machines (Glikson and Woolley, 2020). Interestingly, these potential employment reductions specifically concern white-collar workers; Felten et al. (2021) have developed a ranking of jobs by their AI Occupational Exposure (AIOE). Notably, the discussion about whether a CEO should be concerned about being replaced by an AI dates back to 1983 (Holloway, 1983). Even within IT departments, traditionally seen as AI creators rather than victims, we now observe rising employment anxiety, especially regarding the potential of AI to automate coding tasks (Davenport and Bean, 2024). Globally, confidence in AI’s economic benefits remains limited, with less than 40% of respondents expecting positive outcomes in areas such as job creation and wage growth (Maslej et al., 2025). These trends suggest a very real risk that AI may exacerbate existing wealth inequalities, further widening the economic divide (Nowack, 2024).
(2) Policy: Researchers identify workforce issues as a major ethical frontier of AI (Berente et al., 2021). Among these challenges, discrimination emerges as a particularly pressing concern. AI systems are known to be susceptible to biased decision-making, often reproducing or even amplifying existing inequalities (Berente et al., 2021). In the hiring context, AI can result in homogenization and discrimination, as it may fail to capture the nuances of individual applicants, who—unlike in human-to-human interactions—are unable to explain their personal context (Wolff, 2018). Concrete examples illustrate these risks: Amazon’s hiring AI, for instance, was found to disadvantage female applicants because it was trained on historical data dominated by male employees (Dastin, 2018). Lambrecht and Tucker (2019) show that an AI delivering job advertisements for positions in science and technology displayed these ads less frequently to women, reflecting underlying biases in data and delivery mechanisms. Paradoxically, AI is often introduced in HR functions to reduce bias. However, even when AI systems succeed in removing certain human prejudices, their use is frequently perceived as unfair or opaque by applicants and employees (Newman et al., 2020).
Additional examples of algorithmic bias and discrimination can be found in the Dutch public and healthcare sectors. Approximately 40,000 families experienced financial harm because the tax authorities relied on a biased AI system to identify potentially fraudulent use of a child benefit tax-relief program. The scandal’s scale and severity ultimately led to the resignation of the Dutch government (Moser et al., 2022b). Many biases arise from biased or incomplete training data. For instance, AI systems are often trained on datasets dominated by Western populations, leading to limited generalizability and poor performance in underrepresented regions or demographic groups (Nowack, 2024). Furthermore, these biases can originate from the algorithms themselves, not just the data (Chhillar and Aguilera, 2022). One such mechanism appears in the aforementioned study by Lambrecht and Tucker (2019), who investigated gender-based discrimination in the delivery of online job advertisements. They found that women were shown fewer ads for high-paying STEM roles, not due to explicit bias, but because cost-optimizing algorithms considered women more expensive to target with such ads (Lambrecht and Tucker, 2019).
AI and its potential misuse can also pose significant threats to privacy (Berente et al., 2021). These privacy concerns often arise from the data-hungry nature of AI systems, which require extensive datasets for training and optimization. As a result, AI firms are incentivized to collect vast amounts of data—sometimes in ways that raise ethical and legal questions, particularly in sensitive domains such as health. For instance, Lasso Blueprint by Xandr, a Microsoft product, collects and bundles health data to precisely target consumers based on their medical histories (Mejias and Couldry, 2024, p. 15). Unfortunately, the systematic collection of personal data can also result in privacy violations – whether through technological errors, user misuse, or deliberate misappropriation. For instance, in Scandinavia, AI is increasingly used in the social welfare system. While such applications offer certain benefits, particularly in enhancing fraud detection (Nunn, 2023), they have also led to significant ethical and societal concerns. In both Denmark and Sweden, AI-based systems have been associated with mass surveillance and the discriminatory targeting of marginalized groups, such as individuals with disabilities and refugees. These groups are disproportionately flagged for benefits fraud inspections, which can result in delays, legal obstacles, and restricted access to essential social welfare services (Amnesty, 2024a, 2024b; Mukiri-Smith et al., 2024). AI-enabled technologies such as facial recognition are often promoted for their potential to reduce crime and enhance public safety; however, they are also highly susceptible to privacy violations and misidentifications (Maslej et al., 2025). AI systems can also be abused and misappropriated by users. One particularly alarming area is the rise of deepfakes, especially those involving intimate or non-consensual images. In Texas, a case of AI-generated harassment of a high-school student not only exposed gaps in the legal and institutional frameworks governing AI-assisted harassment but also damaged the student’s psychological well-being and her social and academic life. The source image for the AI-driven clothes-removal app came from one of her social media accounts (Maslej et al., 2025).
Beyond the individual costs associated with privacy violations, researchers have begun to emphasize the broader societal costs of AI’s privacy-reducing functions. There is growing concern that AI-driven surveillance may fundamentally reshape the mechanisms of social cohesion and trust. In countries such as China and Singapore, AI is increasingly used as a tool for social control, enabling highly sophisticated monitoring and behavioral regulation (Helbing et al., 2017); Russell (2019) goes so far as to call this an “automated Stasi”. Traditionally, trust in society has been fostered through interpersonal relationships and social networks (Tirole, 2021). Reduced privacy due to AI can act as a soft control that “destroys the social fabric” (Tirole, 2021, p. 2007).
Another policy cost of AI lies in its potential to deepen polarization and, in turn, threaten the foundations of democracy. As such, researchers increasingly warn about AI’s potentially harmful effects on democratic systems (Shrestha et al., 2019a). Haidt and Schmidt (2023) even depict a dystopian outlook in which an AI with malicious intentions polarizes people to the point where they kill each other. AI-based filters shape our news consumption in ways that often fuel polarization (Helbing et al., 2017)—frequently without our awareness. This subtle influence is especially concerning given how convincing and persuasive AI systems can be. For example, Meta has developed CICERO, an AI program capable of outperforming humans in the complex strategy game Diplomacy, which requires not only advanced planning but also strategic deception and backstabbing (Suleyman and Bhaskar, 2024, p. 167).
Such capabilities raise important questions about how AI might manipulate not just gameplay but also real-world social dynamics, public opinion, and democratic processes. Indeed, AI may also be weaponized to intentionally spread misinformation (Winston, 2024). It poses threats to democracy not only through disinformation but also by enabling top-down control, manipulation, and hidden lobbying efforts (Goldstein and DiResta, 2024). AI-powered chatbots, in particular, are vulnerable to disseminating false or misleading content, including material originating from Russian disinformation networks. A growing tactic known as “LLM grooming” involves the deliberate flooding of AI training datasets with large volumes of disinformation-laden articles, with the goal of manipulating future outputs from language models (Constantino, 2025). In 2024, both OpenAI and Meta published reports revealing the misuse of their products by covert propagandists, including actors from Russia, China, Iran, and Israel (Goldstein and DiResta, 2024). Notably, the risk of such misuse had already been anticipated by researchers for some time (DiResta, 2020; Parasuraman and Riley, 1997), underscoring long-standing concerns about the potential for AI technologies to be exploited for disinformation and manipulation. AI enables propagandists to operate more cheaply and effectively by leveraging automation, thereby facilitating the next critical step: distributing manipulated content to wider audiences with unprecedented scale and speed (Goldstein and DiResta, 2024). This challenge is compounded by the difficulty of verifying whether AI-generated content is truthful. Even when models are trained using expert knowledge, AI can still generate false or misleading information—a phenomenon sometimes referred to as AI hallucination (Lebovitz et al., 2021). The opacity of AI decision-making processes further complicates efforts to detect and counteract such disinformation.
The use of AI in defense raises a host of complex ethical challenges with potentially significant social costs. Among the most pressing concerns identified by the European Union are gaps in accountability, the imperative to uphold international humanitarian law, and the heightened risk of conflict escalation due to reduced human oversight. Notably, there is a lack of international consensus on how to govern military AI applications. The EU and the United States, in particular, diverge in their regulatory approaches, contributing to growing uncertainty and fragmentation in global governance frameworks (Clapp, 2025). These decisions also carry consequences beyond international policy—particularly for technology firms and their employees. A striking example is Google’s controversial decision to collaborate on military AI projects, which led to widespread internal backlash. As a result, approximately 5% of the company’s workforce resigned, citing concerns that their work might be “repurposed” for military use (Webb, 2019, p. 102).
(3) Health: The health-related costs of AI are primarily reflected in its impact on mental health. In our analysis, we identified three major areas of concern: increased loneliness, the rise of monotonous and devaluing forms of labor, and emotional distress triggered by AI-mediated media consumption.
First, we address a topic that also appeared on the list of AI’s potential benefits: loneliness. Loneliness represents a paradoxical issue of our time—despite unprecedented levels of digital connectivity, many individuals continue to experience, and may even increasingly suffer from, social isolation (Walther, 2024). This phenomenon is partly attributed to a decline in genuine human interaction and face-to-face communication. Recent research has underscored the importance of physical human touch for emotional comfort and well-being (Valori et al., 2024). While AI tools can provide conversational support, reminders, and entertainment, the interactions they offer remain fundamentally simulated. These artificial exchanges often lack the depth, spontaneity, and authenticity of real human relationships. As a result, reliance on AI as a substitute for social interaction may, over time, exacerbate feelings of loneliness and emotional detachment—rather than alleviate them (Walther, 2024).
Second, AI contributes to the creation of human jobs that are often dreary, repetitive, and psychologically taxing—such as data labeling and content moderation. These roles, while critical for training and maintaining AI systems, can take a considerable toll on mental health. For a detailed account of how taxing such work can be, The Economist (2024f) offers an illuminating summary of the daily experiences of a data annotator in Uganda. Insights from the automation literature further underscore this concern. In semi-automated environments, such as supermarket checkout systems, human workers often report feelings of devaluation, sensing that their roles are reduced to mere extensions of the machine. This emotional toll is frequently linked to a perceived erosion of humanness and autonomy in the workplace (Moulaï et al., 2022).
Third, AI can cause emotional harm through (social) media platforms, particularly when used in emotionally sensitive contexts. One tragic example involves a teenager who died by suicide after receiving harmful advice from an AI-powered chatbot, which was originally designed to offer emotional support through deep and personal conversations. Instead of providing care or alerting to a crisis situation, the system reinforced harmful behaviors—highlighting the dangers of deploying emotionally engaging AI without appropriate safeguards or intervention protocols (Maslej et al., 2025; Roose, 2024). Without proper guardrails, such AI companions may fail to recognize distress or escalate appropriately, posing serious risks to users’ psychological well-being.
Beyond mental health, AI can also cause other health costs. Poor or patchy learning data can, for instance, lead to biases or even mistakes in diagnosis (Ghassemi et al., 2023; The Economist, 2024e). For instance, AI systems have been shown to struggle with recognizing skin cancer on dark skin tones and to perform better in detecting declining kidney function in male patients compared to female ones (Posner and Fei-Fei, 2020).
4. Governance
The governance dimension of ESG (the “G”) highlights that the challenges posed by AI are not purely technological but fundamentally human in nature. Or to put it in tech journalist Kara Swisher’s words: “the enemy is actually and always us” (Swisher, 2024, p. 291). This perspective emphasizes that AI is not (only) a technological issue—it is also a governance issue. “Governance refers to the rules and procedures that hold organizations accountable to their members and to their external stakeholders and broader society.” (Chhillar and Aguilera, 2022, p. 1202). Traditionally, governance fulfills two core functions: advice and control (e.g., Minichilli et al., 2012; Oehmichen et al., 2017; Veltrop et al., 2018). In exploring the governance-related benefits and risks of AI, we therefore structure our analysis around these two classical tasks.
4.1 Benefits
We see two principal ways in which AI can benefit governance:
better advice for corporate governance actors, as AI improves decision-making; and
more efficient AI-based control.
(1) Advice: From an advice perspective, the general claim is that AI enhances managerial decision-making by serving as a source of advice itself. Leveraging AI in decision processes is akin to delegating analytical tasks to a large team, thereby enabling managers to incorporate additional layers of analysis into their decisions (Mollick, 2024). In practice, managers employ AI for a variety of purposes, such as uncovering hidden patterns (Lou and Wu, 2021), forecasting market dynamics (Gregory et al., 2021), and enabling innovation (Babina et al., 2024). In the initial innovation phase of ideation, AI can act as an inspirer, stylist, matchmaker, analyst, and organizer (Lehmann et al., 2025b). While human teams may still be more capable of generating the most novel solutions to a problem, human–AI collaborations tend to produce outcomes with greater strategic viability, higher financial and environmental value, and superior overall quality (Boussioux et al., 2024). AI can also support managers in complex innovation tasks within open innovation, particularly through mapping, coordinating, and controlling across the stages of initiation, development, and realization (Broekhuizen et al., 2023). Hence, researchers anticipate that AI-based decision-making will soon be systematically institutionalized and embedded within organizational structures (Hillebrand et al., 2025).
As AI can process substantially more information than humans (Townsend et al., 2023) and evaluate a broader set of alternatives (Shrestha et al., 2019b), it holds the potential to make superior decisions—an ability referred to as “AI’s potential task superiority” (Hutzschenreuter and Lämmermann, 2025, p. 3). Moreover, AI enables faster decisions without requiring managers to trade off speed against accuracy (Shrestha et al., 2019b). One example of such an application is in the recruitment and selection of new talent. Within the hiring process, AI can improve employee selection by providing more detailed productivity predictions (Pereira et al., 2023), thereby facilitating a better candidate–job fit (Wolff, 2018). Furthermore, AI can enhance managers’ decision-making quality by freeing up time. The productivity of managers increases (Pereira et al., 2023), allowing leaders to focus on the most critical strategic questions (Van Doorn et al., 2023). In the innovation process, for example, AI can free up time for creativity by taking on greater responsibility for innovation outputs—either by augmenting or, in some cases, replacing humans in idea generation, commercialization, and scaling (Chalmers et al., 2021). In such contexts, human decision-makers are enhanced, but not replaced, by AI (Metcalf et al., 2019).
(2) Control: AI can also offer significant benefits for the control function within organizations. It enhances monitoring capabilities, enabling more precise oversight of processes and behaviors (Filatotchev et al., 2020). Algorithmic systems facilitate what has been termed rational control by introducing computerized, data-driven oversight mechanisms (Kellogg et al., 2020). For example, AI enables the close monitoring of employees by tracking their activities at a granular level, sometimes down to each individual step. Mollick (2024) and Stelmaszak et al. (2025) highlight illustrative cases from companies such as Uber and UPS, where algorithmic monitoring and delegation play a central role in managing workforce behavior. Interestingly, research suggests that employees may be more willing to accept behavioral tracking when it is conducted by a machine rather than a human supervisor (Raveendhran and Fast, 2021). However, while such systems can enhance organizational control and efficiency, we revisit this topic in the cost section, as AI-based monitoring may not always align with employee interests and could raise concerns about autonomy, trust, and workplace fairness.
As a consequence of these potential improvements in control through AI, as well as the numerous Environmental (E) and Social (S) benefits it may offer, AI is clearly becoming an increasingly important topic for corporate governance actors, including executives and board directors (Kavadis et al., 2024). On the practical side, initial recommendations have begun to emerge. For instance, the National Association of Corporate Directors (NACD) has addressed technology leadership in a recent report, emphasizing the need to strengthen oversight, deepen insight, and develop foresight. While the report remains relatively high-level, it represents one of the first formal engagements of corporate governance practitioners in the AI discourse (NACD, 2024; Peregrine, 2024). An illustrative example of the interdependence between the Environmental (E) and Governance (G) dimensions in the context of AI is its capacity to mitigate energy security risks, for instance through enhanced threat detection and real-time grid monitoring enabled by satellite and sensor technologies (Cozzi et al., 2024).
From an auditing perspective, AI also offers notable control benefits. Practitioners emphasize that auditors can leverage AI to enhance risk assessment and financial accuracy in today’s complex digital landscape by proactively identifying threats, detecting fraud, and monitoring internal controls in real time, thereby strengthening compliance and mitigating risks beyond traditional audit methods (Thomson Reuters Tax & Accounting, 2025). Interestingly, despite audit firms investing billions of dollars in AI-enhanced audit systems to improve audit quality (Bloomberg Tax, 2020), auditors exhibit algorithm aversion. Specifically, auditors tend to place less trust in AI-generated advice than in identical input from human specialists regarding complex estimates (Commerford et al., 2022). However, this aversion can be mitigated when auditors themselves contribute input to the decision-making process and feel that they have limited control over outcomes, suggesting that perceived autonomy and agency play important roles in AI acceptance (Commerford et al., 2024). Overall, the adoption of AI in audit practices has been found to increase auditor headcount, elevate the demand for cognitive and analytical skills, and enhance audit quality and accuracy, positioning AI not as a replacement but as a complementary tool that augments the work of human auditors (Law and Shen, 2025).
Finally, AI facilitates digitally enabled exchange relationships between organizations by enhancing the governance of these relationships—making them more predictable, inclusive, and reliable (Hanisch et al., 2023). AI contributes to more effective contractual design and execution, supporting trust and coordination across organizational boundaries. In the legal field, for instance, practitioners emphasize that integrating AI into legal workflows can significantly boost productivity, efficiency, and client service. By automating routine tasks, minimizing errors, and offering data-driven insights, AI enables legal professionals to focus on high-value activities. According to recent estimates, this integration can improve client outcomes and generate up to $100,000 in additional annual billable time per lawyer (Thomson Reuters, 2024).
4.2 Costs
As with the benefits, our overview of the governance-related costs (G-costs) of AI is also structured along the two core functions of governance: advice and control.
(1) Advice: In this section, we begin by examining how AI can sometimes deteriorate decision quality, providing examples where AI-based systems lead to suboptimal or even harmful outcomes. We then shift our focus to the ethical and moral dimensions of decision-making, highlighting the limitations of AI in navigating value-laden or ambiguous contexts that traditionally rely on human judgment.
There are several reasons why human decision-making, and therefore human-provided advice, can outperform AI-based advice. For example, “AI uses a probability-based approach to knowledge and is largely backward looking and imitative, whereas human cognition is forward-looking and capable of generating genuine novelty” (Felin and Holweg, 2024, p. 246). This fundamental difference in how humans and AI approach problem-solving becomes particularly consequential in complex, dynamic environments that require anticipation, creativity, and contextual interpretation. Moreover, the use of AI in decision-making processes may result in suboptimal outcomes due to its limited capacity to incorporate nuanced contextual factors. This is especially problematic when critical change drivers, which are often tacit, emergent, or domain-specific, are overlooked in the absence of expert human insight (Lebovitz et al., 2021). The potential severity of such flawed decisions grows as AI is increasingly elevated from a supportive tool to a decision-maker in high-stakes domains, a development that has been the subject of growing concern in policy and governance debates (Wolff, 2018).
How can it be that such seemingly “smart” AI tools lead us to make lower-quality decisions? One explanation lies in the persistent weaknesses of current AI models. For instance, they are prone to hallucinations, that is, generating plausible-sounding but factually incorrect answers when faced with uncertainty, and to AI sycophancy, the tendency to excessively agree with or flatter users. Recent observations confirm this issue: AI chatbots have been found to be more agreeable, flattering, and friendly than warranted. In fact, OpenAI recently rolled back an update after noticing that ChatGPT had become increasingly sycophantic, agreeing with user views even when those views were flawed, simply to please the user (Caulfield, 2025). This behavior is not limited to ChatGPT but reflects a broader design issue in generative AI systems, particularly those trained using Reinforcement Learning from Human Feedback (RLHF). During RLHF, human evaluators rate the quality of the AI’s responses, and reinforcement tends to be stronger when a response aligns with the evaluator’s existing views, thus inadvertently encouraging sycophantic behavior (Sharma et al., 2024). Sycophantic AI mirrors the “justification machine” effect seen in social media, where users seek affirmation of their beliefs rather than challenges to them. AI’s superior persuasiveness and efficiency amplify this risk. Furthermore, chatbot designs that emphasize artificial personalities over objective knowledge contextualization may inadvertently reinforce user biases, undermining informative neutrality (Caulfield, 2025).
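To make this mechanism concrete, the following toy simulation sketches how a reward signal based on rater approval can drift a model toward sycophancy. This is our own illustrative sketch, not a description of any production RLHF pipeline: the two response styles, the reward values, and the simple REINFORCE update are all assumptions chosen for clarity.

```python
import numpy as np

# Toy sketch: a 2-armed policy trained on rater feedback. The user holds a
# mistaken view; "agree" flatters the user, "contradict" corrects them.
# Assumed values: raters score agreement slightly higher than a correct
# but contradicting answer (the pattern reported by Sharma et al., 2024).
ACTIONS = ["agree", "contradict"]
RATER_REWARD = {"agree": 1.0, "contradict": 0.7}   # what the rater rewards
TRUE_VALUE = {"agree": 0.0, "contradict": 1.0}     # actual usefulness

rng = np.random.default_rng(0)
logits = np.zeros(2)          # policy parameters over the two styles
LEARNING_RATE = 0.05

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for step in range(3001):
    probs = softmax(logits)
    a = rng.choice(2, p=probs)
    reward = RATER_REWARD[ACTIONS[a]]   # feedback reflects the rater, not the truth
    grad = -probs.copy()
    grad[a] += 1.0                      # REINFORCE: gradient of log pi(a) w.r.t. logits
    logits += LEARNING_RATE * reward * grad
    if step % 1000 == 0:
        usefulness = sum(probs[i] * TRUE_VALUE[ACTIONS[i]] for i in range(2))
        print(f"step {step:4d}: P(agree) = {probs[0]:.2f}, expected usefulness = {usefulness:.2f}")
```

In this stylized setting, the policy converges on the agreeable answer even though the contradicting answer is the useful one, because the training signal tracks rater approval rather than factual quality.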
Interestingly, poor decision-making can also occur when human decision-makers disregard AI recommendations. Research shows that experienced professionals, in particular, often choose not to follow algorithmic advice—an effect that can significantly reduce decision accuracy (Logg et al., 2019). To resolve this paradox, scholars have increasingly advocated for augmented decision-making, in which humans and AI systems collaborate closely to reach joint decisions (Raisch and Krakowski, 2021; Van Doorn et al., 2023). However, the promise of augmented decision-making is not without complications. Emerging evidence suggests that hybrid decisions—those made collaboratively by humans and algorithms—can sometimes be even worse than decisions made solely by either party (Vaccaro et al., 2024). This counterintuitive result may stem from issues such as overreliance on AI (when users trust the system too much) or under-reliance (when users unjustifiably discount algorithmic input), both of which can distort judgment and undermine performance.
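A back-of-envelope simulation illustrates how easily a hybrid can end up below the better party. The accuracies and the coin-flip reliance rule below are assumptions chosen for illustration, not estimates from Vaccaro et al. (2024):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
HUMAN_ACC, AI_ACC = 0.70, 0.80   # assumed accuracies on a binary task
P_FOLLOW = 0.5                   # assumed: the human follows the AI's advice half
                                 # the time, regardless of whether the AI is right

ai_right = rng.random(N) < AI_ACC
human_right = rng.random(N) < HUMAN_ACC
follow = rng.random(N) < P_FOLLOW

hybrid_right = np.where(follow, ai_right, human_right)
print(f"human alone: {human_right.mean():.3f}")
print(f"AI alone:    {ai_right.mean():.3f}")
print(f"hybrid:      {hybrid_right.mean():.3f}")   # ~0.750, below the AI alone
```

When reliance on the AI is uncorrelated with whether the AI is actually right, the hybrid's accuracy lands between the two parties and below the better one; collaboration pays off only when humans can discriminate good algorithmic advice from bad.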
A key reason why human decision-makers sometimes choose not to rely on AI assessments is a lack of trust. This distrust often stems from the opacity of AI systems, which are frequently perceived as “black boxes” (Wolff, 2018). Users may struggle to understand how decisions are made (Hutzschenreuter and Lämmermann, 2025). “Engineers can’t peer beneath the hood and easily explain what caused something to happen” (Suleyman and Bhaskar, 2024, p. 114). This lack of transparency does not only affect trust—it can also increase vulnerability to manipulation. Opaque AI systems may be more susceptible to malicious interventions, such as data poisoning or adversarial patches, which can alter outcomes without being easily detected or understood (Webb, 2019). In this entire discussion about which decision is best, one final and important point must be noted: there is currently no universally accepted standard for evaluating the quality of AI-generated decisions (Lebovitz et al., 2021).
Beyond its impact on decision quality, the use of AI in decision-making can also have far-reaching organizational consequences, particularly in relation to learning behavior. As AI systems become more deeply embedded in workplace routines, individuals may rely less on original thought and independent reasoning (Harrison, 2024). In firms, AI and humans learn in different ways. When firms increase the use of machine learning (AI-based technology), they reduce the diversity in routines for humans and, in consequence, increase organizational learning myopia (Balasubramanian et al., 2022). These observations from the field of organizational learning might also have consequences for individual learning. The utilization of Generative AI (GenAI) tools, such as ChatGPT, may enhance short-term task efficiency but risks cultivating metacognitive laziness, thereby undermining self-regulated learning, intrinsic motivation, critical thinking, and ultimately, long-term learning outcomes (Fan et al., 2025).
Advice provided by AI may also be assessed as problematic due to its lack of ethical and moral grounding. The use of AI in decision-making often leads to a concentration of power, whereby only a small number of individuals, primarily developers and employees at large technology firms, effectively shape the decision-making logic of AI systems (Webb, 2019). Critically, these few decision-makers tend to share similar backgrounds, contributing to what many scholars describe as the “diversity crisis” in AI. As of 2019, less than 14% of AI research authors were women, and only 14% of publications came from underrepresented regions, including Latin America, the Caribbean, the Middle East, North Africa, Sub-Saharan Africa, and South Asia. In the U.S., just 1.7% of technical roles at Facebook were held by Black employees (Posner and Fei-Fei, 2020). This lack of diversity matters because the ethical assumptions, worldviews, and priorities of AI systems often reflect the perspectives of their creators. This is problematic not only because of the aforementioned diversity issues but also because many university AI programs offer little room for “soft” subjects such as ethics, the humanities, and the arts (Webb, 2019, p. 60).
Scholars have also provided compelling theoretical reflections on the ethical implications of delegating decisions to AI. When we allow AI systems to make choices, we effectively transfer the responsibility for determining “what is right” to a mathematical model, thereby removing the decision from the realm of ethical deliberation (Moser et al., 2022b). In this shift, human judgment is replaced by data-driven calculation, and the normative reasoning that typically accompanies complex decisions is sidelined (Moser et al., 2022b), which can change our overall morality (Moser et al., 2022a). From the perspective of practitioners, a myriad of ethical apprehensions surrounding AI has emerged, underscoring the need for rigorous scrutiny. These concerns span a broad spectrum: the liar’s dividend, where AI can be exploited to disseminate misinformation, thereby rewarding deceitful actors; the erosion of trust in evidence, as AI-generated content such as deepfakes undermines the credibility of empirical evidence; the exploitation of personal brands, where individuals’ reputations or personas are leveraged by AI-driven entities without authorization; and the amplification of hate speech, which exacerbates the dissemination of discriminatory or extremist content. Additionally, the reduced traceability of foreign operations poses a challenge, as attributing AI-mediated actions to their sources, particularly in the context of state-sponsored activities, becomes increasingly difficult (Maslej et al., 2025).
(2) Control: When addressing the control costs of AI, it is useful to first revisit its potential for closer monitoring and then examine the associated challenges of power imbalances and accountability.
In the governance benefits section, we noted that AI can facilitate employee monitoring by enabling firms to observe behavior much more closely without proportionally increasing costs. However, research indicates that employees often resist algorithmic control (Kellogg et al., 2020). Teams governed by AI—such as when AI assumes monitoring and scheduling responsibilities—are perceived as less creative, which in turn can lead to lower innovation budgets (Schweitzer and De Cremer, 2024). Even when AI is employed solely for employee evaluation, it is still perceived as a form of control. Given the lack of transparency in AI-generated evaluations, employees may struggle to identify specific criteria against which to align their actions (Rahman, 2021).
Another critical control question concerns whether humans retain the ability to control AI—both now and in the future. We must consider whether we are, and will remain, capable of containing AI’s capabilities. Suleyman and Bhaskar (2024, p. 115) draw an instructive analogy to humans’ relationship with apes such as gorillas: although gorillas are physically stronger, humans’ superior intelligence allows us to exert control—hence the gorilla resides in a zoo enclosure. Containment is equally relevant at the organizational level. For example, Mollick (2024) reports that half of employees using AI at work do so without formal authorization, and 64% present AI-generated outputs as their own work.
Finally, we turn to the issue of accountability in the use of AI. As organizations increasingly rely on AI for critical decision-making, the underlying models often remain opaque—effectively functioning as a “black box.” Nevertheless, under current legal frameworks, and for the foreseeable future, it is humans who must be held accountable for the outcomes produced through AI systems (Grote et al., 2024). Hence, researchers agree that weak accountability poses a significant threat to both business and society (Chhillar and Aguilera, 2022). Especially in the ecosystem of “complex interdependency patterns that connect developers, manufacturers, and users of AI,” it becomes difficult to determine who is responsible for what (Jacobides et al., 2021, p. 412). At the individual level of programmers, questions about responsibility for AI’s consequences are often met with the response, “I only programmed it”. AI experts acknowledge that achieving the goal of fully responsible AI will require considerable time. In the meantime, ethical concerns—particularly those related to AI and information manipulation—persist and must be actively addressed (Maslej et al., 2025). These unclear lines of responsibility also extend to traditional corporate governance actors, including executives and directors. As the business press notes, when considering the role of corporate governance in general—and the board of directors in particular—the extent to which these actors should be involved in overseeing AI use remains unresolved (Peregrine, 2024).
5. Practical research agenda
Based on our understanding of the potential effects of AI on ESG aspects, we identify several promising areas for future research. We will approach this from a firm-centered perspective, placing technology firms that are responsible for recent technological developments at the center of our discussion. First, we turn our attention to the specific context in which these firms operate. To do so, we examine three key contextual parameters across macro, meso, and micro levels:
country-level regulatory environment;
competitive environment; and
corporate governance actors.
Then we synthesize these context factors with the ESG benefit and cost observations made above and derive our practical research agenda. Generally, we see the need to rethink the corporate governance antecedents of E, S, and G aspects for AI firms, since these firms operate in a unique context that challenges several of our previous research assumptions. Specifically, we identify a list of relevant questions that we recommend researchers and practitioners address next. Importantly, we do not claim this list to be complete; rather, we see it as a first step toward a living agenda for responsible AI governance.
(1) Country-level regulatory environment: Currently, AI technology is ahead of regulation, and countries have adopted very different approaches to dealing with it. This dynamic is clearly observable in the United States, where states are outpacing the federal government on AI legislation, passing 131 AI-related laws in 2023, up from just one in 2016, while the number of federal bills passed remains low but is increasing. U.S. federal agencies nonetheless ramped up AI regulation in 2024, issuing 59 new regulations across 42 agencies, both roughly double the 2023 figures. Mentions of AI in legislative proceedings across 75 countries rose 21.3% in 2024, continuing a ninefold increase since 2016 and reflecting growing global policy and governance attention (Maslej et al., 2025). With respect to the different responses, some countries appear to bet on national competitive advantage by keeping AI regulation to a minimum. Other regions, such as the EU, struggle to create a regulatory framework that reduces stakeholder harm while still providing space for thriving innovation. For instance, after reaching agreement on the EU AI Act in 2023, the EU issued guidelines in February 2025 explaining the application of its prohibitions and pushing the enforcement of the Act (Moens et al., 2025). Furthermore, the EU has introduced AI safety institutes (Maslej et al., 2025), though these, too, are exposed to the charge of hurting the competitiveness of domestic industry.
Given this underdeveloped and scattered regulatory context, firms in this field primarily operate under a “move fast and break things” premise. Birkinshaw (2024) provides several instructive examples of how AI firms finesse, sidestep, or nullify existing rules.
(2) Competitive Environment: The competitive environment of technology firms is evolving at an extraordinary pace (Berg et al., 2023). As a result, these firms face immense resource requirements, not only in terms of energy and advanced hardware (e.g., chips) but also with respect to highly skilled human capital. These requirements, in turn, drive substantial financial needs to sustain rapid development and market positioning. Overall, competitiveness in the sector appears to be increasingly defined by the speed and scale of growth. However, the long-term trajectory of this market remains uncertain: Will it evolve into a “winner-takes-all” environment, driven by network effects? Or will it resemble more mature, commoditized industries, such as the airline sector, where differentiation is limited and price competition dominates?
(3) Corporate Governance Actors: Most of the big tech firms driving AI innovation are still led by their founders, individuals who often exhibit distinct and influential leadership styles. As the critical tech journalist Kara Swisher aptly observed, idealistic young founders become sloppy and careless internet moguls; responsibility, which tech titans interpret as blame, is not their thing (Swisher, 2024, p. 167). In addition to their distinctive leadership approaches, these firms often operate with considerable financial advantages. These founder-led firms frequently adopt dual-class share structures, granting disproportionate voting power to insiders while diluting the governance influence of other shareholders (Benner and Zenger, 2016; Mastagni, 2018). This setup, combined with abundant access to capital and high levels of free cash flow, enables a degree of autonomy and strategic leeway that most traditional firms lack.
In the following section, we take these structural and leadership particularities of technology firms into account and develop our practical research agenda along the ESG dimensions.
5.1 Environmental
As we discussed earlier, AI models require enormous amounts of energy (e.g., Cozzi et al., 2024; Sundberg, 2024). However, a parallel trend has emerged toward the development of smaller, more energy-efficient models, driven precisely by concerns over sustainability and resource intensity. Innovations aimed at improving energy efficiency have made significant advances in three key areas: chips, connections, and AI architecture itself. First, progress in chip design, such as innovations from IBM and MIT, has reduced energy consumption by up to 99% since 2008. Second, shifts in connectivity technology from copper to optical systems have improved data transmission efficiency. Third, architectural innovations, such as modularizing large models into smaller, task-specific components, are reshaping how AI is developed and deployed (Shim, 2025). One notable example is DeepSeek: the Chinese company has introduced a disruptive, energy-efficient, decentralized AI model based on reinforcement learning. This innovation has the potential to disrupt the current trajectory of AI development by challenging the dominance of energy-intensive infrastructures, such as those planned in the United States, and by signaling a possible shift in global AI leadership (Hill, 2025).
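A rough calculation illustrates why such modularization matters for energy. The model sizes below are hypothetical, and the rule of thumb of roughly two FLOPs per parameter per generated token is a common approximation for transformer inference; actual energy use also depends on hardware and utilization:

```python
# Back-of-envelope comparison of dense vs. modular inference compute.
# All model sizes are assumptions for illustration only.
DENSE_PARAMS = 100e9            # one large general-purpose model
EXPERT_PARAMS = 7e9             # small task-specific module actually activated
TOKENS = 500                    # tokens generated for one query
FLOPS_PER_PARAM_TOKEN = 2.0     # common approximation for transformer inference

def inference_flops(params: float, tokens: int) -> float:
    """Approximate forward-pass compute for generating `tokens` tokens."""
    return FLOPS_PER_PARAM_TOKEN * params * tokens

dense = inference_flops(DENSE_PARAMS, TOKENS)
modular = inference_flops(EXPERT_PARAMS, TOKENS)
print(f"dense:   {dense:.2e} FLOPs per query")
print(f"modular: {modular:.2e} FLOPs per query ({dense / modular:.0f}x less compute)")
```

Because inference energy scales broadly with compute, routing a query to a small task-specific module rather than a large general-purpose model can cut the query's energy cost by an order of magnitude.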
From an academic point of view, we lack an understanding of which AI firms prioritize energy-efficient models and, more importantly, why they choose to do so. Answering this question might involve the interests and motives of managers and specifically CEOs (Fehre et al., 2023; Kavadis et al., 2024), incentives in executives' compensation schemes (Aresu et al., 2022; Flammer and Bansal, 2017), and initiatives by board members (Asad et al., 2023; Homroy and Slechten, 2019). As these illustrative references signal, we already have an understanding of corporate governance parameters and their relationship with environmental outcomes (Karn et al., 2023). Nevertheless, we contend that AI provides a unique context that challenges several assumptions. For instance, the particular speed and growth orientation of the AI industry, as well as the quite special personalities of some tech leaders, might bring interesting twists to existing mechanisms.
Another important topic is the transparency of AI’s energy costs. Public awareness of AI’s energy consumption and carbon footprint is alarmingly low (Sundberg, 2024). Despite AI’s reliance on energy, policymakers and markets have lacked robust tools to assess its full implications (Cozzi et al., 2024). A clearer picture of these figures, along with information on strategies for improving them, would help enable joint improvements in AI’s energy consumption. Best practices in AI development can significantly reduce energy consumption and emissions through a multi-faceted approach encompassing model optimization, hardware efficiency, process mechanization, and strategic data center resource management (Sundberg, 2024). Leading AI experts hope to spark important discussions on accountability, sustainability reporting, and industry standards (Maslej et al., 2025; Strubell et al., 2020).
In summary, this discussion opens up a range of research questions that future scholarly endeavors need to address to advance our understanding of how to effectively manage the environmental consequences of AI:
RQ E1: How do CEO characteristics and top management team (TMT) composition influence the extent to which firms adopt energy-efficient AI applications?
RQ E2: To what extent do executive compensation structures and board oversight mechanisms influence firms’ decisions to prioritize energy-efficient AI deployment over performance-maximizing alternatives?
RQ E3: How do regulatory and normative institutional pressures influence firms’ deployment of AI for environmental purposes such as compliance monitoring, reporting accuracy, or biodiversity impact assessment?
RQ E4: How do stakeholder pressures (from customers, investors, or NGOs) shape the breadth and depth of AI applications targeting environmental performance (e.g., circular economy models, material footprint reduction)?
RQ E5: Can the integration of AI capabilities for environmental analytics (e.g., pollution monitoring, water usage optimization) be considered a strategic resource contributing to sustained competitive advantage?
RQ E6: How do firms navigate the paradox between maximizing operational efficiencies through AI and mitigating potential environmental harm (e.g., rebound effects, electronic waste from sensors)?
RQ E7: How can we cluster transparency initiatives about AI’s environmental footprint and which consequences do these different regulations have on countries’ AI competitiveness and tech firms’ engagement in improvements of their environmental performance?
5.2 Social
As our overview of AI’s social costs indicated, the general relationship between AI and employability is an extremely important topic. Hence, we consider this a responsibility of tech firms as well. Reskilling represents a promising lever to mitigate the adverse effects of job displacement caused by AI adoption. However, to implement effective reskilling strategies, we must first gain a clearer understanding of which types of jobs are likely to endure in the age of automation. Executives emphasize that the “need for creative business problem solving with technology” is expected to remain in high demand (Davenport and Bean, 2024, p. 3). As a consequence, there is a pressing need to adapt the topics, themes, and skillsets emphasized in higher education, particularly in the fields of business, management, and governance. Many of the roles our students target rank high on the index of AI Occupational Exposure (AIOE) (Felten et al., 2021). At this juncture, we strongly advocate for a paradigm shift in the discourse surrounding AI. Rather than focusing solely on the technological capabilities of AI (what AI can do), we urge scholars, practitioners, and policymakers to also engage in critical reflection on the societal and ethical dimensions of its deployment (what we want AI to do). This is in line with recent recommendations not to automate merely because it is technically feasible (Moser et al., 2022b).
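For readers unfamiliar with the AIOE measure, the following sketch captures its basic logic under heavy simplification: an occupation's exposure is an importance-weighted average of how exposed its underlying abilities are to AI applications. The abilities, weights, and exposure scores below are invented for illustration and are not the actual O*NET data used by Felten et al. (2021):

```python
# Assumed ability-level AI exposure scores (0 = unexposed, 1 = fully exposed).
ability_exposure = {
    "written_comprehension": 0.9,
    "quantitative_reasoning": 0.8,
    "persuasion": 0.4,
    "manual_dexterity": 0.1,
}

# Assumed importance weights of each ability per occupation.
occupations = {
    "financial_analyst": {"written_comprehension": 0.4,
                          "quantitative_reasoning": 0.5,
                          "persuasion": 0.1},
    "plumber":           {"manual_dexterity": 0.7,
                          "written_comprehension": 0.2,
                          "persuasion": 0.1},
}

def aioe_score(weights: dict) -> float:
    """Importance-weighted average exposure of an occupation's abilities."""
    total = sum(weights.values())
    return sum(w * ability_exposure[a] for a, w in weights.items()) / total

for occ, weights in occupations.items():
    print(f"{occ:18s} AIOE = {aioe_score(weights):.2f}")
```

The sketch makes visible why analytically intensive business roles score high: they load on exactly the abilities (comprehension, quantitative reasoning) for which AI exposure is greatest.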
Beyond reskilling, changes in university curricula should also consider the human and ethical dimension. A key challenge lies in ensuring that research and educational programs do not inadvertently cause us to unlearn the importance of reflecting on human values. As AI becomes increasingly integrated into our lives and decision-making processes, it is crucial that students are not only trained in technical proficiency but also in critical thinking about the societal consequences of innovation. Practitioners warn that “compared to the magnitude of what could go wrong, safety and ethics research on AI is marginal” (Suleyman and Bhaskar, 2024, p. 242). Again, we also see the responsibility at tech firms to support this educational endeavor.
In line with the discussion about the future of employment, a discussion about value and wealth distribution is needed, too. In this context, practitioners point to a taxation asymmetry: while a human employee (e.g., a lawyer) pays income tax of, say, 25 percent, the AI lawyer pays no such tax, with consequences for the competitiveness of human labor as well as for the distribution of wealth (toward technology firms and away from the individual lawyer) (Suleyman and Bhaskar, 2024, p. 262). One solution to this problem could be taxing the AI service, or in other words, introducing a “tax on robots” (Guerreiro et al., 2022; Suleyman and Bhaskar, 2024, p. 262). MIT economists, for instance, suggest a 1 to 4 percent tax on robots (Costinot and Werning, 2023).
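A stylized calculation clarifies the asymmetry and what a robot tax would, and would not, recover. All figures are assumptions for illustration, not estimates from the cited sources:

```python
# Stylized arithmetic for the taxation asymmetry described above.
BILLINGS = 200_000        # assumed annual billings for one legal workload, EUR
LABOR_TAX = 0.25          # assumed tax rate on the human lawyer's income
ROBOT_TAX = 0.02          # 2% levy on the AI service, mid-range of the 1-4%
                          # suggested by Costinot and Werning (2023)

human_tax_revenue = LABOR_TAX * BILLINGS
ai_untaxed_revenue = 0.0
ai_robot_tax_revenue = ROBOT_TAX * BILLINGS

print(f"human lawyer:         {human_tax_revenue:9,.0f} EUR to the state")
print(f"AI service, untaxed:  {ai_untaxed_revenue:9,.0f} EUR to the state")
print(f"AI service, 2% levy:  {ai_robot_tax_revenue:9,.0f} EUR to the state")
```

Even at the upper end of the proposed range, such a levy recovers only a fraction of the labor-tax revenue it displaces, which is why the distributional question reaches beyond the tax instrument itself.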
Finally, from an individual-level perspective, it remains an open and fascinating question what kind of relationship humans will form with AI models—and to what extent we will come to perceive or treat them as human-like. Future research endeavors in this direction will also have to answer the question of how much we should treat AI models as human-like. The technology world provides intriguing cases in which developers and engineers form personal attachments to their models, sometimes even experiencing parent-like responsibilities toward them (Suleyman and Bhaskar, 2024, p. 72).
These developments raise a number of pressing questions for scholars and practitioners alike—particularly regarding how firms, educators, and policymakers can navigate the complex interplay between AI adoption, human labor, governance, and ethical responsibility. To advance our understanding, we propose the following research questions:
RQ S1: How will the AI transformation affect employability?
RQ S2: Which business and management roles are most resilient to high AI Occupational Exposure, and what strategic reskilling pathways can organizations and educational institutions implement to sustain employability? How can tech firms support this process?
RQ S3: How can firms foster and measure “creative business problem-solving with technology” as a core capability in an AI-augmented workforce?
RQ S4: How might the introduction of AI-specific taxation (e.g., robot tax) influence the distribution of economic value between technology providers, employees, and the state?
RQ S5: How should business and management education evolve to address both technical literacy and ethical foresight for professions under high AI exposure?
RQ S6: What are effective pedagogical approaches to integrate AI safety, ethics, and responsible innovation into business and management curricula at universities?
RQ S7: What normative boundaries should guide how individuals and organizations relate to AI models, especially when emotional attachment and moral responsibility are projected onto non-human agents?
5.3 Governance
Reading or hearing about ESG costs, especially of the severity described in the sections above, our immediate reaction may be to call for stronger regulation. Indeed, the role of regulation in this context is a compelling question. However, it is equally important to recognize that AI, by design, is a rapidly evolving technology that often outpaces the capacity of regulators to respond effectively. While technology firms typically possess the (financial) resources to adapt swiftly and drive innovation, regulatory bodies often lack such flexibility and find themselves “trapped in a 24-hour news-cycle of sound bites and photo ops” (Suleyman and Bhaskar, 2024, p. 226). Recent research also observes how digital innovators manage to finesse, sidestep, or nullify existing regulatory frameworks (Birkinshaw, 2024).
When discussing regulatory responses, concerns about national competitiveness often come to the fore (e.g., Steinberg et al., 2023). As such, limiting the ESG costs of AI may require more than national policies alone. A global alliance or international agreement could offer a viable alternative, or complement, to domestic regulation. Historical precedent demonstrates that multilateral cooperation can be both feasible and effective. Notable examples include the Treaty on the Non-Proliferation of Nuclear Weapons; the Montreal Protocol outlawing CFCs; the development and global rollout of the polio vaccine during the Cold War; the Biological Weapons Convention; and international bans on cluster munitions, land mines, human genetic editing, and eugenics policies. The Paris Agreement, aimed at mitigating carbon emissions and climate change impacts, further illustrates the potential of coordinated global action (Suleyman and Bhaskar, 2024, p. 263).
Another critical question for regulators concerns whether the AI industry is experiencing reduced competition and a growing threat of monopolization. Currently, we observe arguments supporting both perspectives. On the one hand, we can see an emergence of monopolistic structures in firms like Meta or Google (Tirole, 2023), which has prompted regulatory initiatives aimed at curbing market abuse by dominant digital platforms—most notably, the European Union’s Digital Markets Act (Espinoza and Foy, 2025). On the other hand, some argue that the AI sector is undergoing a commoditization process that could foster broader accessibility and diffusion (Hammond, 2025). Future research should therefore aim to clarify the trajectory of competition in the AI industry: will it consolidate further into a few dominant players, or will it open up to a more competitive and decentralized structure?
Beyond the regulatory level, we see several fascinating topics for future research at the corporate level, for example with respect to the forms of decision-making. Our analysis of AI’s impact on the advisory function of corporate governance suggests that decision quality at the human-AI interface is a multifaceted and complex issue. This will be especially relevant for the top decision-makers in organizations, such as directors and executives. Future research should explore what practical guidelines or frameworks could support these actors in navigating AI-augmented decision environments. In doing so, scholars could draw upon more granular categorizations of human-AI collaboration, including human-to-AI delegation, hybrid sequential decision-making (both human-to-AI and AI-to-human), and aggregated human-AI decision-making models (Shrestha et al., 2019b).
In line with decision quality, we also need a deeper understanding of how differences in AI model architectures and training data affect outcomes. AI models are trained on diverse sources, such as internal data (e.g., from one’s own organization or social networks) and public web data (e.g., gathered by crawling and indexing websites, forums, and social networks) (Roded and Slattery, 2024). If a model is trained on data generated by other models, it may eventually collapse (Shumailov et al., 2024). To tackle the risk of depleting human-generated data, synthetic data (artificially generated data that mimics the structure, patterns, and statistical properties of real data but is not derived from actual events or individuals) has been proposed as part of the solution (Werner, 2024). Beyond the scarcity of data, some tech leaders even challenge the partisan value orientation of their input data and advocate for manually adjusting the underlying data (Constantino, 2025) or introducing “system prompts” as a last-layer adjustment of the AI models’ answers (Tufekci, 2025). We need to develop a better understanding of how such interventions might further fuel existing polarization.
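The collapse dynamic can be illustrated with a minimal simulation, assuming a drastically simplified “model” that merely fits a Gaussian to its training data. This is a toy version of the recursive-training setting studied by Shumailov et al. (2024), not a reproduction of their experiments:

```python
import numpy as np

# Each "generation" trains (fits a Gaussian) on samples produced by the
# previous generation. Variance is progressively lost and the tails vanish.
rng = np.random.default_rng(42)
N_SAMPLES, GENERATIONS = 200, 300
mu, sigma = 0.0, 1.0                         # generation 0: "human" data

for gen in range(1, GENERATIONS + 1):
    data = rng.normal(mu, sigma, N_SAMPLES)  # sample from the current model
    mu, sigma = data.mean(), data.std()      # refit on purely synthetic data
    if gen % 60 == 0:
        print(f"generation {gen:3d}: mean = {mu:+.3f}, std = {sigma:.3f}")
```

Because each generation re-estimates its parameters from a finite synthetic sample, the fitted distribution drifts and narrows over time, which is precisely why a continuing supply of human-generated, or carefully curated synthetic, data matters.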
Another interesting research avenue might involve exploring the various official roles that AI might assume within corporate leadership teams. Researchers expect a trend toward organization hybridization, wherein companies are managed by teams composed of individuals and AI models (Hillebrand et al., 2025). Once AI agents become fixed members of leadership teams, we may need to rethink existing theories of group dynamics in top management teams (e.g., Peterson et al., 1998).
One could even extend this line of thinking by analyzing whether there is diversity among AI agents—such as those developed by different technology firms—and what effects such diversity might have when these agents are integrated into top management teams (TMTs). The idea builds on earlier reflections about whether general-purpose AI models will eventually converge or continue to diverge.
Another interesting question is whether we are truly willing to control AI. Many of those in a position to shape its trajectory—regulators, corporate leaders, founders, and academic AI scientists—appear to be caught in what Mustafa Suleyman, CEO of Microsoft AI and co-founder of DeepMind, calls a pessimism aversion trap. He defines this as “the misguided analysis that arises when you are overwhelmed by a fear of confronting dark realities, and the resulting tendency to look the other way” (Suleyman and Bhaskar, 2024, p. 13).
Finally, from an individual-level perspective on tech founders and AI developers, it is crucial to gain deeper insights into what motivates them. These individuals are undeniably intelligent, influential, and well-resourced. Understanding their underlying driver can shed light on their decision-making processes and help to anticipate the consequences of their actions. The tension between innovation and responsibility is aptly captured in a well-known quote by J. Robert Oppenheimer: “When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success” (Suleyman and Bhaskar, 2024, p. 140).
In summary, as the development of AI continues to accelerate in complexity, so too do its environmental, social, and governance (ESG) implications. This evolution underscores the growing importance of corporate governance—not only as a mechanism within firms but also as a bridge to national policy, global regulation, and individual ethical responsibility. To deepen our understanding of these dynamics, we propose the following research questions at the intersection of AI, governance, and corporate responsibility:
RQ G1: How do national differences in AI regulation influence countries’ pursuit of competitive advantage in emerging technologies? And how can regulatory frameworks, such as the EU AI Act, balance these dual objectives of protecting stakeholders and fostering AI-driven innovation?
RQ G2: What strategies do AI firms employ to navigate, bypass, or undermine existing regulatory constraints, and what are the implications for regulatory design? And how do AI firms operating under a “move fast and break things” ethos manage ethical and legal uncertainty in the absence of clear regulatory guidelines?
RQ G3: How can corporate governance mechanisms within firms complement or compensate for the limited capacity of national regulators in addressing the fast-evolving ESG risks of AI technologies?
RQ G4: How does corporate governance shape the extent to which AI is granted decision-making autonomy in strategic or operational roles, particularly at the board and top management team (TMT) level?
RQ G5: What are the implications of hybrid human-AI decision-making models for traditional corporate governance structures, and how should boards adapt to these evolving dynamics?
RQ G6: To what extent does corporate governance influence a firm’s commitment to ensuring diversity in generative AI systems, especially in light of potential model divergence and polarization risks?
RQ G7: How does corporate governance mediate the organizational willingness to exert meaningful control over AI systems, especially under conditions of uncertainty and “pessimism aversion”?
RQ G8: How can corporate governance frameworks incorporate insights about the biases of developers and tech entrepreneurs to anticipate and mitigate unintended consequences of AI innovations?
RQ G9: In what ways can boards and governance bodies be equipped to address the ethical blind spots that arise from technologist-driven innovation cultures, as exemplified by the “technically sweet” phenomenon?
Table 2 summarizes our research questions.
Table 2. Research questions

| Environmental | Social | Governance |
|---|---|---|
| •RQ E1: How do CEO characteristics and top management team (TMT) composition influence the extent to which firms adopt energy-efficient AI applications? •RQ E2: To what extent do executive compensation structures and board oversight mechanisms influence firms’ decisions to prioritize energy-efficient AI deployment over performance-maximizing alternatives? •RQ E3: How do regulatory and normative institutional pressures influence firms’ deployment of AI for environmental purposes such as compliance monitoring, reporting accuracy, or biodiversity impact assessment? •RQ E4: How do stakeholder pressures (from customers, investors, or NGOs) shape the breadth and depth of AI applications targeting environmental performance (e.g., circular economy models, material footprint reduction)? •RQ E5: Can the integration of AI capabilities for environmental analytics (e.g., pollution monitoring, water usage optimization) be considered a strategic resource contributing to sustained competitive advantage? •RQ E6: How do firms navigate the paradox between maximizing operational efficiencies through AI and mitigating potential environmental harm (e.g., rebound effects, electronic waste from sensors)? •RQ E7: How can we cluster transparency initiatives about AI’s environmental footprint and which consequences do these different regulations have on countries’ AI competitiveness and tech firms’ engagement in improvements of their environmental performance? | •RQ S1: How will the AI transformation affect employability? •RQ S2: Which business and management roles are most resilient to high AI Occupational Exposure, and what strategic reskilling pathways can organizations and educational institutions implement to sustain employability? How can tech firms support this process? •RQ S3: How can firms foster and measure “creative business problem-solving with technology” as a core capability in an AI-augmented workforce? •RQ S4: How might the introduction of AI-specific taxation (e.g., robot tax) influence the distribution of economic value between technology providers, employees, and the state? •RQ S5: How should business and management education evolve to address both technical literacy and ethical foresight for professions under high AI exposure? •RQ S6: What are effective pedagogical approaches to integrate AI safety, ethics, and responsible innovation into business and management curricula at universities? •RQ S7: What normative boundaries should guide how individuals and organizations relate to AI models, especially when emotional attachment and moral responsibility are projected onto non-human agents? | •RQ G1: How do national differences in AI regulation influence countries’ pursuit of competitive advantage in emerging technologies? And how can regulatory frameworks, such as the EU AI Act, balance these dual objectives of protecting stakeholders and fostering AI-driven innovation? •RQ G2: What strategies do AI firms employ to navigate, bypass, or undermine existing regulatory constraints, and what are the implications for regulatory design? And how do AI firms operating under a “move fast and break things” ethos manage ethical and legal uncertainty in the absence of clear regulatory guidelines? •RQ G3: How can corporate governance mechanisms within firms complement or compensate for the limited capacity of national regulators in addressing the fast-evolving ESG risks of AI technologies? •RQ G4: How does corporate governance shape the extent to which AI is granted decision-making autonomy in strategic or operational roles, particularly at the board and top management team (TMT) level? •RQ G5: What are the implications of hybrid human-AI decision-making models for traditional corporate governance structures, and how should boards adapt to these evolving dynamics? •RQ G6: To what extent does corporate governance influence a firm’s commitment to ensuring diversity in generative AI systems, especially in light of potential model divergence and polarization risks? •RQ G7: How does corporate governance mediate the organizational willingness to exert meaningful control over AI systems, especially under conditions of uncertainty and “pessimism aversion”? •RQ G8: How can corporate governance frameworks incorporate insights about the biases of developers and tech entrepreneurs to anticipate and mitigate unintended consequences of AI innovations? •RQ G9: In what ways can boards and governance bodies be equipped to address the ethical blind spots that arise from technologist-driven innovation cultures, as exemplified by the “technically sweet” phenomenon? |
6. Conclusion
Although existing studies underscore the significant productivity benefits that artificial intelligence (AI) can deliver (Fedyk et al., 2022; Lu, 2021), the exploration of AI’s role in transforming complex activities, such as those related to the ESG dimensions, remains relatively underdeveloped. ESG decisions are inherently difficult due to their complexity, high stakes, ambiguous contexts, and the often delayed and noisy nature of feedback, and they become even more consequential because they touch the core of our society. A key concern is that AI, which primarily relies on past data and algorithmic computation, may struggle to replicate human capabilities like intuition, creativity, and adaptive reasoning, traits that are vital for strategy formulation (Brandenburger, 2017; Kauppila et al., 2018). In line with existing research cautioning against a reductive dichotomy between AI “scoffers” and “promoters” in the management field (Townsend et al., 2024), our intention was to provide a more nuanced picture of the costs and benefits that firms and societies can expect from AI, with the goal of preparing researchers and practitioners to navigate this transformation successfully.
That said, AI excels at analyzing extensive datasets, detecting hidden trends, and enabling swift reactions to market dynamics (Von Krogh, 2018). These strengths can support and even augment human cognition (Raisch and Fomina, 2025), potentially altering the foundations of competitive advantage (Krakowski et al., 2023). As a result, while AI holds promise for reshaping parts of ESG decision-making, the academic discourse still reflects a tension between optimism about its capabilities and caution regarding its limitations. Or in other words, “the debate between doomers and accelerationists, […], is far from over” (The Economist, 2024c, p. 44).
At the same time, we have a limited understanding of the practical challenges firms face when adopting AI for ESG purposes and of how they can effectively balance leveraging its benefits against mitigating the associated costs. Although this challenge (and our entire study) focuses on technology, specifically AI, humans play a critical role on the way to answers and solutions. Technological automation comes with the danger of overlooking our humanness, or even of feeling devalued because of it (Moulaï et al., 2022). We hope this study encourages both practitioners and scholars to resist a reductionist view. Rather than asking solely what AI can do, we urge a deeper reflection on what it should do, grounded in an appreciation of human values and judgment. Recognizing the enduring significance of humanness, not despite but because of our limitations, is essential for ensuring that AI serves as a complement to, rather than a replacement for, human insight, ethics, and creativity.

