Managing AI’s environmental impact
October 6, 2025
The rapid advancement of AI models might be creating new industries and fuelling business transformation, but concerns about their environmental footprint are rising just as quickly. Gartner predicts artificial intelligence (AI) models will account for 50% of IT greenhouse gas (GHG) emissions by 2028, up from about 10% in 2025.
Training and running AI models demand enormous computing power, new IT infrastructure and advanced cooling systems – an investment that strains budgets and can derail sustainability goals.
Yet AI’s environmental footprint extends far beyond energy consumption. Water use, hard-to-track supply chains, e-waste and hidden AI lifecycle costs are routinely overlooked, a problem compounded by the lack of transparent, standardised reporting from vendors.
To ensure sustainable adoption, AI’s environmental impact must be measured and mitigated beyond just calculating direct training and inference energy use.
Truly managing impact requires a shift towards demanding comprehensive transparency and adopting holistic measurement frameworks that integrate sustainability into business strategy. Only then can innovation be balanced with environmental responsibility.
Measurement is key
Accurately measuring the environmental footprint of AI models is essential for managing their impact. The complexity of AI models – size, number of parameters, volume of training data and computational resource requirements – directly determines their sustainability and resource consumption.
Taking an aggregate approach considers the carbon footprint of AI as a subset of the overall IT footprint. Often this will include a baseline measure before and after deployment to assess AI’s relative impact on key measures, including power usage effectiveness (PUE), water usage effectiveness (WUE), IT equipment utilisation (ITEU) and waste.
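As a rough sketch of how that baseline comparison might work in practice, the snippet below computes PUE and WUE before and after an AI deployment. All figures are hypothetical placeholders, not real facility data.

```python
# Aggregate baseline approach: compare facility-level efficiency metrics
# before and after an AI deployment. All readings below are hypothetical.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

def wue(water_litres: float, it_equipment_kwh: float) -> float:
    """Water usage effectiveness: litres of water per kWh of IT energy."""
    return water_litres / it_equipment_kwh

# Placeholder baseline (pre-deployment) vs post-deployment readings.
before = {"facility_kwh": 1_200_000, "it_kwh": 1_000_000, "water_l": 1_800_000}
after = {"facility_kwh": 1_650_000, "it_kwh": 1_300_000, "water_l": 2_600_000}

for label, r in (("before", before), ("after", after)):
    print(f"{label}: PUE={pue(r['facility_kwh'], r['it_kwh']):.2f}, "
          f"WUE={wue(r['water_l'], r['it_kwh']):.2f} L/kWh")
```

The delta between the two readings, rather than either figure in isolation, is what indicates AI’s relative impact under this approach.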
While this provides a high-level understanding of AI’s contribution to global emissions, it doesn’t offer insights into the specific environmental impact of individual AI models. Accurately pinning down the carbon footprint of each AI model can be more challenging, largely due to a lack of detailed data provided by vendors on the energy consumption of many large AI models.
To better capture the complexity of AI’s impact on sustainability, there are a few newly developed model-specific methodologies that can help quantify the environmental footprint at various stages of the AI model life cycle. These include breaking down AI’s environmental impact into component parts – hardware, software, data lifecycle, water use and energy consumption; software-based emission tracking tools; and AI energy scores.
After using one or more of these methods to quantify scope 1 and 2 GHG emissions, add the scope 3 supply chain emissions to round out the final calculation.
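That final calculation is a simple sum across the three scopes, as the illustrative sketch below shows. The figures are placeholders, standing in for outputs from, say, a software-based emission tracker (scopes 1 and 2) and a vendor supply chain disclosure (scope 3).

```python
# Illustrative only: combine model-level scope 1 and 2 emissions, obtained via
# one of the measurement methods above, with scope 3 supply chain emissions.

def total_footprint_tco2e(scope1: float, scope2: float, scope3: float) -> float:
    """Total GHG footprint in tonnes CO2-equivalent across all three scopes."""
    return scope1 + scope2 + scope3

# Hypothetical figures for a single AI model, in tCO2e.
model_footprint = total_footprint_tco2e(scope1=2.0, scope2=48.5, scope3=120.0)
print(f"Total model footprint: {model_footprint:.1f} tCO2e")
```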
These aren’t perfect solutions, but their accuracy is rapidly improving with adoption. Where possible, prioritise component-based measurement, as it is the most accurate of these methodologies.
Taking societal impact into account
Social pushback is one of the largest barriers to effective AI deployment. Some countries have seen public opposition to growth plans for AI datacentres, with community concerns about grid stability and water availability causing delays or cancellations. Organisations must evaluate not only operational efficiency, but also the wider social and environmental consequences of AI infrastructure.
While traditional datacentre design has focused on efficiency and reliability, integrating social equity considerations can generate broader community benefits and strengthen stakeholder trust.
Innovative reuse schemes are a good example. This includes heat recovery systems that supply energy to nearby buildings, water recycling initiatives that support irrigation and industrial use, and partnerships with local recyclers to minimise electronic waste.
Equitable access to renewable energy is another benefit. By investing in new solar or wind farms connected to local grids, AI datacentre operators can help communities access cleaner power while advancing energy justice – ensuring benefits are distributed fairly and vulnerable populations are not left behind.
Embedding sustainability into AI strategy
A clear sustainability plan is essential for leaders to ensure AI use doesn’t outpace environmental responsibility. This means integrating sustainability considerations into every stage of AI development and deployment, accounting for emissions across the entire lifecycle and building in opportunities for reduction.
One of the most effective levers is model efficiency. Designing energy- and carbon-efficient models – such as sparse architectures that require less computation – can dramatically cut energy use.
Leveraging pre-trained models also reduces the resources required for training. For example, instead of using a general-purpose large language model (LLM) such as ChatGPT for coding tasks, a specialised code assistant model can deliver the same functionality with a far lower environmental cost.
Infrastructure also plays a critical role. While cloud deployments often deliver economies of scale and access to providers with renewable energy commitments, not all AI workloads benefit equally. In some cases, on-premises infrastructure can be more sustainable if energy sources are carefully optimised.
The key is to evaluate deployment options on a case-by-case basis, factoring in transparency, renewable sourcing and operational efficiency.
Ultimately, building a sustainable AI strategy isn’t just about cutting carbon emissions – it’s about aligning innovation with long-term resilience, ensuring organisations can harness AI’s benefits without compromising the environment.
Autumn Stanish is a director analyst at Gartner focused on IT sustainability and the role of infrastructure and operations in environmental, sustainability and governance initiatives