The turn of the 21st century ushered in an era of unprecedented convenience, highlighted by the emergence and widespread adoption of artificial intelligence (AI) across global industries. Corporations are leveraging AI to automate routine activities, while students are flocking to generative AI tools like ChatGPT for help with coursework and job applications. AI is even proving an agile tool for enabling climate solutions; however, the significant computational resources it requires have given rise to alarming environmental concerns, a worrying thought given that the global AI market is setting a growth pace that makes Moore's Law look leisurely by comparison. But what does this environmental damage actually look like?
AI consumes a staggering amount of energy. Power grids worldwide are straining as AI-driven demand continues to rise, even prompting a resurgence of coal plants to meet the surge. It is estimated that by 2027 AI could consume as much electricity as the Netherlands, and a single ChatGPT query currently consumes roughly ten times more electricity than a standard Google search. A considerable proportion of this consumption comes from machine-learning systems such as Large Language Models (LLMs) and generative AI, think ChatGPT, which are trained by "feeding" them huge volumes of data until they learn to recognise patterns and generate human-like responses, a process that demands substantial computational power and energy.

The anticipated rapid growth of the AI market means that tech companies need more data centres to power and cool the vast numbers of servers and networking operations required to process and store the enormous amounts of data AI consumes. Cooling these servers typically means using millions of litres of freshwater to absorb and dissipate heat from electronic components. To put this into perspective, training GPT-3 in Microsoft's US data centres directly evaporated 700,000 litres of clean freshwater, and it is estimated that global AI demand may soon account for 4.2-6.6 trillion litres of water, four to six times the total annual water consumption of Denmark. The problem is that clean freshwater suitable for such use is extremely limited and unevenly distributed across the globe: severe water scarcity already affects approximately two-thirds of the global population for at least one month each year.
So, when tech companies build data centres in climate-stressed regions with scarce water supplies, they deprive the surrounding natural environment of the freshwater it needs and compete directly with local communities for it.
For instance, Africa is currently experiencing vast data-centre expansion, with major tech companies such as Oracle, Microsoft, Amazon and Huawei investing in or building facilities across the continent. This development has led to land-tenure disputes and harmed local economies as farmland is repurposed for large-scale infrastructure projects and water supplies are strained. By failing to respond to public concerns, big tech companies have shown an alarming disregard for both community welfare and environmental sustainability. In 2020, Google, a company "committed to significantly improving the lives of as many people as possible", obtained permits to build a data centre in Chile that would consume 169 litres of water per second, a concerning figure in a country where, in 2022, over half of the 19-million-strong population suffered water scarcity. Fortunately, the project had its permit partially revoked by the Chilean government, thanks largely to grassroots activists and water-usage concerns; Google nevertheless insists on keeping the location, which only underscores the hypocrisy of the US conglomerate. Clearly, aspects of AI and the environment are antithetical, and the advancement of AI could come at the expense of the planet's natural resources and of communities in climate-vulnerable regions.
So, is there any way to meet the world's digital needs without sacrificing our planet? At present, tech companies lack transparency in reporting their energy-consumption data. Between 2020 and 2022, actual emissions from data centres owned by Google, Microsoft, Meta and Apple were approximately 662% higher than officially disclosed. These companies tend to leverage renewable energy certificates (RECs) to calculate "market-based" emissions, allowing them to claim lower emissions on paper while their actual carbon footprint remains significantly higher. This makes it extremely difficult to calculate exactly how much energy AI consumes, particularly as the energy requirements for training different models vary widely. The first step towards sustainable AI development, therefore, is a clear understanding of AI's actual energy consumption. However, following Trump's highly controversial return to office as the 47th US president, there is growing concern that environmental regulations for the tech industry will be slashed. Trump's commitment to reversing Biden's AI regulations would likely reduce regulatory oversight of tech giants, giving them greater freedom to develop AI technologies while potentially weakening sustainability standards and slowing progress towards clean-energy practices in tech infrastructure.
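To see why REC-based "market-based" accounting can diverge so sharply from real-world emissions, consider a minimal sketch. All figures here are hypothetical, chosen only to illustrate the mechanism: location-based emissions reflect the electricity physically drawn from the grid, while market-based emissions subtract any energy "covered" by purchased certificates.

```python
# Illustrative sketch of REC-based emissions accounting.
# All numbers are hypothetical; real grid intensities and REC
# portfolios vary widely by region and company.

GRID_INTENSITY_KG_PER_KWH = 0.4  # hypothetical grid carbon intensity

def location_based_emissions(energy_kwh: float) -> float:
    """Emissions implied by the electricity physically drawn from the grid."""
    return energy_kwh * GRID_INTENSITY_KG_PER_KWH

def market_based_emissions(energy_kwh: float, rec_kwh: float) -> float:
    """Paper emissions after subtracting energy 'covered' by purchased RECs."""
    covered = min(rec_kwh, energy_kwh)  # RECs cannot cover more than was used
    return (energy_kwh - covered) * GRID_INTENSITY_KG_PER_KWH

# A hypothetical data centre consuming 1 GWh, with RECs covering 90% of it
energy = 1_000_000   # kWh actually consumed
recs = 900_000       # kWh of certificates purchased

actual = location_based_emissions(energy)
reported = market_based_emissions(energy, recs)
print(f"actual: {actual:.0f} kg CO2, reported: {reported:.0f} kg CO2")
```

Under these made-up numbers, the reported figure is a tenth of the actual one; the gap is entirely an artefact of the accounting convention, not of any physical reduction in grid emissions.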
Another, more obvious, solution is to promote cleaner forms of energy and resources. While most data centres use closed-loop water-cooling systems that continuously recycle water, their environmental impact could be further reduced by shifting from freshwater sources to rainwater harvesting. There is also a growing trend towards nuclear energy as a more stable, zero-carbon resource: both Amazon and Google have recently announced plans to invest in small modular reactors, which produce about one-third the power of traditional reactors, as a cleaner energy source to support the expansion of energy-intensive AI technologies. The appeal of these reactors lies not only in their carbon-free operation, but also in their significantly smaller footprint compared with wind and solar farms, and in output that does not depend on the weather. However, obstacles to widespread adoption remain, including regulatory hurdles, cost barriers, public misconceptions about safety, and doubts over whether such facilities can be deployed quickly enough to meet rising energy demands. Additionally, more effort must be made to reduce the computational energy demands of training LLMs without compromising their performance, for instance by minimising redundant computation and supporting open-source approaches in the machine-learning community.
One AI startup, Hugging Face, promotes sustainable AI by encouraging open-source sharing of pre-trained models, reducing the need for resource-intensive training from scratch. The startup's platform includes a database for finding low-emissions models and tools like Code Carbon that help track carbon footprints in real time, offering estimates based on training hours and hardware. Such tools could lead to a better understanding of the environmental impact of training, helping companies set benchmarks and develop more sustainable practices. Finally, discussions must be held around how we, as users of AI, leverage it to automate simple tasks. Greater awareness of the environmental impact of such automation can drive more responsible usage as we work towards a balance that benefits both society and the planet.
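The "training hours and hardware" style of estimate mentioned above can be sketched in a few lines. This is a simplified, assumed formula, not Code Carbon's actual implementation: energy is taken as hardware power draw multiplied by time, scaled by a PUE (power usage effectiveness) factor for cooling overhead, then converted to CO2 via a grid carbon-intensity figure. Every number below is illustrative.

```python
# Back-of-the-envelope training-emissions estimate (assumed formula,
# illustrative numbers); real tools also account for CPU/RAM draw and
# region-specific grid data.

def training_energy_kwh(gpu_count: int, gpu_power_watts: float,
                        hours: float, pue: float = 1.5) -> float:
    """Energy used: GPUs x power x time, scaled by data-centre overhead (PUE)."""
    return gpu_count * gpu_power_watts * hours * pue / 1000.0

def co2_kg(energy_kwh: float, grid_kg_per_kwh: float = 0.4) -> float:
    """Convert energy to emissions via an assumed grid carbon intensity."""
    return energy_kwh * grid_kg_per_kwh

# Hypothetical run: 64 GPUs drawing 300 W each for 100 hours
energy = training_energy_kwh(gpu_count=64, gpu_power_watts=300, hours=100)
print(f"~{energy:.0f} kWh, ~{co2_kg(energy):.0f} kg CO2")
```

Even this crude model makes the key levers visible: shorter training runs, more efficient hardware, better-cooled facilities (lower PUE) and cleaner grids each reduce the final figure, which is exactly the kind of benchmarking such tools enable.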
While AI has certainly transformed how various industries approach everyday problems, AI systems remain fundamentally at odds with ecological sustainability, often relying on the exploitation of social and ecological resources for their creation and operation. Vendors must foster a shift towards environmentally responsible AI modelling and implementation, and everyday users, students included, need to be more mindful of our AI footprint and advocate for tools that prioritise energy efficiency and responsible data practices. In the end, any vision of how AI will revolutionise our lives is meaningless if we do not ensure a habitable planet to sustain that future.