ChatGPT and the environment: Where do we draw the line?

September 29, 2025

These days, you can’t go anywhere without running into AI. 

With Duke’s recent rollout of its own large language model (LLM) service, DukeGPT, and free ChatGPT access for all undergraduate students, the use of AI has been a hot topic on campus. Some of my friends are all for using it, asking “Chat” to write up emails or summarize readings. Others use it in less academic ways, looking up recipes or movie recommendations. And some won’t touch ChatGPT with a ten-foot pole.

Almost as heavily as it has celebrated AI, Duke has publicized its Climate Commitment in the past few years, which highlights the University’s dedication to addressing “the climate challenge with true impact through innovations in education, research, sustainable operations” and to creating “sustainable actions that place society on the path toward a resilient, flourishing net-zero world for all.”

Personally, I’m no expert in LLMs like ChatGPT. Drawing conclusions about ChatGPT’s accuracy or its impacts on our health, like lowered cognitive ability, is certainly beyond my reach.

But I am a senior who has spent four years studying in the Nicholas School of the Environment. And recently, as I sat in an ecology lecture listening to my professor repeatedly encourage us to use ChatGPT to complete our assignments, I found myself looking left and right to see if anyone else could sense the growing elephant in the room. Should a school for the environment be promoting the use of AI, a technology known to consume precious resources at alarmingly high rates? Should a university that boasts of sustainability and carbon neutrality be encouraging the use of AI at all?

The environmental impacts of LLMs like ChatGPT are often thrown into the ring in the “to use or not to use” debates. Like that one vegan friend, those who pledge abstinence out of environmental concern seem to have logic on their side. However, the details, the numbers and the direct line between cause and effect, often grow hazy and fizzle out in actual discussion.

Nonetheless, awareness of ChatGPT’s dire environmental consequences has been growing since its launch, and rightfully so. Training AI models requires a staggering amount of electricity, adding demand to our electric grids and driving up carbon dioxide emissions. And the energy demands don’t stop once a model is trained — researchers estimate that each time it is used, say to ask ChatGPT to summarize an email, the request requires five times more electricity than a simple web search.

In addition to its electricity demand, AI technology also depletes water resources. While we often think of the software we use as existing in “clouds,” physical data centers power the tech we log onto daily — like ChatGPT — and cold water is used to cool these data centers by absorbing heat from the computing equipment. It’s estimated that two liters of water are needed to cool equipment for every kilowatt-hour of energy a data center consumes.
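To put those two figures together, here is a rough back-of-the-envelope sketch in Python, purely for illustration. It takes the column’s estimates at face value (five times the electricity of a web search per query, two liters of cooling water per kilowatt-hour) and assumes a hypothetical 0.3 watt-hours for a simple web search, a number not cited in this piece.

# Illustrative arithmetic only. The 0.3 Wh-per-web-search value is an assumption,
# not a figure from this column; the 5x multiplier and the 2 L/kWh estimate are
# the figures cited above.
WH_PER_WEB_SEARCH = 0.3      # assumed energy of a simple web search, in watt-hours
CHATGPT_MULTIPLIER = 5       # column's estimate: a ChatGPT query uses ~5x a web search
WATER_L_PER_KWH = 2.0        # column's estimate: liters of cooling water per kWh

queries = 1_000_000
energy_kwh = queries * WH_PER_WEB_SEARCH * CHATGPT_MULTIPLIER / 1_000
water_liters = energy_kwh * WATER_L_PER_KWH
print(f"{queries:,} queries: ~{energy_kwh:,.0f} kWh, ~{water_liters:,.0f} L of cooling water")

The point is not the exact numbers, which vary widely by model and data center, but that every query carries a measurable resource cost.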

Things quickly go from bad to worse when we consider that many of these data centers are located in the South, where water availability is often already a battle. ChatGPT, for example, is hosted in a Microsoft Azure data center in San Antonio, Texas, which has seen drought conditions since 2022.

And to really pile on the bad news, building and maintaining AI hardware depends on mining metals like lithium and cobalt, which requires significant water use and can lead to pollution and environmental degradation; not to mention the human rights violations and poor labor conditions often linked to the extraction of these minerals.

But I’m not afraid to acknowledge that AI has been used to help the environment, too, by modeling and predicting the impacts of climate change faster than we ever could on our own. The United Nations Environment Programme, in particular, has been using AI to monitor methane emissions and track air quality, increasing the speed and scale at which we can process data and act on environmental health issues.

So, where do we draw the line?

When do environmental ethics outweigh convenience and advancement? 

For me, the fact that the University is pushing ChatGPT mostly for tasks we can already handle on our own is what defeats the campaign to adopt it fully. Although it may take longer and be more of a pain, we can write our own code, draft our own emails, take our own notes and go back to good old Google searches. We might even be better for it in the long run.

It’s impossible to avoid all environmental hypocrisies, but committing some environmental sins doesn’t mean you should give up and give in to them all. I own an iPhone, I fly on airplanes and, every now and then, I enjoy a good burger. But using ChatGPT, a piece of technology known to consume lots of environmental resources with potential impacts on my ability to think critically and creatively — just to save some time on assignments — that’s where I draw the line.

The popularity of ChatGPT also offers Duke the perfect opportunity to show its true commitment to the climate. Rather than flashing DukeGPT and ChatGPT promotions on TVs across campus, the University should educate its students on the true costs of the technology and encourage the use of search engines that consume fewer resources. If Duke really cares about our environment, the environment should be included in the conversation.

So, I can understand a professor’s desire to adopt new technology, but as I watch ChatGPT fill the screens of students in front of me, I can’t help but grow wary. I’m all for advancement, but sometimes we have to look around the corner and see that some things are really too good to be true. We can’t let AI become another environmental catastrophe we turn a blind eye to in the name of convenience. 

Samantha George is a Trinity senior. Her column typically runs on alternate Mondays. 
