Thursday, April 30, 2026

Should We Be Using GenAI?

Introduction

As you have likely worked out on your own already, Artificial Intelligence is not going away. It has gone from being a joke to a novelty to the bogeyman to a tool that many of us use all the time. And yet, there are still holdouts, perhaps you among them. In my workplace, it’s a mixed bag. As recently as 2024 I was forbidden to use ChatGPT or other AI platforms at work; now, my employer is wheedling, exhorting, begging, and all but requiring my colleagues and me to adopt it. On the family front, one of my daughters uses it a fair bit (sometimes frivolously), the other not at all. My wife is wary of it.

So should you use AI? I consider myself fairly well qualified to answer this. I have been dabbling in AI for almost fourteen years; have devoted a fair amount of research to kicking its tires; and now use it extensively both at work and at home. I’ve blogged about it a bunch of times. I’m unbiased, since I don’t work for the AI Industrial Complex, but I also don’t have a knee-jerk fear of technology.

I’ve blogged before (here) about how we can use AI, describing two fundamental ways—operationally vs. creatively—that people do use it. Today’s post is more about whether we should use it, and how often, particularly in light of the resources (electricity and water) that it consumes. Is environmental responsibility a compelling reason to curb our use of GenAI?


Some housekeeping

As I’ve explained here, AI is much bigger than the Large Language Model (LLM) chatbots that we consciously use as the natural successor to Google. We generally speak of AI as a productivity tool, but a whole lot of AI is devoted to the invisible algorithms on social media, YouTube, etc. that grab and hold our attention, threatening to reduce our productivity. I think of this as secondhand AI (like smoke). Meanwhile, you’re surely hearing a lot of hype about “agentic AI,” which can supposedly act on its own volition to achieve a goal. At this point I’m scared of agentic AI and think you should be, too, but that’s another post. The AI I’m considering here is Generative AI (GenAI), which is the type of chatbot (e.g., ChatGPT, Gemini, Copilot, Claude) that you feed a prompt to as a way to research something, or as a way to quickly compose an essay, letter, or picture. This is how I believe most people think of AI, which is why the terms “AI” and “GenAI” are so often used interchangeably.

(Note that if you are reading this post long after April of 2026, and there isn’t a single living human not using GenAI, and/or the robots have taken over and enslaved you, treat this post as a historical artifact. At least you’ll get a sense of how society initially approached this technology.)

GenAI at work

If you work for a corporation that is clearly embracing GenAI, providing you a commercial, “walled garden” version of it, and the training to go with that, adoption is a no-brainer: do as you’re told and embrace GenAI immediately. My employer is already monitoring my use of it (though they haven’t said exactly how), and their expectation that GenAI will make me more productive is reasonable. Oddly, I have seen GenAI’s use go from something my colleagues tried (in vain) to hide, to something that my manager will outright ask me about. When asked, “Did you use AI to help you with this?” I now assume that the correct answer is a version of “yes.” (This answer is necessarily nuanced. Both in terms of being honest and articulating my ongoing value as an employee, I am sure to explain both how it helped and how it fell short of doing the task for me.) This week my boss tasked me with figuring out how to create a NotebookLM chatbot specializing in expertly summarized minutes of all our team meetings, which will update its training data automatically. I feel like this assignment would have been unheard of a year ago.

But what if you work for a small business, or have your own? This is a greyer area, of course. A member of my family is a sole proprietor, and so far has shied away from GenAI because she’s concerned about becoming too reliant on the technology. I get her point, and have blogged before (here and here) about how doing our own thinking and writing prevents us from falling into intellectual torpor. But isn’t a tool that legitimately improves efficiency something we ought to rely on? After all, we wouldn’t even think of trying to run a business without email, a laptop, a smartphone, in many cases videoconferencing capability, and (depending on the business) various types of specialized software. All of these tools were new once, and any small business owner still using a typewriter to generate invoices is surely a) in the minority, and b) wasting a lot of time. From that perspective, it’s all but inevitable that any small business owner will ultimately adopt GenAI for his or her business … so why wait?

GenAI at home

Using GenAI outside the workplace is a more complicated matter, since it’s not helping put food on the table. I mentioned earlier in this post that my older daughter has occasionally used it rather frivolously, such as to punk me. Consider this drawing she had ChatGPT create to memorialize an accident I had at a hotel pool back in 2024, when I got out of the hot tub too fast and fainted:


Her prompt for this was, “Can you create an image of a tall skinny white man feeling faint after leaving a hot tub?” As you can see, the man portrayed looks more hunky than skinny, and my daughter tried three more times to get the picture more accurate. Given that these were throwaway efforts (or would have been had I not used them in an early AI analysis here), this was devoting rather a lot of computing resources to a pretty trivial problem, or shall we say exercise. (Of course part of the point for my daughter was exploring the early technology; it’s not like she’s stuck with throwaway art as her primary use case for GenAI.)

On the flip side, her sister won’t use GenAI at all, somewhat on grounds of intellectual authenticity but mainly due to its environmental impact. The constant construction of ever-larger data centers is all over the news, with some shocking statistics thrown around about how much power and water a single GenAI prompt requires. Today I decided it’s time to vet this claim a bit, studying the available data and describing it in a context that could help guide our behavior appropriately.

How much electricity does GenAI use?

With the help of Claude, because it works better than a Google search, I did some light research and found some great analysis (here) on the website of Epoch AI, a nonprofit founded to “help people understand what is happening in AI from a neutral perspective and grounded in the best possible evidence.” Epoch AI partners with Stanford’s AI Index, which I’ve come across in my professional life and which seems well respected, as well as the UK’s Department for Science, Innovation, & Technology, which I trust even more (since it doesn’t have ties to the tech industry like Stanford does). I must acknowledge that truly disinterested AI research is hard to come by, because almost every organization doing serious work in this realm has a business relationship with the industry. So to spread out the risk of misinformation, I also put this query to ChatGPT, which came up with similar numbers from other presumably trustworthy sources: ScienceDirect (which Gemini says “is considered one of the most reliable and authoritative sources for factual data in the world”) and Cornell University.

So: Epoch AI, in an article from about a year ago, examined a widespread previous claim that “an individual ChatGPT query requires around 3 watt-hours of electricity, or 10 times as much as a Google search.” Epoch AI, leveraging “more up-to-date facts and clearer assumptions,” arrived at the following conclusion:

We find that typical ChatGPT queries using GPT-4o likely consume roughly 0.3 watt-hours, which is ten times less than the older estimate. This difference comes from more efficient models and hardware compared to early 2023, and an overly pessimistic estimate of token counts in the original estimate. For context, 0.3 watt-hours is less than the amount of electricity that an LED lightbulb or a laptop consumes in a few minutes.
For further perspective: according to this article, “Google says that its median text query uses around 0.24 Wh of electricity. That’s a tiny amount: equivalent to microwaving for one second, or running a fridge for 6 seconds.”

But that’s just text queries. Creating a picture uses a lot more resources. According to this article by the University of Southern California, using GenAI to create a picture uses 2.9 Wh—over ten times as much as a text query. I had Gemini translate this number into household equivalents to give it some context, and here’s what it offered:

  • Phone: charges your battery about 19%
  • LED bulb: about 19 minutes of light
  • Dishwasher: about 14 seconds of a cycle
  • Clothes dryer: about 2.6 seconds of a cycle

These seem pretty trivial, but multiplied across the millions of people using GenAI, it adds up, especially if people get in the habit of iterating a dozen or so times to get an image just right.
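For the curious, those equivalents are simple division. Here’s a minimal Python sketch that reproduces two of them; the device figures (phone battery capacity, LED wattage) are my own round-number assumptions, not from the USC article:

```python
# Convert the ~2.9 Wh cost of one AI-generated image into household
# equivalents. Device figures below are rough assumptions for illustration.
IMAGE_WH = 2.9

phone_battery_wh = 15.0   # assumed typical smartphone battery capacity
led_bulb_watts = 9.0      # assumed 60W-equivalent LED bulb

phone_charge_pct = IMAGE_WH / phone_battery_wh * 100
led_minutes = IMAGE_WH / led_bulb_watts * 60

print(f"Phone charge: ~{phone_charge_pct:.0f}%")   # ~19%
print(f"LED bulb: ~{led_minutes:.0f} minutes")     # ~19 minutes
```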

How much water does GenAI use?

Water is another matter, and much harder to quantify, because a data center’s location has a lot to do with how efficiently it can cool its servers. This “Washington Post” article documented a study, involving research from the University of California at Riverside, that found that using ChatGPT to write a 100-word email consumed 519 milliliters of water, a little more than a standard bottle. That figure is strikingly high, especially considering how many people use GenAI and how fast that usage is growing.

At the same time, as pointed out by this article, many other industries also use a ton of water, and people don’t seem up in arms about it: “A single burger takes more than 400 gallons of water to produce; a humble cotton T-shirt takes more than 700. The United States’ 16,000 golf courses, meanwhile, each have the potential to use on average between 100,000 to 2 million gallons of water per day. (For comparison, Google says its thirstiest data center in Iowa consumed about 2.7 million gallons per day in 2024; most of the company’s data centers used substantially less.)”

A less abstract comparison

To be fair, it’s not like we all sit around eating burgers all the time; for most of us, that’s a treat. Meanwhile, I would hope most albertnet readers are enlightened enough to hold out for grass-fed beef, which uses a lot less water to produce. And if you’re like me, you buy a lot of clothing secondhand, which helps mitigate the resources required for your wardrobe. So what’s a better comparison that can help us frame the environmental cost of using GenAI? I propose: beer. (I know what you’re thinking: “that’s my answer to everything.” Well, okay … guilty as charged.)

So here is my thought exercise: how does using GenAI compare to cracking open a beer? And what is the value of the former vs. the latter? Obviously this is a wide-open scenario so I’ll narrow it down to how I most often use GenAI: when researching a blog post.

Here’s what Claude had to say about the electricity required for a 30-minute research session:

Based on current estimates, a substantive text exchange with an AI like this one — say 20–30 back-and-forth exchanges — is probably in the neighborhood of 5–10 watt-hours of electricity. Google has reported that after major efficiency gains, the median Gemini prompt consumed about 0.24 watt-hours, representing a 33× reduction in energy per prompt compared to a year earlier. At that figure, 30 prompts would use about 7 Wh — roughly equivalent to running a phone for 20 minutes or leaving an LED bulb on for half an hour.

Regarding water use, a Mistral AI lifecycle analysis found that a typical 400-token exchange consumes about 45 milliliters of water—about three tablespoons. Multiply by 30 exchanges and you’re somewhere around 1.5 liters of water—very roughly two or three bottles’ worth attributable to the 30-minute research session. (This varies enormously by data center location and cooling method, so we should treat it as an order-of-magnitude estimate.)

To compare the electricity cost of the GenAI session vs. the can of beer, I downloaded a spreadsheet-based waste reduction calculator directly from the EPA’s website. It is designed to help consumers like me understand the value of recycling something vs. tossing it. It calculated that recycling a 12-ounce aluminum can saves 0.3 kWh—which is roughly 40 times more energy than what’s consumed by an entire 30-minute GenAI research session. Granted, I often generate a picture to go with my post, but even if we assume it takes five tries to get it right, the energy cost of those five images is still only about one-twentieth of the energy wasted by tossing a single beer can in the trash. And since this is only the energy cost of recycling, which is less than producing a can from scratch, these numbers are highly conservative. (Meanwhile, I haven’t even factored in the energy required for brewing and transporting the beer itself.)
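If you want to check my arithmetic, here’s a quick sketch of the energy comparison, using the per-prompt and per-image figures cited above (these are all estimates, so treat the ratios as order-of-magnitude):

```python
# Compare the energy saved by recycling one aluminum can (EPA figure)
# with a 30-minute GenAI research session. Per-prompt and per-image
# figures come from the estimates cited in the post; the rest is division.
CAN_RECYCLE_WH = 300.0    # 0.3 kWh saved per recycled 12-oz can
PROMPT_WH = 0.24          # Google's reported median text prompt
IMAGE_WH = 2.9            # one AI-generated image (USC figure)

session_wh = 30 * PROMPT_WH               # ~7.2 Wh for 30 prompts
print(CAN_RECYCLE_WH / session_wh)        # ~42x: "roughly 40 times"

five_images_wh = 5 * IMAGE_WH             # 14.5 Wh for five image attempts
print(five_images_wh / CAN_RECYCLE_WH)    # ~0.05: about one-twentieth
```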

Meanwhile, the Water Footprint Network, as described here, estimates a total water footprint of 298 liters per liter of beer—so a standard 12-oz can of domestic beer takes over 100 liters of water to produce. More than 90% of that water comes from the agricultural supply chain (e.g., growing the barley) while the brewery uses about 6–8 liters per liter of beer (though a large facility may achieve a 3-to-1 ratio). So my 30-minute research session uses something like 1–2% of the water embodied in the can of beer I might have next to my keyboard. (Full disclosure: there’s a now-empty pint glass on the arm of the sofa as I type this. Yes, drinking while blogging: a rhetorically risky and planet-impacting combination. So sue me.)
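The water comparison works the same way; here’s a quick sketch using the Mistral and Water Footprint Network figures above (again, order-of-magnitude at best, given how much cooling varies by data center):

```python
# Water comparison: a 30-minute research session vs. one 12-oz beer.
# Figures come from the estimates cited in the post.
ML_PER_EXCHANGE = 45      # Mistral estimate per ~400-token exchange
BEER_FOOTPRINT = 298      # liters of water per liter of beer
CAN_LITERS = 0.355        # 12 fluid ounces in liters

session_liters = 30 * ML_PER_EXCHANGE / 1000   # 1.35 L for the session
beer_liters = CAN_LITERS * BEER_FOOTPRINT      # ~106 L embodied in the can
print(session_liters / beer_liters * 100)      # ~1.3%: the "1-2%" above
```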

Factoring in value

So that covers the environmental cost of researching a blog vs. drinking a beer. But what about the value of each? Discounting pub crawls with my friends—which happen far less often than I would like, to the point that they’re a rounding error—I’m really talking about unwinding with a solitary beer at the end of the workday. So in general the value of that beer accrues solely to me.

So does my blog-related GenAI research create any value to justify its water and electricity use? In the interest of humility I won’t merely assume this, and will instead dive into the data. Pageview stats across my blog wouldn’t be very representative, as at least half my posts don’t require any research at all. So for lack of a better idea, I’ve decided to analyze the pageview count for each of the albertnet posts that are about AI. After all, those have to be among the most GenAI-intensive of all, because in writing them I was test driving the various platforms. Here’s a brief summary of how these posts have performed:

  • Total pageviews across nineteen AI posts: 15,578 (so far)
  • Average pageviews per AI post: 819.9
  • Average pageviews per AI post per month: 35.5

I could conclude that, from a somewhat abstract viewpoint, each post is seen by a person a day. But averages aren’t very reliable, and greater specificity is more revealing. Lurking in that “average pageviews per AI post per month” is a bit of (GenAI-performed) number crunching, accounting for the fact that the posts that I published years ago have had a lot more time to accrue pageviews. Ranking my AI posts by pageviews per month shows that they are gaining in popularity, with the more recent ones averaging two to three views per day. Here’s the ranking of all these AI posts over time, so you can see the momentum:

Rank  Views/Mo  Total Views  Title
 1     102.5     1,742       Tech Check-In – How Good is the Latest A.I.? – Part II
 2      85.7       257       New Year's Resolutions — AI Edition
 3      82.8     1,077       What Is ChatGPT Great At (and Not)?
 4      69.9     1,189       Tech Check-In – How Good is the Latest A.I.? – Part I
 5      62.4       312       AI Smackdown – ChatGPT vs. Copilot vs. Gemini
 6      58.0       290       More AI Smackdown – ChatGPT, Copilot, & Gemini Write Poetry
 7      51.2       256       Tech Reflection – Two Sides of AI
 8      27.4     1,040       A.I. Smackdown – English Major vs. ChatGPT – Part 2
 9      27.1     1,031       A.I. Smackdown – English Major vs. ChatGPT – Part 1
10      23.0       597       Will A.I. Steal Our Jobs?
11      20.0       739       Schooling ChatGPT
12      11.1       719       Could Artificial Intelligence Replace Writers? – Part 1
13      10.6       680       Could Artificial Intelligence Replace Writers? – Part 3
14      10.0     1,230       A.I. Smackdown – Moto vs. Cortana vs. Siri
15       8.8       563       Could Artificial Intelligence Replace Writers? – Part 2
16       7.3     1,201       Almost Intelligent – Part I
17       6.3       838       Smartphones & Artificial Stupidity
18       6.2     1,016       I, Chatbot
19       4.9       801       Almost Intelligent – Part II
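For the skeptical, the aggregate stats fall straight out of this table; a quick sketch, with the totals copied from it, confirms the arithmetic:

```python
# Total pageviews for each of the nineteen AI posts, copied from the
# table above (ranked there by views per month, not by total).
totals = [1742, 257, 1077, 1189, 312, 290, 256, 1040, 1031, 597,
          739, 719, 680, 1230, 563, 1201, 838, 1016, 801]

print(sum(totals))                # 15578 total pageviews
print(sum(totals) / len(totals))  # ~819.9 average per post
```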

It would be reasonable to conclude that the more recent posts, which leverage more GenAI research, are reaching more readers, thus providing a better ROI.

The bigger point here is that the can of beer is consumed once, quickly, leaving nothing behind (except maybe a nice belch). In contrast, the energy that goes into researching a blog post has an effective cost-per-view that keeps dropping every month it’s up, in perpetuity. If you use GenAI to draft an email, how many people will it reach, and how long is its tail? Could you have drafted it on your own—thus exercising your brain—or did you really need GenAI?

I’m not trying to imply that only bloggers should use GenAI; this is just one illustration of a cost/benefit analysis of the use of this tool. If you are doing something useful and an AI chatbot is helping you do it better or more efficiently, then it’s arguably worth the energy and water—or, at least, is a more worthy use of it than shopping for a bunch of clothes, going out for a burger, and then having a few beers.

The point is to be aware of the environmental cost of this technology, the same way so many of us are when we decide among driving, biking, walking, or taking mass transit somewhere. Just because GenAI takes less water than beef or cotton doesn’t mean we should ignore its environmental cost, since it’s a whole new way people are consuming energy and water. As recently as three years ago, almost nobody was using GenAI in their daily lives; now, it’s an increasingly entrenched behavior, data centers are expanding rapidly, and in some regions power grids are struggling to keep up with demand.

This being said, I truly don’t believe opting out of GenAI is the solution; just reflecting on how much it helped me write this post, I can’t imagine not taking advantage of it. Instead, I’d like to see the millions of people already using it stop acting like it comes without a cost. It’s the same as driving: did I really need to surround myself with two tons of steel and burn a cup of gasoline just to travel a mile to the gym and back?

Speaking of cost: one way to keep yourself honest with GenAI is to not pay for it. If you are on an unpaid account and use up your tokens, so that your chatbot cuts you off for some number of hours, maybe that should be your indication that you’ve gone overboard. Come to think of it, video games, YouTube, and social media should have that “feature.”

A final note on GenAI at work

Now that I’ve examined the environmental cost of GenAI, it’s worth pointing out a final wrinkle: using it in the workplace is actually much more efficient than using it at home. Corporations get the most benefit out of GenAI through Retrieval Augmented Generation (RAG), which is where, instead of asking a large language model to answer from its entire trove of training data, the GenAI retrieves relevant documents from a corporate knowledge base (contracts, manuals, research reports, emails, whatever the organization has indexed), then passes those retrieved chunks to the model as context for its answer. Tools like NotebookLM, most enterprise Copilot implementations, and corporate deployments of models like Gemini or Claude typically work this way.
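To make the pattern concrete, here’s a toy sketch of the RAG flow just described. It’s illustrative only: real systems retrieve via vector embeddings rather than the crude word overlap used here, the final call to the language model is omitted, and all the document text and names are made up.

```python
# A toy sketch of Retrieval Augmented Generation: score documents in a
# small "knowledge base" against the query, then hand only the best
# matches to the model as context. Word overlap stands in for real
# embedding-based retrieval; the LLM call itself is not shown.
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    q_words = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

knowledge_base = [  # hypothetical corporate documents
    "Meeting minutes 2026-04-21: team agreed to pilot NotebookLM for summaries.",
    "Contract renewal terms for the Acme account were finalized in March.",
    "Cafeteria menu: Thursday is taco day.",
]

query = "What did the team decide about NotebookLM summaries?"
context = retrieve(query, knowledge_base)
prompt = ("Answer using only this context:\n"
          + "\n".join(context) + f"\n\nQ: {query}")
# `prompt` would then go to the model; the expensive generation step has
# already been narrowed to the retrieved chunks.
```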

This is much more efficient than “raw” GenAI like consumers use. The retrieval step is computationally cheap—essentially a sophisticated search. The generation step is shorter because the model doesn't have to work as hard to “remember” or construct relevant context; it’s been handed it. And the answers tend to be more accurate and require fewer iterations, which means fewer wasted queries. For a user to opt out of using it makes less sense, because the big resource expense has already been incurred. As Claude puts it:

The infrastructure cost of a corporate RAG deployment is largely fixed relative to usage. The vector database has to stay current whether 500 employees query it or 5,000. The embedding pipeline runs continuously. The API connections to the underlying model are on retainer. So each additional active user essentially dilutes the per-capita environmental and financial cost of that overhead. An employee who declines to use the tool isn't reducing the infrastructure footprint; they're just reducing the output derived from it. In accounting terms, they're lowering the return on a sunk cost.

Synthesis

Wow, I just threw a ton of words at you, didn’t I? Maybe I’m the most verbose Large Language Model since, well, ChatGPT! Anyway, here’s my final conclusion: of course you should use GenAI. It’s an amazingly powerful tool, and it’s getting better all the time. Now that it’s here, declining to use it makes about as much sense as blending a smoothie with a knife and a whisk, or doing arithmetic with an abacus, or churning your own butter. But use GenAI judiciously. Ask yourself: is this improving the quality or efficiency of my output? Or am I just being lazy?

Other albertnet posts on A.I. in order of publication

—~—~—~—~—~—~—~—~—
Email me here. For a complete index of albertnet posts, click here.
