Thursday, January 8, 2026

New Year’s Resolutions — AI Edition

Introduction

It’s that time of year again, when you start planning out how you’ll be better this year. This can be really annoying, especially when you have to read about “SMART” goals even though you know in your soul that “DUMB” ones are better. And now you’ve stumbled across this post. Well, fear not – I’m just as fed up as you, and will try to make this as painless as possible.

As with everything now, I will consider the topic through the prism of Artificial Intelligence. That satisfies the “T” in “SMART” because it’s timely—and for this I apologize. But I must press on, because there are many ways in which we could resolve to use AI better, more responsibly, or less annoyingly. I’ve managed to winnow this post down to five key resolutions.


Resolution #1: Do not pass off AI work as your own

This recommendation probably seems self-evident, and yet it needs to be said. How many times have you read something ostensibly written by a human but obviously ghost-written by AI? It’s kind of amazing to me how brazenly people will paste straight from ChatGPT or another AI chatbot and think they can get away with it. Only an AI could fail to spot the nuances that betray AI-generated text.

Even if AI worked perfectly as a ghost-writer, using it would mean neglecting your own intellect. There is intrinsic value in learning to represent your ideas in your own words, just as you would when speaking. Ideally, over time, by doing your own work, you will develop a writing style—a unique voice—and this is what makes you you, and ought to keep you from being easily replaced by AI. And the more you write on your own, the more developed this voice will become, and the better you will do at in-person communication, which is almost always ad hoc. This capability ought to take precedence over the convenience of outsourcing to AI.

I asked ChatGPT to weigh in on this matter, and it said, “If AI helped draft, summarize, or rephrase something substantial, acknowledge it—especially in professional, academic, or published contexts. For your blog, this might mean a light disclosure like: ‘Drafted with AI assistance; final edits and opinions are mine.’” I cannot imagine using AI for drafting, summarizing, or rephrasing anything substantial. I assume that you are reading albertnet right now only because you trust that I can do more for you than an AI chatbot, and that I’m willing to put in the hard work to make the writing witty, entertaining, and concise. ChatGPT’s disclosure would be like the cook greeting you at his or her restaurant and saying, “My gravy is from a mix and my apple crisp is Sara Lee.”

Resolution #2: Do not trust AI with any info you cannot verify

Obviously one of the fundamental benefits of AI chatbots is that unlike a single-query Google search, you can have a dialogue to get very specific about your question or problem and provide all kinds of context. When this works, it’s great. The trouble is, as we all know, AI hallucinates. And to make matters worse, it hallucinates very confidently, feeding you incorrect information with total assurance. I spoke with a doctor recently who described a bizarre and yet increasingly typical dialogue with a patient: she asked him why she shouldn’t just take x amount of such-and-such medication, as ChatGPT suggested. He replied that that dosage would be lethal.

Of course you, gentle reader, are too wise to use AI like that, but it’s surprising how poorly it can do even with very basic information. Recently I asked two different chatbots, ChatGPT and Copilot, whether my Sigma Sport bike computer was compatible with the power meter on my new bike. I provided the make and model of each, and both chatbots assured me they were indeed compatible. So I tried to pair them and got nowhere.

Both chatbots dragged me further off into the weeds when I started troubleshooting. Copilot determined, definitively (or so it claimed), that my power meter was defective. Based on blinking colored lights, it declared that “your 4iiii is doing exactly what a unit does when it can power on briefly but fails its internal startup test” and that the cause is most likely “internal hardware failure” that is “unfortunately not rare with brand-new 4iiii units.” It then offered to draft a note to the manufacturer to get the device replaced. Its voice was that of the expert advisor, when it was actually swinging wildly based on training data of unknown and unverifiable provenance.

Of course the whole reason I reached out to AI on this in the first place is that I’d never messed with power meters before, and knew nothing. And yet, just having human-grade intuition and skepticism ended up being more valuable than all that training data and lightning-fast research capability. I distrusted Copilot’s conclusion, figuring that a hallucination on its part was more likely than a hardware defect. So instead of wasting any more time online, I tried a different bike computer to see if it would sync, and it did instantly. What a relief that I didn’t start some needless warranty replacement process and waste some tech support person’s time, only to end up looking like a jackass. I’m still grieving over the fifteen or twenty minutes I’d spent pointlessly troubleshooting with AI. I should have experimented with the second bike computer in the first place, before asking AI for help. (Why didn’t I? I was intimidated and wanted my hand held. This was a poor instinct.)

This isn’t to say AI is never an appropriate tool, of course. As I’ve blogged about before, it can be very helpful in all kinds of technical matters, such as writing HTML. But you should only use it when you can verify its output experientially instead of blindly trusting it. For example, when ChatGPT helped me implement the copyright footer on this blog, I knew the instructions were valid because I could see the footer for myself (as can you, below).
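To make the example concrete, here is a minimal sketch of what a footer like that might look like (a hypothetical illustration, not the actual code behind this blog’s footer). The script computes the year in the browser, so the notice never goes stale:

<!-- Hypothetical copyright footer; the script below fills in the year -->
<footer id="copyright">
  &copy; <span id="copyright-year"></span> albertnet
</footer>
<script>
  // Set the year dynamically so the footer never needs manual updating
  document.getElementById("copyright-year").textContent = new Date().getFullYear();
</script>

The point isn’t the code itself but the feedback loop: you paste it in, reload the page, and the result is either visibly right or visibly wrong. That’s the kind of verification this resolution calls for.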

Resolution #3: Limit the influence of “secondhand AI”

Now that we’re all using AI chatbots more and more, it’s easy to forget that most of the AI that affects our lives is behind the scenes. We think of AI as a productivity tool, but that’s just the chatbots; most AI is developed by corporations to drive algorithms that try to grab and hold our attention, which ends up reducing our productivity. To make an analogy, a chatbot is like smoking a cigarette and getting all the benefits it provides—e.g., the drug, the rich and satisfying smoke, and the coolness—while AI-driven algorithms are like secondhand smoke that doesn’t taste good, doesn’t make us cool, and just gives us cancer. I hereby nominate for widespread adoption the term “secondhand AI,” meaning the AI that drives us instead of responding directly to our queries. (Yes, I was being facetious about the “benefits” of smoking. Just making sure you’re awake.)

So the gist of this resolution is to try to limit our exposure to secondhand AI, or at least the extent to which we let it shape our behavior. Instead of looking at the books Amazon suggests, get more of them out of those little free libraries, or from the “Staff Picks” section of your bookstore or library, or ask your friends for recommendations. Stop letting YouTube and TikTok thrust content in your face. And instead of letting Spotify choose the music after your selected album or playlist is finished, configure it to just stop (i.e., turn Autoplay off). Why? Because the crap it chooses doesn’t belong in your ears or brain. All these algorithms share the same central flaw: they select for stickiness, not quality. They can’t judge quality because they have no taste … just the ability to carry out endless A/B tests and learn from the results.

Resolution #4: Keep AI out of your messy human stuff

The most sensitive human interactions—consoling, arguing, advising, listening—might present the most tempting use cases for AI. After all, here’s a platform that can give you guidance, suggestions, actual written content, etc. without judging you or getting distracted or running out of time or patience. But this is also the area where I exhort you to close the laptop, lock your phone, and sort things out on your own. (Using a close friend as a sounding board is fine.) Why? Three reasons.

First, what if AI came up with the perfect thing to say, and in just the right way, and you couldn’t resist delivering its sensitive message verbatim? This might work, but what if the person you’re having the difficult dialogue with detects the distinctive AI diction and figures out you used a chatbot? This sends the message that you cut corners, that you couldn’t be bothered to be sincere and authentic—that you outsourced your role in the interaction. This could (and probably should) be deeply offensive to the other person.

Second, if you struggle during the dialogue, and the other person perceives your vulnerability, I think that can only help. Instead of being perfectly articulate and glib, why not let your difficulty be plain to see, so that the other person can tell the struggle is mutual? Meanwhile, if you have “your” thoughts perfectly rendered with the help of AI, won’t you be more inclined to doggedly stick to that script, instead of letting the dialogue go where it needs to?

Finally, engaging in this struggle on our own is good for us. Inhabiting this discomfort, instead of trying to settle the dialogue with maximum efficiency, is bound to lead to the kind of soul searching we ought to be doing anyway. And, as with anything, we get better with practice, which is important, because we won’t always have the opportunity to stop and consult AI during a social crisis. Thinking on the fly will go a lot better if you’ve put in the time working through the messy human stuff on your own.

Resolution #5: Don’t replace humans with chatbots

According to this article, about half of the teens in a Common Sense Media study reported they use AI bots “regularly, not just for entertainment, but for venting, emotional support, and companionship.” And according to this article, “About one-third (31%) [of American teenagers] actually claim that dealing with AI companions is more satisfying than talking to a human being.” It seems incredible to me—that is to say, I’m amazed that I find myself even weighing in here—that anybody should need to be advised against using AI in this way. How did we get here?

I cannot get past the most obvious issue, which is that every minute a person spends typing into a void (or talking, I guess they have voice mode now) is a lost opportunity to bumble around in the real world and meet people, one or two of whom could potentially become a friend. Can’t we all agree that there is a nonzero chance of making friends just by leaving the house? And that the chance of forging a real friendship with an AI chatbot is zero?

Okay, fine, I don’t personally struggle with social anxiety, and I should try to empathize with those who do. But it’s difficult: I myself was a social pariah in grade school, things got worse in middle school, and I’m constitutionally shy, and yet I did manage to eventually learn how to get along. But setting all that aside, how good is the strategy of replacing human interaction with programmatically easy, safe AI companionship? I’ll cite one article, from Teachers College, Columbia University:

According to research from MIT, for example, people who are lonely are more likely to consider ChatGPT a friend and spend large amounts of time on the app while also reporting increased levels of loneliness. This increased isolation for heavy users suggests that ultimately, generative AI isn’t an adequate replacement for human connection. “We want to talk to a real person and when someone's really suffering, that need to feel personally cared for only grows stronger,” says George Nitzburg (Ph.D. ’12), Assistant Professor of Teaching, Clinical Psychology. 

Gosh, this last resolution seems like the literary equivalent of a plate of bulgur wheat salad with a side of kale. I hope it has so little to do with your life that you can just flick it off your sleeve like a booger. And then resolve to get fewer boogers on your sleeve to begin with. In fact, why not resolve to get that number down to zero?

Other candidates for New Year’s resolutions

If you don’t overmuch care about AI and are just looking for general inspiration as you contemplate your own resolutions, here is a wide assortment of suggestions: 

Further reading 

—~—~—~—~—~—~—~—~—
Email me here. For a complete index of albertnet posts, click here.