Friday, October 31, 2025

Tech Reflection - Two Sides of AI

“This Halloween, I’m dressing up as generative AI. I’m going to show up to the party without a costume and just start stealing pieces of other people’s outfits.”
An X dispatch my niece screenshotted for me

Introduction

Is AI the amazing new technology that’s changing the world, or a petty thief that just steals people’s ideas and passes them off as its own? Does it actually carry out anything approaching thought, or is it just a zombie, stalking humans’ digital relics and muttering “brains … brains … brains” as it angles to get a piece of us?


In this post, I examine the two most fundamental functions AI chatbots can carry out, and draw a distinction between them. I believe this can give us useful guidance in deciding how we ought to use this game-changing technology.

Ecclesiastes vs. Barthelme

AI is evolving fast, perhaps faster than our ability to understand it. I’m having to adapt; for example, I’ve stopped spelling it “A.I.” because leading media outfits like The New Yorker and The New York Times have now eschewed the periods. So if you’re reading this on your phone in a sans-serif font, you may have initially thought I was writing about there being two sides of Albert or Alfred. I asked ChatGPT what to do about this ambiguity between a capital “i” and a lowercase “L,” and it suggested I could “kern or tweak the glyphs.” I’m not exactly an expert at kerning glyphs, so I asked the chatbot how. It gave me all kinds of strategies, the best one for my blog format (HTML) being this:

AI <!-- default -->

A<span style="letter-spacing:0.05em;">I</span> <!-- slightly looser -->

So you can see, GPT is right there with an answer when queried about a technical operation that has been done before. But what about doing something creative and original? This is a fundamental distinction and I am going to propose we look at AI from two largely separate perspectives, for which I’ve invented labels:

  • Operational mode – I thought about calling this Ecclesiastes mode, for “no new thing under the sun.” This mode is about helping with a nuts-and-bolts operation (e.g., HTML scripting, DNS routing) that somebody else, probably many people in fact, already figured out and documented out there on the Internet for AI to gobble up, distill, pretty up, and present. Here, AI is basically a really good large language model that excels at combing through gobs of chaff to find answers, and organizes and summarizes information very clearly. I wouldn’t say it’s as parasitic as what’s suggested by the X epigram above, because lots of people freely post technical stuff to the Internet just to be helpful, without thinking of it as sacrosanct intellectual property.
  • Creation mode – I think of this as Barthelme mode, named for the writer Donald Barthelme, because I think he’s the epitome of totally original, wacky, one-of-a-kind creative types with an absolutely distinctive voice. In other words, this is the intelligence that I am quite convinced AI could never approach. By creation I mean using the full capability of your own mind to advance ideas that are new, and yours.

The trouble is, many people don’t make any distinction between these two general areas of AI, so on the basis of its prowess as a natural language search engine, they are led to believe it can do a perfectly good job in creation mode. And since most people aren’t English majors, and in fact don’t respect English majors, AI platforms get to roll out some pretty inferior writing and everybody thinks it’s genius. (This widespread lack of sophistication is also why McDonald’s makes so much money.)

So what?

For many years, as I’ve lamented at length in these pages, kids have been told all the jobs are in tech, and they need to study STEM. And now, many of the kids who dutifully followed these marching orders are graduating from college with Computer Science degrees and not getting jobs, and tech is laying off gobs of people. Next time I meet a STEM major I’m gonna ask him, “Computer Science? What are you gonna do with that?”

So how did STEM go from meal ticket to food stamp? Well, I think it’s largely because AI is actually getting pretty good at the operational mode. It writes software so well, all industry needs is a seasoned coder to check it. Will we still have seasoned coders in 20 or 30 years, when all the current ones have retired and nobody has come through the ranks to replace them? Probably not, but that’s a whole other blog post somebody has surely already written. (I did blog about ChatGPT’s prowess with operational mode earlier this year, here.)

So as we look at AI, and particularly its role in our personal and professional lives, I think we need to ask ourselves what we have to offer that is rare and valuable, and how AI can help. Specifically, I believe we should be asking the question: how do we use operational AI to handle rote stuff, so we have more time to develop our unique, original ideas—so as to bring out our inner Barthelme?

What to use AI for

I have to confess, I love AI for light research when I’m blogging. The kernel of my posts always comes from my own brain, usually from pondering all kinds of things while I’m out on a solo bike ride. But ChatGPT is a great way to chase down and pinpoint something I had vaguely committed to memory. For example, when working on a recent post I asked it, “Can you track down the Lore Segal quote from ‘Her First American’ about ‘protocol is the art of not doing what comes naturally’?” I probably could have found this with Google, but the AI helped (and might have been indispensable here had I not remembered the name of the novel). ChatGPT was also super helpful when I was writing my post on induction ranges, in researching certain facts (e.g., energy efficiency info and whether government rebates are available).

AI is also pretty helpful at work, where I use a “walled garden” version my employer provides. (It doesn’t use any of my chats as training data for the AI’s ongoing education.) In fact, my employer exhorts all us employees to use AI every day. It’s like with any great tool: we’re expected to work more efficiently because we have it, so we’d better use it well. Recently, I took several product specification documents for different Internet hardware devices, fed them into an AI utility, and asked it to read them all, highlight the differences among the various makes and models, and tell me which one I want for xyz purpose. This was much faster than poring through everything myself, which is a decidedly operational task. The report it generated was clear and reasonably concise, and probably won’t be read very carefully anyway. In fact, someone will probably upload it to a chatbot and have it summarized. All this is fine with me.
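For the curious, here’s roughly what that chore looks like if you script it yourself against a public chatbot API instead of using a corporate utility. This is just a sketch using the OpenAI Python SDK, not the walled-garden tool I actually used, and the folder name, model name, and “xyz purpose” bit are all placeholders:

from pathlib import Path
from openai import OpenAI  # public SDK, standing in for my employer's walled-garden tool

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# Gather the spec sheets (assumed here to be plain-text exports in a "specs" folder)
specs = {p.name: p.read_text() for p in Path("specs").glob("*.txt")}

prompt = (
    "Below are product spec sheets for several Internet hardware devices. "
    "Summarize the key differences among the makes and models, then recommend "
    "which one best suits <xyz purpose>.\n\n"
    + "\n\n".join(f"=== {name} ===\n{text}" for name, text in specs.items())
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)

The utility I actually used hides all of this plumbing, but the operation is the same: gobs of text in, a tidy comparison out.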

One other great use for AI chatbots is to ask them for instructions for quotidian technical matters in your personal life, like disabling the child lock on your new microwave oven, charging your new bike’s electronic shifting, or restoring your playlist after updating your smartphone’s MP3 app. Sure, these are things you could look up on YouTube, but often that search can be tricky, and the videos can be agonizingly slow. The following video tutorial, which is crisp and concise and beautifully shot, is perhaps the exception that proves the rule:

I guess one benefit of YouTube is it’s less likely to hallucinate. I asked ChatGPT if my bike’s brake/shift levers have button cell batteries, and it explained in great detail how there are actually wires running from the battery pack to each lever, so they get recharged along with the derailleurs. The chatbot even drew me a nice diagram to illustrate this. Alas, it was hallucinating: the levers totally do have button cell batteries that need to be periodically replaced. But all this being said, it’s easy enough to sanity-check this kind of output, and I usually get a good answer from AI when I can’t locate a product owner’s manual or don’t feel like leafing through the 50-page one that I have, trying to get past the 14 foreign language versions.

What NOT to do with AI

I think where people get into trouble with AI is when they try to get it to do their work for them, particularly writing documents or correspondence that they then pass off as their own. In some cases this is an ethical or even legal matter; as I described here, The New York Times is suing OpenAI for copyright infringement, and I have firsthand evidence of ChatGPT essentially plagiarizing this blog. But I doubt you care overmuch about that. There are two bigger issues, I think:

  • What this “creation mode” usage does to the quality of “your” writing
  • What it does to the quality of your thinking

There’s this notion that you can ask a chatbot to write something for you, anything from an email to an invitation to a work report, and then you can just polish it up a bit, and you’re done. No more writer’s block! No more outlines, or worrying how to organize your thoughts! That might be okay for a very basic report, like the one I described above comparing features of tech hardware. But when you write your own document from scratch, you’re not leveraging AI’s impersonal, sprawling training data; you’re drawing on your own: everything you’ve experienced, heard, read, and dreamt of. It’s your own personal muse, not the generic Internet one.

Honestly, for anything loftier than a rote technical document—that is to say, anything designed to edify, persuade, or entertain—haven’t you seen for yourself how AI can fail? Like, you’ll get this chipper invitation to a family reunion and it’s using corny phrases like “drum roll please” and joking about your family’s dance moves, and it just seems generic and clichéd? That’s all AI can do. It doesn’t know you or your family or friends well enough to say anything truly clever, and all the polish you want to give its rough draft won’t help. Your invitation will never have real style, along the lines of, “L— gets dibs on the guest room (which she may still anachronistically refer to as “her” “bedroom”) and its magnificent new king-sized guest bed. If you’re nice she might invite you to a slumber party there. Other guests can fight over the legendary Futon of Sand down in the home office. Beyond that, we have two large sofas for those interested in the college-esque party-‘til-dawn experience. We would not be offended if one or more parties were to seek a motel/hotel/AirBNB/VRBO, especially given the relatively small number of bathrooms here (i.e., one). Regarding rumors that the men are encouraged to pee in the backyard, this is true, but please stick to the planting beds and the fountain.” See how much better that is?

Now, you might be thinking, “Wait, I’m not a blogger and I wasn’t an English major. Cranking out an email or an essay may be easy enough for Dana, but I just want to get this task done and checked off.” But stop and think for a moment: what would you like to be good at, in life? Please tell me the answer isn’t just “typing good prompts into AI.” Wouldn’t you like to be articulate, interesting, and capable of thinking on your feet? Because what are you going to do at a cocktail party, or a job interview, or a non-virtual work meeting, when you don’t have a chatbot to help you, if leaning on one is the habit you’ve let yourself fall into? The reality is, we get good at thinking by struggling to do it, for ourselves, the old-fashioned way.

So let’s not undervalue written communication by outsourcing it to AI. The best-case scenario is that it’ll do an inferior job, replacing what could have been original thought with a pile of trusty clichés and/or stealthily plagiarized, slyly anonymized content. The worst-case scenario is that it’ll actually get good enough that you never have to write for yourself again, and your brain can atrophy to the point that you’re not even a thinker anymore … just a chatbot operator.

Because you don’t care

Gosh, I guess I drifted into high-and-mighty, pompous, full-on pontification there, and I feel pretty sheepish about it! Fortunately, I’m realistic enough to sense you snickering, and I know you’re going to turn right around and keep on using AI for whatever you can possibly think of. That being the case, check back next week because I’m going to catch you up on the latest AI technologies and how much they’ve improved since my last check-in. Whether your chatbot of choice is ChatGPT, Gemini, or Copilot, I’ll have you covered. Until then, I’ll be getting back to what I really enjoy in life: kerning glyphs.

Other albertnet posts on AI

—~—~—~—~—~—~—~—~—
Email me here. For a complete index of albertnet posts, click here.
