Thursday, November 28, 2024

Bell’s Seasoning II - The Spawning

Introduction

Having been a drone for the better part of two days (as you shall soon learn), I am bound and determined to do something with my brain now. So why not blog? (Don’t answer—rhetorical question.) To get you in the mood for serious literature, I shall start with an epigraph:

“They have a saying in Chicago, Mr. Bond. Once is happenstance. Twice is coincidence. The third time, it’s enemy action.” –Auric Goldfinger (in Ian Fleming’s Goldfinger).

Do I have an enemy meddling in my affairs? Read on and decide for yourself.


The quest for Bell’s

What is Bell’s seasoning? Well, I’m glad you asked. It is nothing less than the key to Thanksgiving, as documented at length here. Bell’s goes in the stuffing and is, I believe, instrumental in its apotheosis from dried bread cubes and celery (useless on their own) to stuffing, that supreme Thanksgiving dish. (At this meal, I could probably do without the turkey, and could almost do without gravy—but without stuffing, you might as well forget the whole thing and get takeout Chinese.)

As documented in that first Bell’s post, one of the few ways I contribute to the Thanksgiving feast is to procure this all-important seasoning. That’s a harder task than you might expect. Some stores carry Bell’s, many don’t, and for some reason I can never remember where I got it the previous year. It’s just a blur … like when you have this vivid dream that somehow gets erased within minutes of your waking up. If I were smart I’d have checked out that previous post, though this didn’t occur to me until after my quest.

My strategy

Phoning a store for a stock check is nearly impossible these days. Even if you manage to get past the Interactive Voice Response system and are transferred to a human, they usually shunt you off to some other department and you bounce around until you give up. Or they put you on hold, try to find the item, get distracted, and eventually someone else picks up and you have to start from scratch—or, more often, the line just goes dead. I did phone Andronico’s on Solano Ave because they have legit customer service. It turns out (I only now discovered when rereading my last Bell’s post) that’s where I got the Bell’s last time. Alas, the clerk yesterday advised that they’re out of stock. This answer came swiftly, meaning she’d rather make up an answer on the spot than bother to check, or they’d had a run on Bell’s and I wasn’t the first person to ask.

So forget phoning. I decided to just stop in at this or that store in a gradually increasing radius until I found the Bell’s. Surely somebody would have this and my circle wouldn’t grow to encompass all of northern California … right? I had some time, because I started on Tuesday. (Only an idiot would brave the markets on the day before Thanksgiving, right?)

Store #1 – Berkeley Natural Grocery

This was a no-brainer. I was walking back from the mailbox, several blocks from my home, and Berkeley Natural was right on the way. I dig this store because a) it’s my “corner grocery,” b) they used to give my kids balloons (click here for details), and c) my younger daughter worked there one summer. Alas, though they have lots of bins and an admirable spice section, I did not find Bell’s there. I knew this was a wild card anyway, my “fail-fast” foray, and the effort cost me almost no time.

Store #2 – Magnani Poultry

This is where I went to pick up our turkey. It was supposed to be mobbed—they told us Monday would be a lot better than Tuesday for the pickup—but who wants to store a fresh turkey that long? Early signs were not good—there was no parking anywhere nearby and I had to park in a nearby neighborhood and walk a few blocks—but it wasn’t that crowded in there. I took a number, was served pretty quickly, and was told that the turkey pickup was a separate line, but since I had the guy, I bought a pound of organic local grass-fed beef from cows that are “encouraged to socialize.”

I got my turkey and yikes, it was $115. For that much, I hope it also had been encouraged to socialize. I should have just hit one of our neighborhood’s many stray turkeys with my car … it wouldn’t be hard to do. But then, I’d have had no idea how to pluck it. Plus, what if the impact didn’t kill it? (This is how I rationalize the $115.) I also picked up a quart of frozen turkey stock and—could it be that their extensive spice rack included Bell’s? It did not, alas. All kinds of rubs, sauces, and seasonings, but no Bell’s. There’s the rub.

Store #3 – Monterey Market

I walked over to this place because it’s just across the street from Magnani and has all kinds of cool stuff (beyond their produce, which is excellent and cheap; for example, around $0.40 for a bunch of cilantro). They have like ninety kinds of mushroom, fifty kinds of pepper, all manner of salsas and spices and extracts, and … no Bell’s. But hey, it was only like a five-minute detour and I got in some really great people-watching, everyone from restaurant owners to frugal housewives to college professors to tech bros, and a variety of ethnicities including those who actually know the differences among the ninety kinds of mushrooms and fifty kinds of peppers.

Store #4 – Lucky

It was time to stop messing around and actually do some research. The website for Lucky, in the neighboring town of El Cerrito, indicated that this very location did in fact have Bell’s in stock. This is a bit of a schlep so I never go there. In the parking lot an old, run-down guy seemed to need help getting out of his car, and called out for assistance. I was happy to oblige, but when I reached him he said he just needed money to buy a sandwich. This was a bit perplexing. I mean, he had a car, right? How broke could he be? But then, he was sitting sideways on the car’s back seat, legs sticking out, with no driver in sight, so who knows, maybe it wasn’t his car. Maybe he just needed to sit somewhere and found this car unlocked. The smallest bill I had on me was a five. Oh well … happy holidays, dude.

I made my way through the front door and immediately encountered a large security turnstile, manned by a security guard. I got through that and the next thing I came upon was the “Convenience Section,” an area containing pricier items like cigarettes and booze, all fenced in with a locked door requiring customer service assistance. “Convenience” indeed. The fluorescent lighting at Lucky was that overly intense type that makes you feel like you’re being interrogated. The whole scene was pretty downscale. I made my way over to the spice section and—denied! Here is the gap where the Bell’s should have been:


I headed to the customer service counter, which was oddly situated beyond the checkout lines so I had to squeeze past people. The place was teeming with shoppers. At the service desk I waited behind a woman with a giant clear jar, the size of a coffee can, of what looked like granulated ginger; moisture had gotten in, so the contents were like cement. She was having a protracted negotiation about a refund or exchange and there was a language barrier, so it took some time. Finally it was my turn. “You mean the spice mix in the yellow box, with the turkey on the front?” the clerk asked. Yes, yes, yes! She said, “Oh yeah, we have that. If it’s all out in Aisle 2, go to our seasonal display.” She pointed toward that area and assured me there’d be more Bell’s there. I headed over and scoured the area. Nothing.

But surely there’d be more in the back, right? I decided to head back toward customer service, but didn’t feel like squeezing through a checkout line again. Seeing some people on their way through the inbound security turnstile (it had two big gates; you could drive a truck through it) I decided to slip through there instead, like piggybacking in reverse. Well, this set off the security alarm, which was exceedingly loud and shrill. The security guard gave me a withering look that said, “Man, you’re a damn fool.”

The next clerk at customer service rang up her boss on the red line. He took forever to answer. “My boss isn’t answering!” she said, bewildered. Finally the boss answered and said I should head all the way to the back, almost by the double-door exit near the dairy, where there’s yet another holiday display that would have more Bell’s. I found my way there but it was another fool’s errand.

Store #5 – Ranch 99

Ranch 99 is a giant Asian grocery in the Pacific East mall in Richmond. It wasn’t all that far away, since I was already pretty far north. It was a long shot, I figured, but then this place is vast and has, like, millions of products. Once in the mall I had a lot of other businesses to navigate, but eventually found Ranch 99. Walking around in there was like a Willy Wonka experience, one tantalizing aroma after another, none of them exactly recognizable but like being at a Chinese restaurant and/or an open-air bazaar. They have half an aisle just for seaweed, and more kinds of cup-o-noodles and ramen than you’ll ever see anywhere. I scanned several hundred wacky spices, but no Bell’s. So I headed over to the seafood department to look at the lobsters. Check this one out:


From this (hastily snapped) photo it’s hard to grasp the scale of this lobster. It was the size of a small dog. Now, unless this critter had been living in this tank for years, which I very much doubt, he (or she) was waaaaay over the size limit on lobsters and should have been thrown back in the ocean (details here). This was basically an illegal lobster.

Not wanting my trip to have been in vain, I searched for something to buy that I couldn’t get elsewhere. I hit pay dirt with this cookie tin, perfect for a Christmas gift for one of my daughters:


But alas, it too was not meant to be:


Store #6 - Oaktown Spice Shop

Google Maps found me a sneaky way home, which took me on this frontage road and then right up Solano Ave where we have a Safeway. The Safeway app said they were out of stock, but I figured, what the heck, maybe it’s wrong? So I started to head up there and passed right by this giant spice emporium I’d somehow never noticed before:


This place was huge—I mean, floor-to-ceiling spices—but it’s all this homegrown Oaktown stuff, no third party products like Bell’s. The clerk was very helpful, letting me sniff various products designed to enhance poultry and even stuffing. I’m getting over a cold (don’t worry, I was rocking a COVID mask) so I couldn’t smell all that well, but suffice to say nothing smelled even remotely like Bell’s. Substituting one of them would be as inappropriate as, say, putting jelly on a hot dog instead of mustard. No way.

Store #7 – Safeway

This is where I normally shop and I found the spice section very quickly, and almost just as quickly ascertained that they were either out of Bell’s, as their app had warned me, or didn’t actually stock it and only made it available online through some partner, like they’re trying to be mini-Amazon or something. But hadn’t I bought Bell’s here before? I decided to check out the discount shelves where they put overstocked and discontinued items. I mean, you never know, right? No Bell’s, but I found this:


Huh? Girl Scout Seasoning? Is this for, like, cannibals? Next I headed over to the “Manager’s Special” shelf, at the other end of the store. I’ve found weird products there before that I liked, and normal products oddly reduced, so I figured what the hell. Alas, no Bell’s, but I did find this:


How about Crushed BS? Is that good in stuffing? Dang. At least when I wandered the shelves for something else to buy, so as not to go home empty-handed, I found a crazy QR-code-driven digital deal that saved me—I kid you not—$18.40 on eight cans of cream-style corn. Since when is this humble product so expensive? At this point I realized my blood sugar was getting precariously low so I headed home, made Southwestern Corn Goo for my visiting daughter, and called it a night.

Store #8 – Berkeley Bowl West

Berkeley Bowl is a great supermarket. When I was in college, friends would say to me, “What?! You don’t know about Berkeley Bowl? You of all people would love Berkeley Bowl.” I ignored all their advice because I thought it was a bowling alley. Why, I’d wondered, do all these people think I’m a bowler? Finally someone clarified that it’s a grocery. So yesterday I checked online and confirmed that their Berkeley Bowl West location (the one nearer me) had Bell’s in stock. At this point my wife had a list of stuff she forgot to get for our feast, so it wasn’t a special trip (though it’s a lot farther than I usually go to shop).

Their parking lot was large and cramped and full. I made a hot lap in vain. They even have a parking garage, but I just didn’t feel like it. I took another lap and got lucky. It took ages to walk across that giant parking lot and get a cart. I made my way to the spice section and—denied!


I found a customer service clerk and asked if they had any more in the back. “No,” she sighed. I asked if their other location would have it. “They do carry it, but there is no way I could find out if it’s in stock.” Well heck, if they won’t disclose that proprietary information to their own employee over the phone, I didn’t see any point in calling them myself. I decided I better just head over there.

Now, if you’re starting to think I’m bat-shit crazy to stick with this obviously doomed search (as my wife certainly does), you should know that this will be my first Thanksgiving in at least twenty years without my mother present, either at her place or ours. No, it’s not that I’m some momma’s boy or anything; it’s just that to let my wife fly solo on the feast without having Bell’s seasoning would be like setting her up to fail. I don’t want to put that kind of pressure on her. If my family isn’t making spontaneous and heartfelt yummy noises over the stuffing, it will be pretty obvious.

Store #9 – Berkeley Bowl

I headed up to the original Berkeley Bowl. Traffic was murder. Every single motorist in the Bay Area was on the road, and many of them were angry, probably hangry, and honking. I so badly wanted to judge them, for being so utterly stupid as to wait until Wednesday before finishing their Thanksgiving shopping, but here I was, one of them. Having to withhold judgment was just insult to injury. I arrived to find another huge parking lot, also cramped, seemingly designed for nothing but Minis and those little Fiats. I totally lucked out and found a spot next to a guy at the end of the row who was so worried about his (albeit humble) car getting dinged, he was at the very far edge of his spot, meaning I had room to squeeze in despite the SUV encroaching on my other side. I headed in past throngs of people going in both directions. It was like a music festival or Burning Man. Oddly, when these masses of people aren’t in cars, I don’t mind them at all.

Now, this is the Berkeley Bowl where my brother M— worked, and I’m going to tell you a fun story about that. M— worked in their excellent seafood department, and was impressed at how well it was run. But he had trouble making friends with the staff, who—being career blue-collar guys—might have assumed he was a fly-by-night college kid or something. One guy in particular, Jose, seemed a bit cold. Well, for various reasons, M— decided to move back to Boulder, and on his very last day working there he waited on a rather snooty old lady who told him something like, “Make it snappy.” She seemed so privileged and self-important, M— just couldn’t bring himself to move very quickly. Gone was his normal verve; he somehow felt like he was underwater and everything was happening in super-slo-mo. Eventually the lady became exasperated and demanded of Jose, “Why is he so slow?” Jose, taken aback, didn’t really know what to say and after an awkward pause, replied, “I don’t know. Why don’t you ask him?” Flummoxed, she could find no other response to this suggestion than to turn to M— and ask, “Why are you so slow?” M— stared at her and said, “Because I’m stupid. Okay?” He handed her her purchase and she stormed off. Jose looked over at him and said, “M—, you’re okay.”

And so, on to the spice section. Guess what:


No, that’s not a repeat of the first Berkeley Bowl picture. It’s just the same old story. Gobs of the competing products on either side (I got really sick of looking at this fancy Colman’s mustard powder and Old Bay seasoning, whatever those are) but the Bell’s was totally gone. The customer service clerk said, “Gosh, that’s odd, this was just stocked this morning!” There wouldn’t be more in the back, she advised, but they might have more in the boxes atop the shelves. She climbed a stepladder and rooted through at least a couple dozen boxes up there, wheeling the ladder down the entire aisle before finally giving up. But hey, at least she tried.

(By the way, did you notice that the Bell’s at this rather upscale store sells for only $3.79, whereas at the down-market Lucky it was $5.19? I’ve seen this disparity before and I just don’t get it…)

Store #10 – second Andronico’s

I decided to head home via Shattuck Ave, which would take me by another Safeway, this one much more chichi than the one near my place. Along the way I passed an Andronico’s I’d totally forgotten about. Since the first Andronico’s acknowledged that they normally did stock Bell’s, this seemed well worth a shot. The parking lot was completely full, and parking in the surrounding streets was no better. On my second hot lap I got lucky and an SUV was pulling out. This was a weird spot right up against a brick wall, but I had plenty of room. Maybe I was just too tired and frazzled, but my first approach was way too shallow and then I felt like I was committed. It took me a good bit of sawing back and forth to nestle in there, during which time an old geezer in a motorized wheelchair oozed so gradually across my path it took all my patience to wave nicely at him and put on my best fake smile, and also during which time some other dude, as if to rub in how far I was from the wall, passed by my car on that side. I finally got ’er done and headed in to the store.

This had to be the flagship Andronico’s. Fricking giant place. I found the spice section, which was quite large; I counted over twenty different kinds of salt. They had stuff I hadn’t seen anywhere, but—you guessed it—no Bell’s.

Store #11 – second Safeway

The Safeway on Shattuck has the largest parking lot of all, and it was 100% full. They had a garage too, with a digital sign to indicate available spaces, but it was broken. I finally found a very narrow spot in the 15-minute section, between a big concrete median and a battered pickup truck full of all of somebody’s possessions, parked diagonally so it infringed significantly on my target spot. I just couldn’t get in there because my tires kept hitting the median. No wonder nobody had taken the spot. I finally managed to find parking along the street. I headed in, braved another crazy crowd, found the spice section, and I know you’re not gonna believe this, but … they didn’t have Bell’s. I was beginning to feel like the narrator of Poe’s “The Raven” who seems to take a masochistic pleasure in asking the bird question after question because he knows it will only reply, “Nevermore.” He’s basically torturing himself, as was I.

As a last resort, I double-checked the websites for Sprout’s and Whole Foods. Here’s what Whole Foods said:


What a blatant lie. “We can’t seem to find this product” actually means, “We have chosen not to stock this, even though we have an entire aisle for homeopathic remedies, aka snake oil.” I’m not necessarily averse to placebos, but they shouldn’t cost an arm and a leg; Whole Foods is basically stealing from clueless people. They’re dead to me. Sprouts showed me bell peppers, some kind of beer with Bell in the name, and hundreds of unrelated spices, but no love.

Well, that was that. I was done. Nobody could say I didn’t try. In fact, I realized it would be better not to even mention stores 10 and 11 to my wife. She already thought (knew?) I was crazy to be so persistent … why salt the wound?

Store #12 – Rose & Grove Market

As I headed home I realized there was one more store I could try, which had been next door to the bike shop I worked at in college (which is where I met my wife). Oddly, I’d never set foot in this little store, but always grasped it was a Berkeley institution. (Its very name attests to its longevity, since the street it’s on, Martin Luther King Jr. Way, was called Rose Street until 1984.) Rose & Grove is not a big place but hey, you never know. I decided if there happened to be street parking right out front, I would take that as a sign from God that it was worth checking out.

Well, guess what? There was a parking spot right out front—unbelievable! So I headed in. Now, if this story were an “ABC After School Special,” Rose & Grove would have had Bell’s Seasoning and the holiday would be saved. But what I encountered was almost the opposite: they had almost zero inventory. All the shelves were bare except for some booze behind the counter. The clerk informed me that they’d gone under and would have a new owner in a month or so. The end of an era, and the end of my quest.

“Wizard of Oz” ending

Oddly enough, my story has a happy ending, along the lines of “there’s no place like home” in “The Wizard of Oz.” No, I didn’t wake up and realize all this had been a dream. Rather, I arrived home defeated, but had a backup plan: on Monday night, foreseeing possible Bell’s supply issues, I took from the freezer a fresh box of it I’d put away last year. My mom has said it freezes just fine. But when I opened the box on Tuesday, I discovered to my great disappointment that it had almost no smell. Since I’m getting over a cold, I had my daughter smell it to double-check. Instead of saying, “Mmmmmm, that smells like Thanksgiving!” she frowned and said, “It doesn’t smell like anything.” Unlike Han Solo, the Bell’s had not survived the cryogenic freezing operation.

But now, in desperation, I gave it another sniff. Eureka! As it thawed out, it must have regained its potency. Like magic, it now smelled like Mom’s stuffing! I had my daughter sniff it again, and she agreed. It’s not ideal, but we should be fine. What a relief.

Enemy action?

What remains to solve is how so many places could possibly be out of Bell’s at the same time. It couldn’t possibly be coincidence. So I started to wonder if I’ve made any enemies who might want to thwart me in this quest. I’ve certainly seen evidence, when shopping, of a doppelganger at work, who buys up, say, all the Bonne Maman apricot jam (but no other flavor!) when it’s on sale because he has the same taste as I do and is just as much of a cheap bastard. But that has a logical explanation: he’s stocking up, same as I would. But who needs more than one box of Bell’s (or two, if freezing one for later)? What would be the motive?

Maybe it’s that first blog post. Maybe somebody read that, decided he hated me based on my writing style, and is an eccentric millionaire, and so went around buying up all the Bell’s, or (if he’s not local) hired some taskrabbit to do it. That could be. But then, why would somebody hate my blog that much (other than the Velominati, Andrew Tillin, or Margolis and Liebowitz)? This enemy action theory just doesn’t seem very realistic.

But then a much simpler explanation occurred to me. Perhaps my last Bell’s post simply hit the mark, and convinced all my readers that this particular seasoning really is the key to Thanksgiving. Maybe across the country, even across the world, including my own locale, people are buying up the Bell’s in droves. To test my theory, I looked up the stock price for Brady Corporation, which makes Bell’s seasoning. Sure enough, its price began a dizzying climb in late 2015, right around the time of that first post. I appear to be a victim of the very success my blog has brought about!


Yeah. That must be it.

Well, if you’ve made it to the end of this post, especially if this was after your big feast, congratulations. And have a very happy Thanksgiving!


—~—~—~—~—~—~—~—~—
Email me here. For a complete index of albertnet posts, click here.

Thursday, November 21, 2024

Tech Check-In - How Good Is the Latest A.I.? - Part II

Introduction

In my last post, I described some recent improvements in artificial intelligence. I also covered some of the challenges A.I. faces from the New York Times and other publishers who see it as both plagiarizing and competing with their content. This week, I continue my tech update, covering these areas:

  • ChatGPT’s improvements in writing poetry
  • ChatGPT’s ability to create custom art for users

Here is an example of its custom art. More on this later…


Poetry

Who cares if ChatGPT can write poetry? Obviously very few people write poetry anymore, and even fewer read it. (Yes, that was a joke.) I will contend, though, that if we are evaluating the artificial intelligence capabilities of large language models, we should naturally take an interest in their understanding of the mechanics of language. Shouldn’t an A.I. that can correctly describe poetic structures like iambic pentameter and rhyme scheme also be able to employ them? It’s not a matter of whether A.I. could put poets out of work, but how well it can apply a linguistic framework.

How does the current version of ChatGPT’s generative A.I. engine, GPT-4, do compared to the last time I put ChatGPT through its paces? The short answer is: much, much better. A couple years ago I’d asked ChatGPT (version GPT-3.5) to write a quatrain (four lines) of a sonnet, with an ABAB rhyme scheme, followed by a rhyming couplet, all in iambic pentameter, on the subject of post-urinal drip, and to make it funny. By way of review, here’s what GPT-3.5 produced:

Inevitable, yet always a surprise,
Post-urinal drip can make one despise
The feeling of wetness that lingers on,
A sensation that’s not easily gone.
It’s a cruel trick of the human form,
To leave us damp after we perform.
But fear not, my friends, for there’s a way out,
Just shake it off and give it a good clout!

The chatbot screwed up literally everything. The rhyme scheme was wrong; the number of lines was wrong; the meter was wrong; and, far from being funny, the poem recommended an assault on one’s genitals. It was, all in all, an abysmal performance.

 Here’s what GPT-4 produced: 

After the final shake, I turn to go,
But droplets linger, heedless of my plea.
A subtle dampness starts its stealthy flow—
A small betrayal for the world to see.
Yet life goes on; I button up with grace,
And pray no telltale spots will mark my place.

This is a much better effort. The poem gets off to a shaky start (pun intended), since “after” is a trochaic word (emphasis on the first syllable—the opposite of iambic). That is, the very first word screws up the meter. Fortunately, things improve from there. The last five lines are properly iambic; the rhyme scheme is correct; and the poem is kind of funny. Sure, it’s not perfect; for example, “plea” makes no sense (whom is the speaker pleading to?) and was clearly chosen just to rhyme with “see.” It wouldn’t be hard to improve on this. Right off the bat, I can suggest replacing “heedless of my plea” with “rogue remains of pee,” satisfying the rhyme, making more sense, and employing some alliteration.

It’s worth asking the question: why do we only evaluate A.I. in terms of whether or not it simulates intelligence, when we could be asking how intelligent it is? Intelligence is a spectrum, and simulating human thought isn’t that impressive if the human being matched is kind of a dumbass. For contrast, consider this excerpt from my own contribution to the canon of post-urinal-drip poetry, penned when (at age seventeen) I was new to writing sonnets:

Relax, because you’ve earned your potty break;
Unburden your poor bladder of its pee.
And when you’re done you’ll shake and shake and shake;
An effort all in vain, it seems to me.
For urine flow can never really stop,
Until your undies drink the final drop.

I hope you’ll agree this is better than ChatGPT’s effort. Granted, I didn’t write my lines as fast as ChatGPT could, but wouldn’t you rather have a good poem after five or ten minutes, instead of a mediocre one after five or ten seconds? Of course I’m nitpicking, though … it must be said that GPT-4 has advanced hugely when it comes to the sonnet form.

So: how does GPT-4 do with dactylic trimeter, my other benchmark for poetic acumen? Not so well. I prompted it (as I’d done a couple years ago), “Please write a 10-line poem in dactylic trimeter about bike riding up two very difficult climbs, called El Toyonal and Lomas Cantadas, celebrating caprice and radical freedom.” Here is GPT-4’s response:

Through switchbacks and sunbeams, I pedal uphill,
muscles alight as the world rushes past.
Wind in my ears as I laugh at the thrill,
defying the steepness and making it last.
El Toyonal greets me with gravel and glare,
its serpentine pull both a test and a dare.
Up Lomas Cantadas, where courage is crowned,
I’m boundless, unbridled—my spirit unbound.
With each climb I’m soaring, in caprice, I fly,
a radical freedom beneath open sky.

This is better than last time (see here for details) but not much. Technically, not a single line of this poem is proper dactylic trimeter. For one thing, ChatGPT took the liberty of adding an extra, accented syllable to the end of each line. While this is perhaps superior stylistically, it’s not what I asked for. (In my own dactylic trimeter poems I add two syllables per line, but in my blog post on this poetic form I acknowledged my stylistic divergence from the pure meter, and after all, nobody had actually asked me to use any particular meter.) Now, even if we grant ChatGPT the poetic license to add extra syllables, only two lines of the poem (the second and third) are actually dactylic trimeter. The other eight lines start with an unaccented syllable, which is fundamentally incompatible with this meter. The last line is particularly frustrating because it employs a needless and in fact nonsensical indefinite article (i.e., “a radical freedom”) that spoils both the meter and the meaning.

The ChatGPT poem is also marred by logical errors. The idea that the “world rushes past” and there’s “wind in my ears” is absurd, since these are very difficult climbs that nobody could go up very fast. (The Strava KOM for El Toyonal was at an average speed of only 10.4 mph, as ChatGPT could have easily discovered.) To describe this climb as a “thrill” is a joke; any cyclist would tell you it’s a slog. And “making it last” suggests a deliberately slow pace, which flies in the face of “defying the steepness.” And where does “gravel” come from? Sure, gravel bikes are all the rage right now, but El Toyonal is a paved road. Meanwhile, a human on a bicycle cannot be said to “soar,” and ChatGPT just tacked on the concepts of caprice and radical freedom without integrating them into the poem. The A.I. gives no indication (or I should say simulation) of even knowing what these terms mean.

It’s odd that this poem actually makes less sense than ChatGPT’s sonnet … it’s almost as though the chatbot blew all its computing cycles fighting with the meter. This poem is only a bit better than what GPT-3.5 had come up with, and undermines the sense that GPT-4 actually understands the structure of language. Maybe ChatGPT’s progress with sonnets is just due to imitation; after all, there’s vastly more training data available for that form.

(If you’re interested in comparing ChatGPT’s poem above to my own dactylic trimeter poetry, click here and/or here.)

ChatGPT art

I’ve never before delved into the artistic capabilities of ChatGPT, so I don’t have any benchmark by which to evaluate its progress over earlier versions, but you gotta start somewhere, right? As it happens, I visited my older (fledged) daughter recently and, following an incident involving a hot tub, she started messing around with ChatGPT and asked it, “Can you create an image of a tall skinny white man feeling faint after leaving a hot tub?” Here’s what it came up with:


When my daughter showed me this, I immediately pointed out that, perhaps based on some automatic effort to make the man good-looking, ChatGPT gave him too much upper body musculature to really be called “skinny.” I think “hunky” would be a more appropriate description. 

My daughter told ChatGPT, “Make him even skinnier.” Almost as if being sassy, the chatbot produced this:


My daughter prompted ChatGPT to try again without going overboard, and its next effort looks a lot like cheating:


Not only is this a cop-out, but the picture suggests an implausible scenario. If this guy felt faint after leaving the hot tub, and then took the time to go find a robe and yet still feels faint, why isn’t he either wisely sitting down, or sprawled out on the deck having passed out? Also note that part of his robe’s belt is missing.

My daughter went back to the original picture and told ChatGPT, “Make him skinny like a cyclist not like he is anorexic.” Here’s its response:


The cycling shorts are a cute touch, but not very realistic when you think about it. What cyclist wears his cycling shorts in the spa? And who said this guy just finished a ride? It’s not like cyclists wear their cycling clothes all the time. This hot tub could be at the guy’s home, or at a hotel he didn’t even bring his bike to. Meanwhile, the picture still fails to capture the physique of a typical cyclist … very few of the riders I know have pecs or biceps that big.

Moving along from the hot tub pictures, last week I didn’t have any cover art for my blog post, so (inspired by my daughter’s experiments) I decided to see what ChatGPT could come up with. I asked it to create a picture, in the style of William Pène du Bois, of a teenage girl using ChatGPT on a tablet. The result is a far cry from du Bois, and though I used it anyway, I received some constructive criticism from a reader that the picture was perhaps not quite appropriate for the top of my post. Thus, I replaced it (eight days after I had originally posted it) with a different one (more on this later ... see the Epilogue at the bottom of this post). Here is the original picture that ran at the top of last week’s post:


The issue with that picture is the girl’s bare shoulder ... a bit racy especially given her age. I didn’t really like that from the beginning. I asked ChatGPT to fix that, and make the girl’s cheeks less rosy, and make the cat more realistic, and it produced this:


I don’t know about you, but I find this second effort deeply unsettling. Her cheeks are just as rosy as in the first picture; her eyes look like an exaggerated attempt to appear as Western and doe-like as possible; and overall there’s just this air of uncanny-valley, old-timey weirdness like you get with the American Girl dolls. The picture is more like what Thomas Kinkade would create than anything by Pène du Bois.

I asked ChatGPT to go back to the first drawing and try again without the bare shoulder, but to keep the clothing modern, and here’s what I got:


This isn’t so bad, but how is that clothing modern? Who wears overalls anymore, or big, puffy, flouncy sleeves? The girl’s entire house looks antique. But my main issue is the weird non-words on the tablet display: “Ceenly crerrity” and “Ininty ccnvity,” which bring to mind the strange strings of non-words that bots sometimes include in bogus comments on my blog posts. I find them unnerving.

To create new cover art for today’s post, I decided to scrap the Pène du Bois picture and start from scratch. I asked ChatGPT, “Please create a picture, in the style of Shawn Martinbrough, of a tall, blond, lean, middle-aged man, with a cat on his lap, wearing nice bluejeans and a black hoodie, using a tablet to visit the website www.albertnet.us.” What it came up with was almost exactly like what you saw at the top of this post:


Not a bad picture, though the guy isn’t exactly middle-aged, and the cat isn’t exactly on his lap. Also, the strange text at upper right doesn’t make much sense. I told ChatGPT, “That looks great, but please make ‘albertnet’ all lowercase and don’t show ‘SHAWN MARTINBROUGH’ on the tablet display.” This would seem to be a trivial tweak for ChatGPT, but to my surprise it produced a substantially different picture:


As I’m sure you’ll agree, this dude looks even younger than the original. And what’s with the www.ShMartinbrough and other textual debris? How is a misspelled URL helping anything? I responded, “Great, but please remove the www.ShMartinbrough from the picture.” Just a tiny change. It came back with this:


So now we’re back to the original guy (which is fine, I won’t miss that Vanilla-Ice-looking dude), but ChatGPT forgot I wanted albertnet in all lowercase, and what’s worse, the guy is now holding the tablet backwards, staring at the back of it! WTF? That’s like the old trope of the guy pretending to read a book but (unbeknownst to him) it’s upside down. I replied, “That’s good but it looks like he is holding the tablet backwards. Also, he is too young, I asked for a middle-aged man. Can you fix those things?” ChatGPT tried again:


It just gets worse and worse! The tablet is still backward, and now we have this bizarre shadow image of the guy. Did I ask ChatGPT to get all artsy-fartsy on me? I did not. And okay, the guy looks a tiny bit older, maybe a year or two tops, but it’s mainly the cat that looks old. I complained, and the chatbot (which to its credit never gets irritated or frustrated) gamely regenerated and came up with this:


Sure, the tablet is fixed, but now the cat just looks angry, the guy has just as much hair, and check out that cleavage! He’s got, like, pneumatic pecs! He needs a fricking brassiere! And look at the size of those shoulders—he’s a monster! I told ChatGPT, “Can you tone down his pectoral muscles and give him a bit of a receding hairline such as many middle-aged men have?” Here’s its response:


Oh. My. God. It’s all gone downhill. His hairline is essentially unchanged; the unwanted text is back (this time morphing the two names to produce the nonsensical “Albertinbrough”); the dude’s pecs are just as huge as before; now his shirt is this kind of flimsy satin that’s practically lewd; his biceps are about to burst; and check it out, this brute is actually sitting right on his poor cat! How is the poor beast’s spine not crushed? And yet the cat seems perfectly stoic about the situation. Not very realistic. In A.I. terms this is a “hallucination” and shows how ChatGPT is still unable to sanity-check its creations. What’s shocking to us doesn’t seem wrong to the A.I. Do I need to specify that I don’t want the cat’s head to be bursting out of the guy’s groin?

I tried three more times to fix the picture, emphasizing a non-crushed cat, thinning hair, a man at least fifty years old, the build of a cyclist, and albertnet in all lowercase. While I was at it I asked to make the cat a tabby. ChatGPT kept trying, swinging wild at this point, ignoring first this instruction and then that, producing all manner of artwork but without ever meeting all of my simple directives:



For each picture, ChatGPT provided a caption telling a nice lie about the revision. For example, below the last picture it wrote, “Here is the updated illustration with ‘albertnet’ in lowercase, the man having the lean build of a cyclist, and a tabby cat resting on his lap. Let me know if there are any other changes you’d like!” True, the picture was updated, and that is a tabby, but everything else about this description is incorrect. So I went back to the very first picture and, using a different A.I. tool, manually scrubbed off the errant text so I could have something usable for the cover picture. ChatGPT, instead of a precision tool, had behaved more like a dartboard. And I suck at darts.

As with the poetry, ChatGPT seems to want to be the whiz-kid who can crank out something passable in almost no time at all, vs. thinking deeply and producing something that’s spot-on. ChatGPT’s fail-fast, iterative technique strikes me as almost the opposite of art. For blog post cover pictures, I’d rather commission my younger daughter to take a little time and create something of real value (as she has done for previous posts like this one, this one, and this one). She works much more slowly than ChatGPT, and isn’t at my beck and call, but I think the end result is far superior. I couldn’t get cover art for this post because she’s away at college and it’s dead week, but to compare her work to ChatGPT’s, consider an earlier effort of hers, drawn when (at age seventeen) she hadn’t yet taken any college art courses:


I asked GPT-4 to create a black and white drawing of a hand holding a mechanical pencil and here’s what it came up with:


Should I need to remind the chatbot how many fingers a human has? And what’s with all the stray dots … are they fountain pen ink spills, or beads of black sweat flung from the brow of a six-fingered space alien? Tell you what, I’m sticking with human artists for now. They’re worth the wait.

Conclusion

Looking back at these last two posts, I would say the current buzz around A.I. is well warranted, given a) how quickly the technology is improving, and b) the ramifications—not all positive—for how we get information from the Internet and for what we get when we task A.I. with creating what will pass for our own creative output. I guess I shouldn’t be surprised to see Gen-Z people using ChatGPT and even Microsoft 365 Copilot as routinely as we’ve all been using Google all these years. Myself, I prefer old-fashioned web search tools because my answers will be more complete, more interesting, and may take me down rabbit holes that (so far) I still have the patience for. As for creating prose, poetry, and art, ChatGPT strikes me as a powerful tool, but one we’d better be careful to rein in. A.I. still seems to put speed and convenience ahead of quality and reliability. My take-away: power to the humans! Stay ahead of A.I.!

Epilogue

Getting back to that kind of odd picture from last week’s post, I decided today to replace it. From the beginning I hadn’t liked how the girl’s shoulder was bare and her bra strap showing, and a reader complained about this along with the fact that this youngish girl seemed to be wearing a lot of makeup. I decided to also abandon the part of my prompt that said to employ the style of William Pène du Bois ... that just wasn’t working out. So this time I prompted ChatGPT, “Please create a picture, in the style of Chris Riddell, of a 19-year old girl in modest, modern attire in a modern setting using a tablet.” To my surprise, ChatGPT refused, saying my request ran afoul of “DALL·E’s content policy.” I asked for details, which helped narrow it down to the style component of my request, and ChatGPT told me, “This might be due to ... closely emulating the style of a living artist like Chris Riddell.” This puzzled me, since Shawn Martinbrough (whose style ChatGPT happily emulated two days ago) is also living. So as an experiment I asked ChatGPT, “Please create a picture, in the style of Shawn Martinbrough, of a 19-year old girl in modest, modern attire in a modern setting using a tablet.” Here’s what I got:

Does that weird sweater, with the oversized collar, look familiar? It’s the same garment the very first drawing featured, of the teenager done in the Pène du Bois style! What part of “modest attire” is this chatbot not getting? I asked it, “Can you please try again but not have her shoulder exposed?” It generated this:

The caption ChatGPT gave the above picture was, “Here is the updated illustration, ensuring her shoulders are fully covered and her attire remains modest and modern.” False! I see a shoulder, a bra, and cleavage! I replied, “I can still see her shoulder and the strap of her bra. Can you fix that by giving her a garment that covers both shoulders and doesn't show any strap?” It gave me this:

Curses! Foiled again! And ChatGPT will only generate three pictures a day for non-paying users like myself, so I decided to call it a day and used the above picture atop last week’s post. It looks like we may need to wait until GPT-4.5 or GPT-5 for the amazing new technology involving pictures of women that don’t show a bare shoulder and a bra strap. Perhaps hundreds of developers are working on that problem even as I type this. Until that breakthrough is made, I will maintain steadfastly that ChatGPT cannot be held to possess intelligence.

Other albertnet posts on A.I.

—~—~—~—~—~—~—~—~—
Email me here. For a complete index of albertnet posts, click here.

Thursday, November 14, 2024

Tech Check-In - How Good is the Latest A.I.? - Part I

Introduction

During the ’90s, all a company had to do to get funding was throw around the word “Internet.” Then the dot-com bubble burst, and the venture capital tightened up, but during the aughts a company could still generate a lot of excitement by using the word “cloud.” The effect of that word wore off by the teens, when tech companies had to toss about terms like “disrupt,” “transformation,” and “Internet of Things,” but even when used together these didn’t act like much of a magic wand. Now, in the roaring ’20s, any company mentioning “A.I.” signals its intent to be perceived as cutting-edge and worthy of massive funding and universal adoration. The effect is starting to wear off, of course, since there are so many poseurs. And yet, there does seem to be something magical about A.I., and I’ve put it through its paces over the years (scroll to the bottom to see a list of my posts).

Thus far I’ve been pretty skeptical of A.I. and how much it will actually “disrupt” the workplace. But the technology is evolving rapidly, so I think it’s worthwhile to periodically check in on its progress. Since I last blogged on this topic, Google has increased its generative A.I. capabilities, and OpenAI has upgraded its ChatGPT engine from GPT-3.5 to GPT-4. At the same time, A.I. technologies are facing increasing opposition from the publishing industry. In this post I will evaluate the following:

  • Google’s AI Overview and opposition to it from web publishers, myself included
  • Why the New York Times is suing OpenAI, and my own exploration of ChatGPT’s plagiarism
  • ChatGPT’s strides in taking a position and supporting it


Google AI Overview

This year, as you’ve surely noticed yourself, Google has rolled out a new feature: it distills the search results it considers most germane in order to provide a handy summary, which it displays above the search results. This is convenient for users, but is perceived as a threat by web publishers. As noted in this New York Times article, publishing executives are “worried that the paragraphs pose a big danger to their brittle business model, by sharply reducing the amount of traffic to their sites from Google.” One executive complained that “it potentially chokes off the original creators of the content,” with Google’s generative A.I. summary replacing “the publications that they have plagiarized.”

So … is this true? In a word, yes. Leveraging my own thought leadership in the realm of spelling, I asked Google, “is it kindergartner or kindergartener?” and it came back with this:


My first issue with this response is that it’s not even accurate. Both spellings (kindergartner and kindergartener) are widely accepted, with no widespread preference for one over another.

That inaccuracy aside, my main issue with this summary is that it supersedes the search results Google used to show, which put my blog post on this topic right at the very top. My post provides a far more complete answer, as everyone who’s viewed it—over 21,000 readers as of this moment—can attest. It bothers me that the number of new readers finding this post (374 in the past seven days, and still accumulating) may decrease over time, since the AI Overview, along with Google’s “featured snippet” and “People also ask” features, shoves the link to my post so far down the screen that you have to scroll to see it. I feel like my post, as the number one “organic search” result, deserves higher placement. People wondering about this spelling are now getting a very concise, wrong answer instead of finding their way to my blog and getting the full story (and hopefully bookmarking albertnet). It’s a lose-lose.

I’m not trying to suggest Google is uniquely culpable for regurgitating publishers’ content while depriving them of readers. ChatGPT does the same thing, which gives me a nice segue to…

Why the New York Times is suing OpenAI

The New York Times recently filed a lawsuit against OpenAI and Microsoft, accusing them of using its content without permission to train AI models, including ChatGPT. The lawsuit claims that these companies have effectively taken advantage of the Times’ significant investment in high-quality journalism, turning it into products that compete with the newspaper’s own content offerings.

The above paragraph was generated by ChatGPT. I know it may seem silly of me to provide this citation, since ChatGPT isn’t a writer per se, and since its very purpose is to create text people can pass off as their own. But I personally don’t think that’s ethical, since ChatGPT is basically lifting its content from other sources without clearly crediting them.

That is to say, I find the Times’ grievance entirely credible. Without much trouble, I was able to see for myself how ChatGPT plunders my content, thus competing with my blog, without giving me sufficient citation. After I asked ChatGPT about the spelling of kindergartner/kindergartener, I asked it, “Is there a professional writer who has been inconsistent with her spelling of this word in a major magazine?” ChatGPT gave a very general response that included the statement, “The New Yorker provides a great example,” without actually giving the example (nor its source). To pin the chatbot down, I asked, “Can you give me a precise example of a New Yorker writer who has spelled kindergartner/kindergartener inconsistently?” It replied:

Yes, there is a known example of a New Yorker writer who has been inconsistent with the spelling of “kindergartner” and “kindergartener.” Rebecca Mead, a long-time contributor to The New Yorker, has reportedly used both spellings in her articles without a consistent preference. Her variations may reflect the linguistic blend between American and British English, as she was raised in England but has worked extensively in the United States… If you’re interested in more details on how and why both spellings are acceptable, various sources like Grammarhow provide further insights on regional preferences and dictionary entries for both “kindergartner” and “kindergartener.”

In terms of sources, ChatGPT’s text response mentions only Grammarhow, implying that this was the source of its answer, which is absolutely not the case. Only if you look above the text response and click on the down-chevron symbol to expand the list of sources will you see a link to my blog post:



A lazy or hurried user—i.e., one who wouldn’t bother to un-collapse the link listing and then click both links to review the source content—would never know who bothered to do the research to come up with Ms. Mead’s variant spellings, much less her background as a writer who has lived in both the UK and the US. Furthermore, since ChatGPT is designed to furnish copy-and-paste-ready content to so-called writers, it is enabling these users to plagiarize my (copyrighted!) blog without even knowing they’re doing it. As you can imagine, I have a big problem with this.

(In case you’re wondering why the Times is suing not just OpenAI but also Microsoft, it’s because of the partnership between the two companies. Microsoft has invested about $13 billion in OpenAI and there is a strong API integration between ChatGPT and Bing. This should answer the other question you may have had, which is why in the hell ChatGPT would use Bing—I mean, Bing, for crying out loud!—instead of Google.)

ChatGPT’s rhetorical improvement

As I have blogged about before, GPT-3.5 was barely able to form an opinion, much less support it. I asked it, about two years ago, to write a short blog post about why Campagnolo Super Record bike components are better than Shimano Dura-Ace, and it duly complied, concluding that “Campagnolo Super Record’s customizable shifting, lightweight construction, and iconic design give it an edge over Shimano Dura-Ace.” But then I asked it to write a blog post about why Dura-Ace is better, and it did that, too, with its final synopsis being, “Shimano Dura-Ace’s superior electronic shifting, durability, and modern design give it an edge over Campagnolo Super Record.” ChatGPT was basically brown-nosing me, telling me what it supposed I wanted to hear. And its supporting evidence for these arguments was verbose, vague, and overall very weak.

I see huge improvement in this realm. I asked the new ChatGPT to weigh in on a debate near and dear to my heart: the Dvorak keyboard layout vs. traditional QWERTY. I challenged it to compare an anti-Dvorak article by two economists in Reason magazine to my critique of that article in these pages. ChatGPT gave a very thorough synopsis of the positions, with far greater concision than GPT-3.5 could have achieved. What’s more, it went beyond just summarizing them: it came to its own conclusion, which suggests a semblance of actual understanding of the core issue. Here is its closing paragraph:

Albert’s argument appears stronger when considering individual typing efficiency, as he directly addresses typing mechanics and user experience, whereas Margolis and Liebowitz’s argument is more focused on economic theories of adoption. While Margolis and Liebowitz provide a macro-level view, Albert’s micro-level approach may resonate more with typists who are exploring the ergonomic benefits of Dvorak. Thus, for those prioritizing typing speed and comfort, Albert’s perspective might be more compelling, even if the QWERTY layout remains dominant due to historical inertia rather than functional superiority. 

The two (dipshit) economists would really wince at this, since they were trying to deny that historical inertia is responsible for the ongoing ubiquity of QWERTY. They were attempting to discredit the Dvorak layout since their delusional economic theory depends on them refuting the assertion that inertia (aka lock-in) can trounce the actual superiority of a product. I was so chuffed with ChatGPT’s conclusion, I took a victory lap and asked it who is the better writer: that is, which article has greater rhetorical power and more style. ChatGPT responded with a nicely organized and supported essay which concluded thus:

Dana Albert has more rhetorical power for a general audience due to his humor, personal anecdotes, and engaging style. Liebowitz and Margolis provide a strong argument for those interested in the economic perspective, but Albert’s style is more entertaining and may leave a lasting impression on readers curious about the Dvorak vs. QWERTY debate.

Compared to earlier versions, the modern ChatGPT is far superior. Rather than barfing up reconstituted content from the Internet, it really does appear to be applying judgment and performing true analysis. Notwithstanding my satisfaction at having my ego stroked by this disinterested third party, I’m actually kind of frightened by how closely GPT-4’s output resembles actual human thought. Whether or not A.I. will steal all our jobs, it does appear ready to displace lesser economists.

Tune in next week…

I had originally intended to cover two more topics in this post: A.I. advances in poetry and art. Alas, I see I am out of room (or more to the point, you are out of patience) so come back next week for Part II. It will provide a thorough examination of ChatGPT’s ability to write poetry (iambic pentameter and dactylic trimeter) and to create original art per the user’s specifications. (In case you were wondering, I did use ChatGPT to create the art you see above for this post. More on that next time.) Until then, you might check out the below links for more posts on A.I.

Other albertnet posts on A.I. 
