Tuesday, December 29, 2020

From the Archives - Marin Headlands Big Ring Tale


This post goes well beyond the 160-character length of an SMS text, not to mention the 280-character limit of a tweet. Since I cannot reasonably expect anybody to read more than that in a single sitting, I have made this available as a vlog. Here:

If, on the other hand, you cannot stand to look at my face for 13 minutes, either close your eyes throughout the video playback, or read the text below like I originally intended.


As I wrap up 2020, I’ve made a challenge for myself: can I finish the year with more cycling miles than I logged in 2019? This means riding the rollers six days in a row, which at my age feels nearly impossible. But this self-directed age-shaming is a bit silly, really … of course I can keep right on cycling hard, well into middle age. It’s not like I’m in the NFL or something.

My defeatist thinking goes way back. I felt utterly washed up as a cyclist as far back as 1998, having no idea back then how much glorious hammering still lay ahead. The following Big Ring Tale, from my archives, showcases this self-delusion. (To put this story in context, I was just 28 years old at the time, living and working in San Francisco, and I’d quit racing—temporarily, as it would turn out—about six years before.) 

Marin Headlands Big Ring Tale – April 28, 1998

I was outbound on Greenwich Street, half a mile into my ride, when I passed a bike messenger with a huge, stuffed-full messenger bag. It was a windy evening, a headwind, and I figured if she wanted to slip in behind me for a draft, I was fine with that. She’d probably had a hard day dodging cars, etc. I was vaguely aware of her presence on my wheel most of the way through the Presidio before he passed me—that is, someone besides the messenger; a racer-type who had traded places with the messenger somewhere along the line. He was a tall guy, with a US Postal Service jersey, a Dura-Ace equipped Serotta, and shaved legs. By this point I was sick of the headwind myself and dropped in behind him. He was setting a good pace.

We cruised pretty fast through the rest of the Presidio, and on the Golden Gate bridge kept up the pace, taking turns pulling. I was grateful for the help since I didn’t have all that much time to ride before dark. A third of the way across the bridge I was trying to predict whether this guy was riding into the headlands, like me, or turning off toward Sausalito after the first short, steep section of the headlands road. A lot of guys who go fast over the bridge do so because they know they have a long coast into Sausalito afterward. They kind of irk me because they can afford to hammer and I . . . well, I guess I can afford to, too, but it’s not a great way to start that longish headlands climb.

This guy looked pretty legit: decent form, back pretty flat, reasonably relaxed on the bike. His pedaling was smooth too, but frankly too fast, as though he were a modern fitness cyclist who reads Prevention magazine, takes antioxidants, and worries about his knees (even though he was a fair bit younger than I). It was like he was slapping at the wind instead of throwing big punches. And yet, something about him was just a little too fresh, a little too springy. He somehow wasn’t brittle enough; he seemed like the ‘98 model, freshly minted. Whatever a crusty old veteran is, he wasn’t.

I thought about how I must look on the bike. Let’s face it, I’m a mosher. I stay on top of the biggest gear I can (on principle or because I have no patience for a high cadence). There’s not much sparkle in my eye. I’m visibly worn. Not just my clothing, either—although it’s pretty sad these days, the side panels on my shorts going translucent, the once neon jersey now completely pale. But as I said, it’s not just my clothing. My form is fine, my position right, but I think I probably look tired most of the time. My pedal stroke is smooth enough, but surely looks unenthusiastic somehow. I don’t pop out of the saddle—I drag myself out of it. I think I must give the impression of an old, beaten-up Dodge Dart; not zippy like those new Neons.

Perhaps as a reflex against feeling intimidated, I began to make judgments about my companion, and probably unfair ones. I mused that although he certainly seemed fitter than I, he didn’t seem the type to have deep wells of guts, either. He seemed the kind of rider who might cave in at the first sign (be it real or a ruse) that he’s outgunned. It dawned on me that I was doomed to mix it up with him—my motivation caught between the foolish aggression of an aging prizefighter and the detached eye of a scientist conducting an experiment.

I decided to take the last long pull just before the final section of the bridge. The last stretch is always super windy, and can cut your speed significantly, enough to make you feel weak and worthless. Thus, I wanted to make sure this guy had to lead through it, not me. I wanted to arrange it so he pulled good and hard there, so I could hope to demoralize him on that short, steep rise before he (presumably) turned right towards Sausalito and I turned left toward the headlands. When I took the lead for my last pull, I accelerated slightly, upshifting noisily. I hoped this would make him feel slighted, like his previous pull hadn’t been fast enough, and thus shame him into expending too much energy on this last flat bit.

Well, he took the bait, and hammered through the last section of the bridge, while I did my best to achieve cardiovascular hibernation in his draft. Just as the climb started, I punched it. A foolish pace given the length of my climb, but I couldn’t bear to let this guy dust me at the end before his cowardly downhill turn. Well, it worked, and I got a pretty decent gap by the top of that first rise, but then something terrible happened. He turned left, too, to follow me up the hill. He’d planned the same headlands loop I had, all along.

This must be what the dog feels like when it finally catches the mailman. What now? Well, suffer, stupid! It would be humiliating indeed to falter now, and besides, I had the psychological edge. He must have felt pretty outclassed to let me open up such a gap, and so quickly, and he probably had no idea how hard I was going. My heart rate was 189, which isn’t high at all for an adrenaline-engorged junior, but for me that zone is almost off-limits, like that far wing of the house you close off to save on the heating bill. I decided that pacing myself would give him too much hope and he might close the gap—or worse, counterattack me. It’s sure happened enough times before; one time, my assailant had a Fuji Road Look saddle, or seemed to. It’s a humiliating thing. So I decided to stretch out my lead by really kicking it in the guts. A foolish plan, but such recklessness had gotten me this far, hadn’t it?

I rounded the first bend at the Golden Gate Bridge Lookout overlook (not very scenic today, with the bridge engulfed in fog) and the wind switched slightly, now in my favor. This cheered me up a little (luckily, since my body may have been planning a biological mutiny), and I humored myself with the notion that my nemesis was still facing the headwind. Through the next section I cranked up the pace even higher, my throat getting raw and bloody-tasting. Towards where the road flattens out in the twisty section, I really started to bog down. The wind was now against me once again, and I was starting to pay for my ill-considered aggression.

This shallow part of the climb is normally where you can throw her in the big ring, and indeed the propriety of calling this a “big ring tale” demanded that I do so. But I couldn’t do it. My pace was slowing and I didn’t dare look behind me. I could just picture my opponent seeing my weakness and gliding by me with that jaunty springiness of his. So, despite the apparent impossibility of honoring the gesture, I did put her in the big ring.

It was to avenge the widespread demise of thousands of losers like me that I dug even deeper now, with the pathetic desperation of Ichabod Crane in his final flight from the Headless Horseman. My opponent was a winner: clean bike, clean jersey, shaven legs, real fitness, verve and vigor incarnate. He was probably chuckling at the earnest, flailing efforts of this ragged, washed-up has-been. Well, he could chuckle all he wanted, but he was still behind me. I pushed the big ring all the way to the four-way intersection where you head left toward the summit. By the time I had to go back to the little ring, something important had happened: I had become sick.

Not physically sick, mind you (though that didn’t seem far away). No, I became psychologically sick. This is a dark and disturbing place, the point at which my opponent no longer mattered, my pace no longer mattered, my vital signs—heart rate, speed, rate of vertical gain, elapsed time—all ceased to matter. All that mattered was that I increase the suffering. Some switch had been flipped, some crazy trip-wire had been tripped, some natural system had been deeply subverted. Sometimes a lioness washing her cubs loses track of normal biology and begins actually eating her young instead, and this quirk of nature couldn’t be far from the strong need I had now to increase my own suffering. Like a junkie who craves more and more of what is killing him, I now increased my effort.

You know how some athletes are described (or describe themselves) as having the ability to “turn off” pain? Well that’s total bullshit. Whoever says that just doesn’t get it—the extreme athlete learns how to turn on the pain, how to hurt himself badly and seem to thrive in the process. I was tapped into something sick and wrong, and exhilarating. (How come I so rarely found this spot back when I used to race? Astute tactics tended to get in the way.)

This near-frenzy didn’t last long, but it didn’t end abruptly either. It mellowed out a little; I came off the rush. I came to realize how awful I sounded—my breath a horrible, rattling wheeze. In all the excitement I forgot to look at my heart rate. I finally looked behind me, and . . . my rival was just plain gone. My pace had now dropped a bit, but I was close to the top now [the blue dot in the map] and victory was assured. But what a hollow victory it somehow seemed now; all that hyperbolic effort and near-psychosis, all wasted on an opponent who wasn’t even in sight. And when I reached the top and checked my stopwatch, my time was an unimpressive 10:12. My record is 8:54 … so what happened? Did I misgauge the size of my effort, and the strength of my opponent? Did I just beat up a fifth-grader with a bat?

No, I reassured myself—I had suffered dearly. I had simulated actual fitness, which isn’t easy to do. And by the bottom of the descent on the backside, after it no longer mattered, my rival had caught up with me. (I’d convinced myself my rear tire was half-flat and took the corners wide.) He couldn’t have been that far behind to catch me so quickly.

As if to further reassure me he was a legitimate Goliath, he dragged my sorry ass up the short climb on the backside [not shown in the map above, but heading back to the four-way intersection]. I was too proud to fall off his pace, but too blown out to try anything else. This was normal, regular suffering, and I was back to disliking it. At the intersection, he turned right to do another lap.

Yes, he was the better cyclist—thus I’d scored an (albeit pyrrhic) victory for lesser men everywhere when I shelled him on the first climb.

Email me here. For a complete index of albertnet posts, click here.

Tuesday, December 22, 2020

Last-Minute Online Holiday Gift Guide!


If, like me, you’ve put off holiday shopping for too long, and find yourself in a community lockdown due to COVID-19, and are now forced to buy all your gifts online, you’ve come to the right place. I’ve got the hottest deals on the best merchandise this side of Amazon! Naw, just kidding … but I will help you navigate the world of strange and unique gifts, highlighting the salient features—and the most helpful reviews—of a number of products available online. (My blogging budget—i.e., zero—does not allow me to actually purchase and try these things, but at least I’ve saved you the legwork.)

M3 Naturals Himalayan Salt Scrub - $33

This M3 Naturals Natural Exfoliating Body and Face Soufflé is a very special product because it contains both collagen and stem cells. This strikes me as the kind of product that lotion snipers would be demo’ing if the malls were all open right now.

So is this the real deal? Well, these aren’t the same stem cells that help treat cancer. The product details explain that this scrub contains “a preparation of apple stem cells derived from the ‘Uttwiler Spätlauber’, a rare Swiss apple variety.” Now, I realize “Spätlauber” sounds a lot like “spit-lobber,” so you may suspect I made that up, but honestly I didn’t. (I wish I did.) Is it obvious this would help your skin? Beats me.

The most important positive review I found declared, “I have battled with have little pimple looking bumps all over my legs and arms cause by ingrown hairs under skin since I was in high school. I would pick them and make ugly places on myself which would make me look awful and made me feel so bad about myself. This product took care of that and more!” I have zero reason to suspect this reviewer is dishonest. In fact she is heartbreakingly candid. So at a bare minimum we can assume this product is a strong placebo. But can a placebo be given as a gift? That’s a tough one … how well do you know the intended recipient?

The top negative review, however, ought to give you some pause: “TERRIBLE PRODUCT, PLEASE DO NOT SWALLOW OR USE OR SENSITIVE AREA'S - GENITAL'S. This product is from the devil .. I have only seen bad effect's from this product. it is highly toxic & dangerous if swallowed also please heed advice and do not use on sensitive skin area's !!!!!!” So, knowing this … is it conscionable to give this as a gift without providing the disclaimer? But wouldn’t that kind of warning cast a pall? I’d say proceed very cautiously here…

Internet password journal - $9

This journal is unlike any other in that it has the phrase “Internet Passwords” embossed right on the cover.

Now, the cynic might say, “Couldn’t I just write my passwords down in the back of an existing journal, or even on the blank pages at the back of a paperback novel?” Well, yeah … you could, but that wouldn’t create the security risk of a burglar finding it and stealing your identity along with your silverware and electronics if he acts quickly. And more to the point, that wouldn’t enable your children to find all your passwords and start snooping on your email, checking out your bank balances, removing firewall restrictions, etc. How is your child supposed to become a hacker when you give her nothing to work with? Where has this journal been all your life?!

White sage stick - $7

The White Sage Stick from OurAncestorsRoots is the gift that says, “What the hell is it for?”

It’s surely worth $7 just to watch the recipient try to figure out what this thing is. Prolong the magic by helpfully explaining, “White Sage sticks are great for clearing and cleansing the energy around you and in your spaces.” The most helpful positive review declares, “The sage stick smells wonderful and I can't wait to get the smudging.” Smudging? Beats me. But this is helpful, in the sense that by quoting this, you can further draw out the sacred giving ceremony. And what about the most helpful negative review? There aren’t any! After all, on what basis could anybody possibly be disappointed by this product?

Beer chiller sticks - $33

These beer chiller sticks for bottles purport to solve the problem of forgetting to put beers in the fridge and facing the chilling, terrifying prospect of drinking them warm. All you have to do is remember to put these sticks in the freezer at least 45 minutes in advance of wanting to drink beer, and then insert them in your warm beer to cool it off.

Granted, you have to sip some of the warm beer to make room for the stick, but that may be nostalgic for you, taking you back to your college days when you’d occasionally find a can of warm Meister Brau in your roommate’s car and guzzle it down before he could stop you.

It’s hard to choose a single most helpful positive review so I’ll just do a mash-up: “I was impressed by the packaging and he loved it,” “I liked the gift package,” “More than expected the package was amazing top quality works great,” “The design and presentation of this product is excellent! I did give it as a gift, and don't know how well it actually functions, but it was spot on as a Christmas gift.”

As for the most useful negative review, it could be this: “Not sure what happen … put it in my beer and took a few sips then pulled it out to look at it and noticed the cap inside the tube popped out and the coolant had been seeping out in my beer the whole time..” Okay, maybe this guy just got unlucky. Another 1-star review: “Doesn't work. Followed directions, and just looking at the design, it does not direct beer through the cold part of it for long enough to make a difference. There is no way to make this heat transfer work out.” Should we take this amateur scientist at his word? Well … there are about 40 other reviews saying the same thing. An alternative to this gift might be a 3x5 card with the following message written in your very best handwriting: “Next time you forget to put beers in the fridge, just chuck a couple in the freezer for 20 minutes. Thank me later!”

Wallet card for Mom - $14

This engraved wallet card tells your mom exactly how you feel about her, in the eloquent words of an anonymous sage:

The amazing thing about this Engraved Wallet Card is that it doesn’t have a single grammatical gaffe in it. That’s saying something, when the product manufacturer describes it thus: “The most aspiration words you want to say to your mom are engraved on the wallet. He will feel the deep love of you when he takes it out and sees it.”

Is this special enough to fork out $14 for? Well, the elephant pictures really are top notch, and aluminum is notoriously difficult to work with. Still, it’s hard not to suggest an alternative, such as a 3x5 card with the same message, written in your very best handwriting. Worried about copyright infringement? Don’t be. I’m not an intellectual property lawyer, but I can say confidently it wouldn’t be difficult to establish that every single sentiment on the card is the epitome of cliché. It’s the cumulative fusillade effect that makes it so sweet.

Whiskey glass with cigar rest - $26

This Kollea Cigar Whiskey Glass with Cigar Rest Holder is perfect for assholes. They’re always looking for a way to kick off a long stream of bloviating, and this “conversation starter” does the job. Meanwhile, it frees them from needing to have an ashtray to set their cigar down on, so now they can spread both foul smoke and ashes across their environment.

Do we care what the negative reviews have to say? Naw. The recipient of this gift is such an asshole, you almost hope he’ll hate it.

Death discussion starter - $10

It can be difficult for a father, especially the strong, silent type, to discuss his own mortality with his daughter. And yet, it needs to happen. This talisman necklace, reminiscent of military dog tags, does the job beautifully.

At first, the inscription just seems like an expression of love, but upon reflection the message is clear: “In all likelihood you are going to outlive me. I will die during your lifetime and you will need to deal with that. And then I will not be around to love you anymore.” And that bit about safety? It reminds her that nothing is for sure: she herself could die in an accident or something. These are hard things to discuss. The talisman does it for you. Brilliant!

And the reviews? My favorite 5-star review simply reads, “Wonderful gift – made my daughter cry.” The most useful 1-star may be this: “Broke withing 2 days of my daughter wearing it xxx disappointed.” But the metaphor of the broken chain kind of helps make the point, huh?

Mug for dangerous father - $10

This Protective Dad Mug is the gift that says, “I recognize that you are a dangerous man, possibly psychotic, but far from wanting to hide this disturbing fact, I think it should be celebrated, and by the way I consider myself pretty and you should know that.”

There is only one 1-star review and it’s blank. That’s too bad … I would like to know what the problem was. Either the mug arrived broken, or the printing was poor, or this gal learned the hard way that her dad is either a liberal or doesn’t consider her pretty enough to kill for. There are 19 5-star reviews but they’re all blank, too … I guess this mug speaks for itself.

Stainless steel “soap” - $9

The AMCO 8402 Rub-a-Way Bar Stainless Steel Odor Absorber purports to remove cooking odors, like that of crushed garlic, from your fingers.

This is one of those products that’s so wacky, it’d be super cool if it actually worked. And yet, how could it? Fortunately, there are over 10,000 consumer reviews of it, so establishing its effectiveness should be pretty easy … right?

Well, over 70% of reviewers gave it five stars, which seems compelling at first blush. But the 1-star and 2-star reviews all say pretty much the same thing: it doesn’t work. (Why would someone give two stars to a product that fundamentally fails to deliver on its primary function? I have no idea. I guess people are just nice.) So, over 600 people attest it does nothing. And it’s not like they’re doing it wrong … I mean, how hard could rubbing your fingers on steel be?

I researched this, and discovered that almost nobody has tested this who doesn’t have a vested interest in promoting it. NPR did a spot on the concept for “All Things Considered” and, based on hands-on testing by a professor emeritus at the University of Pittsburgh, concluded it’s bogus. The New York Times also ran an article on it, but all they did was cite the NPR article with the caveat that it was an awfully small study.

I’d try this out for you and report my findings here, but who am I to weigh in when a professor emeritus has already done so? Besides, I’m highly skeptical and don’t want to have stinky hands for the rest of this blog post. Next time I handle garlic or onions I’ll rub my fingers on the side of a chef’s knife and see. For now, here are my favorite reviews. Positive: “Works like a charm. Even gets dead mouse smell off your hands.” Median (3-star): “I gave my mom one of these and she was confused, she thought it was an actual bar of soap.” Negative: “Tried this ‘wonder bar’ which removed ABSOLUTELY NOTHING! Don't know about others who have reviewed it removes armpit odors! Really??? Use deodorant or something. Just the though groused me out.”

Lip balm for insecure men - $5

Macho men of the old school may think it unbecoming—effete, even—to fuss over their lips. Well, help is here at last. Well, not help exactly, it’s not like these big tough men need help—it’s just, well, look:

Check out those tough, soiled working man’s hands. He’s not some little girlie-man working a desk. He’s rugged even though he’s dapper. So is this Rugged & Dapper Lip Balm good stuff? Well, as the manufacturer tells us, it’s “TESTED ON MEN, NOT ANIMALS.” Think how cruel it would be to try this on, say, a pig. Haven’t the pigs suffered enough, with the lipstick? And of course this isn’t tested on women. That would be weird.

Pro and con reviews? Pro: “Its just so sleek and simple.” Con: “First off, this is not matte. It's misleading to claim it to be matte when it leaves your lips shiny, like lip gloss.” Gosh, that might be tough given the target market … what man wants to have glossy lips?

And does Rugged & Dapper offer SPF protection? Naw … that’s for sissies!

Phone sanitizer - $70

The UV Sanitizer & Wireless Charger kills bacteria using UV light. That’s what differentiates it from other Qi phone chargers (which typically cost only $13-20). All you do is place your phone in the special chamber, turn it on, and wait three minutes. Now all the invisible bacteria are, apparently, gone!

So does it work? Well, that’s a tough question because … well, this product reminds me of a joke. A guy boards a city bus and sees a fellow passenger holding an imaginary box, from which he pinches an imaginary powder that he then flings around in the air. The first guy says, “What are you doing?” The second guy replies, “It’s to keep away lions!” The first guy says, “There are no lions on this bus!” and the second guy says, “See? It’s working!”

But wait, there’s more to this product! It also does aromatherapy. “Add a few drops of plant-based essential oils (not included) to the built-in essence box and take in the soothing scents.” Um … could you add the oils to, say, a napkin or a kleenex and smell them that way? Well, yeah, you could … but how high-tech is that?

As for reviews, that’s easy because there’s only one (five stars!) and it’s so short I can quote the whole thing: “He uses it at home. When he gets home from work he puts it in the case to clean and sanitize his phone very practical.” And who is “he”? I have no idea. But I’ll bet he knows his stuff.

So yeah, you could drop $70 on this if the intended recipient isn’t the skeptical sort. If he is, you might consider instead giving him a 3x5 card with the following message written in your very best handwriting: “Next time your phone seems grubby, just wipe it on your pants!”

Wine filter – 8-pack for $20

The Wand Wine Filter by PureWine is a metal thingy you put in your wine glass. If you stir your wine with it intermittently for eight minutes it will remove 95% of the histamines and sulfites that make some people get “Headaches, Stuffy Nose, Skin Flush, Next-Day Hangovers and Upset Stomach.”

How’s it supposed to work? Beats me. One Amazon customer asked, “Could you publish results of any independent testing you have done, comparing the level of histamines and sulfites before and after using the wand?” Alas, the only response was, “I love it and it works.”

I suppose one risk of using this (after the pandemic, anyway) is that when you explain what it is to a fellow drinker, he’ll say, “Get a fucking backbone!” But then, this is speculation. I have no idea how wine people actually talk. I drink with beer people who a) never put wands in their glasses; b) never take anything close to eight minutes to drink a beer; and c) never have a histamine response to anything they drink.

My favorite positive review: “My boyfriend has never been able to drink wine due to an allergy to the sulfites … after one taste he would begin to get itchy and wheezy so I'd give him a Benadryl and take his glass away before we had more severe issues. These wands are super easy to use and we have been able to enjoy multiple bottles of wine together with no reactions!” Imagine being that guy, having his glass snatched away like that. These wands probably saved his relationship! I want to find that guy and give him—no, not a hug, you fool! Give him a tube of Rugged & Dapper lip balm.

But you should be aware of this negative review, too: “I ordered 8 wands to start and then a case because they work so well for the histamines. But every time I use them I have a problem with loose stool. It's now gotten so bad that I am having severe cramps and have had to give the wands away.” While this could obviously be a problem, I see opportunity here, too … somebody should market these as a stool softener! Say, that reminds me: wouldn’t “Loose Stool Event” be a good name for a rock band?

18k Gold Paper Clip - $1,500

Tiffany describes this product on their website: “An oversized paper clip is reimagined in 18k gold as a whimsical bookmark.”

Wow, what a generous and beautiful gift! The tricky part is to tactfully mention to the lucky recipient that a) this thing cost $1,500 so don’t you dare lose it, and b) since 18k gold is so soft, it’s actually very poorly suited to the task of clipping paper, so the paper clip should just be closed flat in the book, or employed solely as an objet d’art.

Care should also be taken to choose the right recipient; i.e., somebody callous enough to ignore the fact that over 12 million Americans are currently unemployed. We’re talking about somebody with a sufficient sense of entitlement that he would simply enjoy this curio for its beauty, ignoring the reality that for $1,500 you could provide lifesaving vaccinations for 8,000 children, or feed a malnourished child for almost 2½ years.

Any reviews for this product? Naw. Tiffany customers evidently don’t worry about such things.

A gift for the blogger?

I’ll bet I know just what you’re thinking: what gift should I get for Dana, as a reward for his tireless blogging all year? Aw, shucks … you don’t have to get me anything! But if you feel you must, I sure wouldn’t mind a stack of 3x5 cards…

Other albertnet holiday posts

Email me here. For a complete index of albertnet posts, click here.

Monday, December 14, 2020

Could Artificial Intelligence Replace Writers? - Part 3


In my last two posts (here and here) I explored the question of whether Artificial Intelligence could write a magazine article. In particular I described a New Yorker essay on this subject and the misgivings it caused me. Parts 1 and 2 of my essay concerned mainstream applications like Google’s Smart Compose and Gboard predictive text. In this final installment I showcase my experiments with more cutting-edge, experimental platforms. The website Write With Transformer gathers together a number of these, including GPT, GPT-2, and Distil-GPT-2. I tried out all three.


Distil-GPT-2
Distil-GPT-2 is the first one I tried simply because it was listed first on the website. It is a distilled (i.e., streamlined) version of GPT-2, supposedly “twice as fast as its OpenAI counterpart, while keeping the same generative power.” The site “lets you write a whole document directly from your browser, and you can trigger the Transformer anywhere using the Tab key.” I started writing a very simple essay about how the sentence “the quick brown fox jumps over the lazy dog” helps teach typing, as it efficiently covers all the letters in the alphabet. Here’s what I got:

(Click to zoom in, if you’re not as nearsighted as I.)

As you can see, Distil-GPT-2 didn’t do very well. I judge it based on its ability to grasp where I was trying to go with what I wrote, and how well it continued the essay so I wouldn’t have to. I don’t get the sense it had any idea what I was talking about. It did use words realistically, such that it created credible sentences (as opposed to a tossed salad of words), as long as you don’t worry about content or meaning. But it seemed to be trying to tell its own story, about some unfortunate children. And toward the end there it devolved into babble, with “sudden or unusual, sudden or unusual suddenness.” Did this A.I. learn by reading a lot of Samuel Beckett?

I accidentally tried Distil-GPT-2 a second time by (apparently) clicking the wrong link on the website. But this time, when I wasn’t getting very far with the typing theme, I tried something more concrete. Check it out:

This was kind of like trying to draw out a really small child, or somebody with a learning disability. Where things went really sideways was with “ichalba.” I still haven’t figured that out. For once, Google was at a complete loss; it could only find instances of “ich Alba” on German websites and assumed I had mistyped my query:

I tried Google Translate but it wasn’t much help either, unless it turns out this A.I. is a drunken Uzbek.

I couldn’t find Marba on a map … I’d been hoping it was in Uzbekistan. Needless to say, utter babble doesn’t make A.I.’s prose very credible.


GPT
This was the original OpenAI composition engine. Obviously I wouldn’t expect it to do as well as GPT-2, but figured it would give us a sense of the progress that has been made. I fed it the same basic prompt as the others. Let’s see how it did.

WTF?! Where did it get the whole gay thing? It’s like the A.I. hijacked my content to try to explain something about how gay men learn to type. I don’t understand this at all. I love its final conclusion … so hopeful. And yet I totally disagree that GPT has any potential whatsoever.


Okay, at long last, on to the cutting edge in A.I. text generation. GPT-2 is the technology described in the New Yorker article, which its creator, OpenAI, said couldn’t be released on schedule because “the machine was too good at writing” and the company had to slow down and “prepare for the potential threat posed by superintelligent machines that haven’t been taught to ‘love humanity,’ as Greg Brockman, OpenAI’s chief technology officer, put it.”

I was skeptical, curious, and a bit worried when I put this one through its paces. (I should point something out first: the website Write With Transformer notes that only three of the four GPT-2 “sizes” are publicly available. Presumably I haven’t gotten my hands on the same version the New Yorker got to try out.) Here’s what this GPT-2 engine produced.

Clearly, this is way ahead of the others. The text it produced was coherent and it seemed to actually know certain things: that my cryptic sentence about jackdaws has all the letters in the alphabet; that this is useful for those using a keyboard; that the keyboard could be an old typewriter. It even figured out the most important difference between the jackdaws sentence and the old classic about the quick brown fox.

That’s the stuff it got right, anyway. Oddly, after I typed “old-fashioned” and hit tab, it almost suggested the right word, that being “typewriter.” But it only got as far as “typew.” I cannot figure that out. I hit tab again and it supplied “riter.” Just a glitch, I guess, and easily forgivable … but then it went on to suggest “cute” out of nowhere. It kind of went off the rails at that point.

Could GPT-2 help me write this blog? Not really … it could save a few keystrokes I suppose, like Smart Compose, but this is of limited value when you consider how I had to nudge it back on track a few times, and the fact that I ended up with needlessly verbose sentences (e.g., “writers who are learning to use a keyboard” instead of “budding typists”). Intrigued, I gave GPT-2 another try with the less abstract topic:

Again, GPT-2 seems pretty smart. It knows that when flour is used, egg and water are probably needed. It couldn’t suggest “oil” until I provided “olive,” but at least it wasn’t suggesting “olive garden” or something. It really seemed hung up on the idea that a mixer is needed, but it got over it, and seemed to know there’s such a thing as a pasta machine. It couldn’t figure out how that machine is set up in a real kitchen, but I’m impressed that it realized the rollers would make the dough more uniform. It even had some suggestions on how to serve pasta, even if sauce and parmesan cheese are what I’d had in mind. On the basis of this performance, I’d say GPT-2 is pretty nearly ready to start writing for Real Simple magazine. But writing albertnet posts for me? I’ll wait for GPT-3.


Ultimately, I am really relieved at how poorly these technologies did—from Google Smart Compose to Gboard predictive text to GPT-2. I don’t mind if A.I. gives me some shortcuts around predictable words and phrases (even when it goofs), but the idea that it could create realistic prose really frightens me. I think we are entering a golden age for computer-generated writing, where everything gets so much better. The first major step in that direction is to be able to read your own code in the first place. I have no doubt that the technology is there to give us that, but until then, we are really on our own. We need an ecosystem that encourages creativity, so that our machine-generated creations can be a fun way for developers to express themselves, and also as a way to create better products for the world.

Did anything about that last paragraph bother you? Did it seem like my perspective and reason became inconsistent? What if I were to tell you that, for that paragraph, I allowed GPT-2 to generate at least one complete sentence on my behalf, based on how I started the paragraph, just like the New Yorker article did with the quotation from Steven Pinker? Well, I did. So now, go back and try to figure out where my text stopped and GPT-2’s text began. Then scroll down to see how you did.


If you were to skim my essay and not really engage with my ideas, you might be fooled … but I’m hoping you quickly grasped that the A.I. was completely contradicting me. So here’s my actual, unassisted conclusion:

Ultimately, I am really relieved at how poorly these technologies did—from Google Smart Compose to Gboard predictive text to GPT-2. I don’t mind if A.I. gives me some shortcuts around predictable words and phrases (even when it goofs), but the idea that it could create realistic prose really frightens me. After all, A.I. has no concept of truth. It recommends olive oil, fresh garlic, and fresh herbs instead of sauce, without any idea of whether that simple fare would be tastier than a Bolognese Ragu or an Alfredo.

If the only point of A.I.-written text is to provide arbitrary text-based content to attach ads to, well … mission accomplished. But that has nothing to do with the real point of writing, which is to educate, entertain, and/or illuminate. If A.I.-written articles blithely trot out totally uninformed opinions and unverified information, with nothing guiding them but previously posted uninformed opinions and unverified information, who will lead humanity towards the light?

Other albertnet posts on A.I.

Email me here. For a complete index of albertnet posts, click here.

Monday, December 7, 2020

Could Artificial Intelligence Replace Writers? - Part 2


In my last post, I reacted to a 2019 New Yorker article about new machine learning technologies that, some say, will eventually enable A.I. to write magazine articles. Disturbed by this, I spent the next year gathering examples of Google predictive text and Smart Compose errors. My last post analyzed a few common failings. Below I continue the discussion, considering possible causes of stranger errors and delving into particularly problematic pitfalls.

Could A.I. be led astray by … humans?

Do you ever wonder if A.I. errs by replicating mistakes it learned from the content it trained on? Maybe that would explain this gaffe:

I’m absolutely sure I’ve never typed “Logan” on my phone. So where did it come from? Well, the obvious next word you’d expect after “sawing,” which is “logs,” starts out pretty similarly to “Logan,” and the “a” key is right next to the “s.” So this could be a repeated typo, unless “sawing Logan” is actually a thing. (Except I just googled it … and it’s not.)

This is where the content produced by A.I. is on shaky ground. Vladimir Nabokov described pornography as “the copulation of clichés,” but he could have just as easily been talking about news, or what passes for it, with complete nonsense being repeated often enough to eventually be taken as truth. So it is, perhaps, with machine learning based on what humans are writing—or attempting to write. Perhaps if enough people mistype “sawing logs” as “sawing loga,” with predictive text suggesting “Logan” because that’s at least a character string the A.I. is familiar with, and enough people accidentally accept this suggestion, it could reinforce the “learning” to the point that the A.I. thinks “sawing Logan” means something. It’s a self-fulfilling prophecy, almost like “put the pussy on the chainwax,” except it’s not funny. A.I. text can’t be (deliberately) funny because there’s no creative human behind it to make it funny.

Apparently random gaffes

It’s not always possible to even hazard a guess at where A.I. came up with a suggestion. Consider the art of baking. There are a finite number of things you can bake: a cake, a pie, cookies. I don’t care what the context is … if you use the verb “bake,” these (or very similar nouns) should be the suggestions. But look at this:

Look, maybe there’s some psycho grandma out there who might bake, say, her grandson David. But bake another David? Okay, maybe she’s a serial killer out to get anybody named David. But “bake Livestorm”? How do you bake a webinar platform, especially if you’re a grandma who presumably is not that tech-savvy? WTF!?

Now check out this little zinger:

I’m not expecting A.I. to be well versed in Devo lore, but it could have reasoned this one out. Given the construction, “What, like you’ve never seen [whatever] …” most humans would correctly guess that the next word should be “before.” I don’t think any human would suggest “of diaphragm” in any situation. It is a decidedly useless phrase. Yes, we use our diaphragms whenever we breathe, but we never think about it. Nor, I expect, do the members of Devo.

My next exhibit is particularly damning when you consider what a terrible year 2020 has been, and how many times I’ve told somebody, via text message, “I just threw up in my mouth.” I guess my phone had been digging deep into widespread cardiopulmonary lore because it again went weird on me:

There are so many better candidates here. Threw up in my hands. Threw up in my hat. Threw up in my guest bathroom. And it’s trying to be helpful with “threw up in my respiratory?”

Failures of grasping context

This is where context becomes important. It could be that the A.I. was fixated on some Internet-wide phenomenon—say, articles involving COVID-related breathing issues—such that it ignored whatever I was writing about. This actually makes sense, because A.I. likely can’t consider all my sentences in the context of one another.

Here’s another suggestion that this Gboard predictive text application was preoccupied with respiration.

Look, Android, I’m talking about Amtrak! It’s a train! Clearly I was asking about the upper berth.

Now, I think we can all agree that really good A.I. should ideally look not only at who is composing the message, but whom he or she is sending it to. Why would the suggestion “me” ever make sense in the context below?

Android is driving the entire phone … couldn’t it easily rule out the possibility that my text recipient was also on the line with me? And isn’t that just common sense anyway?

In the case of my own texting, Android has an easy job because the majority of my text exchanges are with my daughter. With that in mind, the predictive text utility should be able to rule out any word that would take the conversation into an uncomfortable realm (i.e., that no father and daughter would ever pursue). Look at this:   

OMG. What kind of sick dad would pen something like that? “Congratulations honey … your birth control is working!” And here’s another doozy:

Look, Android … you should be able to figure out the gender of your phone’s owner. Well over half the population would never have cause to write “I guess I’m pregnant.” It’s not a very useful word most of the time, particularly not when a teenager is shooting the breeze with her dad. I’ve heard of people breaking up with a boyfriend/girlfriend via text, but who announces something as huge as a pregnancy via text?

Speaking of racy words that cannot apply to half the population, where was Android going with this?

It just makes no sense. Even if the pronoun in the sentence were “he,” at what hospital anywhere should the staff routinely wear a condom? And how did A.I. even think to suggest this word to me, when I haven’t had cause to think about, talk about, or use a condom in over thirty years? (Granted, as a father I do have a duty to mention birth control to my kids, but a) I’d never do it in writing, much less in a text message, and b) all males resort to euphemism in these cases, e.g., “No glove, no love” and “Remember the rule, protect your tool.”)

Now, let’s set aside the dream of an A.I. that considers both composer and recipient; it can’t know the mindset of the people involved, doesn’t have the backstory, etc. The strange thing is, these epic fails pop up even when we’re only asking A.I. to look back to earlier in the sentence—that is, to consider not just the most recent word typed, but the one before it. It cannot always manage this. Look:

If you played a word association game with a human and started with the word “breast,” asking what word might logically come next, I can imagine that some would say “cancer” and some would say “milk.” But if you started with the phrase “chicken breast,” no human would think of cancer or milk. A human would say “sandwich” or something. Nobody in the history of the world has typed “chicken breast cancer.” (Okay, I just fact-checked this and it turns out there was a barbecue chicken breast cancer fundraiser once, in Nassau, Delaware … but this has got to be an edge case.) Ditto “chicken breast milk.”
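The fix the A.I. keeps fumbling is to condition on more than the last word. A toy comparison makes the point—note that these next-word tables and their counts are entirely invented for illustration:

```python
# Toy next-word frequency tables, keyed by one word vs. two words of context.
# All counts are invented for illustration.
after_one = {("breast",): {"cancer": 50, "milk": 30, "sandwich": 5}}
after_two = {("chicken", "breast"): {"sandwich": 40, "recipe": 25}}

def predict(context: tuple[str, ...]) -> str:
    """Prefer the longest matching context; fall back to the last word alone."""
    table = after_two.get(context[-2:]) or after_one.get(context[-1:])
    return max(table, key=table.get)

print(predict(("breast",)))            # cancer
print(predict(("chicken", "breast")))  # sandwich
```

With only one word of context you get “breast cancer”; with two, no “chicken breast cancer.” It’s not rocket science.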

When A.I. looks clueless and out of touch

As I stated before, A.I. couldn’t possibly replace real writers, who have insight and passion and actual intelligence, but perhaps it could do a journeyman journalist’s work someday. For that to work, though, the A.I. must never seem clueless or out of touch. Look at these bonehead suggestions:

The A.I. didn’t parse the (non-) question beyond the word “what.” Thus, it completely missed the point—this was an observation, so appropriate responses would have been things like, “I agree,” “Word,” “I know, right?” and “Amen.”

Look, I get it that statements like “What grace and elegance” are easier in Latin, which has a whole construction (the vocative case) around such utterances (e.g., “O tempora! O mores!”), but any human could have grasped that my daughter wasn’t asking a question. There wasn’t even a question mark! I mean, duh!

But wait, it gets worse. My #1 predictive text pet peeve? I have a daughter named Lindsay, and look what the A.I. suggests every single time I type her name:

This is so maddening. I have typed “Lindsay” dozens of times in the past year and I have never accepted the suggestion “Lohan.” On that basis alone the A.I. should stop suggesting it. But there’s a much bigger reason to nix “Lohan”: Nobody is talking, emailing, or texting about Lindsay Lohan anymore. She is no longer a household name. This is not just my opinion. She hasn’t made a Hollywood movie since The Canyons all the way back in 2013. Was The Canyons the kind of critical and box office success that people are still talking about seven years later? Uh, no. It has an IMDB rating of 3.8 out of 10, and a Metascore of 36 out of 100; at the box office, The Canyons took in a domestic gross of just $56,825. The average movie theater ticket in 2013 went for about $8. That means this movie was seen by a mere 7,000 people. It almost couldn’t be a worse failure. Is this the kind of train-wreck movie offered to those who were once stars but have fallen too far to get a decent role and have to grovel in the gutter for anything on offer? Well, I haven’t seen the movie, but I have my guess, and it’s hell yes. The hapless Lohan’s personal and professional nosedive is just too sad for anybody to want to even talk about, so most of us are merciful enough to brush her under the carpet. But not Android! It’s all like, “Oh, did you just type ‘Lindsay’? You must mean Lindsay Lohan!” Puh-lease.

Is there an even worse way to show how out of touch you are? Actually, yes. Consider this final exhibit in the case of albertnet vs. predictive text:

What do you mean, “they say YOLO”?! Come on, nobody says YOLO! It’s like the poster child for being tone deaf culturally. Check out the urbandictionary.com definition of YOLO: it’s a feeding frenzy of abuse. One popular definition is “Carpe diem for stupid people.” Another is “the douchebag mating call.” A third definition simply states, “A term people should have stopped using last year.” The date-stamp of this third definition? 2014. The musician M.I.A. wrote a song called YOLO but by the time she was ready to record it, the term was already toxic so she rewrote the song as “Y.A.L.A.” (that is, “you always live again”). And that was in 2013.

First Lindsay Lohan and now YOLO? Is Android’s predictive text stuck in 2013? Machine learning, my ass!

To be continued…

Obviously, Google’s predictive text and Smart Compose aren’t the only A.I. text creation technologies out there, so this essay wouldn’t be complete without an exploration of other nascent platforms. Alas, I see I’m out of room here, so tune in next week when I’ll delve into my own experiments with these, including GPT-2, the technology that’s supposedly the closest to replacing writers.


Monday, November 30, 2020

Could Artificial Intelligence Replace Writers?


Over a year ago, I came across this fascinating article about whether or not Artificial Intelligence could write a New Yorker article. The answer was essentially “no” or “not yet,” but it got me pretty riled up anyway. Ever since, I’ve been evaluating A.I.’s ability to correctly suggest even a word or phrase as it parses my text. In this post I wade into that history as I examine the question of A.I.’s (supposed) ascendance.

The New Yorker article

The New Yorker article, “The Next Word,” was in the October 14, 2019 issue. The writer, John Seabrook, talked with A.I. experts at Google about their “Smart Compose” feature, which predicts how your sentence ought to end and suggests the remaining words, so you can just hit tab to accept the suggestion. (If you use Gmail, you’re already familiar with this.) Seabrook also talked with the folks at another company, OpenAI, about their GPT-2 engine, which composes entire sentences and even complete paragraphs, with made-up quotations no less, in the voice of a real writer it “learns” and then mimics. GPT-2 is still under development; OpenAI claimed it has been delayed because it’s “too good at writing” and they fear society isn’t ready. (Yeah, right.) Seabrook tried it out, and in the online version of the article you can see its efforts at contributions to his story.

Seabrook included an extended quotation from Steven Pinker, a Harvard psycholinguist, that had been appended with text generated by GPT-2, and challenged the reader to figure out where the real quote ended and A.I. picked it up. (You can take the challenge in the online article.) I found this exercise really easy, but Seabrook reported that almost everybody he tried the “Pinker test” on “failed to distinguish Pinker’s prose from the machine’s gobbledygook” and concludes, “The A.I. had them Pinkered.”

Does this scare you? It sure scares me. I have no doubt that great literature will always be written by real writers, no matter how good the A.I. gets, but run-of-the-mill journalism and magazine writing, which mainly exist to serve up ads anyway, might someday be written by a clueless A.I. that has no more grasp of insight and fact than do certain famous politicians. As Seabrook puts it, “One can envision machines like GPT-2 spewing superficially sensible gibberish, like a burst water main of babble, flooding the Internet with so much writing that it would soon drown out human voices, and then training on its own meaningless prose, like a cow chewing its cud.”

Hoping against hope that this A.I. capability is overrated, I have been paying close attention to how well it has done across the devices I use, and recording its more salient failures, over the past year. In Internet time, a year is a pretty huge span—in theory I should have seen marked improvement in that period. Well, here’s what I found.

Smart Compose

I have to confess, I haven’t grabbed many snapshots of Google’s Smart Compose behavior because I only use Gmail at work, and I’m fairly religious about separating work and play. I did grab a few examples though, because they flew in the face of how the function was reputed to work. Seabrook quotes Paul Lambert, who manages this feature for Google, as saying, “If you write ‘Have a’ on a Friday, it’s much more likely to predict ‘good weekend’ than if it’s on a Tuesday.”

Weirdly, this didn’t work for me at all. Check out these samples of what Smart Compose suggested to me on a Friday morning: 

Neither “great week” nor “good night” makes sense here. Meanwhile, the fact that “gr” invoked “great week” whereas “go” pointed at “good night” is illogical, since people use “great” and “good” almost interchangeably and there’s no reason to assume that we’d want somebody’s week to be great but their night to only be good. I decided to slip between the horns of the dilemma by starting the sentence over entirely. This produced:

Given that Father’s Day was more than seven months away, this suggestion struck me as totally moronic. So I added an “r” after the “F” to see how it would recover:

This would make sense if Chris had wished me a happy Friday … but he hadn’t. I decided to shoot for “weekend” to see how that would go:

Huh? Whose “week ahead” starts on Friday? Oddly, Smart Compose seemed to be utterly neglecting the context of my email.

In the year I’ve kept an eye on Smart Compose, I haven’t again seen anything as egregiously inept. Mostly what I notice is that it doesn’t suggest words or phrases all that often … perhaps my work emails are too technical or otherwise cryptic. (The A.I. that powers Smart Compose was trained on millions of real emails, but none from Google’s business customers.) The A.I. is pretty good about really basic stuff, like suggesting “you have any questions” after I type “Please let me know if,” but that’s about it. As an experiment, I composed this blog post in Gmail and it didn’t suggest anything. It’s like I overwhelmed it somehow. So much for that.

Gboard predictive text

Perhaps more useful, day-to-day, than Smart Compose is Google’s predictive text for the Gboard virtual keyboard, which is bundled with their Android operating system. Predictive text seems to come into play with every app on my phone that relies on typed input. Frankly, I don’t like typing much on the phone so I do most of my writing on the computer. The main thing I type on my phone? Text messages, which I mostly trade with my older daughter who is off at college. (Alas, texting seems to be her generation’s preferred method of communication, at least where their parents are concerned, and I’ve decided to humor my daughter in this.)

My experience? Naturally, predictive text comes in handy, usually in the context of completing words I’ve mostly typed. I hasten to point out this is a lot different from A.I. actually composing anything. If I type “has,” it’s going to suggest “has,” “was,” and “hasn’t” because those are the most likely candidates, and most of the time I’ll accept one of those. If I type “hast” it suggests “hast,” “host,” and “hash,” and if I type “haste” it suggests “haste,” “taste,” and “waste.” (After all, “haste makes waste,” we all know that.) Android is not going to suggest “hasten” because apparently not too many people use that word. This is the bulk of how predictive text behaves, and though it’s not as sophisticated as Smart Compose (much less GPT-2), it works a lot of the time. It also fails a lot.
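(Gboard’s real model is surely far more elaborate, but the behavior I’m describing—completing a prefix, or swapping one letter, ranked by how common the word is—can be sketched in a few lines. The word list and counts here are made up by me for illustration:)

```python
# Invented word-frequency table; higher count = more commonly typed.
word_counts = {
    "has": 900, "was": 850, "hasn't": 300,
    "host": 200, "hash": 150, "taste": 120, "waste": 110,
    "haste": 40,
    "hasten": 2,  # too rare to ever crack the top three
}

def suggest(typed: str, n: int = 3) -> list[str]:
    """Rank words that extend the typed prefix or differ from it by one letter."""
    def close(word: str) -> bool:
        if word.startswith(typed):
            return True
        # same length, exactly one substituted letter (e.g. "haste" -> "taste")
        return len(word) == len(typed) and sum(a != b for a, b in zip(word, typed)) == 1
    candidates = [w for w in word_counts if close(w)]
    return sorted(candidates, key=lambda w: -word_counts[w])[:n]

print(suggest("has"))    # ['has', 'was', "hasn't"]
print(suggest("haste"))  # ['taste', 'waste', 'haste']
```

Note how “hasten” never surfaces, purely because the (fictional) counts say nobody types it—exactly the behavior I see on my phone.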

If we’re really going to count on A.I. to create content for us at any point, I see three things it absolutely has to get right. First, it needs to not make any grammar or spelling errors, obviously, since you can’t have it making the putative human author look stupid, or burdening an editor with fifty times the errors a real writer would make. Second, the A.I. can’t commit any serious gaffes that would render the text offensive or at least laughably ignorant. Finally, the A.I. will have to really understand context if it’s to reach its intended audience (well, our intended audience, since A.I. can’t really have anything like intention). An A.I.-written article for the slightly racy GQ or Men’s Health magazine better not sound like Good Housekeeping; Hunter S. Thompson shouldn’t come off like Heloise.

So here’s how the A.I. on my phone has stacked up in these areas over the last year.

Grammar and spelling

It’s kind of remarkable that anybody thinks we’re on the brink of A.I. being able to compose anything when it still doesn’t really do so well with grammar and spelling in its predictive text. I could supply countless examples of errors in this realm, but that would get dull, so I’m providing one example each of the main types of errors I see.

First, it breaks very basic rules about capitalization, failing to capitalize proper nouns or the first word of a sentence. (Predictive text’s cousin, voice recognition, screws this up quite a bit as well.) If the human fails to capitalize a word, A.I. should fix it, rather than expecting us to bother with the shift key a lot. Here’s an example:

It also screws up with predicting subject/verb agreement, so lots of its word suggestions wouldn’t work without my having to backspace and add an “s,” which is clunkier than just typing the word right to begin with. I fight with this many times a day. Here’s an example:

I mean, come on! “We’re huge fan.” That’s not very helpful. “We’re huge favor.” Look, Android, the verb is “are.” It needs a plural predicate nominative. This is not rocket science.

One of the most annoying things predictive text (and its sibling, auto-correct) does is to “fix” my errors for me on the fly, without asking. Usually I end up sending the text before I notice the problem, and then have to explain to the recipient that it’s not my error, which is way more work than for me to just type everything myself with no “help.” (Yes, I know that youngsters these days have no problem sending messages that are utterly littered with errors, but remember, we’re talking about A.I.’s ability to compose text one day ... the bar needs to be higher.) Look at this travesty:

Another category of failure is when predictive text doesn’t grasp what part of speech the next word needs to be. Consider this example where “very” has been set up, within the sentence, to be an adverb modifying another adverb. There is zero benefit in predictive text suggesting an adjective here.

It doesn’t take an English major to grasp that “Text messages don’t convey irony very groovy” simply doesn’t make sense.

Finally, suggesting anything that isn’t really a word is pretty pointless. One of the three choices typically offered up is the fragment of a word you’ve already typed. Why offer this? It’s a waste of screen real estate. And then I’ve seen suggestions that either aren’t words, or basically aren’t words. Look at this example: 

Obviously “trifec” isn’t a word, so why give me the option of accepting it? If I really wanted it, I could just hit the spacebar. And “triger”? It’s basically not a word. It’s not in Google’s own spell-checker dictionary; it’s not in the American Heritage Dictionary; and it’s not in the Wiktionary. Okay, I found “triger process” in the online Merriam-Webster dictionary, so maybe Android was setting me up for that phrase, which means “a method of sinking through water-bearing ground in which a shaft is lined with tubbing and provided with an air lock so that work proceeds under air pressure.” But what are the odds this is what I was writing about? Exactly zero. Android obviously should have guessed “trifecta.” If A.I. starts to write articles, will we have to suffer through tedious asides about digging through waterlogged ground?

Some seemingly phonetic goofs

Sometimes the A.I. seems to be working phonetically and makes a suggestion that almost makes sense—but of course almost doesn’t cut it when it’s supposed to save you work while maintaining (or ideally improving) accuracy. Check this out:

Any human could have guessed “all its splendor and glory” and Android almost got it. But “all its splendor and Gloria”? Really? (Could’ve been worse, I guess … it could have changed “its” to “it’s” again.) Here’s another failure:

Many American morons have called COVID-19 a hoax, but I doubt any have called it a Hoke. (If you’re wondering where it even got “Hoke,” I can tell you I’ve used that word in eight texts, referring to a character, Hoke Mosely, who appears in four Charles Willeford novels. Among book characters he resembles a virus in no way whatsoever.)

Here’s a final example of the A.I. seeming to fail via phonetic bumbling:

It’s almost as though the predictive text software heard somebody say “thirsty” and thought it heard “Thursday.” But of course that didn’t happen … you can see where I typed “thirs.” And how could anyone be Thursday? It makes no sense. On the other hand, if you told somebody to say the first thing that popped into their head when you gave them a prompt, and the prompt was “hungry and …” I’ll bet nine out of ten would say “thirsty.” (One out of ten would be somebody on a diet who might say something like “bitter.”)

It is hard to make a case that these goofs are truly phonetic in nature. But is it feasible these errors are simply random? Well … how the hell should I know? I never said I was a brain scientist or computer technologist. But I have a couple of theories, around context and … oops, unfortunately I seem to be out of space here. 

To be continued… 

Tune in next week for Part 2 of this essay, where I’ll explore some more ways A.I. can go wrong, with a number of wince-worthy predictive-text FAILs.