Monday, December 7, 2020

Could Artificial Intelligence Replace Writers? - Part 2

Introduction

In my last post, I reacted to a 2019 New Yorker article about new machine learning technologies that, some say, will eventually enable A.I. to write magazine articles. Disturbed by this, I spent the next year gathering examples of Google predictive text and Smart Compose errors. My last post analyzed a few common failings. Below I continue the discussion, considering possible causes of stranger errors and delving into particularly problematic pitfalls.

Could A.I. be led astray by … humans?

Do you ever wonder if A.I. errs by replicating mistakes it learned from the content it trained on? Maybe that would explain this gaffe:


I’m absolutely sure I’ve never typed “Logan” on my phone. So where did it come from? Well, the obvious next word you’d expect after “sawing,” which is “logs,” starts out pretty similarly to “Logan,” and the “a” key is right next to the “s.” So this could be a repeated typo, unless “sawing Logan” is actually a thing. (Except I just googled it … and it’s not.)

This is where the content produced by A.I. is on shaky ground. Vladimir Nabokov described pornography as “the copulation of clichés,” but he could have just as easily been talking about news, or what passes for it, with complete nonsense being repeated often enough to eventually be taken as truth. So it is, perhaps, with machine learning based on what humans are writing—or attempting to write. Perhaps if enough people mistype “sawing logs” as “sawing loga,” with predictive text suggesting “Logan” because that’s at least a character string the A.I. is familiar with, and enough people accidentally accept this suggestion, it could reinforce the “learning” to the point that the A.I. thinks “sawing Logan” means something. It’s a self-fulfilling prophecy, almost like “put the pussy on the chainwax,” except it’s not funny. A.I. text can’t be (deliberately) funny because there’s no creative human behind it to make it funny.
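In fact, you can watch this vicious circle play out in a few lines of code. Below is a minimal sketch (in Python) of a word-pair counter that retrains on whatever users end up sending. The corpus and the retraining loop are pure invention on my part, and Gboard surely does nothing this crude, but the dynamic is the same:

from collections import Counter, defaultdict

# Toy bigram model: for each word, count which words follow it.
# This is my own illustration of the feedback loop described
# above, not anything resembling Gboard's actual machinery.
followers = defaultdict(Counter)

def learn(sentence):
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        followers[prev][nxt] += 1

def suggest(prev_word):
    candidates = followers[prev_word]
    return candidates.most_common(1)[0][0] if candidates else None

# The model starts out learning the correct idiom...
for _ in range(10):
    learn("he was sawing logs")

# ...but every mistyped-and-accepted message is fed back in as
# fresh training data, reinforcing the error.
for _ in range(11):
    learn("he was sawing Logan")

print(suggest("sawing"))  # prints "Logan": the typo has won

Once the bad pair outnumbers the good one, the model happily serves the nonsense back to the next user, and the prophecy fulfills itself.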

Apparently random gaffes

It’s not always possible to even hazard a guess at where the A.I. came up with a suggestion. Consider the art of baking. There are a finite number of things you can bake: a cake, a pie, cookies. I don’t care what the context is … if you use the verb “bake,” these (or very similar nouns) should be the suggestions. But look at this:


Look, maybe there’s some psycho grandma out there who might bake, say, her grandson David. But bake another David? Okay, maybe she’s a serial killer out to get anybody named David. But “bake Livestorm”? How do you bake a webinar platform, especially if you’re a grandma who presumably is not that tech-savvy? WTF!?

Now check out this little zinger:


I’m not expecting A.I. to be well versed in Devo lore, but it could have reasoned this one out. Given the construction “What, like you’ve never seen [whatever] …,” most humans would correctly guess that the next word should be “before.” I don’t think any human would suggest “of diaphragm” in any situation. It is a decidedly useless phrase. Yes, we use our diaphragms whenever we breathe, but we never think about it. Nor, I expect, do the members of Devo.

My next exhibit is particularly damning when you consider what a terrible year 2020 has been, and how many times I’ve told somebody, via text message, “I just threw up in my mouth.” I guess my phone had been digging deep into widespread cardiopulmonary lore because it again went weird on me:


There are so many better candidates here. Threw up in my hands. Threw up in my hat. Threw up in my guest bathroom. And it’s trying to be helpful with “threw up in my respiratory”?

Failures of grasping context

This is where context becomes important. It could be that the A.I. was fixated on some Internet-wide phenomenon—say, articles involving COVID-related breathing issues—such that it ignored whatever I was writing about. This actually makes sense, because A.I. likely can’t consider all my sentences in the context of one another.
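If that theory is right, the underlying limitation might look something like the sketch below. The window size and the tokenization are my own guesses, purely for illustration; I have no idea what Gboard actually uses:

# Sketch of a fixed context window. Many predictive-text systems
# condition on only the last few tokens; everything earlier in
# the message simply never reaches the model. CONTEXT_WINDOW is
# a made-up number for illustration.
CONTEXT_WINDOW = 3

def visible_context(typed_so_far):
    tokens = typed_so_far.split()
    return tokens[-CONTEXT_WINDOW:]

message = ("The movie was about a dog. I loved the")
print(visible_context(message))
# prints ['I', 'loved', 'the']: the model no longer knows
# we were ever talking about a movie or a dog

Anything said two sentences ago simply doesn’t exist as far as the suggestion engine is concerned.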

Here’s another suggestion indicating that this Gboard predictive text application was preoccupied with respiration.


Look, Android, I’m talking about Amtrak! It’s a train! Clearly I was asking about the upper berth.

Now, I think we can all agree that really good A.I. should ideally look not only at who is composing the message but also at whom he or she is sending it to. Why would the suggestion “me” ever make sense in the context below?


Android is driving the entire phone … couldn’t it easily rule out the possibility that my text recipient was also on the line with me? And isn’t that just common sense anyway?

In the case of my own texting, Android has an easy job because the majority of my text exchanges are with my daughter. With that in mind, the predictive text utility should be able to rule out any word that would take the conversation into an uncomfortable realm (i.e., that no father and daughter would ever pursue). Look at this:   


OMG. What kind of sick dad would pen something like that? “Congratulations honey … your birth control is working!” And here’s another doozy:


Look, Android … you should be able to figure out the gender of your phone’s owner. Well over half the population would never have cause to write “I guess I’m pregnant.” It’s not a very useful word most of the time, particularly not when a teenager is shooting the breeze with her dad. I’ve heard of people breaking up with a boyfriend/girlfriend via text, but who announces something as huge as a pregnancy via text?

Speaking of racy words that cannot apply to half the population, where was Android going with this?


It just makes no sense. Even if the pronoun in the sentence were “he,” at what hospital anywhere should the staff routinely wear a condom? And how did A.I. even think to suggest this word to me, when I haven’t had cause to think about, talk about, or use a condom in over thirty years? (Granted, as a father I do have a duty to mention birth control to my kids, but a) I’d never do it in writing, much less in a text message, and b) all males resort to euphemism in these cases, e.g., “No glove, no love” and “Remember the rule, protect your tool.”)

Now, let’s set aside expecting the A.I. to weigh both composer and recipient, the mindset of the people involved, the backstory, and so on. The strange thing is, these epic fails pop up even when we’re only asking the A.I. to look back to earlier in the sentence—that is, to consider not just the most recent word typed, but the one before that. It cannot always manage this. Look:


If you played a word association game with a human and started with the word “breast,” asking what word might logically come next, I can imagine that some would say “cancer” and some would say “milk.” But if you started with the phrase “chicken breast,” no human would think of cancer or milk. A human would say “sandwich” or something. Nobody in the history of the world has typed “chicken breast cancer.” (Okay, I just fact-checked this and it turns out there was a barbecue chicken breast cancer fundraiser once, in Nassau, Delaware … but this has got to be an edge case.) Ditto “chicken breast milk.”
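For the curious, here is a toy sketch of the difference between looking back one word and looking back two, in the old n-gram style of language model. The counts are invented and real keyboards are far more sophisticated, but the failure mode falls out naturally:

from collections import Counter

# Invented corpus counts, purely for illustration.
bigram = {  # conditioned on one previous word
    "breast": Counter({"cancer": 9, "milk": 7, "sandwich": 1}),
}
trigram = {  # conditioned on two previous words
    ("chicken", "breast"): Counter({"sandwich": 8, "recipe": 5}),
}

def suggest(words):
    key = tuple(words[-2:])
    if key in trigram:  # use two words of context when we have them
        return trigram[key].most_common(1)[0][0]
    return bigram[words[-1]].most_common(1)[0][0]  # fall back to one

print(suggest(["her", "breast"]))      # prints "cancer" (bigram fallback)
print(suggest(["chicken", "breast"]))  # prints "sandwich" (trigram wins)

A model that stops at the bigram table produces “chicken breast cancer” with a straight face; one extra word of context rules it out.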

When A.I. looks clueless and out of touch

As I stated before, A.I. couldn’t possibly replace real writers, who have insight and passion and actual intelligence, but perhaps it could do a journeyman journalist’s work someday. For that to work, though, the A.I. must never seem clueless or out of touch. But look at these bonehead suggestions:


The A.I. didn’t parse the (non-) question beyond the word “what.” Thus, it completely missed the point—this was an observation, so appropriate responses would have been things like, “I agree,” “Word,” “I know, right?” and “Amen.”

Look, I get it that statements like “What grace and elegance” are easier in Latin, which has a whole construction (the accusative of exclamation) around such utterances (e.g., “O tempora! O mores!”), but any human could have grasped that my daughter wasn’t asking a question. There wasn’t even a question mark! I mean, duh!

But wait, it gets worse. My #1 predictive text pet peeve? I have a daughter named Lindsay, and look what the A.I. suggests every single time I type her name:


This is so maddening. I have typed “Lindsay” dozens of times in the past year and I have never accepted the suggestion “Lohan.” On that basis alone the A.I. should stop suggesting it. But there’s a much bigger reason to nix “Lohan”: Nobody is talking, emailing, or texting about Lindsay Lohan anymore. She is no longer a household name. This is not just my opinion.

She hasn’t made a Hollywood movie since The Canyons all the way back in 2013. Was The Canyons the kind of critical and box office success that people are still talking about seven years later? Uh, no. It has an IMDb rating of 3.8 out of 10, and a Metascore of 36 out of 100; at the box office, The Canyons took in a domestic gross of just $56,825. The average movie theater ticket in 2013 went for about $8. That means this movie was seen by a mere 7,000 people. It almost couldn’t be a worse failure. Is this the kind of train-wreck movie offered to those who were once stars but have fallen too far to get a decent role and have to grovel in the gutter for anything on offer? Well, I haven’t seen the movie, but I have my guess, and it’s hell yes.

The hapless Lohan’s personal and professional nosedive is just too sad for anybody to want to even talk about, so most of us are merciful enough to brush her under the carpet. But not Android! It’s all like, “Oh, did you just type ‘Lindsay’? You must mean Lindsay Lohan!” Puh-lease.
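The fix I’m asking for here isn’t rocket science, either. Below is a sketch of a simple rejection penalty; the scores, the decay factor, and the cutoff are all made up by me, and I have no idea whether Gboard attempts anything like this (the evidence suggests not):

# Candidate words to suggest after "Lindsay", with invented scores.
scores = {"Lohan": 5.0, "said": 3.0, "and": 2.0}
REJECTION_PENALTY = 0.5   # halve a word's score each time the user declines it
SUPPRESS_BELOW = 1.0      # stop offering words whose score falls below this

def record_rejection(word):
    scores[word] *= REJECTION_PENALTY

def suggestions():
    live = {w: s for w, s in scores.items() if s >= SUPPRESS_BELOW}
    return sorted(live, key=live.get, reverse=True)

# After the user types "Lindsay" a handful of times and never
# once accepts "Lohan"...
for _ in range(5):
    record_rejection("Lohan")

print(suggestions())  # prints ['said', 'and']: "Lohan" finally retired

Five snubs and “Lohan” is out of the rotation.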

Is there an even worse way to show how out of touch you are? Actually, yes. Consider this final exhibit in the case of albertnet vs. predictive text:


What do you mean, “they say YOLO”?! Come on, nobody says YOLO! It’s like the poster child for being tone deaf culturally. Check out the urbandictionary.com definition of YOLO: it’s a feeding frenzy of abuse. One popular definition is “Carpe diem for stupid people.” Another is “the douchebag mating call.” A third definition simply states, “A term people should have stopped using last year.” The date-stamp of this third definition? 2014. The musician M.I.A. wrote a song called YOLO but by the time she was ready to record it, the term was already toxic so she rewrote the song as “Y.A.L.A.” (that is, “you always live again”). And that was in 2013.

First Lindsay Lohan and now YOLO? Is Android’s predictive text stuck in 2013? Machine learning, my ass!

To be continued…

Obviously, Google’s predictive text and Smart Compose aren’t the only A.I. text creation technologies out there, so this essay wouldn’t be complete without an exploration of other nascent platforms. Alas, I see I’m out of room here, so tune in next week when I’ll delve into my own experiments with these, including GPT-2, the technology that’s supposedly the closest to replacing writers.


—~—~—~—~—~—~—~—~—
Email me here. For a complete index of albertnet posts, click here.
