Wednesday, January 31, 2024

Will A.I. Steal Our Jobs?


One of the great things about Artificial Intelligence is how well it drives hype. The media, instead of just delivering bad news about yesterday and today, can now get us really excited—and worried!—about the future. One of the most potent forms of this hype is the widespread suggestion that many of us may lose our jobs to A.I. (My mother-in-law was asking me about this just the other day.)

In this post, I’ll examine the topic. I’m not an expert on A.I., but I’m confident that my credentials as a male will serve me well in mansplaining this to you, regardless of your own sex. And actually, I’ve been blogging about A.I. for over ten years, having produced over a dozen posts (linked at the bottom of this one). I’ve even done some light research for today’s topic. I’ll wrap up by telling you what I think ought to be a much larger concern around A.I. than job preservation.

Which jobs?

Before I can answer your question, “Will A.I. steal my job?” of course I’d have to know what you do. Obviously I can’t just ask, so I’ll have to make some assumptions. I suspect you must be a college graduate to even be visiting albertnet, because my blog posts are long and difficult. Moreover, I researched the most common jobs in America for the non-college-educated, and the results—which include “fast food and counter workers,” “home health and personal care aides,” and “stockers and order fillers”—wouldn’t leave any employee with enough energy to plow through this much text.

So, with that assumption in mind, I’ll go with the top five careers for college graduates based on current labor statistics. I’ll also cover the classic professions: medicine and law. Even if your career isn’t any of these, what I cover should serve as illustrative examples.

The top five careers

The five most abundant job prospects for new graduates in America, according to the U.S. Bureau of Labor Statistics, are as follows (along with the number of openings per year, average, for 2020-2030):

  1. General and operations managers: 229,600 openings
  2. Registered nurses: 194,500
  3. Software developers & testers: 189,200
  4. Accountants & auditors: 135,000
  5. Elementary school teachers: 110,800

I’ll explore, briefly, each of these careers in turn, before going on to doctors and lawyers.

General and operations managers

Now, I don’t know exactly what this rather general label means, but clicking on the BLS hyperlink for it takes me to a page describing what executives do. The summary is that they “plan strategies and policies to ensure that an organization meets its goals.” This would be very difficult for A.I. to do, because it cannot form opinions, lacks the ability to effectively promote ideas and inspire people, and has no clue about navigating office politics. The managers and executives at my company do a lot in person, which attests to the company’s conviction that this is necessary (vs. telecommuting). A.I., needless to say, cannot do anything in person. It produces rivers of text on any subject by regurgitating gobs of highly masticated training data from across the Internet, but that has nothing to do with forming and fostering creative ideas.

Much of the tech world, in my personal experience and as chronicled widely by the media, is devoted to “disruption”—that is, coming up with a completely new idea that turns existing business models on their heads (like Uber did to the taxicab industry). A.I. is often employed, tactically, in such disruption, but it cannot drive it the way an industry leader does. A.I. is very good at certain tricks, but it’s not good at visionary thinking because it literally lives in the past. (Consider that ChatGPT’s training data hasn’t had an update in over two years, by its own admission.) I think managers and executives can rest easy here (so long as they keep their companies poised at the leading edge of the A.I. zeitgeist).

Registered nurses

I think we can all agree nursing is a hands-on occupation, and for that you need actual hands. But you don’t have to take my word for it. I just asked ChatGPT, “Can you please change the dressing on the laceration on my right leg?” and it swiftly replied, “I’m not able to provide physical assistance or medical care as I am a text-based AI language model. It’s crucial to seek help from a qualified healthcare professional for proper wound care.”

Software developers and testers

The full title of this occupation is “Software Developers, Quality Assurance Analysts, and Testers.” Everyone knows what software developers do; as for the others, the BLS writes (here), “Software quality assurance analysts and testers identify problems with applications or programs and report defects.”

Let’s start with developers. I interviewed a friend of mine on this, a manager and software developer specializing in A.I. at an extremely well-known tech company. Not only does he know all about developing software, but he knows lots about A.I. He started off by saying that ChatGPT is actually a powerful tool in the hands of a good developer, and can lead to much greater work efficiency. ChatGPT can provide blobs of code that do a specific thing, but of course this is only a small part of the job of a software developer. The developer is essentially a problem solver and has to figure out the right approach to each problem.

In theory, the increased efficiency that A.I. enables could reduce the number of jobs, since doing everything faster means needing fewer hands. But, my friend advised, this would only be true if there were a finite number of problems to solve. In fact, the number of problems, projects, and innovations is effectively infinite, and it’s a company’s job to tackle enough of them to keep an ever-growing number of developers busy. So not only will A.I. not replace these jobs, it won’t even diminish their number.
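To give a concrete (and entirely hypothetical, i.e., mine, not my friend’s) sense of what such a “blob of code” looks like, here is the sort of small, self-contained function a chatbot is good at producing on request, say, removing duplicates from a list while keeping the original order:

```python
def dedupe_preserve_order(items):
    """Return a new list with duplicates removed, keeping first-seen order."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

print(dedupe_preserve_order([3, 1, 3, 2, 1]))  # prints [3, 1, 2]
```

Handy, sure. But as my friend pointed out, snippets like this are the easy part; deciding what to build, and how all the pieces fit together, is still the developer’s job.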

Moving on to QA analysts and testers, I believe their jobs are equally secure. Have you ever done a CAPTCHA—that simple task of, for example, looking at a 3x3 grid of thumbnail photos and clicking the ones that contain traffic lights? That’s a website’s way of making sure bots don’t impersonate humans. CAPTCHAs work because A.I. is stymied by graphical user interfaces (GUIs). So it wouldn’t be able to test software, or at least the kind used by humans (which is a whole lot of it). Moreover (and I know this from my own professional experience), software testing is largely about how straightforward and usable an interface is to a human. Testers need to be able to imagine the perspective of the person who will use the software. A.I. lacks this capability; although it can mimic human thoughts and impressions, it has no real grasp of them.

Accountants and auditors

Okay, I’ll confess I’m kind of out of my depth here. I gather that accountants balance the books, and auditors keep the accountants (and everyone else) honest, but that’s about all I know (or care to know). I will say that obviously accuracy is the name of the game here, which is where A.I. needs to be handled carefully. As you probably know, generative A.I. platforms, such as the GPT-3.5 model that drives ChatGPT, are prone to “hallucinations”—where they basically just make shit up and present it as fact. The poster child here is the case (described by the New York Times here) of a dumbass lawyer who used ChatGPT to prepare an argument in a court case, and got into big trouble because his argument cited half a dozen previous relevant court decisions, all of which were pure fabrications—ChatGPT had pulled them out of its ass. As the Times dryly concluded in its article, “The real-life case of Roberto Mata v. Avianca Inc. shows that white-collar professions may have at least a little time left before the robots take over.”

Elementary school teachers

It’s pretty clear that the education of elementary school kids needs to happen in person. Countless articles about the result of distance learning during the COVID-19 pandemic recount how far behind students fell. For example, according to this article, half the nation’s students began the 2022-2023 school year a full year behind grade level due to the poor education they’d received during the lockdown. Granted, there was a lot more going on during the pandemic than just distance learning, but if there was one thing Americans could agree on during that time, it was that in-person instruction needed to come back.

Until we have sophisticated, affordable, and ubiquitous animatronic robots, A.I. simply cannot provide in-person instruction as we know it. It’s just a digital tool, and a digital tool isn’t what kids really learn from; even a robot will never be a person, with a personality. Elementary school teachers connect with students, draw them out, encourage them, understand their struggles, and have firsthand knowledge of how humans learn. A.I., of course, has none of this. As described here, I tried to teach ChatGPT how to write a proper poem (in terms of a specific meter) and it confessed, “As an AI language model, I do not have the ability to practice or improve my skills in a traditional sense.” All it can do is ingest troves of training data and reference them later. It cannot relate to the human effort to learn. It cannot come up with creative strategies for connecting with kids. Also, it would never settle for the piss-poor salaries paid to elementary school teachers. (Yes, that was a joke. Another thing A.I. can’t really do.)


Having covered the top five careers for college graduates, I’ll now move on to a field that affects us all: medicine.

Doctors

As with nursing, medicine obviously needs to be hands-on. My doctors (and physical therapists) have all relied heavily on touch and (literally) feel in evaluating and diagnosing injuries and health issues. Meanwhile, the important dialogue I’m able to have with them about my health requires advanced “soft skills” far beyond what A.I. could get from training data. The reason I even entertain the notion that A.I. could replace doctors is that I’ve read, here and there, about how well A.I. does interpreting radiology images. I just did a little refresher research and found in this article that it still isn’t as accurate as a human. Moreover, as the article attests, “radiologists are more than just interpreters of images. They connect the findings from imaging analysis to other patient data and test results, discuss treatment plans with patients, and consult with their colleagues.” Meanwhile, the A.I. that performed well had been trained on billions of images from the public Internet, whereas “radiological datasets are also often guarded by privacy regulations and owned by vendors, hospitals, and other institutions”—meaning that advancements in A.I. in this industry will lag behind those in autonomous vehicles or retail.

I interviewed a friend who’s a medical doctor and his dismissal of A.I. as a threat was pretty curt. Alluding to its tendency to hallucinate, he mentioned how poorly the patient community would react the first time A.I. casually told a relatively healthy patient, “You have twelve months to live.” And though I suppose we could entertain the idea of a robot doing a fine job with a surgery, what happens when it hallucinates? “Mr. Smith, I have some good news and some bad news. The bad news is, instead of a pacemaker I accidentally installed an ice maker. The good news is, if I pull on your ear you’ll cough up an ice cube.”


Lawyers

What is the output of a lawyer? I don’t work in this field, but I think it’s fair to say the two main outputs are documents and spoken argument. Let’s start with the latter: A.I., lacking a human presence and thus the ability to deliver moving verbal argument, probably wouldn’t do well in a courtroom. What would that even look like? A person simply standing up and reading an A.I.-generated statement? How would A.I. negotiate? What would its powers of persuasion be like? Do you agree we could rule out its ability to perform effectively in a live courtroom?

If so, let’s move on to documents. A.I. does seem really good at spewing forth gobs of text on pretty much any subject. Now, as I recounted earlier, it does have this little problem of providing fictitious citations as legal precedent, and since nobody really knows how A.I. works there doesn’t seem to be an easy solution on the horizon for such hallucinations. But that’s not its only problem.

Unless I’m just hopelessly naïve, the practice of law requires the ability to delve into complexities and tease out the legal basis for one’s position—the point of law on which the case can turn. This is why law school and the bar exam are required, right? Well, how good is A.I. at this kind of analysis, really? I haven’t fed it any legal quandaries to chew on because I don’t have any, but I have experimented with trying to get it to explain something similarly abstract: dramatic irony. How did it do? As detailed here, it totally crashed and burned. Not only did it betray a total lack of understanding of what irony is (though it can spew out a canned definition), it fabricated evidence from a children’s book when explaining instances of it. It was just swinging wildly, and did shockingly badly. Make no mistake: ChatGPT can assemble basic (if torturously verbose) sentences out of building blocks of reconstituted training data, but it still doesn’t analyze anything in any useful way.

For my family holiday newsletter this year, I sent out a quiz. I asked fifteen questions about what my family did in 2023, and put an A.I. spin on it: for each question, one multiple-choice response was true, another was generated by ChatGPT, and the third was a lie I wrote in the style of ChatGPT. Most of the recipients were able to identify most of the correct responses, but very few were able to reliably determine which of the other responses was A.I. vs. my mimicry of it. In other words, ChatGPT was very bad at pretending to be human, but I was very good at pretending to be ChatGPT. Trust me, humans are still better at actual thought.

What we should be worried about

So what is A.I. really good at? Well, I’m sorry to say, I discovered recently that it’s phenomenally good at faking photos. I took this brief quiz in the New York Times asking me to identify, out of ten photos, which were real and which were fabricated by A.I. I did horribly, getting just 3 out of 10 correct. The friend who turned me on to the quiz scored only 2. My daughter and her friend both scored 4, and my wife got 5 right (which, with ten real-or-fake photos, is exactly the average you’d expect from guessing at random). The quiz was based on a scientific study which found that the vast majority of participants were misled by the A.I. fakes. For four of the five fake photos, 89 to 93% of participants erroneously labeled them real. For four of the five real photos, 79 to 90% of participants erroneously labeled them fake.
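If you doubt that five out of ten is just chance-level performance, here’s a quick back-of-the-envelope check (a sketch of my own, not part of the Times quiz): simulate flipping a coin on each of ten photos, many times over, and see what the average score works out to.

```python
import random

def simulate_guessing(n_photos=10, n_trials=100_000, seed=42):
    """Average score from guessing real-or-fake at random on each photo."""
    random.seed(seed)
    total = sum(
        sum(random.random() < 0.5 for _ in range(n_photos))
        for _ in range(n_trials)
    )
    return total / n_trials

print(simulate_guessing())  # hovers right around 5.0
```

In other words, my wife’s score of 5, impressive as it was relative to the rest of us, is no better than a coin toss.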

Fortunately, I don’t think very many of us are employed in a field where generating fake photos is a big part of the job. That being said, the ability of A.I. to fool people is very disconcerting anyway. Referring to one of the study authors, the Times article declared, “The idea that A.I.-generated faces could be deemed more authentic than actual people startled experts like Dr. Dawel, who fear that digital fakes could help the spread of false and misleading messages online.” Indeed, when deployed by bad actors, this A.I. capability could wreak havoc on the public discourse, further befouling the already squalid troll-o-sphere and perpetrating pervasive new acts of societal vandalism. So let’s be careful out there…

Other albertnet posts on A.I.

Email me here. For a complete index of albertnet posts, click here.
