Saturday, January 17, 2026

Old Yarn - The Dark Alley Incident

Introduction

This is the seventh “old yarn” on albertnet (following in the footsteps of “The Cinelli Jumpsuit,” “Bike Crash on Golden Gate Bridge,” “The Enemy Coach,” “The Brash Newb,” “The Day I Learned Bicycle Gear Shifting,” and most recently “The In-Flight Voyeur”). This is the kind of story that would normally be a “From the Archives” item, except I’ve never before written it down.


The Dark Alley Incident – ca. 1993

I was living in an apartment on Webster Street in San Francisco’s Western Addition, aka Lower Haight. This wasn’t one of your better neighborhoods, but wasn’t too rough either. The rent was low and the architecture was really good, mostly Edwardian/quasi-Victorian row houses that were built not long after the 1906 earthquake. Honestly, in those days—and to some degree even now—architecture and walkability were more important to me than safety.

Doing a little light fact-checking now, I see that in the early ‘90s this neighborhood was worse than I’d realized. It was considered sketchy by most San Franciscans, and compared to today had significantly higher rates of violent street crime, largely due to a crack epidemic that lasted until around 1994. The homicide rate was bad enough that the local media often described parts of the neighborhood as a “war zone.” If Internet research had been a thing back then, I might have chosen not to live there.

Still, it was cheap, I had my own room, it was a short walk to my favorite Thai restaurant, and it offered easy mass transit to my office job downtown. Plus, it was only six blocks to the Market Street Safeway where I shopped. And that’s where this story (finally) gets interesting: if I was willing to take a little risk and go down an alley next to the tall, fenced-off hill where the San Francisco Mint is, and then cross over the subway tracks at a place where you really weren’t supposed to, I could cut the walk down to just three blocks. It was practically a straight shot from my apartment, and from the standpoint of convenience, pretty much irresistible.

Granted, this was a pretty dicey shortcut. I can’t remember if there was an actual fence, but if so it wasn’t high (though looking at Google Street View I see now that they’ve since built a giant fence, easily 12 feet tall). This wasn’t the classic dark alley of the imagination—long and narrow and dripping wet for some reason—but it was remote and kind of spooky, especially at night. It drove my girlfriend crazy that I took this shortcut, and she exhorted me to go the long way around, but I didn’t want to listen. Perhaps I had some premonition that one day she’d be my wife, with all the authority that goes with that, and then I’d have to toe the line—so I should enjoy my freedom while I still could.

On the night in question, as I headed toward the tracks well after dark, I encountered three shifty-looking characters off to my right in a narrower, perpendicular alley. They wore large black hooded Starter jackets, sagged pants, and ribbed wool watch caps. They were leaning in toward one another, speaking in low voices. Just as I passed their tight huddle I saw a flash of light, which had to be a cigarette lighter. Even back then, smoking a joint right out in the open in this neighborhood would have been perfectly normal, so their discretion meant something. Wow, I thought. How urban … they’re actually smoking crack!

Of course I didn’t gawk or anything—wouldn’t want to draw attention to having witnessed them. I kept my head facing forward, kept walking, minded my own business. I crossed the subway tracks, went around to the front of the Safeway, and did my shopping.

Now, given what I’d just seen down this alley, you’d think I’d finally go the long way around on the way home, right? But I was really loaded down with groceries and just didn’t feel like walking that far. I had multiple plastic shopping bags hung along each arm, all the way down to my wrists. One hand clutched a couple more loaded bags, and the other a big plastic jug of liquid laundry detergent. So I was lumbering along pretty slowly.

Past the subway tracks, when I was halfway down the alley, I saw movement out of the corner of my eye. I didn’t look over—just kept my eyes pointing ahead, seeing what I could with my peripheral vision. The movement was getting closer. I wasn’t sure it was the three dudes from earlier until one of them spoke. “Yo man,” he said, “hold up.”

I stopped and turned. It seemed important to act as casual as I could. Obviously these guys weren’t asking for directions … maybe they just wanted to mess with me. Could be they were sizing me up. I gave a quick chin-up nod, like, ‘sup? The dude looked me over and said, “Whaddya doin’, man?”

I glanced down at my bags, reflecting soberly upon how utterly impossible it would be to run with such a burden, and yet also how ridiculous it would be to abandon my groceries and flee, on the mere speculation that I was about to be mugged. I mean, I’d be pretty irritated if someone took one look at me and decided I was obviously a criminal. So I played it off. “Just doin’ my shopping,” I said. It’s funny: years before this, when I was an awkward teenager, suddenly called upon to speak, my voice would sometimes come out high and weak, like there was a reed stuck in my throat and I was short of breath, but now, when I really might have had something to be legitimately nervous about, I managed to sound fine. Casual, even, I thought.

“Hey man, I see you got detergent there, man, you gonna do some laundry?” the guy asked. He and his buddies were standing awfully close. Was this some preamble while they got ready to surround me? Or was I just being paranoid?

I glanced down at the laundry jug, wondering if it would make a good bludgeon but reflecting that the soft plastic would be more like a wiffle bat. Plus, loaded up as my arm was, I lacked the strength to even lift the jug that high, much less swing it. So I just looked back at the dude and gave another upward nod. “Yeah. Of course.”

This is the part you’ll swear I’m making up, but I’m not: the dude reached into his coat and pulled out—not a gun, but a box of Bounce. You know, the fabric softener. “Man, you wanna buy some Bounce?” the guy asked. I couldn’t have been more surprised if he’d pulled out a bouquet, or a puppy. Where did he get that? I wondered. Did he roll some other guy walking home from Safeway?

I shook my head. He persisted: “For the dryer, man, make your clothes softer!”


I shook my head again. “No, man,” I said, feeling increasingly like I was part of some improv skit. “I don’t use that stuff.”

“Aiiight, ‘s cool,” he shrugged, and the three of them shuffled off without so much as a glance back. I faced forward again and resumed my slow trudge home. I resisted the temptation to speed up, or to look back. Was that really it? Was the encounter truly over? The whole way home I was simultaneously a) braced for the other shoe to drop, and b) working hard not to burst out laughing. Were those dudes just screwing with me, or had I passed some test? I will never know. And did I ever take that shortcut again? Honestly, I can’t remember.

—~—~—~—~—~—~—~—~—
Email me here. For a complete index of albertnet posts, click here.

Thursday, January 8, 2026

New Year’s Resolutions — AI Edition

Introduction

It’s that time of year again, when you start planning out how you’ll be better this year. This can be really annoying, especially when you have to read about “SMART” goals even though you know in your soul that “DUMB” ones are better. And now you’ve stumbled across this post. Well, fear not – I’m just as fed up as you, and will try to make this as painless as possible.

As with everything now, I will consider the topic through the prism of Artificial Intelligence. That satisfies the “T” in “SMART” because it’s timely—and for this I apologize. But I must press on, because there are many ways in which we could resolve to use AI better, more responsibly, or less annoyingly. I’ve managed to winnow this post down to five key resolutions.


Resolution #1: Do not pass off AI work as your own

This recommendation probably seems self-evident, and yet it needs to be said. How many times have you read something ostensibly written by a human but obviously ghost-written by AI? It’s kind of amazing to me how brazenly people will paste straight from ChatGPT or another AI chatbot and think they can get away with it. Only an AI could fail to spot the nuances that betray AI-generated text.

Even if AI worked perfectly as a ghost-writer, by using it you would suffer from the neglect of your own intellect. There is an intrinsic value in learning to represent your ideas in your own words, just as you would do when speaking. Ideally, over time, by doing your own work, you will develop a writing style—a unique voice—and this is what makes you you, and ought to keep you from being easily replaced by AI. And the more you write, on your own, the better this voice will be developed, and the better you will do at in-person communication, which is almost always ad hoc. This capability ought to take precedence over the convenience of outsourcing to AI.

I asked ChatGPT to weigh in on this matter, and it said, “If AI helped draft, summarize, or rephrase something substantial, acknowledge it—especially in professional, academic, or published contexts. For your blog, this might mean a light disclosure like: ‘Drafted with AI assistance; final edits and opinions are mine.’” I cannot imagine using AI for drafting, summarizing, or rephrasing anything substantial. I assume that you are reading albertnet right now only because you trust that I can do more for you than an AI chatbot, and that I’m willing to put in the hard work to make the writing witty, entertaining, and concise. ChatGPT’s disclosure would be like the cook greeting you at his or her restaurant and saying, “My gravy is from a mix and my apple crisp is Sara Lee.”

Resolution #2: Do not trust AI with any info you cannot verify

Obviously one of the fundamental benefits of AI chatbots is that unlike a single-query Google search, you can have a dialogue to get very specific about your question or problem and provide all kinds of context. When this works, it’s great. The trouble is, as we all know, AI hallucinates. And to make matters worse, it hallucinates very confidently, feeding you incorrect information with total assurance. I spoke with a doctor recently who described a bizarre and yet increasingly typical dialogue with a patient: she asked him why she shouldn’t just take x amount of such-and-such medication, as ChatGPT suggested. He replied that that dosage would be lethal.

Of course you, gentle reader, are too wise to use AI like that, but it’s surprising how poorly it can do even with very basic information. Recently I asked two different chatbots, ChatGPT and Copilot, if my Sigma Sport bike computer is compatible with the power meter on my new bike. I provided the make and model of each, and both chatbots assured me they were indeed compatible. So I tried to pair them and got nowhere.

Both chatbots dragged me further off into the weeds when I started troubleshooting. Copilot determined, definitively (or so it claimed), that my power meter was defective. Based on blinking colored lights, it declared that “your 4iiii is doing exactly what a unit does when it can power on briefly but fails its internal startup test” and that the cause is most likely “internal hardware failure” that is “unfortunately not rare with brand-new 4iiii units.” It then offered to draft a note to the manufacturer to get the device replaced. Its voice was that of the expert advisor, when it was actually swinging wild based on training data of unknown and unverifiable provenance.

Of course the whole reason I reached out to AI on this in the first place is that I’d never messed with power meters before, and knew nothing. And yet, just having human-grade intuition and skepticism ended up being more valuable than all that training data and lightning-fast research capability. I distrusted Copilot’s conclusion, figuring hallucination on its part was more likely than a hardware defect. So instead of wasting any more time online, I tried a different bike computer to see if it would sync, and it did instantly. What a relief, that I didn’t start some needless warranty replacement process and waste some tech support person’s time, only to end up looking like a jackass. I’m still grieving over the fifteen or twenty minutes I’d spent pointlessly troubleshooting with AI. I should have experimented with the second bike computer in the first place, before asking AI for help. (Why didn’t I? I was intimidated and wanted my hand held. This was a poor instinct.)

This isn’t to say AI is never an appropriate tool, of course. As I’ve blogged about before, it can be very helpful in all kinds of technical matters, such as writing HTML. But you should only use it when you can verify its output experientially instead of blindly trusting it. For example, when ChatGPT helped me implement the copyright footer on this blog, I knew the instructions were valid because I could see the footer for myself (as can you, below).
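To make the “verify it yourself” idea concrete, here is a minimal sketch of the kind of copyright-footer snippet one might ask a chatbot for. This is a generic illustration, not the actual footer code from this blog, and the start year and owner name are made-up placeholders; the point is that you can confirm it works simply by looking at the rendered page.

```javascript
// Illustrative sketch of a dynamic copyright footer.
// (Generic example; the start year and owner name are placeholders.)

// Build the footer text for a blog that started in a given year.
function copyrightLine(startYear, currentYear, owner) {
  const range = currentYear > startYear
    ? `${startYear}-${currentYear}`
    : `${startYear}`;
  return `© ${range} ${owner}. All rights reserved.`;
}

// In a browser you would inject it into the page, e.g.:
//   document.getElementById("footer").textContent =
//     copyrightLine(2009, new Date().getFullYear(), "albertnet");
console.log(copyrightLine(2009, 2026, "albertnet"));
```

The appeal of a snippet like this is exactly what the paragraph above describes: its correctness is self-evident in the output, so no trust in the chatbot is required.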

Resolution #3: Limit the influence of “secondhand AI”

Now that we’re all using AI chatbots more and more, it’s easy to forget that most of the AI that affects our lives is behind the scenes. We think of AI as a productivity tool, but that’s just the chatbots; most AI is developed by corporations to drive algorithms that try to grab and hold our attention, which ends up reducing our productivity. To make an analogy, a chatbot is like smoking a cigarette and getting all the benefits it provides—e.g., the drug, the rich and satisfying smoke, and the coolness—while AI-driven algorithms are like secondhand smoke that doesn’t taste good, doesn’t make us cool, and just gives us cancer. I hereby nominate for widespread adoption the term “secondhand AI,” meaning the AI that drives us instead of responding directly to our queries. (Yes, I was being facetious about the “benefits” of smoking. Just making sure you’re awake.)

So the gist of this resolution is to try to limit our exposure to secondhand AI, or at least the extent to which we let it shape our behavior. Instead of looking at the books Amazon suggests, get more of them out of those little free libraries, or from the “Staff Picks” section of your bookstore or library, or ask your friends for recommendations. Stop letting YouTube and TikTok thrust content in your face. And instead of letting Spotify choose the music after your selected album or playlist is finished, configure it to just stop (i.e., turn Autoplay off). Why? Because the crap it chooses doesn’t belong in your ears or brain. All these algorithms share the same central flaw: they select for stickiness, not quality. They can’t judge quality because they have no taste … just the ability to carry out endless A/B tests and learn from the results.

Resolution #4: keep AI out of your messy human stuff

The most sensitive human interactions—consoling, arguing, advising, listening—might present the most tempting use cases for AI. After all, here’s a platform that can give you guidance, suggestions, actual written content, etc. without judging you or getting distracted or running out of time or patience. But this is also the area where I exhort you to close the laptop, lock your phone, and sort things out on your own. (Using a close friend as a sounding board is fine.) Why? Three reasons.

First, what if AI came up with the perfect thing to say, and in just the right way, and you couldn’t resist and just delivered its sensitive message verbatim? This might work, but what if the person you’re having the difficult dialogue with detects the distinctive AI diction and figures out you used a chatbot? This sends the message that you cut corners, that you couldn’t bother being sincere and authentic—that you outsourced your role in the interaction. This could (and probably should) be deeply offensive to the other person.

Second, if you struggle during the dialogue, and the other person perceives your vulnerability, I think that can only help. Instead of being perfectly articulate and glib, why not let your difficulty be plain to see, so that the other person can tell the struggle is mutual? Meanwhile, if you have “your” thoughts perfectly rendered with the help of AI, won’t you be more inclined to doggedly stick to that script, instead of letting the dialogue go where it needs to?

Finally, engaging in this struggle on our own is good for us. Inhabiting this discomfort, instead of trying to settle the dialogue with maximum efficiency, is bound to lead to the kind of soul searching we ought to be doing anyway. And, like with anything, we get better with practice, which is important, because we won’t always have the opportunity to stop and consult AI during a social crisis. Thinking on the fly will go a lot better if you’ve done the time working through the messy human stuff on your own.

Resolution #5: don’t replace humans with chatbots

According to this article, about half of the teens in a Common Sense Media study reported they use AI bots “regularly, not just for entertainment, but for venting, emotional support, and companionship.” And according to this article, “About one-third (31%) [of American teenagers] actually claim that dealing with AI companions is more satisfying than talking to a human being.” It seems incredible to me—that is to say, I’m amazed that I find myself even weighing in here—that anybody should need to be advised against using AI in this way. How did we get here?

I cannot get past the most obvious issue which is that every minute a person spends typing into a void (or talking, I guess they have voice mode now) is a lost opportunity to bumble around in the real world and have the opportunity to meet people, one or two of whom could potentially become a friend. Can’t we all agree that there is a nonzero chance of making friends just by leaving the house? And that the chance of forging a real friendship with an AI chatbot is zero?

Okay, fine: I don’t personally struggle with social anxiety, and I should try to empathize with those who do. But it’s difficult, particularly since I myself was a social pariah in grade school, things got worse in middle school, and I’m constitutionally shy—yet I still managed, eventually, to learn how to get along. Setting all that aside, how good is the strategy of replacing human interaction with programmatically easy, safe AI companionship? I’ll cite one article, from the Columbia Teachers College:

According to research from MIT, for example, people who are lonely are more likely to consider ChatGPT a friend and spend large amounts of time on the app while also reporting increased levels of loneliness. This increased isolation for heavy users suggests that ultimately, generative AI isn’t an adequate replacement for human connection. “We want to talk to a real person and when someone's really suffering, that need to feel personally cared for only grows stronger,” says George Nitzburg (Ph.D. ’12), Assistant Professor of Teaching, Clinical Psychology. 

Gosh, this last resolution seems like the literary equivalent of a plate of bulgur wheat salad with a side of kale. I hope it has so little to do with your life that you can just flick it off your sleeve like a booger. And then resolve to get fewer boogers on your sleeve to begin with. In fact, why not resolve to get that number down to zero?

Other candidates for New Year’s resolutions

If you don’t overmuch care about AI and are just looking for general inspiration as you contemplate your own resolutions, here is a wide assortment of suggestions: 

Further reading 
