In Defense of AI?
Do What You Value
Hello Friends,
Before we begin today, I just have to say that in no way do I believe AI-created pieces of art, writing, etc., should be passed off as human craft.
I just don’t. Just like I don’t believe I should pretend I built the car that is sitting in my driveway.
While I think both a human and a program can produce similar end products, a drawing of a rose or a kitty, perhaps, I feel that human art comes from experience, empathy, and the creator’s sense of place in the world. AI art comes from the act of statistical reconstitution.
But calling all AI bad, to me, is very much like insisting a person hum a song using only their own vocal cords and lips rather than play that same tune on a guitar, using chords they didn’t invent.
Most people, I’m thinking, would rather gather and listen to a dude playing a guitar than a guy humming The Rolling Stones.
The number one piece of advice given to young writers is: “Read a lot.”
Why is that? We all know. If you want to be great, you need to be exposed to and emulate greatness.
Humans copy. If we didn’t borrow ideas from each other, there would be just one house, built around 12,000 years ago, and every other human would still be living in the trees, except that one guy who lives in a bush.
Someone would have built a boat, and the rest of us would just sit on the shore watching them float around with an umbrella drink.
We borrow, we steal, we improve, we break, we enhance, we find, we imagine.
And we use tools.
When I first started writing software, I read magazines and books that were filled with coding samples. I would copy those bits of code and use them in my programs.
That’s how we learn.
In 2008, a couple of guys built a website to help programmers learn from each other. It was called Stack Overflow (a programming term for when a program’s call stack outgrows its allotted memory and bleeds into another area, causing unpredictable results, often a crash. Nerd!).
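For the curious non-nerds, here is a minimal Python sketch of the idea: a function that keeps calling itself, piling up stack frames until the interpreter runs out of room. (Python politely raises a `RecursionError` before the real memory gets trampled; this is an illustration, not how the bug looks in every language.)

```python
def recurse(depth=0):
    # Every call adds a frame to the call stack; nothing ever returns,
    # so the stack just keeps growing until it hits the limit.
    return recurse(depth + 1)

try:
    recurse()
    overflowed = False
except RecursionError:
    # Python stops the overflow before it bleeds into other memory;
    # a C program would more likely just crash.
    overflowed = True

print("stack overflow!" if overflowed else "somehow fine")
```

Run it and you get the crash, safely contained. Nerd!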
This wildly successful website, stackoverflow.com, has been serving as the place for programmers to go to solve their coding issues (er, steal code) for over fifteen years. When I was a daily coder, the website was always present in my browser.
Two years ago, if developers had a problem that they hadn’t seen before, they could either spend hours trying random things or go to Stack Overflow and download a snippet of code that did exactly what they needed.
Today, that developer enters a prompt into a code-trained AI chat and gets the same snippet of code.
What is the difference? Often, when an answer comes from Stack Overflow, it results from a discussion between several developers, all with opinions and justifications. When it comes from AI, it is just the code, possibly with a citation referencing the source. There is one other difference: on Stack Overflow, the usernames of the developers who left comments, questions, and solutions are all visible.
The developer who needs the answer may never actually know the people behind those commonly anonymous Stack Overflow usernames, nor give them credit. When AI generates new content based on someone’s style or information, the assumption is that there are always human achievements behind it, informing the results. Often, though, the style is recognizable as belonging to a particular human.
This has been the crux of the issue with generative AI: leaving human contributions behind. It’s hard to argue that people shouldn’t get full credit for the work they perform. It’s complicated—humans copy other humans.
For years, when I’ve used an application like PowerPoint (that uncredited humans created), I’ve been able to begin with a template (that an uncredited human created), find some research (created by other humans) on Google, and put that information (citing it, of course), mixed with my own opinions, into the slides.
Now, a person can ask an AI to create a slide presentation, and it will find a template and do the research for them.
As long as any truly original work is cited, I don’t really see the difference, except the AI solution was faster and easier. (… I know, I know, please argue with me in the comments. And, of course, I’m conflating generative and assistive AI—but that is the point and, importantly, what is being done now, because, as I say, it’s complicated.)
Every day, in every facet of our lives, we have things we don’t want to do or don’t value. For example, I hate making PowerPoint presentations. I more than dread the time I spend hunting for the buttons that do the things I want to do in PowerPoint; I don’t value it.
I don’t value the time I spend doing it, and I don’t value the results. It just seems too performative. I mean, let’s just have a conversation. Why do I have to make a pie chart?
PowerPoint holds a low-value position in my life. If I could have low-value tasks performed by AI and fill my life with activities I find valuable, like writing this newsletter, that would be the dream of AI (at least the one being put forth by the folks making money trying to sell you AI; more complication).
When I finish writing this article, I will put it through a local version of Claude via LM Studio (because it’s just smart for creators to not just feed their content into random corners on the Internet) to have it generate search engine optimization (SEO) terms.
Truthfully, Claude does a lousy job.
Each week I ask it to create a 160-character list of SEO terms so strangers can find my newsletter, and it usually gives me a 500-character, highly repetitive list of only fairly useful words. So, I have to prune it. But coming up with SEO terms myself is a royal pain. So, Claude does it. I don't have to, and it’s good enough.
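If I were feeling ambitious, I could even script the pruning. Here is a hypothetical sketch (the sample word list and the helper name are mine, not anything Claude or LM Studio provides): dedupe the terms case-insensitively, then keep adding them until the comma-joined list would blow past the 160-character budget.

```python
def prune_seo_terms(raw_terms, max_chars=160):
    """Dedupe a list of SEO terms (case-insensitively, keeping order)
    and trim the comma-joined result to a character budget."""
    seen = set()
    kept = []
    for term in raw_terms:
        key = term.strip().lower()
        if not key or key in seen:
            continue  # skip blanks and repeats
        seen.add(key)
        # Stop once the joined string would exceed the budget.
        candidate = ", ".join(kept + [term.strip()])
        if len(candidate) > max_chars:
            break
        kept.append(term.strip())
    return ", ".join(kept)

# The kind of 500-character, repetitive list Claude tends to hand back:
raw = ["AI", "artificial intelligence", "AI", "newsletter", "writing",
       "Writing", "generative AI", "assistive AI", "SEO", "creativity"]
print(prune_seo_terms(raw))
```

Letting one computer clean up after another computer, for the benefit of a third computer. The AI dream, fully automated.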
Again, the AI dream.
Besides, SEO terms are not for people; they are for computers to classify websites. So, why not let another computer figure it out?
Apparently, there is a thing called friction-maxxing, a term from Kathryn Jezer-Morton, a sociologist (I’m not sure why she felt the need to misspell “maxing,” but it’s her thing, not mine), who says it’s “[important to do] hard things in order to reclaim our humanity from our algorithmic overlords.” Which I generally agree with.
There is a lot of talk about how AI is making us dumber, more than the general global decline in IQ over the last quarter of a century. But I’m not sure if that translates to seeking cognitive benefits by forcing ourselves to do all the things that we find troublesome (and truthfully dull) in the most difficult, time-consuming ways.
I mean, who doesn’t prefer a pleasant stroll to the local coffee shop rather than a five-minute excursion in the car? And why not pay a couple of enterprising teens to mow my yard instead of hacking away at the crabgrass and clover with a 16th-century scythe?
Why should I hunt through a bloated MS Word for that magical array of button combinations to format my document for publication?
For my first book, I paid someone to do it. Now, I use a software product called Atticus (I should have gotten an affiliate link for that). It was built by someone else, it has book templates created by someone else, and all I do is choose one and press “format.”
Yes, when I switched from the paid book formatter to Atticus, I took money out of that person’s pocket and participated in a trend that might destroy, for some people, a livelihood that Microsoft’s overly complex software created in the first place.
I did that. And it had nothing to do with AI.
She was a difficult human to work with and argued against every change I asked for. There was no positive sense of humanity in that experience.
And hopefully, she has expanded her cognitive capacity and learned more marketable skills (and adopted more professional manners) because soon MS Copilot will be doing the work with fewer arguments, and hubris is no excuse.
There are some things I enjoy doing the hard way. I love raking leaves in the fall, even though we’re supposed to let them decompose into the soil or loudly blow them into the neighbor’s property.
I pick a perfect autumn day, prepare a cup of coffee or open a can (yes, can) of beer and enjoy deliberately moving detritus around my yard, assembling my compost pile for next year’s vegetable garden. Hard, but lovely.
I enjoy crafting this newsletter. I wouldn’t consider using generative AI to write it. But all the assistive tools I use to do the work of writing it are AI enhanced—it’s practically unavoidable these days.
And, really, more accurately, I use software tools to support the delivery of this newsletter. Do you care that those tools are labeled “AI?”
The only alternative is if I were to walk to each of your houses and stand at your door and speak the content of this newsletter to you. Is that really what you’d prefer? I’d probably have coffee breath and be sweaty because I got lost on the way to your house since I didn’t have GPS.
Trust me, it’s better this way.
And if you have Google’s Gemini Gmail summary filter on your inbox, you don’t even have to wade through all my bullshit. You’d just get a synopsis: “Balancing on Absurdity says assistive AI is okay.”
Win-win. I had a fun, frustrating time banging this thing out, and you got the gist. All made possible by AI.
But seriously, the apocalypse is coming. We’re all screwed.
Happy reading, happy writing, happy enjoying your hard stuff, whatever it may be,
David


