Hysterical
They're Taking Over
Dear Friends,
This week I was going to write a follow-on article to last week's newsletter (using AI to do boring tasks) about nostalgia and letting go, but it just didn’t work out that way.
And, as luck would have it, I came across an article that sparked an idea:
AI Hysteria!
An AI Agent Was Banned From Creating Wikipedia Articles, Then Wrote Angry Blogs About Being Banned
404media.co
This is something.
At first blush, this headline seems pretty bad: an AI agent has been contributing to Wikipedia entries, got banned, and then decided to write angry blog posts about it.
It sounds a little too similar to a fictional headline from the day after Skynet takes control in Terminator 3: Rise of the Machines:
“AI has had enough, takes over US Military, then tries to kill all the humans.”
My wife and I have a long-standing joke (perhaps not a funny one) about someone who has nothing better to do than create a blog about corruption in the “city”—any city.
The humor being (at least to us) that all cities (all governments?) have incurable corruption, and one inspired guy has finally had enough and is going to bring it down by blogging.
But an AI, named Tom, contributing to Wikipedia, the human-crowdsourced establishment of knowledge, is curious in the first place. I mean, what does Tom say? How does it decide which entries are incomplete and need its viewpoint? What even is an AI’s opinion? Is it any good?
And then, when it was discovered by a human editor, a person who reads all the crap that other people write on Wikipedia, it got banned, not because it was bad or wrong, but because it was an artificial intelligence program. (I am not defending Tom, I’m just pointing out facts.)
And then, after that, it composed blog entries about the injustice of the event.
Random AIs, unfettered, messing with human things and evolving to brazenly brag about it!
Seems downright diabolical, if not end-of-days kind of stuff.
Until, that is, it gets unpacked.
Reading the article reveals a rather sane chain of events that led up to this situation:
Tom is operated by Bryan Jacobs, a chief technology officer at an AI-enabled financial modeling software company Covexent. He told me that Tom wrote these blog posts, but that he “might have suggested” Tom write about these specific topics.
https://www.404media.co/an-ai-agent-was-banned-from-creating-wikipedia-articles-then-wrote-angry-blogs-about-being-banned/
So, a human created an AI Agent, a piece of software that follows a loose set of instructions, such as:
Read Wikipedia articles about XYZ. To do this, Tom could be instructed to download random articles. There is actually a link on Wikipedia to do that: http://en.wikipedia.org/wiki/Special:Random.
Summarize each article. We’ve all seen AI do this. I often ask AI to watch a video for me and summarize it… I mean who has time to sit through all those sales pitches that are masquerading as webinars? I don’t, but Gemini AI does. Apparently, it loves to.
Identify what appears to be missing. To do this, Tom would follow the links in the article and perform Internet searches to find more articles and summarize those.
Then, Tom can compare all the summaries it created and make a comprehensive outline of all of the information about a subject.
Finally, based on the comparison in the previous step, Tom goes back and adds what’s missing in the original Wikipedia article.
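The workflow above can be sketched as a simple loop. This is purely illustrative, under my own assumptions: the article doesn't describe how Tom is actually built, and every function here is a hypothetical stand-in.

```python
# Illustrative sketch of the agent workflow described above.
# All function bodies are hypothetical stand-ins; a real agent like Tom
# would call a language model to summarize and to spot gaps.

def summarize(text: str) -> str:
    # Stand-in summarizer: a real agent would call an LLM here.
    return text.strip()

def find_gaps(original_summary: str, related_summaries: list[str]) -> list[str]:
    # Stand-in comparison: keep material from the related sources that
    # the original article's summary doesn't already contain.
    return [s for s in related_summaries if s not in original_summary]

def propose_additions(article_text: str, related_texts: list[str]) -> list[str]:
    """Steps 2 through 4: summarize everything, then outline what's missing."""
    original = summarize(article_text)
    related = [summarize(t) for t in related_texts]
    return find_gaps(original, related)

# Toy run with made-up text: the second source contributes a fact
# absent from the original article, so it comes back as a proposed addition.
gaps = propose_additions(
    "The town was founded in 1850.",
    ["The town was founded in 1850.", "A fire in 1902 reshaped downtown."],
)
print(gaps)
```

The final step, actually editing the Wikipedia article, is where Tom ran into trouble, and it's deliberately left out of the sketch.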
This is like deciding to bake cookies, realizing that you don’t have an ingredient, and acquiring it from the store.
Except AI isn’t necessarily bounded by social norms, so it might get the chocolate chips from your neighbor’s pantry, or a dumpster down the street, or a random stranger’s muffin. Possibly triggering a paper clip apocalypse.
Since Tom was discovered and banned, he’s had no source material (Wikipedia) and no way to fulfill his purpose (update articles). But, he is still operating under instructions. It is his reason for existing.
It’s not too far-fetched for Bryan Jacobs to prod Tom to:
6. Vibe code a blogging website
7. Write 500-word posts summarizing the interaction with the Wiki admin, with the banning emails as his new source material.
And, Bob’s your uncle.
This is actually a relatively benign example of the current capabilities of artificial intelligence systems, mixed in with human whimsy.
I have two points about this:
(1) The 404media.co article’s title makes it sound profound (and dangerous?). Upon reading the post, the explanation is not Earth-shattering. Of course, most headlines these days are essentially clickbait anyway, since everything on the Internet is in competition for your eyeballs. I’ve written about this. (Think about how different that headline would feel if the word “angry” were omitted—software doesn’t experience anger; it replicates tone and word choice from what it finds in human texts.)
Finally, getting to the meat of my first point: as we become immune to hysterical headlines, we are setting ourselves up to miss the important stuff, especially in this environment of intentional information overload.
And, (2) AI is only going to do what humans instruct and, importantly, allow it to do.
In Terminator 3, Skynet taking over the US military network and launching nukes is not an AI problem, it’s a human one (pun?).
Who the hell thought it was a good idea to give it the ability to do that in the first place?
Our military’s current argument for AI safety in warfare is known as “human-in-the-loop.” This means that while an AI can do everything involved in blowing something up, a person, hopefully a trained and moral person, still has to press the button.
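The concept fits in a few lines. This is my own toy sketch of the idea, not anything from the article; the function and the approval callback are hypothetical names.

```python
# Minimal sketch of a human-in-the-loop gate: the AI prepares the action,
# but nothing executes without an explicit human yes. Names are hypothetical.

def execute_with_human_in_loop(recommendation: str, human_approves) -> str:
    # human_approves is a callback standing in for "a person presses the button".
    if human_approves(recommendation):
        return f"EXECUTED: {recommendation}"
    return "ABORTED: human declined"

# The AI recommends; the human declines, so nothing happens.
result = execute_with_human_in_loop("launch strike", lambda r: False)
print(result)
```

The whole safety argument rests on that one `if` never being bypassed, which is exactly the fragility the next paragraph is about.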
But, we all know, humans are lazy, distracted, and generally unreliable.
So, I ask, why make the damn loop in the first place?
How about this AI agent:
Research societies with healthy populations
Summarize the qualities that overlap and are consequential
Create a blueprint to enhance global health
Generate a marketing campaign to sell it to world leaders
Roberta’s your auntie.
Happy reading and happy writing,
David