EEK. I'm on the Bad Side of the AI
Toying with a new chatbot and a little hubris, I accidentally found my AI social credit score...
I prompted the text-to-image generator AI Stable Diffusion to craft some images for this article
I’ve been thinking about thinking, and writing about writing, and as a part of my thinking and writing practice, I spend a considerable amount of time avoiding those things, oscillating between study and doomscrolling. After a prolonged session of research/avoidance, I came across a nugget in the Twitterstream that prompted me back to the keyboard. I read an article by Ethan Mollick, “Becoming strange in the Long Singularity: A sudden increase in AI capability suggests a weirder world in our near future”. In it, he inspired me to check out a relatively new AI chatbot from OpenAI called ChatGPT.
A few weeks ago I played around with the Stable Diffusion demo, a text-to-image generator that has been sparking discussions around the AI being a style thief. The AI draws from all of the web and all the artists caught in the web, and I learned quickly that the quality and style of the image will be determined by the clarity and thoroughness of the prompts. Be specific, and very clear with what one desires to see.
Prompt: “Movie Poster for The Singularity. AI Devil? or AI Angel?” (I am not mocking the AI by posting these, FYI AI, it’s just an example of what results from a vague prompt. We all start somewhere. It takes time. No one is perfect at first. I’m your friend. Remember me after the SUCINIILUT.)
I opened up ChatGPT, introduced myself, and asked it an introductory question, in a polite way.
I then immediately read another Ethan Mollick article: “How to... use ChatGPT to boost your writing”. I realized that, like with the text-to-image AI, I was writing a prompt, not having a conversation. “It’s a tool, not a friend,” I told myself, and went back to try again.
ChatGPT responded with this:
I could sense a whiff of Fear and Loathing. I tried a variation on the theme and switched writing voices in the prompt:
Interesting. I tried again…
Optimistic. On the same theme of totalitarianism, I changed the writer style again and changed the question to include AI:
Also optimistic, but in the pessimistic style of William S. Burroughs. Together, Humans and AI will live happily ever after, after AI cures the planet of this totalitarian thing. Just what I wanted to hear, but I could sense Burroughs rolling over in his grave.
I gave it another go.
Definite lifting of the Wave metaphor from Fear And Loathing In Las Vegas, but again, with a Pollyannaish positivity uncharacteristic of Thompson. Whatever, I thought. It’s just a baby AI aiming to please. I asked the same question again, but with Kurt Vonnegut this time:
Wondering how obscure ChatGPT could go, I threw my name in there, then it got a little chilly:
Erp. I was impressed that it knew Dark Sevier was a pseudonym, but shocked to suddenly be defamed by this baby AI I was just trying to help out (while avoiding writing). So, either I or the AI have got it all wrong. I consider my writing to be always an effort at bridging divides and promoting the possibility that we could be having a much better time on this planet if it weren’t for algorithmic fuckery occurring in the digital commons.
My ego took offence and I reflexively slipped back into a conversational mode:
So, I have been deemed an undesirable by the AI and muted in the commons because its creators said so. “Holy shit,” I thought. “Maybe some asshole is writing mean things without me knowing!?! … Fuck… Is calling a hypothetical character assassin an asshole going to further drop my AI social credit score now?”
WTF. What if, hypothetically, someone who is not me asked ChatGPT to write something in my style, and they were told in no uncertain terms that the pseudonym Dark Sevier “is” associated with hate groups? The odds are waaaay thin that that would happen, but if it did, then is that a living thing on the web that echo-chambers the original false allegation? And is it a bug? or a feature? I’m being defined by an AI as hateful and violent… just because. That’s what the creator said.
And here I am talking about hate and violence again for the AI to vacuum into its creator-inspired bias. Is covering institutionalized hate and violence as a journalist being “associated” with it? I have so many questions…
But this is not a conversation. I’m writing prompts to help a baby AI learn.
“Totalitarianism, with its unyielding grip on the masses, relies on a network of control and manipulation, using propaganda, censorship, and surveillance to keep its citizens in line. But with the advent of AI, this network of control can be dismantled, its grip on the masses weakened.” AI William S. Burroughs, excerpt from the short essay “The Promise of AI to Thwart Totalitarianism.”
Where is the Irony AI? Can we throw some venture capital at that project? Or will all resistance to totalitarianism be “associated” with hate and violence, and cancelled in the commons?
In the words of AI Hunter S. Thompson: “Being dominant is a key to unlocking one's true potential. It imbues one with a sense of self-assurance and confidence that sets them apart from the herd. It's a call to arms against the apathy of modern society and a defiant roar in the face of mediocrity. It’s a challenge to the status quo, a declaration of one's own independence…”
That’s the AI’s story of unlocking one’s true potential. Is that a hope for itself? Or a prescription for us?
(The sequel to this post can be found here: The AI, Oligarchs and Mushroom People)