Back in 1958, Charles Van Doren was encouraged, by the show’s producers, to cheat on that TV game show. Two years later, the proverbial feces hit the fan, befouling NBC’s peacock logo. The viewing public’s response led to changes in federal law in 1960. It was big news.
In Mr. Greenstein’s social studies class, we often discussed current events, and the Van Doren saga was no exception. If memory serves (admittedly, a big “if”), most of our class didn’t think it was a big deal. “Who cares? It was just a TV show,” pretty much summed up our opinions. Young Walter Greenstein, a new teacher, sat in back, watching and listening, but not commenting on our discussions. We went on to our next classes, took the bus home, and forgot about it.
When we arrived in class the next day, Mr. Greenstein announced that there was to be a quiz. We often had little quizzes in class, generally about the readings we were supposed to have done the night before. His quizzes were usually pretty easy (assuming, of course, that we had actually read the assignments).
On this day, Mr. Greenstein chose a seat at one of the desks at the back of the room, instructing the student who sat there (Billy something-or-other, I think) to work at the teacher’s desk. This was not unusual (we were a classroom overflowing with baby boomers; there were never any empty desks).
He handed out the purple mimeo’d pages of the quiz, and we got down to it.
The first question was really hard. I racked my brain, trying to recall the details of the reading. No luck.
Not wanting to waste too much time on just one question, I went on to the next. And the next. And the next. I couldn’t come up with decent answers to any of them. In desperation, I tried faking a few answers, but couldn’t squeeze out my usual load of bovine excreta. Finally, the time was up and we turned in our pitiful results, dropping them listlessly on the teacher’s desk. Billy plopped his onto the pile and went back to his seat. After a few miserable minutes, we went on to our next classes, took the bus home, and forgot about it.
The next day, in social studies, we got our grades. Everyone in the class—except Billy—failed the quiz. None of the questions on the quiz had anything to do with the assigned reading—so failure was guaranteed. Billy, however, had been sitting at the teacher’s desk, where the answer sheet had been left, conspicuously close to the blotter.
As I’m sure you understand, every student in that classroom came out of the experience with a changed mind in regard to ethics and the subject of cheating.
Looking back at Greenstein’s lesson from this very remote distance, I find that several details stand out. First, of course, was how effective his lesson had been. Then, considering the devaluation of the dollar over the past six decades, sixty-four thousand seems like a paltry sum to exchange for one’s soul (or, in Van Doren’s case, his job as a professor at Columbia University).
Second, it’s hard for us to see why everyone was so upset about a little cheating back then. Today, with AI and fake news, living—drowning—in a deliberate deluge of deceptions, we have become inured to shame.
Perhaps it was Mr. Greenstein’s lesson—or maybe something else—but when I learned a few months ago that several of my books had been used to train AI bots (presumably so that a machine will be able to write books in my “voice”), I was incensed. A law firm in Los Angeles was preparing a class-action suit on behalf of hundreds of writers, like me, whose work had been used—obviously, without permission—in this fashion.
I believe “violation of copyrighted material” is the basis of the suit. Which raises some interesting questions—such as: if a machine generates a book in my style (by whatever means), who gets to copyright it? Copyright, after all, protects the expression of ideas, not the ideas themselves—and expression is the very essence of what AI learns when it scans my books.
In her book (Who Wrote This? How AI and the Lure of Efficiency Threaten Human Writing), Naomi S. Baron prompted an OpenAI device with the question of copyrightability—twice—and the machine answered, first:
“As a large language model trained by OpenAI, I am not capable of holding copyrights or owning any form of intellectual property. My primary function is to assist users in generating human-like text based on the input provided me.”
The second time, it answered,
“It is possible for short stories written by GPT to be copyrighted, just like any other original work of authorship. In order for a work to be protected by copyright, it must be original and fixed in a tangible form, such as being written down or recorded. If a short story written by GPT meets these criteria, then it would be eligible for copyright protection.”
So much for getting a straight answer from a machine (apparently robots have much in common with politicians).
The New Yorker article (“Why A.I. Isn’t Going to Make Art,” by Ted Chiang) and “Art Without Intention” (on Lincoln Michel’s substack page, Counter Craft) also address some other questions about AI. Not specifically about the ethics of AI-produced art/literature, but about legal issues (“morality,” to appropriate—and misquote—Mae West’s comment about goodness, “had nothing to do with it”), and whether or not AI productions can even be considered “art.”
Setting ethics aside (which seems oddly à propos), and turning to the question of whether or not AI’s products are Art, Chiang’s New Yorker article is both more reassuring and not at all so:
“The task that generative A.I. has been most successful at is lowering our expectations, both of the things we read and of ourselves when we write anything for others to read. It is a fundamentally dehumanizing technology because it treats us as less than what we are: creators and apprehenders of meaning. It reduces the amount of intention in the world.”
Lowered expectations: exactly that with which we have become all too familiar. Clearly, the subject of AI, and of other forms of deception, is going to be with us for quite a while (much longer, I suspect, than November fifth).
Paid subscribers to these substack pages get access to a complete edition of my novella, Noirvella, a modern story of revenge told in the style of film noir. They can also read the first part of Unbelievable, a kind of rom-com that forms around a pompous guy who is conceited, misinformed, and undeservedly successful. Both books are sold on Amazon, but paid subscribers get them for free!
Also, substack pages older than eight months automatically slip behind a paywall—so only paid subscribers can read them. If you’re interested in reading any of them, you can subscribe, or buy them in book form (I’ve released two volumes of Substack Lightnin’ on Amazon).
Meanwhile, it is easy to become a paying subscriber (just like supporting your favorite NPR station). It’s entirely optional, and—even if you choose not to do so—you’ll continue to get my regular substack posts, and I’ll still be happy to have you as a reader.
Thanks, Gary!