Controlling the Means of Production
In the café adjoining the AI art gallery, I’m devouring a robot-made tofu banh mi. Perfectly crispy, savory and spicy, the sandwich is a wondrous feat of flavor and mouthfeel that I suspect could only have been possible through the use of taste preference patterns identified by my food logging app. If this superlative gastronomic experience is the result of a meal-prepping AI unobtrusively pulling dining data from my infome, I can’t complain. Far from it. My compliments to the data-harvesting chef.
As I crunch my way through the last third of the palate-enthralling sub, you plunk a book down on the table beside my crumb-strewn plate. You tell me that I have to read it and that you have to buy the bespoke algorithm that authored it. I’m not surprised by that last part. The sway story has over us is powerful, so once this one struck a chord (as it was likely engineered to do in the kind of person who frequents venues like this), of course you’d become keen to possess the algorithm behind it, to secure this inexhaustible source of fiction that suits your sensibilities.
You tap the book emphatically with your index finger as if to stave off my skepticism. Then off you go, back to the gallery—to no doubt follow through with the purchase you’re intent on making—leaving me to enjoy the last bites of my sumptuous sandwich and consider the AI-generated novella that has so thoroughly enchanted you. I could do with some after-lunch reading. So once the banh mi has exhilarated my palate one last time as a final morsel of baguette with bits of seasoned tofu and cilantro, I open the slim paperback, and it catapults me right into another kind of devouring.
With compelling interiority (impressive for a “writer” without any), the story centers on a young, earnest chronopsychographer eager to do impactful work at the cutting-edge firm she’s recently joined. Soon, her diligence, acumen and insights secure her a string of early accomplishments that garner praise from management but also pique the jealousy of a coworker, so much so that he surreptitiously sabotages her epiphany timer so it goes off too early, before her epiphanies are ready. Oblivious to this, the protagonist treats her inchoate epiphanies as fully formed and is puzzled when they don’t make immediate sense and can’t be easily used in her psychography projects. It isn’t long before she becomes frustrated and distraught—worried she is choking under pressure, fearful that if she can’t succeed at her job with the help of her epiphanies then she will never be successful.
After a woeful chapter, in which she mopes about and laments openly in the company of a friend, the young chronopsychographer figures out that she has become so despondent not because the future of her career is imperiled but because her epiphanies haven’t let her down like this before. Thinking, then, that perhaps she’s been overlooking something, she redoubles her efforts at the firm. Trusting her unfinished insights, she soon finds—no, makes—them applicable to her work. This turning point coaxes the reader into the realization that because incomplete insights require effort to understand and harness, they invite a mental engagement that ultimately lets them surpass the efficacy of full-fledged epiphanies, which in their crystallized form are suited to particular roles in a project and are therefore less flexible. Now jealous of the protagonist’s greater success, her unscrupulous coworker makes the same alteration to his own insight timer, hoping this will benefit his work as it has hers. But with only a myopic perspective on what’s happened, he doesn’t know what to do with the partially formed epiphanies he ends up with.
When I emerge from reading this engrossing philosophical and moral tale (notably—cleverly?—concluding with near-literal self-sabotage), you’ve been the owner of its author for a short while. Your receipt is a bright rectangle of creamy beige on the green-gray tablecloth, angled beside the bowl of potato salad you are now gleefully eating across the table from me.
Regardless of the merits of this technoliterary accomplishment, I think you’re playing right into Big Tech’s hands, paying an exorbitant amount (ostensibly to cover R&D costs) for haute entertainment that they’re billing as revolutionary artificial creativity. But I mention nothing of my suspicions that you’re being duped by a computer program purporting to reveal humanity (or at least designed to appeal to your humanity) when it has none. This isn’t the place for a discussion of my concerns.
As we carpool to work the next morning, you rave about the novella the AI author generated overnight, thrilled by the outcome of the first run you’ve had it perform. Though you only read a few pages of the new novella this morning, you’re enamored with their description of the main character’s job as a patience tuner, making careful adjustments to people’s attitudes towards delays, incompetence, etc. Redolent with infatuation, your ebullient words fill the car’s interior.
“There’s clearly a degree of superficiality to this,” I protest. “It’s all ideas combinatorially assembled with statistical models, not trenchant human thought refined through multiple drafts and feedback from editors.”
“So the process is different. That doesn’t mean the result can’t be as valuable,” you rebut. “Or insightful. So the AI doesn’t develop plot and characters and theme through deliberation and dialogue. It’s doing what none of us can, producing work based on a training set comprising countless literary masterpieces. Finding and expressing patterns in human nature this way could be just as valid. Its own kind of dialectic. Different, sure, but still meaningful—even trenchant.”
There’s a plausibility to what you’ve said that I can’t easily overturn, but I’m unconvinced. Something still feels wrong. My gaze goes to the thin strip of bright sky beneath the clouds in the distance. That could very well be our destination—your autonomous car ultimately carrying us into the morning light.
“So what if you look in a mirror that wasn’t made by you? It still reflects you,” you add, the analogy the indisputable icing on your argumentative cake.
Then, it’s as though you greedily eat that cake by saying, “And with this, I may never have to buy another book again. The algorithm can create all the literature I want.”
“Yes, want, but what about need?” I lob at you in attempted repartee. “Even if there’s no issue with the origins of all the work it produces, the direction you’re going with it is problematic. You’ll be losing the richness of human creativity.”
“Is it really that rich? For society, certainly, but do all those acclaimed books actually enrich me? It’s an understatement to say that I don’t enjoy mainstream artistic tastes. You know how hard it is for me to find literature worth my while, and now I have a guaranteed source of it. There’s no more harm in this than in any other literary poison people pick. We’re not in high school anymore. Adults have no required reading.”
“Okay, you’re right. It’s your information diet. You do what you like with it.”
And you do, to tremendous success, reaping varied benefits: your vocabulary expanded, mind refreshed by intriguing ideas, social circle widened by the inclusion of other artificial creativity early adopters and “book club members”—people with whom you share freshly synthesized novellas and hold weekly meetings dedicated to discussing this literature.
It’s the last part that worries me now. By finding people who have the same taste as you, and reinforcing that taste with them, you’re thickening the cultural insulation you’ve begun wrapping yourself in. Or is it worse than that—are you cocooning yourself from reality? Will you soon be reading these novellas at every opportunity, feeling an ever greater affinity for their worlds over ours?
When trouble does come, however, it befalls you in a way neither of us expected: one of your book club members becomes obsessed with the AI-composed novellas. She frequently sends you lengthy analyses, elaborating on points mentioned during book club meetings. Her messages often end with requests for any additional novellas you have or could generate. You alternate between politely evading and aloofly disregarding these solicitations for further discussion and more reading material, but she is undeterred by your lack of enthusiasm. So you need to temper—or better yet, quell—her oblivious persistence.
After this has gone on for a few weeks, you talk about the situation at length when we meet for one of our after-work dinners.
“I’ve considered leaving the algorithm running 24/7 just to make a whole library for her,” you tell me over biryani and saag paneer. “But then she’s bound to bombard me with dissections of all the new novellas.”
“I would’ve thought you’d get along with her,” I reply. “You both love the same literature.”
“But not to the same degree,” you tell me after hurriedly swallowing a mouthful of naan and saag. “She’s a full-on connoisseur who makes me look like a hobbyist—a casual enthusiast swilling what I like while she deftly peels through layers of terroir, cataloguing tasting notes.”
The urgency in your voice makes your irritation unmistakable, especially in the restaurant’s dim lighting.
“I’m guessing her… zeal is also off-putting to other book club folk.”
“Yes, to the point where some no longer attend our meetings, claiming other commitments—though they want me to continue sending them new novellas, which is telling.”
“Speaking of getting new novellas, if she’s so eager to get a hold of more reading material, why doesn’t she buy herself one of these algs?”
“The price tag.”
“Oh, a captive market situation,” I remark, going with the analogy that has come to mind. “Simple solution: make the consumer their own supplier. Tell her that you have to disband the book club—work getting busy and all that—then give her the AI Scheherazade. But in actuality, a different one.”
“So shell out for another algo.”
“Or make one.”
“Like… mod an open-source bedtime story bot and run it through a training set of my AI’s novellas.”
“Precisely. Tweak some output parameters yourself for a human touch and also refine it with some literary critic bot.”
“Nice, but it would be much easier just to ghost on her.”
“Sure, but I don’t know how well that will go over long term. Didn’t you say she was obsessed?”
“Good point. Who knows what will happen if I cut off her supply cold turkey. I mean, she’s even considered becoming a chronopsychographer or an epiphanist. That’s how into this stuff she is.”
“That definitely puts this in perspective. Best to get her off your hands by putting a story-spinning AI in hers.”
“I’ll work on that,” you say, tone resolute.
With the matter settled for now, you resume eating with gusto. I continue eating too but with less vigor. I have some misgivings over enabling this technocultural addiction, but if the choice is between your wellbeing and hers, it’s obvious where my loyalties lie.
Spooning some paneer onto my plate of biryani, I willfully overlook the fallaciousness of the dichotomy here, disregarding the possibility of a solution in accord with my allegiance to humanity. Perhaps, you might say, enacting the deficiencies borne of my training sets.