“On November 1, 1959, the population of New York City was 8,042,783. If you laid all these people end to end, figuring an average height of five feet six and a half inches, they would reach from Times Square to the outskirts of Karachi, Pakistan. I know facts like this because I work for an insurance company – Consolidated Life of New York. We’re one of the top five companies in the country. Our home office has 31,259 employees, which is more than the entire population of uhh… Natchez, Mississippi. I work on the 19th floor. Ordinary Policy Department, Premium Accounting Division, Section W, desk number 861.”

Calvin Clifford (CC) “Bud” Baxter, the protagonist of Billy Wilder’s 1960 film The Apartment, played by Jack Lemmon.
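Baxter’s trivia even roughly checks out. A back-of-the-envelope sketch (the 7,700-mile New York–Karachi figure in the comment is an approximate great-circle distance, not from the film):

```python
# Back-of-the-envelope check of Baxter's opening trivia.
population = 8_042_783               # New Yorkers, per the film
height_in = 5 * 12 + 6.5             # five feet six and a half inches
inches_per_mile = 63_360

total_miles = population * height_in / inches_per_mile
print(round(total_miles))            # 8441

# The New York-Karachi great-circle distance is roughly 7,700 miles,
# so a chain of New Yorkers laid end to end would indeed reach Karachi.
```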
There’s something irresistible about Billy Wilder’s comedies: their carefully produced, sharply scripted scenes, and their characters, who could belong to no other film.
They were among the first pop-culture formulas of the modern era, yet they have kept their value: after watching them, we feel replenished, entertained and enriched at once.
When formulas worked: Hollywood’s Golden Age
Billy Wilder’s mass-market imbroglios are not so different from the popular plays of the past (Aristophanes in the classical world, Lope de Vega in Spain’s Siglo de Oro, Shakespeare in the Elizabethan Golden Age, Corneille and Molière in France a little later): they entertained above all, but they also held a hilarious mirror up to the societies of their time.
I remember watching Billy Wilder’s The Apartment on my own a little after college. I was hesitant at the beginning but ended up watching it again with somebody else to test if I was alone in thinking it was that good.
Watching a comedy from 1960 set in Manhattan is also a portrait of professional work before the era of electronics and computers, one in which many “menial” jobs weren’t menial at all: people needed real skills to keep up with records and communications, and companies that could afford it filled entire building floors with armies of people dedicated to administrative tasks that would soon be automated. Decades later, the connection between place and economy (as theorized by Richard Florida, for example) is less apparent thanks to telework and unconstrained collaboration.
Yet, when we watch The Apartment, we sense that CC “Bud” Baxter, the office drudge at an insurance corporation played by Jack Lemmon, had no clue that the world was about to change for an entire category of repetitive work, soon to be executed by simple computer scripts. The cold, luminous open offices and cubicles of that era lacked “humanness,” and, in that sense, they feel as alienating as our society has become since.
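A toy sketch, with invented records and field names, of the kind of script that eventually absorbed the ledger work of a Section W:

```python
# Hypothetical premium-accounting pass: the sort of tally that once
# occupied a floor of clerks. All policy records here are invented.
policies = [
    {"policy": "W-861", "premium": 23.50, "paid": True},
    {"policy": "W-862", "premium": 19.75, "paid": False},
    {"policy": "W-863", "premium": 31.00, "paid": True},
]

collected = sum(p["premium"] for p in policies if p["paid"])
overdue = [p["policy"] for p in policies if not p["paid"]]
print(f"Collected ${collected:.2f}; overdue: {', '.join(overdue)}")
```

A few lines of this kind replace a day of manual tallying, which is the whole point, and the whole problem.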
When automation replaced clerks
As computing and automation became more sophisticated, experts nonetheless remained confident that Jack Lemmon’s character in The Apartment, a mere insurance clerk, was doomed to lose the job and salary that paid for the Upper West Side bachelor apartment at the epicenter of the romantic comedy-drama (Bud lends the place to higher-ranked co-workers so they can carry on their extramarital affairs with discretion, hoping to advance his career).
We get the feeling that the same company managers would retain their jobs in the future, whereas the office clerks would endure the transformation of their work. Things got bad so quickly that, in 1962, a reporter asked President Kennedy:
“Mr. President, our Labor Department estimates that approximately 1.8 million persons holding jobs are replaced every year by machines. How urgent do you view this problem–automation?”
To that, JFK responded:
“It is a fact that we have to find, over a ten-year period, 25,000 new jobs every week to take care of those who are displaced by machines, and those who are coming into the labor market … in particular industries we might get special structural unemployment. We have seen that in steel, we have seen it in coal, we may see it in other industries … I regard it as the major domestic challenge, really, of the ’60s, to maintain full employment at a time when automation, of course, is replacing men.”
Yet these questions already seem implicit in Billy Wilder’s movie. Any spectator senses empathy for the protagonist and his comrades as they try to get by despite the senseless character of their task and the alienation it nurtures: those unwilling to settle for being mere automatons, cogs in a machine that requires no thinking but only the regular performance of menial operations, suffered back then in the original “bullshit jobs.”
All-you-can-eat content and AI
Billy Wilder’s ability to trick us into finding romantic-comedy formulas fresh was the magic of an era in which cinema enjoyed an unrivaled status in popular entertainment. Now, The Apartment seems to be not only about the end of menial jobs but also a cautionary tale about the golden era of formulaic mass content, which has evolved into content farms churning out “units of production” to feed video streaming services.
Good movies and decent-to-good documentaries survive, though the exceptional ones are rare. Netflix promotes local audiovisual production across the world (thereby funding interesting projects that didn’t get the attention and backing they deserved). Yet a big chunk of the available content goes beyond good formulas and consists of serialized, filled-in templates with rushed production (and the frenetic ambient library music trying to confer a sense of purpose on it all).
The risk of the Golden Era of TV series is the one cultural critic Walter Benjamin saw in artistic expression when it is mass-produced and loses its freshness and uniqueness, what he called “aura.” Benjamin was writing about art in the age of mechanical reproduction in 1936, but the way Netflix offers classic TV series alongside newer, more formulaic productions puts his theory to the test.
In the pre-AI era of audiovisual production, things already converge toward commoditization: a diluted sense of meaning, narratives filled with platitudes, all blended with frenetic background music, perhaps to compensate for the lack of rhythm.
For example, when Netflix notices that certain viewers (say, middle-aged men from affluent countries) are interested in true-crime or drug-cartel documentaries and TV series, the platform mass-produces them with the zeal and predictability of the insurance clerks at their desks in The Apartment.
Content at double speed
Some technologists (most with vested interests in generative AI, or enthusiastic early adopters of such tools) argue that artificial intelligence is a democratizing force that will ultimately inspire people to do a better job at scriptwriting, investigative journalism, making art, designing buildings, improving industrial processes, designing better technical garments, or any field where we might seek open-ended counsel.
It’s too early to tell whether generative AI represents a challenge to the thousands of creative writers striking to improve working conditions they say have worsened as streaming media takes center stage. Generative AI lurks in the background of the 2023 Writers Guild of America strike as writers try to limit the future impact of ChatGPT in the sector by confining it to a research tool that speeds up or eases script brainstorming, not a replacement for screenwriters themselves.
The question put to President Kennedy about the impact of automation on clerical jobs acquires new meaning now: with generative AI improving rapidly as it is trained and refined by its own users, everyone should feel like Bud Baxter in Billy Wilder’s comedy.
With no AI yet in place for most creative tasks, streaming services already feel like content farms. When we watch a movie like The Apartment, most of us feel uplifted, have a great time, and learn something. But when we watch content built from generated templates, we feel depleted and exhausted, with little to learn from an experience designed to capture only our superficial attention, one that assumes we may tune out at any moment: all-you-can-eat content is often consumed at double speed, as if the audience assumed the podcasts and videos they click on hide a few nuggets of signal in a sea of formulaic noise. Will the platitudes in the content produced reach saturation levels anytime soon?
Now, generative AI tools threaten to change even the work that theorists considered especially difficult for machines to perform, given the need for improvisation, originality, and the ability to cross-reference newly acquired information: and so we read almost daily how ChatGPT or similar tools could change scriptwriting, computer programming, teaching and mentoring, writing articles and long-form text, or even law.
On the importance of knowing how to write
Anybody can grasp the potential of generative AI by querying chatbots in ways that help strategize thinking and writing. Instead of asking for a full text on something we need to write, we can first inquire about the general topic and the angle we are taking, receiving immediate responses that can save time and (more controversially) a big part of our thinking.
My oldest daughter recently missed a couple of classes of AP Chemistry, a challenging course, and needed to catch up on how certain molecules behave under unstable conditions. She had no book to rely on, and by the time she needed to study, she had only a classmate’s notes and the assignments her teacher had posted online, so she decided to query ChatGPT with the partial assumptions she had. Generative AI walked her through the parts she had missed in class.
Whether we use it to write, to study, or to come up with serial ideas that need reworking to turn their platitudes into actual insights, generative AI has already become a de facto tutor or assistant for many people who assume that the risks it represents (unreliability, discouraging critical thinking) are lower than its actual advantages.
There is little or no friction keeping ChatGPT users from turning the tool into counsel, or into a mentor-on-demand, and some technologists argue that mass adoption will likely change more than the way we perform searches and query for information on the web.
Y Combinator founder Paul Graham has repeatedly argued in his essays that learning to write is one of the most important skills for getting to know things, generating new ideas, and critically examining our own thinking, as well as reflecting on the thinking of others:
“I think it’s far more important to write well than most people realize. Writing doesn’t just communicate ideas; it generates them. If you’re bad at writing and don’t like to do it, you’ll miss out on most of the ideas writing would have generated.”
But, to Graham, writing also needs to be enriching and useful, as precise and meaningful as possible. In that respect, ChatGPT hasn’t been trained to elaborate unequivocal (and ultimately correct) theses but plausible ones: its texts sound about right despite their usual vagueness and, sometimes, outright falsehoods. Now, Graham suggests, students are developing lazy ways of getting by with the help of generative AI, and the strategy could be detrimental to their potential:
“Observation suggests that people are switching to using ChatGPT to write things for them with almost indecent haste. Most people hate to write as much as they hate math. Way more than admit it. Within a year, the median piece of writing could be by AI.”
Ability to write and ability to think
Writing is not an automatic process. It’s built on critical thinking and on skills anybody has to develop largely on their own. Clear, concise writing is hard; it requires an ear for cutting out verbosity, redundancy, and, yes, platitudes and things that “sound great” but don’t say much when analyzed (which describes much of what generative AI delivers). Advice on writing concisely from, say, George Orwell seems to warn against exactly the prose a tool such as ChatGPT produces:
“Probably it is better to put off using words as long as possible and get one’s meaning as clear as one can through pictures and sensations. Afterward one can choose—not simply accept—the phrases that will best cover the meaning.”
Paul Graham believes some people will decide there’s no need to make the effort to clarify what we want to say and then structure it into a text on our own, a skill anybody can benefit from:
“I warn you now, this is going to have unfortunate consequences, just as switching to living in suburbia and driving everywhere did. When you lose the ability to write, you also lose some of your ability to think.
“I don’t have the slightest hope of averting this switch. I often tell startups it’s safe to bet on laziness, and this is one of the biggest bets on laziness in history. The switch is going to happen, and we won’t know the consequences till it’s too late.
“I’m not warning about the switch to AI in the hope of averting it, but to warn the few people who care enough to save themselves, or their kids. Learn to use AI. It’s a powerful technology, and you should know how to use it. But also learn how to write.”
Reinvention in the era of generative AI
Like Graham, most analysts and commentators take for granted that, in ideal circumstances, the potential advantages of generative AI far outweigh its risks to society. Essayist Nassim Nicholas Taleb, known for his unrestrained commentary and for his work on black swan events, from which he derived the resilience concept of “antifragility,” puts it this way:
“Let me be blunt. Those who are afraid of AI feel deep down that they are impostors & have no edge. If you have a 1) clear mind, 2) a deep, not just cosmetic, understanding of your specialty, 3) and/or are original enough to reinvent yourself when needed, AI will be your friend.”
The risks are real, as Geoffrey Hinton, a key pioneer of the current technology and until recently a Google researcher, made clear when he left the company. “It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton said.
The technology’s role in spreading misinformation, making millions of jobs obsolete, or even matching human intelligence seems to be taken seriously by generative AI’s biggest advocates, some of whom are responsible for the technology’s rush to market.
OpenAI’s chief executive Sam Altman, a key figure in speeding the commercial release of OpenAI tools such as GPT-4 and DALL-E, is moving fast but seems sure that public opinion won’t tolerate a fast-growing technology firm breaking things nonchalantly in the process. The company’s pace has already prompted Alphabet to pursue a more aggressive strategy, releasing its Bard chatbot and other generative AI tools.
In parallel, Altman, who had previously run the incubator Y Combinator after Paul Graham’s retirement, attended a Senate hearing on May 16 in which he agreed with the members of the subcommittee on privacy, technology, and the law on the need to regulate the technology due to its potential harms:
“I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that. We want to work with the government to prevent that from happening.”
Subcommittee chair Richard Blumenthal, for his part, framed the hearing’s goal in his opening remarks:

“Our goal is to demystify and hold accountable those new technologies to avoid some of the mistakes of the past. Congress failed to meet the moment on social media.”
Calling for regulation, but who pays for the negative externalities?
To Sarah Myers West of the AI Now Institute, a policy research center, Altman’s proactive call for generative AI regulation could be a strategy to avoid more punitive legislation later. She believes his suggestions don’t go far enough and wouldn’t prevent the use of AI to profile people with the help of biometric data:
“It’s such an irony seeing a posture about the concern of harms by people who are rapidly releasing into commercial use the system responsible for those very harms.”
Meanwhile, the use of ChatGPT is already transforming universities, writes Owen Kichizo Terry for the Chronicle of Higher Education:
“The common fear among teachers is that AI is actually writing our essays for us, but that isn’t what happens. You can hand ChatGPT a prompt and ask it for a finished product, but you’ll probably get an essay with a very general claim, middle-school-level sentence structure, and half as many words as you wanted. The more effective, and increasingly popular, strategy is to have the AI walk you through the writing process step by step. You tell the algorithm what your topic is and ask for a central claim, then have it give you an outline to argue this claim. Depending on the topic, you might even be able to have it write each paragraph the outline calls for, one by one, then rewrite them yourself to make them flow better.”
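The staged strategy Terry describes can be reduced to a sequence of prompts. The templates below are a hypothetical illustration, not taken from the article:

```python
# Terry's step-by-step essay strategy as a prompt pipeline (templates invented).
def essay_prompts(topic: str, sections: int = 3) -> list[str]:
    """Build the ordered prompts a student would feed the chatbot."""
    prompts = [
        f"My essay topic is: {topic}. Suggest a central claim.",
        "Give me an outline arguing that claim.",
    ]
    # One drafting prompt per outline section; the student then
    # rewrites each draft to make it flow.
    prompts += [f"Write paragraph {i + 1} of the outline." for i in range(sections)]
    return prompts

steps = essay_prompts("aura and formula in streaming-era TV")
print(len(steps))  # 5
```

The point is not the code but the shape of the process: the thinking happens inside the model, while the student supplies only a topic and a final polish.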
ChatGPT is already taking control of the critical thinking and structuring needed for essays and scientific articles.
“The vital takeaway here is that it’s simply impossible to catch students using this process, and that for them, writing is no longer much of an exercise in thinking.”
In the new context, traditional assignments lose their value.
“Colleges ought to prepare their students for the future, and AI literacy will certainly be important in ours. But AI isn’t everything. If education systems are to continue teaching students how to think, they need to move away from the take-home essay as a means of doing this, and move on to AI-proof assignments like oral exams, in-class writing, or some new style of schoolwork better suited to the world of artificial intelligence.”
It may be time to query GPT-4 itself for strategies to keep generative AI from impairing students’ ability to write and structure discourse, now that companies offer essentially free AI assistants only to switch the business model later, exploiting a (literally) dependent market of users unable to write critically on their own.