Chat this, chat that. Roseanne Roseannadanna would wonder why suddenly everyone is buzzing about French cats. We’re already in Stage III of L’apocalypse des Chats. In Stage I, OpenAI released ChatGPT as a service via the web, and 100 million people signed up, making it the most rapidly adopted software application in history.
In Stage II, Microsoft updated its search engine Bing to include an AI chat feature based on ChatGPT, and the beta testers selected to bang on it had serious fun, none more so than Kevin Roose at the New York Times, with whom Bing transitioned into Sydney and fell in love. In Stage III, OpenAI has released an API, allowing other applications to integrate their AI chat service. Suddenly, Slack, Salesforce et al are bringing this new fascinator to their customers.
Scads of companies already use dumb chatbots in their interactions with customers, employees, and vendors. Most will race to roll these over to something smarter, despite the risk of what’s called chatbot hallucination—bots making stuff up. Microsoft is adding it to Windows and inevitably to Office. Startups are already offering to help harried writers of “content” meet deadlines, it’s catching on fast with middle school teachers (surely the most harried of all), and, it turns out, nothing is sacred: there’s a boomlet in using ChatGPT to ghost-write sermons.
If you find any of this—or all of this—confusing, here’s a reading list to help your comprehension, or at least help you toss the right jargon around in social settings, much as ChatGPT itself might do. First, let’s review what exactly this new form of chat is (spoiler: it’s a lot more than the old chat). Here’s Stephen Wolfram, self-admiring polymath, describing what Large Language Models such as ChatGPT do and how they work. It’s important to understand the basics, so you won’t be tempted to ascribe actual human-like intelligence to these applications:
- For many, the first question that comes up as they grok ChatGPT is, “Whose jobs are at risk?” After all, lots of workers earn at least part of their salary writing text: lawyers, marketers, screenwriters, and journalists, to name four species of crosshair inhabitants. Here’s a more systematic assessment of the risk by job type.
- There’s also the upside, however, that the very workers whose jobs would seem to be most at risk might fight back by using the same tools. Perhaps chatbot-powered humans can outperform either chatbots alone or humans alone.
- In fact, among white-collar writers, those with white clerical collars seem divinely inspired to seize the opportunity. In the beginning was the word, perhaps, but from there on out it’s Reverend Chatbot.
- If you prefer your salvation to come from therapy rather than theology, there’s AI-generated psychotherapy on offer.
- Not to be outdone by their fellow comforters of the afflicted, middle school teachers are out-chatting their own tech-native adolescent charges.
- And, naturally, the human content creators who currently grind out vast seas of blog posts, customer support text, online reviews, and inane memos are receiving offers they’ll be afraid to refuse—wield this sword or it will be wielded against you.
- Needless to say, this chatmospheric river of machine-generated text is generating serious concerns among those best at expressing concerns. Here’s linguist Noam Chomsky in the New York Times about the likely failure of AI chat to fulfill its apparent promise.
- Another linguist, Emily M. Bender (a computational linguist at the University of Washington), argues that we’d all be better off if AI technology weren’t deliberately presented as if it were increasingly human: better to have great machines acting like machines than machines affecting human behaviors that give the unwarranted impression of an interior life comparable to our own. She’s interviewed in New York Magazine by Elizabeth Weil.
- Speaking of U-Dub, UW computer science professor and 2022 MacArthur Fellow Yejin Choi and several colleagues recently tried to teach a Large Language Model to compete in The New Yorker’s cartoon caption contest. In a cleverly titled paper (Do Androids Laugh at Electric Sheep?) riffing on a Philip K. Dick novel (Do Androids Dream of Electric Sheep?), they conclude that AI falls well short of human capacity for humor.
- A more substantive worry than humorlessness troubles Atlantic writer Matthew Kirschenbaum, who warns that AI-generated text could flood the internet, getting high on its own supply as models continue to train on internet text that is increasingly generated by bullshit-happy chatbots. He fears that such text will drown us in the intellectual equivalent of gray goo, wrecking the utility of other internet features in the process.
- A little less apocalyptically, his Atlantic colleague Ian Bogost tests the depth and breadth of the gulf between what ChatGPT can do for journalists and what The Atlantic actually requires.
To relieve the concern you may now be feeling about a chat-infested future, I’ll leave you with a book of AI-generated pictures by Seattle artist (and accordionist) CT Chew. Use the Preview feature and choose to view full screen:
When you’re done with this list, I’ll post another.