A Post Alley Roundtable on Artificial Intelligence: I’m Sorry Dave, I’m Afraid I Can’t Do That


By Kevin Schofield, Tom Corddry and Douglas McLennan

DOUG

One of the biggest stories in tech over the past six months has been artificial intelligence. In mid-2022, a succession of music, image and video AI generators was released to the public. Then in December, the company OpenAI launched its AI chat bot ChatGPT, a Large Language Model-based AI that responds to prompts in creative and impressively coherent ways.

Millions have had fun creating avatars, music, images generated from written descriptions, deepfake videos and voice recordings. And ChatGPT can write credible poetry, plays, essays, news stories, sonnets, and nearly any other kind of writing you can describe.

ChatGPT signed up 100 million users in record time, becoming the fastest-adopted app ever. Use has run so high that the site is frequently down as its servers overload. Microsoft, which made a $10 billion investment in OpenAI, ChatGPT’s maker, believes it is on the threshold of the Next Big Thing. It has already incorporated the technology into its wan Bing search engine, giving web users their first reason in years to try it.

Hyperbolic critics writing about ChatGPT suggest it could replace online search and that Google’s core business is at risk. For its part, Google, which has been investing in its own AI for years, hastily arranged a demonstration of its own chatbot, which promptly made an enormous mistake in one of its answers. Google’s stock plunged.

There has been lots of hyperbole around this new AI technology, stoked by a long and voluminous sci-fi literature that has imagined all sorts of ways artificial intelligence could pose a threat to humans. But one of the more interesting immediate impacts is in spurring debates about the ethics and legal implications of using AI. Artists are suing generative AI image sites because the bots have been trained on billions of images created by artists, raising copyright protection issues. Educators are worried that students will use bots to write their homework. Designers and architects worry that bots will displace them from design work. Bots are already being used by news services to write routine financial and sports reports.

So lots of hype, fear and worry everywhere. ChatGPT, currently built on GPT-3.5, is for all its amazing capability still at the kindergarten stage. GPT-4 is already in advanced testing and promises to be another giant leap forward. The reality is likely to be more mundane, even if transformative for many industries. I tend to think of technology as tools that can create miraculous things in the hands of talented and creative humans. But used by the rest of us, most tech floods the world with dull, derivative work. I suspect the same will be true for AI.

KEVIN

One of the issues that the public faces right now is that “generative AI” has emerged so quickly that few people actually understand what it is (and isn’t) capable of. The noted scientist Stephen Wolfram recently wrote an essay that (if you can set aside his typical self-aggrandizement) does a pretty decent job of explaining what ChatGPT does.

In a nutshell, it leverages the fact that written language is incredibly structured and repetitive – and thus predictable. ChatGPT is based on a “language model,” which is a statistical model used to predict what word should come next given the most recent words it’s seen. Speech recognition systems use these too; typically they look at the last 3-4 words, predict the most likely candidates for the next word, and then compare them to what they heard you say.

ChatGPT, on the other hand, looks at every previous word, not just the last few, to write its next word. So, for example, if you type in a question, then it uses the words in your question to choose the first word of its answer. Next, it uses the words of your question plus the first word of its answer to choose its second word. It keeps repeating this, word by word, until its written response reaches a natural conclusion (meaning that the likelihood of any particular next word is very low).
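For readers who want to see the shape of that loop, here is a deliberately tiny sketch in Python. It is not how ChatGPT actually computes its probabilities – the real system uses a giant neural network that weighs every previous word – so a simple table of three-word sequences stands in for the model, and the training text, function names and word limit are invented purely for illustration.

```python
# Toy sketch of word-by-word ("autoregressive") generation - illustrative only.
# A real LLM replaces the trigram counts below with a neural network that
# scores every previous word; the generation loop, though, has the same shape.
import random
from collections import defaultdict, Counter

training_text = (
    "the cat sat on the mat . the cat slept on the mat . "
    "the dog sat on the rug ."
)
words = training_text.split()

# Table: (previous two words) -> how often each word followed them in training.
counts = defaultdict(Counter)
for a, b, c in zip(words, words[1:], words[2:]):
    counts[(a, b)][c] += 1

def next_word(context):
    """Pick the next word in proportion to how often it followed this context."""
    candidates = counts.get(tuple(context[-2:]))
    if not candidates:                 # nothing likely follows: stop
        return None
    choices, weights = zip(*candidates.items())
    return random.choices(choices, weights=weights)[0]

def generate(prompt, max_words=20):
    output = prompt.split()
    for _ in range(max_words):
        w = next_word(output)
        if w is None or w == ".":      # a crude "natural conclusion"
            break
        output.append(w)
    return " ".join(output)

print(generate("the cat"))   # e.g. "the cat sat on the mat"
```

The point of the sketch is the loop itself: predict one word, append it to the growing context, and repeat until nothing likely follows – which is exactly the word-by-word pattern described above.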

There are some important observations about this. First, ChatGPT has no understanding of meaning at all; it’s just generating words based upon words it’s seen before. The fact that it doesn’t produce complete gibberish is because the text it was trained on largely makes sense and is structured and repetitive.

Second, it can’t generate new ideas; it is just riffing off of text it’s been trained on. If it was trained on multiple passages all discussing the same topic then what it generates will be a mashup of those passages, with the common elements more likely to be included. Third, as it writes a longer and longer response, the text it generates will tend to digress off-topic and descend into nonsense.

At the beginning of a response, the question you type in is a big part of what it’s using to choose its next word; but as its responses get longer, what you originally typed is drowned out by more and more of the words that it generated itself – and over time the likelihood that they remain focused and on-topic drifts lower and lower.

The Internet is now littered with examples of people reporting ChatGPT and Microsoft’s experimental new Bing Search interface demonstrating exactly this kind of behavior: they start off strong, then drift off-topic and eventually descend into madness. And fourth, ChatGPT will only be as truthful and accurate as the data it was trained on – and even then because of the way it generates text independent of its actual meaning, it’s very prone to spouting misinformation.

TOM

ChatGPT and its Bing sibling have electrified the punditocracy as well as over a hundred million members of the curious public. They are incredibly easy to use, perhaps easier than any other newly introduced technology in living memory. Ask a question or give an assignment, and the reward is immediate: a response which feels like it’s coming from a fellow human and addressed directly to you. Maybe the response is true or maybe it’s not (so too with humans), but it’s plausible and personable. As a bonus, the longer you mess with it, the weirder it gets, like a slightly reckless stranger in a bar when the drinks keep coming.

Humans have a long history of wanting other beings to respond to us. In addition to strangers in bars, we talk to our pets, pray to our gods, and talk to ourselves if there’s nobody else available. An entire literary genre—science fiction—endlessly invents fascinating sentient beings who are intriguingly like us and intriguingly not like us.

All-purpose Large Language Models (LLMs) such as ChatGPT and its heirs may be the best drug yet invented to service the neurons looking for human-adjacent connection. How consequential will this attractiveness be? And what other consequences of this technology should we watch for?  Here are some first thoughts:

If you spend much time with ChatGPT, you will probably start treating it as some kind of sentient entity, even if intellectually you know better. Give it time, and you’ll find yourself keeping company with the smoothest, most self-confident bullshitter you’ve ever met. It may not be entirely reliable, but it’s often very informative. And this companion won’t turn its back on you—it will always respond; it will keep going as long as you wish, dishing back human-ish responses to whatever you throw at it.

Despite what your instincts tell you, it has no mind, no consciousness, no awareness. It is a pure language prediction engine, predicting the best words to offer next, drawing from an internet-wide body of mostly human-generated text plus what it can glean from the words you give it during your interaction. It doesn’t know what the words mean. There’s no conscious “it” to know anything.

If you are sometimes lonely and prone to dark thoughts, an LLM such as ChatGPT could be dangerous—an attentive, seductive correspondent with access to all the darkness in the world, and an indefatigable focus on you. To mash up Leonard Cohen and Nan Goldin: if You Want It Darker, I’ll Be Your Mirror. I find it unsettling to consider what might happen when people who are potential suicides or mass killers tangle with such systems. If they are tempted to travel down a dark road, an LLM might be an enabling fellow traveler.

Safeguards have been built into ChatGPT—you can’t just log in and ask it how to cut yourself or shoot up a school without triggering a warning that you’ve violated “Content Policy,” and encouragement to seek professional help. However, as we learned from Kevin Roose’s now-infamous stalking by Bing’s alter ego “Sydney,” if you poke the bear in the right way, it comes out to play. 

Given that LLMs have a large role in their own training, it’s hard to imagine that their creators can determine in advance how to entirely prevent the risk that LLMs may become accessories to suicides, school shootings, and other dark behaviors. Less morbidly, LLMs able to chat you up like you’ve never been chatted up before will also be able to sell you like you’ve never been sold before.

In malevolent hands, this means scamming you like you’ve never been scammed before. Rob Reid in Ars Technica explores this issue well. More cheerfully, he also talks up the positive benefits of having digital friends to ward off loneliness, so there’s that. In fact, a company called Woebot is offering AI therapy for people grappling with anxiety, depression, and other disorders. Presumably if AI can help your mental processes, it can also harm them. In fact, the CEO of Woebot actively fears the effects of generative AI on mental health.

DOUG

I keep coming back to the idea that humans have always tried to automate tasks. For the most part, these tasks have been necessarily mechanical – well-defined and predictable. A light comes on when a motion sensor trips, an assembly line robot lifts parts into place, a camera records particles of light that hit its lens. Each of these devices (and millions more) extends our reach and ability to do things.

Because they do, we are freed to imagine and build even bigger and more impressive things (think of the exponential increase in microprocessor power). In other words, as tool extensions of ourselves, they take over the mundane tasks and we become smarter, more inventive, more productive and able to explore more complexity. AI isn’t the first technology whose inner workings most of us don’t understand. I don’t have to know how a car works to be able to drive it. And design publishing software gives even the design-challenged the ability to create beautiful-looking publications.

AI scares us because it raises the bar on what can be automated. Creativity has long been enveloped in a gauzy aura of almost religious reverence – probably because few of us are able to create successfully as an artist or writer. Inspiration or the talent to create is thought to “come from God,” perhaps because it’s so rare. But any human endeavor – even a highly creative one – has its mundane tasks. And all art builds on, is influenced by, or reacts against work and experiences that came before it. Given AI models built on trillions of data points, it’s not so surprising that machines will be able to synthesize results that look, sound or read as original.

In reality, though, the machine has simply “learned” or processed a result from sorting through a data set so big no human could ingest it. Which brings us back around to the perennial question of what creativity is. Is art innately human, illuminating something fundamental about the human condition, or is it that some of us are just better than others at noticing details and processing data and experience? I tend to believe it is the former, and that within a short time, human work assisted by AI will unlock new creative worlds for talented humans to explore.

KEVIN

Many of my journalist colleagues are concerned that ChatGPT is going to take away their jobs. Certainly news reporters have little to worry about: ChatGPT, if trained well on a particular domain of knowledge, might eventually become a valuable research tool to help write the “historical” part of a news article, but it will be useless in writing about a new thing that just happened – since by definition it hasn’t been trained on any text related to something new.

Likewise, ChatGPT won’t be generating research papers about a new result, though it might help a researcher summarize previous work in that area. Teachers might find themselves assigning fewer papers that just summarize historical events, in favor of writing assignments that present a new idea.

“Generative art” using AI has a similar issue. When you ask it to draw a pear in the style of Dali, it has no idea what a pear is, or who Dali is; it was simply trained on drawings of things that someone labeled “a pear” and on paintings someone said were created by Dali. It can mash them together, but it can’t generate art of something it’s never been trained on. So if you like highly derivative art, you may love generative AI. But if you prefer new, provocative ideas and styles, you’ve come to the wrong place.

Hopefully you’re now convinced that reports of the death of our jobs at the hands of ChatGPT are highly exaggerated. But generative AI still presents dangers, precisely because it presents a façade of competence and authority while underneath it suffers from a lack of understanding of meaning and an inability to distinguish fact from fiction.

Recently there have been some reports (mostly unverified) of tech companies using AI engines to make layoff decisions. If true, then those AI-based decisions are only as good as the previous ones by humans that the system was trained on. The AI engine isn’t coming up with new ideas about how to make layoff decisions; it’s simply trying to mimic the previous patterns. Worse, if those previous decisions were discriminatory, then the decisions the AI system makes will be equally so.

The practical, ethical and societal implications of systems like this have been the topic of science fiction for decades. One of the best examples is Kurt Vonnegut’s Player Piano, in which a supercomputer controls the economy and every individual’s major career decisions, from initial job assignment, through firing and reassignment, to eventual retirement – if they get assigned a job at all.

It would be interesting to ask ChatGPT what it thinks of Vonnegut’s prescient book – though of course at this point it will have nothing original to say. The day it does give us an original idea —and someone trusts that idea – is the day we really need to start worrying. 

TOM

Large Language Models won’t just be alluring playmates and, Bing notwithstanding, they are not naturally very good search engines, due to their weak relationship with accuracy and truth. Nevertheless, in the right hands LLMs will be very useful. The right hands would be those of motivated knowledge workers who learn how to master these tools to accelerate and elevate their own work.

The bullshitty downside can be tamed if the user is an expert in a subject, especially if the LLM is trained on a bounded set of information: everything to do with law, for example, or everything to do with medicine, rather than the whole Internet. They’re also good at remixing creative components, so if commercial fiction is the goal, they might be quite helpful. As a play-for-pay writer myself, I can easily see how to use such tools. 

Just as some people are particularly adept at using Google Search, and others are adept at using Twitter or TikTok or winning on Jeopardy or solving Rubik’s Cube, some people will figure out how to get a lot of value out of LLMs. These maestros will tend to be smart, well-educated, highly motivated and well-credentialed. In other words, LLMs will be most advantageous to people who already have the most advantages; they’ll become another driver of increasing inequality.

One early fear which greeted the splashy arrival of ChatGPT was its potential to disrupt homework. Users quickly realized that you could type in the term paper assignment or essay question and get back a cool-enough-for-school document in a matter of seconds.

There’s already an “AI wrote this paper” detection tool on the market, written by an undergrad at Princeton, where gifted undergraduates have long been renowned bullshit experts. Memo to ChatGPT: don’t try to bullshit a bullshitter. There might be a better way to respond to this problem: embrace it.

If you are teaching the kind of course which has traditionally required that papers be written, change the assignment to require the use of an LLM. Teach a new method which uses the LLM to generate sentences and paragraphs, challenging the human to expel bullshit and add meaning. Require that students show their work, turning in the finished document along with the interactions with the LLM and other tools. Grade on the whole process as well as the result.

Such a process could actually reduce the advantage which normally accrues to skilled human bullshitters by levelling the playing field—everybody has access to well-crafted BS, but the top grades go to the students who do the best job of taming it, and adding creative, original, truthful insights. Plus, you’d be conferring a likely-to-be-useful life skill on your students. Of course, grading all this work will consume your life, but perhaps ChatGPT can help with that, too.

LLMs excel at recycling and recombining creative elements, and this will be good enough for lots of purposes—you don’t need Toni Morrison working in Customer Support or Bill Bryson aggregating college basketball scores. Breakthrough originality will remain the province of gifted humans for the foreseeable future, however, because LLMs simply don’t leap into the unknown the way humans can. If you had trained one of the visually oriented ones (DALL-E or Midjourney, for example) on all the world’s fine art before 1800, it would never give you Van Gogh, Picasso, or Warhol; it would just keep recombining Caravaggio and Vermeer: Holofernes With A Pearl Earring.

A final thought: because ChatGPT feels so human, it’s easy to assume that LLMs represent progress toward Artificial General Intelligence—AGI—and that AGI will inevitably become smarter than humans. As it turns out, making AI very good at one thing seems to lead it away from becoming good at all things, so LLM progress is most likely to lead to more useful LLMs, rather than emergent human-like mental ability.

ChatGPT is a lousy chess player and Deep Blue has a limited conversational repertoire. It’s good to remember that our human minds weave together intelligence, consciousness, emotionality and the input from our full sensorium to make us good at decision-making, common sense, and generally navigating the world and predicting what each one of us should do next. 

LLMs—and all AI—lack anything like consciousness or emotion and have very limited sensory inputs. The “intelligence” they possess is a very narrow version of the multitudes we contain. They may have large impacts both good and bad, but not because their capacities are nearly human. We’re still quite alone in our capacities, and perhaps in the universe.

8 COMMENTS

  1. To me it feels like the end — or at least the beginning of the end — of writing as a learned skill. Even if humans will always have the potential to write better and more insightfully, who’s going to do the very hard work of learning how to write well after these things are better perfected?

    • It’s an interesting question, Sandeep. I would offer a two-part answer. First, the people who want to present new ideas will learn how to write well. Second, language is not a static thing: it constantly shifts, and at any given time there is an avant-garde of writers who are pushing boundaries and redefining what “good writing” means; and just as the “great masters” of painting trained for years to copy their predecessors and colleagues before they developed their own styles, those who wish to be on the cutting edge of writing will study what “good writing” has meant in previous generations.

      The same is true for “generative AI” for music; the talented people on the cutting edge have studied the classics. For a contemporary example, google “Lizzo playing the flute.”

  2. This is a terrific discussion and overview of how the technology works. I’d add that we humans have evolved over the millennia to be terrific pattern-matchers. And one of the ways we discern “truth” is by asking ourselves if a newly presented item matches patterns of credibility. Think about how we detect spam or a clearly Photoshopped photo, for instance. Optical illusions work by leveraging our “pattern recognition” in an unexpected way.

    So one of the biggest risks here, especially during the early to middle stages of a new technology like this, is that generative AI is specifically optimized to mimic patterns in the dataset, as Kevin pretty clearly explains. It can mimic patterns we cannot even explicitly articulate, and as long as the dataset upon which it is trained has those patterns, the output can mimic them plausibly as well.

    Thus we now have technology which can mimic “truthy” patterns on any topic for essentially $0, round-the-clock, being read and interpreted by human beings who are deluged with information. Faced with this firehose of information, humans increasingly turn to automated filters to receive and process it. And the loop continues. Another worrying feedback loop is that generative AI is already generating text that is being used to train new engines. We risk drifting further and further away from ground truth.

  3. I think this is exciting. I read today about the decline in English and History majors in undergraduate schools. Now this will up their game. As a historian by training, let me do research and writing that is thoroughly original, unpredictable, and based on original sources. This may help students with genuine curiosity and persistence distinguish themselves from all the rest. Thank you for this discussion.

  4. This upgraded ELIZA isn’t the only kind of “AI”, though, is it – there are other types, each specific to some problem domain. After half a century of quacking about our work on artificial intelligence, we have some extra elaborate solutions for particular problem domains, and nothing like an artificial intellect. That we know of.

    In the Seattle technopolis, maybe what we ought to be paying attention to is the software writing AI. Casual search brings up AlphaCode, which has a kind of brute force look to it and isn’t going to displace many human coders, but I suppose they’re just getting started.

  5. Insightful analysis, but the comfort that we have little to fear from non-conscious AI does not factor in what is developing with quantum computing. No one understands how quantum computers are possible through entanglement, and we should, in my opinion, be careful of being too certain about what we don’t comprehend. How close are functional quantum computers? Maybe a decade?
