Journalism Is in Dire Straits. It’s About to Get So Much Worse


Nearly everyone reading this has played with an AI chatbot like ChatGPT. We’re in the midst of the fastest adoption of a technology in the history of the world, already with 900 million daily users. Contrast that with the spread of the World Wide Web, which took 10 years to reach 100 million users (in 2001). It’s like magic to be able to ask any question and in seconds have a detailed answer. If you have more questions about the answer, just ask again and the chatbot will be more specific. For many of us, chatbots like ChatGPT or Perplexity or Claude have replaced Google Search as the way we find things online.

And good riddance. Over the years, Google choked its search results in jungles of weeds in the form of ads and “sponsored” sites. When every search turns up lists of millions of “results,” and you have to hack your way through the overgrowth to get to what you’re looking for – not even the answer, mind you, but websites where you might find an answer – you are experiencing what the tech writer Cory Doctorow calls enshittification.

Google has responded by adding AI-produced blurbs at the top of its search results that attempt to summarize answers to your queries. The company also seems to have dramatically trimmed the ad-tech thickets. And there is now an “AI Mode” option, Google’s own chatbot, that will try to answer your question directly.

So the death of the traditional search engine is at hand. It’s worthwhile for a moment to remember how revolutionary Google Search was when it launched in 1998. In the early days of the web, with millions of webpages coming online and information being freed from its analog print prisons, just finding useful sites was a problem. Companies like AOL and Yahoo! were founded on the notion of making sense of the web by organizing it. Their answer was to create lists of useful sites – guides, really – that recommended where the good stuff was. To do this, they hired armies of humans who scoured the web daily.

Then Google came along, and the process was automated by keywords and algorithms that measured how people were using the web. If more people visited a site, it probably had more useful content and ranked higher in Google’s search results. Of course, website owners wanted to rank higher, so a new multi-billion-dollar industry was born: Search Engine Optimization, which is just a fancy way of saying find-out-what-Google-Search-looks-for-and-make-yourself-look-like-that.

Wonder why most websites look and act more or less the same way? It’s because to get visitor traffic, you have to be visible in Google’s search results, meaning be on the first page. Show up on even page six or seven (let alone page 1,387,289) and you don’t have a business. So Google has enormous influence over how the web looks and works, and thus over what we see and hear, and thus over our economy.

Some of this was good – the web is safer and easier to navigate because of Google’s standards. But it also gives enormous power (and profits) to a single company, which it has used over the decades not so much to continue to innovate as to buy up promising technologies that made the company even richer. Among these acquisitions were YouTube and ad-tech companies that fed off Google’s search muscle to allow it to dominate web advertising. This market dominance is the basis for big government antitrust suits against the company in recent years.

News organizations have had a fraught and conflicted relationship with Google over the years. Journalism always has been something of a bait-and-switch business. Not the news itself, but the business that supports the gathering and making of it. The real business, the one where the real money changes hands, is selling ads, not selling news. So advertisers are the real customers, not readers, and advertisers aren’t necessarily interested in paying for the best news or the most news, but the biggest number of readers or viewers they can buy access to. This has an influence on the kind of news that’s produced.

Add to this that core to the traditional print newspaper model was to bundle stories and features across a variety of topics and interests to aggregate readers — comics, sports, puzzles, games, recipes. But Google and the social networks stripped away that bundling strategy, sending readers directly to specific content rather than having to go to a publication’s front page. In the digital age, no publication “owns” the loyalty of its readership in the way it once did.

Social networks (that’s a whole other story) and Google are the biggest sources of traffic to news sites. At its peak, traffic from all social networks accounted for 20-30 percent of news-site visits, while Google Search on its own directed another 10-20 percent. News organizations, whose traditional business model ran on ad sales (subscriptions in the print era accounted for only about 20 percent of revenue), had virtual monopolies on print ad sales in their markets, and were desperate for digital eyeballs in the new economy, particularly since online ads sold for a fraction of the cost of print ads.

So journalists optimized their sites like everyone else, and when ad revenue continued to tank, they looked at the billions of dollars Google and Facebook were making and started demanding the companies pay them for showing links to their stories. That’s a neat trick – news orgs desperately trying to get their work featured high in Google Search while insisting the company pay them for the privilege of being there.

If journalism stories had dominated web traffic, the newsies might have had leverage. But journalism accounts for only about 3-4 percent of overall web traffic, so Google, while making some small gestures towards supporting news gathering, largely ignored the established journalists. Facebook went further, de-prioritizing news links on its site.

An added complication is that Google and the other platforms continually tweaked their proprietary algorithms for what would be shown to their users, claiming to “enhance” user experience. Whether or not “enhancement” was the reason, the companies fiercely guarded their algorithms so publishers had to play a constant guessing game of trial and error in order to get their work seen.

Over time, algorithmic tweaks made it obvious that paid placement got more reach, and eventually the only way most sites could get their work in front of audiences was to buy better placement. No surprise, then, that Google and Facebook had their fingers on the toggles determining content spread. They owned the ad platforms and ultimately controlled both the buying and selling of ads (the cause of even more government antitrust suits).

That’s the power Google Search has had until now – to both make and break businesses and entire industries. And this is the landscape in which news publishers have struggled.

An Even Bigger Threat

Believe it or not, this dismal state of affairs may soon be regarded as “the good old days.” AI has taken aim at Google’s search business model. Sending users to sites where they might find the answers to their questions is demonstrably a worse user experience than simply answering the question a user asks. And since Google had prioritized making money from advertisers over making its search business a better user experience, you have a technology ripe for disruption.

For citizens of the web this looks to be a good thing. For publishers, though, it may well be an extinction-level event.

If web users can get answers from chatbots directly, why should they visit websites? Since Google introduced its AI summaries atop search results, publishers such as CNN, ABC and others have reported visits from search results are down 30-40 percent (though Google claims this isn’t true). Less traffic equals lower advertising revenue.

And this is only the beginning. Agentic and generative AI promise custom-served content from across the online world, in which web browsing is replaced by summaries and mix-and-match news that evolves with your interests. You can see examples of this now.

For example, Perplexity is an AI startup founded in 2022, currently valued at $20 billion. It makes a chatbot that will answer your questions from real-time content on the internet and link you to its sources. It will summarize news taken from multiple sources, potentially providing wider context and detail than any one report. Delve into those sources, however (which apparently few users do), and alongside traditional news sources such as the New York Times, Reuters and NBC, you’ll often find sites you’ve never heard of. How accurate is that?

As a frequent user of Perplexity, I love the contexts of the answers, the suggested follow-up questions at the end of stories, and the explanations of how/where the summaries came from. And I like that I can interrogate the summary for what interests me about the story (which you can’t do in a traditional news story). If my questions extend beyond the initial summaries, Perplexity will go out again and interrogate other sources and background histories. This makes consuming the news an interactive experience it never was before. It’s like having lived your whole life in black and white, and now you see color.

But none of the reporting that made this possible is paid for by my consumption of it. I no longer have to go to The Times or CBS or AP websites to get the stories. And even if I did, I’d likely see a more one-dimensional version than the synthesis Perplexity offers.

Publishers, perhaps in anticipation of where the technology is going, have pivoted away from relying so heavily on advertising and begun emphasizing reader subscriptions. The New York Times, which had been mired in the advertising-first model, has aggressively built up its digital subscriptions. It currently has about 12 million digital subscribers and 600,000 print subscribers. At its all-time high in the print-only era, the Times topped out at about 1.2 million print subscribers in 1992. So the Times now has the greatest reader reach in its history and a bulked-up newsroom to match. And for the first time, income from subscriptions tops that from advertising.

Alas, few regional or local news organizations have seen corresponding success. The Seattle Times, one of the more successful regional newspapers in America, currently has ~77,000 daily print subscribers and another 91,000 digital-only subscribers. This is far below the 232,000 print subscribers the company reported in the mid-1990s. And its slimmed-down news offerings illustrate the economics.

The real boom in subscriber-supported content is on micro-publishing newsletter sites like Substack, Ghost, Patreon and Medium. Podcasting, both audio and on YouTube, has become a flourishing business. Popular podcasts and Substacks now attract audiences that dwarf those of traditional cable news channels and newspapers. And owners of these publications are currently making handsome profits. Some 50+ Substack newsletters earn $500,000 or more per year. The biggest podcasts earn 8-figure annual revenues. Talent such as Stephen Colbert will likely earn more as some version of a podcaster than he currently makes for his show at CBS.

It’s probably fair to say that there has never been so much news content being produced, much of it more informed and better conceived than in the “Golden Age” of mass media of the 1960s and 70s. But there’s also never been in our lifetimes such a fraught media landscape filled with misinformation and uninformed opinions that get traction more because they are provocative or incendiary than because they are true. The big problem for the consumer and our broader culture is sorting out what’s true and what isn’t, and this has become a threat to our democracy.

It’s no longer a spicy take to observe that the traditional broadcast network TV model is dead. And the business model for regional news is on life support. Though the Substack and podcast economies are currently thriving, they’re already showing signs of oversaturation and decline, as so many models before them have. Besides, the Mosquito Fleet-ization of the news is unlikely to provide the institutional muscle it often takes to hold powerful forces to account.

So if technology and audience habits are destroying traditional news media, and what’s replacing it doesn’t seem up to the job, what could be?

Given that the power of AI has been doubling every 3-6 months or so, it doesn’t take much to imagine a new kind of journalism that could upend the traditional reporter/editor model and even the very definition of what constitutes traditional news. Instead of discrete narrative stories, imagine news as dynamic, 360-degree, constantly-updated data models built around events or ideas or trends. Such models would have the ability to aggregate and compile sources that human reporters would be hard pressed to match. Users could interrogate “stories” and interact with them, and fact-checking could be part of the data process, potentially cutting the legs out from under purveyors of incendiary misinformation.

Here’s a crude example of how this might work, and something I have been experimenting with lately. Copy this next snippet and paste it into your ChatGPT window. This is a link to a recently-released report on the state of journalism in 2025. With access to the report, you can explore whatever parts of it interest you:

Or try the same exercise with my story here on Post Alley. I’m writing this story based on my research, observations and experience and analyzing and drawing conclusions from my reporting. But don’t take my word for it. Use the prompt and see what it says. Challenge what I wrote. Ask follow-up questions. (Here’s the URL)

For example, earlier in this story I wrote a version of Google Search history I’m sure the company would dispute. If you also wonder whether my characterization is plausible, dispute it with the chatbot. It will give you counter-arguments and tell you where my version may be wrong. Journalism goes from being a story one guy or one news outlet posted to a discussion-opener that can be explored depending on your interests.

Of course this will spawn all sorts of new problems, such as where the data comes from, who controls it and how it is vetted. And there’s the question of what or who referees information. But we’re already having to grapple with the massive disputation of what’s true and what constitutes facts. One of the reasons our political discourse has become so fractured is that we can’t seem to agree on basic truths, and that erosion of authority opens up an endless, unresolvable free-for-all. Could AI help re-establish confidence in basic facts, or would the machines pour gasoline on conspiracy theories?

As we’ve seen with streaming services like YouTube and Spotify, an enormous number of people are more than willing to let algorithms determine what they see and hear. Browser start pages have for some time offered streams of news “stories,” often from sites you’ve never heard of and with peculiar names. Consumers are historically not as likely to note the source. But they’ll click on an incendiary or unusual headline and the information becomes part of their consciousness, whether true or not.

There will always be a need for reporters who observe and interrogate and dig. A new model of journalism might also create a whole new breed of journalists/story-tellers/pundits who would build off data that’s transparently available to all (think open-source code) to add value for users.

Who owns this content and who pays for it to be created are questions that seem well beyond the capacity of current copyright and intellectual property laws. There is no hotter topic right now in AI than the who-owns-it debate. And this is the battlefield on which the new business models will be decided.

It seems likely, given copyright cases that have been settled so far, that Big AI, having ingested pretty much the entire internet without asking permission, will ultimately need to agree to some sort of licensing agreement with creators of that content. There is precedent. When radio stations began playing music on their airwaves in the early 1900s, musicians protested the theft of their copyrights. Within a decade, a licensing system emerged with the creation of ASCAP. Stations buy licenses to play whatever music they want; ASCAP tracks what’s being played and distributes royalties to artists.

AI engines already track where their answers come from. Creating a tiered system that pays creators according to their contributions to answers to queries would be complex and require a new system, but it could be worked out. Algorithmically-determined payments could go to individuals, but also to institutions that added value and resources.
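The mechanics of such a tiered payout are easy to sketch, even if the real system would be vastly more complicated. Here is a minimal, hypothetical illustration (the function, the weights, and the source names are all invented for the example, not drawn from any actual AI company’s accounting): each answer’s citations are normalized so every query counts equally, then a revenue pool is split pro rata among the contributing creators.

```python
from collections import defaultdict

def distribute_royalties(query_logs, revenue_pool):
    """Split a revenue pool among content creators in proportion to how
    heavily their work contributed to AI-generated answers.

    query_logs: list of dicts, one per answered query, mapping a creator
                to a contribution weight (e.g. the estimated fraction of
                the answer attributable to that source).
    revenue_pool: total dollars to distribute for the period.
    """
    totals = defaultdict(float)
    for contributions in query_logs:
        # Normalize each query's weights so every answer counts equally,
        # regardless of how many sources it happened to cite.
        query_total = sum(contributions.values())
        if query_total == 0:
            continue
        for creator, weight in contributions.items():
            totals[creator] += weight / query_total

    grand_total = sum(totals.values())
    if grand_total == 0:
        return {}
    # Pro-rata split of the pool across accumulated credit.
    return {creator: revenue_pool * share / grand_total
            for creator, share in totals.items()}

# Hypothetical example: three answered queries, each citing its sources.
logs = [
    {"Seattle Times": 0.6, "AP": 0.4},
    {"AP": 1.0},
    {"Substack writer": 0.5, "Seattle Times": 0.5},
]
payouts = distribute_royalties(logs, revenue_pool=300.0)
# AP earns the most here because it contributed to two of three answers.
```

The hard parts, of course, are everything this sketch assumes away: how contribution weights get measured, audited and trusted in the first place.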

And where would the revenue come from?

The easiest way to see the stakes is the astonishing amount of capital being invested in AI. OpenAI is currently valued at $500 billion and will soon top $1 trillion. In 2024, $110 billion was raised by AI companies. So far this year, VC funding for AI has topped $200 billion, for the first time exceeding investment in all other global venture-capital businesses combined. That capital has been invested with the expectation of enormous returns, so it gives you some idea of what’s at stake.

The who-owns-it issues around the content AI companies have hoovered up offend artists and journalists who believe their copyrights have been violated. But, at least in rulings so far, courts seem inclined to say the companies have a case around fair use. That is, while the companies consumed copyright-protected work, they transformed it into new work rather than simply stealing and reproducing it.

This is another slippery slope for journalists and artists. Journalists themselves consume and collect information from others, then synthesize and transform it into stories. Artists experience the world around them, including the work of other artists, and draw on those experiences to create new work. Both “train” on the work of others to become better at what they do and create value. Outlaw the machines from training on the work of others and you tread into murky moral territory, since journalists and artists, too, use the work and experiences of others for their own gain.

Anthropic recently settled copyright lawsuits with authors by paying a record $1.5 billion for the rights to use work their computers trained on. The company acknowledged having ingested 482,460 books, and each author will get a payout of about $3,000 under the settlement. But the tech industry’s history of rapacious consumption of everything in its path, fueled by a scaling mindset that believes getting bigger and richer is the only path to domination and success, suggests $1.5 billion is little more than a speed bump. The companies are unlikely to share the fortunes they will earn from collected human knowledge unless they are forced to. And that will require a new system for transferring value, one far beyond current copyright law.

Is there enough to meaningfully support a new business model for journalism? Empirically, of course. Whether there is the will to create one is difficult to say. But with the AI monster knocking at journalism’s door and an extinction event for traditional journalism imminent, there are few other options.



Douglas McLennan
Doug is a longtime journalist who writes about journalism, the arts and technology. He's the founder and editor of ArtsJournal.com and co-founder and editor of Post Alley. He's a frequent keynoter on arts and digital issues, and works and consults for a number of arts and news organizations nationally.
