X, the social media company formerly known as Twitter, is suing Media Matters. The advocacy organization recently published research showing that ads from major brands appeared next to antisemitic content on X. The complaint from X argues: “Media Matters has manipulated the algorithms governing the user experience on X to bypass safeguards and create images of X’s largest advertisers’ paid posts adjacent to racist, incendiary content, leaving the false impression that these pairings are anything but what they actually are: manufactured, inorganic and extraordinarily rare.”
Media do matter. But not like they once did. Let me explain.
In the late 1980s and early 1990s, I worked in marketing for tech companies. I purchased media space for advertising messages based on such factors as editorial reputation, reader characteristics, and price. PC Magazine was our “gold standard.” It was known for its editorial integrity and had a large base of readers who made purchasing decisions. It also had a very high cost per column inch. I included it in my media plans, but also included other magazines to optimize my limited advertising budget.
In the 1990s I started teaching advertising to university students. In media planning classes, I stressed the link between media content and advertising messages. Buying space in newspapers was ideal for “serious” products, I felt. Buying “run of schedule” time on cable television was a cost-efficient option for brands with broad appeal. I also taught about gross rating points, which “normalized” all media types so that costs could easily be compared between column inches in print publications and seconds of airtime on broadcast media.
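The normalization that gross rating points provide can be sketched in a few lines. GRPs are conventionally computed as reach (percent of the target audience) times average frequency, and cost per rating point lets a print buy and a broadcast buy be compared on a single scale. All dollar figures and audience numbers below are hypothetical illustrations, not data from the article.

```python
# A minimal sketch of how gross rating points (GRPs) put print and
# broadcast buys on a common cost scale. All numbers are hypothetical.

def grps(reach_pct: float, frequency: float) -> float:
    """GRPs = reach (% of target audience) x average frequency."""
    return reach_pct * frequency

def cost_per_point(total_cost: float, total_grps: float) -> float:
    """Cost per rating point: lower means a more cost-efficient buy."""
    return total_cost / total_grps

# Hypothetical print buy: a full-page ad reaching 20% of the target once.
print_grps = grps(20, 1)                      # 20 GRPs
print_cpp = cost_per_point(30_000, print_grps)

# Hypothetical cable buy: spots reaching 15% of the target 4 times.
tv_grps = grps(15, 4)                         # 60 GRPs
tv_cpp = cost_per_point(45_000, tv_grps)

print(f"Print: {print_grps} GRPs at ${print_cpp:,.0f} per point")
print(f"Cable: {tv_grps} GRPs at ${tv_cpp:,.0f} per point")
```

On these made-up numbers, the cable buy delivers three times the rating points at half the cost per point, which is exactly the kind of comparison the metric was designed to make easy.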
Then social media and smartphones happened, and those calculations changed a lot.
In the early 2000s, social media sites like Myspace, Facebook, and Twitter enabled anyone to become a content creator. Over time, advertisers began to realize that social media and smartphones generated new forms of data that could make advertising more efficient and more targeted. Marketers and media platforms developed algorithms built on an ever-growing mass of user-generated data, then used those algorithms to target individuals precisely. An algorithm could, for instance, deliver the same ad to someone viewing the New York Times website and to a visitor to a 13-year-old’s blog. Different visitors to the same site often see different advertising messages, based on how these algorithms interpret their data.
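The targeting logic described above can be sketched as a toy matcher: the ad follows the user's interest profile, and the page being viewed plays no role in the selection. Every name, topic, and weight here is a hypothetical illustration, not a description of any real platform's algorithm.

```python
# A toy sketch of behavior-based ad selection: the winning ad depends
# only on the user's interest profile, not on the site's content.
# All ads, topics, and weights are hypothetical.

from typing import Dict, List

def select_ad(user_interests: Dict[str, float], ads: List[dict]) -> str:
    """Score each ad by overlap with the user's interest weights and
    return the best match. Note the page content never enters the score."""
    def score(ad: dict) -> float:
        return sum(user_interests.get(topic, 0.0) for topic in ad["topics"])
    return max(ads, key=score)["name"]

ads = [
    {"name": "laptop_ad", "topics": ["tech", "gaming"]},
    {"name": "sneaker_ad", "topics": ["fitness", "fashion"]},
]

# The same profile yields the same ad whether the user is reading a
# major newspaper's site or a teenager's blog.
tech_fan = {"tech": 0.9, "fitness": 0.2}
print(select_ad(tech_fan, ads))
```

This is why an ad can land beside content the advertiser never chose: the placement is a byproduct of where the targeted user happens to be.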
X is probably correct in contending that algorithms rather than advertisers or media executives drive placement of ads on its site. For someone to see an ad next to an antisemitic message, that individual’s behaviors probably suggest an interest in both the antisemitic content and the product/service being advertised.
Major advertisers have announced they are pulling away from X in response to owner Elon Musk’s support of antisemitic posts. Their announcements ring hollow, however. Their behavior over the last 20 years has made it clear that media content no longer drives their message strategy. What really matters to these advertisers is that potential consumers are abandoning X. By “taking a stand” against what they see as objectionable content, marketers are hoping to benefit from a “halo effect” of good behavior. But their primary motivation is cost effectiveness.