Shaken to our Core: AI as a Change Agent


Since the release by San Francisco tech outfit OpenAI of its first public product, ChatGPT, reports and debates about artificial intelligence – AI – have been filling the news pages and Internet news feeds with reactions ranging from excitement to, sometimes, screaming alarm. Scan the comments: a quick read will tell you there’s little doubt that AI is going to be an unprecedented change agent, possibly shaking human society at its core.

Just looking at the early stuff, ChatGPT has already shaken education. It’s a tool students can use to produce research papers not easily distinguished from what high school and college kids write themselves. (Though the giveaway will probably be that they’re better.) On the plus side, though, ChatGPT is a helluva researcher, compressing answers to questions into tight narrative – though sometimes with errors and made-up material – and saving the time of a Google search, where you open page after page to get it all.

Then there’s college applications. ChatGPT can make up a great essay on a star volleyball player for a kid who’s never played the game.

More important, AIs – not ChatGPT – have helped with scientific research, even working out the structures of protein molecules that would otherwise take extensive work with electron microscopes to determine.

Yet the likely downside is that AI such as ChatGPT will replace workers, primarily in the professions. That worry is already filling the business pages. One of our Post Alley writers put it simply: “It’s the end of writing.”

But that’s just the short-term stuff, here and now. Going forward, quietly in labs, Microsoft and the Silicon Valley companies – OpenAI, Google, Meta, and some that aren’t household names or haven’t yet been bought by those guys – are in a race to produce AGI, artificial general intelligence, leading to a superintelligence beyond human capabilities. That’s really what they want to do, the Silicon Valley guys who brought you social media algorithms that feed off hot emotions. And it’s what the worries are about.

Remember Google’s early motto, “Don’t be evil”? With superintelligence they might be. And there’s Facebook’s motto, “Move fast and break things.” That kind of thinking still animates the tech companies racing to develop superintelligence. With AI they’re pretty likely to break things.

Bottom line: Our social media world is what it is because tech companies built it that way without any societal – yes, political – debate about what it should and should not do. It has turned out disruptive and polarizing, at least in the political world, leading net searchers into conflict-laden worlds. Now we can’t get the web under control, and politicians focus on the (long lost) issue of privacy when the real issue is the power of mega-corporations providing services we’ve come to think we can’t do without. It’s certain they’ll add AI – very powerful AI – to their portfolios.

What’s next is uncertain. But what’s for sure is that big tech is working like hell to develop the next level of AI – and doing that without really knowing what capabilities it will have. In the September issue of The Atlantic, writer Ross Andersen (not Post Alley’s Ross Anderson) looks at what’s happening in a wide-ranging interview with Sam Altman, CEO of OpenAI, the creator of ChatGPT, now working on the next generation of the AI model that supports it, GPT-5.

Back when OpenAI was formed in 2015, the founders (at the time including Elon Musk, since departed and starting his own AI enterprise) “wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human,” Andersen writes. To this day that remains Altman and OpenAI’s goal, despite the uncertainty of the outcome – mainly the fear that an artificial general intelligence, a superintelligence based on massive computing power, could, well, take over the world. Can it be built with guardrails based on human values to prevent that?

And there are some who don’t care. By one definition, life is anything that can reproduce itself and work toward goals – anything with a purpose in mind. For an opening to this thinking, you might read “Life 3.0: Being Human in the Age of Artificial Intelligence” by MIT physicist Max Tegmark, from which that definition of life comes. He postulates a future in which what we would call “artificial” superintelligence spreads to the stars, though we carbon-based life forms are not on board. You can hope that’s a long way off.

Altman, according to Andersen, is the worldwide spokesman for – and, through his company OpenAI, the leader of – the development of systems beyond ChatGPT, closer and closer to artificial general intelligence (AGI), which begins to sound like life as Tegmark defines it. But we’re not there yet, and though Altman says they’re scaling up as fast as they can, he acknowledges the need for political controls. He told Andersen that “in a properly functioning world, I think this should be a project of government.” But it’s not, although Europe has put some controls on AI, as it has on social media. Altman also told Andersen that in the face of AGI he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI. Andersen’s own conclusion: “It will take a vigorous new politics.”

But nothing like an international agency or even much U.S. regulation is on the horizon, much less “a vigorous new politics.” Absent such forces, AGI will be developed without controls by the mavens of Silicon Valley (and Redmond, and Austin) and the massive technological corporations they command. They are, in fact, largely people who believe technology can solve all our social problems if government just gets out of the way.

Dick Lilly
Dick Lilly is a former Seattle Times reporter who covered local government from the neighborhoods to City Hall and Seattle Public Schools. He later served as a public information officer and planner for Seattle Public Utilities, with a stint in the mayor’s office as press secretary for Mayor Paul Schell. He has written on politics for the Seattle Times as well as Post Alley.


  1. As the author points out, AI can be a powerful research tool, digesting and analyzing large amounts of information much faster than humans can. The crucial issue is whether that analysis is acted upon.

    Let’s say an AI-enabled computer could somehow manage to take in all the factors affecting global climate change and all the factors involved in making life on earth a better, fairer, more sustainable deal for our 8 billion people. Would political leaders, bureaucracies, and the great mass of the world’s people be willing to follow the advice of a computer (especially if it meant surrendering some of their privilege or autonomy)?

    I think it is more likely that a few national or corporate actors – but not a majority – will value the upside of embracing AI technology and try to apply it in innovative ways, while not turning everything over to silicon and algorithms. Those who are successful will gain an advantage over their competitors; laggards will face an uphill climb to catch up. Small size – and a smaller decision-making bureaucracy – will make it easier and less risky to integrate AI into existing systems.

    In the case of the US, given our increasingly dysfunctional national politics, I doubt the ability of our political gerontocracy to reach consensus on dealing with AI – whether it is a boon or a burden, and whether to act as regulators who corral and control AI or as promoters of its potential benefits. One hopes that clever software engineers don’t forget to work on the AI/human interface – and leave the ultimate decision-making to flesh and blood, not silicon.

    Those of us who saw “2001: A Space Odyssey” in theaters when it came out in 1968, just a year before the first moon landing, may well remember that along with all the whiz-bang super-duper stuff that technology would do FOR us, we’d also face the question of what it could do TO us. One line sums it up for me:
    “Open the pod bay doors, please, HAL.”

  2. An excellent, and thoroughly depressing, essay, Dick. Given broken politics here and abroad, there will never be an IAEA equivalent to oversee AI/AGI. One need only look at how poorly climate change is being addressed. All this makes me glad that I’m old!

  3. Thanks, Dick.
    It’s good to remember that the strongest believers in AGI as a near-term possibility tend to be computer scientists, with a computer-ish idea of what intelligence is, and also a belief, often implicit, that intelligence is a complete thing in itself. The strongest skeptics tend to have backgrounds in biology and neuroscience, with a tendency to see human intelligence as an embedded part of a larger whole in which intelligence, consciousness, emotionality, sociality, embodiment (as in, being in a body that operates in the physical world), and access to input from all the senses are tightly intertwined. Seen through the latter lens, AGI seems like a harder problem.

    It’s also good to remember that for Altman, talking up the dangers of AI is good marketing – it accelerates FOMO among the enterprises he’s trying to lock into his platform.

  4. Reading this thoughtful piece brought Larry Wall to mind, and his recommendation to programmers, during the 1990s, of three great “virtues”: laziness, impatience, and hubris. It is possible to see, in the headlong race to create a master machine, these “virtues” come to their fruition. Especially the impatience and hubris. Also called to mind: Frank Herbert, and the principal commandment of his “Orange Catholic Bible”.

    We have long credited “unidentified flying objects” with abilities that no carbon-based lifeform known to us could possibly survive. It can be imagined that these things are Tegmark’s spacefaring constructed intelligences, wandering among the galaxies in search of meaning, the societies that created them having collapsed of their own weight.

  5. I read a lot of science fiction, too, but for me the thing to worry about is not AIs so much as people with AIs.

    We aren’t going to be overmastered by a superintelligence acting on its own, but we could easily be in big trouble if individuals became really intelligent thanks to a reliable AI. Imagine Musk, for example, acquiring reliable intelligence and becoming routinely able to foresee the consequences of his decisions.

    • Musk has Asperger’s Syndrome. I don’t think his brain is wired to see outcomes and consequences; it’s all about completing the task at hand, such as building electric cars and taking over Twitter.

    • I suspect Elon Musk is quite capable of foreseeing the probable consequences of his decisions, and I imagine him sitting at his desk, tapping his fingers in a Monty-Burns fashion, exclaiming, “Ex-cellent!”

  6. Dick,
    An excellent and thought-provoking piece. After reading it and Tegmark’s book, my reactions are along the lines of Tom Corddry’s above. Intelligence, as conceived by computer scientists, is a largely cognitive function operating in semi-isolation from the more complete intelligence of the universe. One can look to Howard Gardner’s theory of multiple intelligences to realize that there are many more ways of sense-making than cognition, and to others (some of Tom’s biologists and neuroscientists) whose philosophies recognize that our individual egos and their products are only part of a greater systemic wisdom of which we are only a part.

    In my opinion, any effective regulation of AGI needs to be anchored in this recognition, as well as in articulated values that serve as guardrails against its potential abuse. I wonder if John Rawls’ theory of justice as fairness might serve as one starting point.

  7. Thanks for all the thoughtful comments. Here’s just a final note to underscore my point: It’s not what AI can or might bring about; it’s who’s in control. The development of social media wholly in the hands of the techbros (and now with Elon Musk on the team) should give us cause for caution.

