Needed: Intelligence of the Non-Artificial Kind


Wow, the number of op-eds, articles, books, podcasts and blogs about AI (artificial intelligence) in just the last week is staggering. Maybe the resignation of Geoffrey Hinton, the godfather of AI, from Google and his expressed concerns about AI have gotten people’s attention?

Hinton voiced a number of concerns, including that with AI the internet will be flooded with false photos, videos, and text, and that the average person will “not be able to know what is true anymore.” I share that concern, and also feel that the horses are kind of already out of the barn on that one. He’s also concerned about the “Sorcerer’s Apprentice” scenario (my words, not his) of the AI taking over.

Some of the debate is on the order of “should we go ahead with AI?” which suggests there is a choice in the matter. There isn’t. AI is here, though its incarnations are probably far more diverse than the single term “AI” would suggest. I thought this piece by the economist Tyler Cowen was good. Cowen writes, in part, “The question is no longer ‘go ahead?’ but rather ‘given that we are going ahead . . . do we at least get something for our trouble?’ . . . With AI, do we get positives? Absolutely,” answered Cowen. “There can be immense benefits from making intelligence more freely available. It also can help us deal with other existential risks.” I assume he means things like climate change or pandemics.

But on the other hand, there are reasons for concern, such as Hinton’s lament above. I, for one, have heard way too much techno-utopianism over the years to believe the most exuberant claims for this or any other technology. The former CEO of Google, Eric Schmidt, has acknowledged that he and other techies were naive in their claims for salvation by technology:

“When I look back, I see two ways in which we in the tech community were naively optimistic. First, like many in Silicon Valley, I had an overly rosy view of human nature. Most of us thought that it was inherently good to just connect everybody and everything. But now I can see that even though most people are good––or, at least, they behave well when interacting with strangers––a small number of trolls, foreign agents, and domestic jerks gain access to the megaphone that is social media, and they can do a lot of damage to trust, truth, and civility.

“Second, I didn’t fully understand human tribalism and the way that social media could supercharge it. All platforms wanted to grow their user bases and increase their engagement, and we all thought that social media was a healthy way to help small communities form and flourish. But as political polarization rose steadily, not just in the USA but in many parts of the world in the 2010s, we discovered that issues of partisanship, identity, and us-versus-them were among the most powerful drivers of engagement.”

Schmidt and Jonathan Haidt have published an article on AI together in the Atlantic, warning that AI will make social media even more harmful to American democracy and to the mental health of teenagers, which has been Haidt’s focus of late in various blogs, interviews, and books. The flip, so to speak, from the flip phone to the smartphone, along about 2010, marked, says Haidt, the onset of the current mental health crisis for teens and young adults.

I can tell you what I am worried about: human intelligence. That seems to be in increasingly short supply! Would that some amount of the money and effort now being expended on artificial intelligence were directed to good old human smarts and its development! In the end, the use and deployment of AI will be directed by human intelligence . . . or the lack thereof.

And another thing: as with most (all?) technology the promise here is that the realm of human control and dominance will be extended by AI, reducing contingency, uncertainty and tooth decay. The assumption is that such control and mastery is always and in every instance a good thing.

About that, I’m not so sure. Our eagerness to make life controllable, the Tower of Babel all over again, seems frequently to end up biting us in the butt. We keep trying to master everything, to control everything, plan it all out.

Maybe we would be better off, and more truly human and humane, if we accepted that lots of life — and lots of what is most important in life — is beyond our control. You don’t get to control life. That’s not in your job description, mine either. You get to respond to it, to rise to the (unforeseen) occasion, to answer to the ways that we are addressed and called by life and by God. Control is, mostly, an illusion.

Control less, live more.

Anthony B. Robinson
Tony is a writer, teacher, speaker and ordained minister (United Church of Christ). He served as Senior Minister of Seattle’s Plymouth Congregational Church for fourteen years. His newest book is Useful Wisdom: Letters to Young (and not so young) Ministers. He divides his time between Seattle and a cabin in Wallowa County of northeastern Oregon. If you’d like to know more or receive his regular blogs in your email, go to his site listed above to sign up.


  1. Good piece, Tony. An “overly rosy view of human nature” has caused a lot of problems over the years. We need a realistic view. Human intelligence is capable of that, but the rose-colored glasses are tempting.

  2. The invention of printing dramatically expanded the amount of human knowledge that was accessible. But it sort of demolished human memory. The creation of moving images and television as a passive information vector sort of demolished reading. The constant availability of recorded music demolished the listener’s immediate retention. Calculators demolished mental math. GPS demolished wayfinding. Smartphones demolished personal interactions. It looks like AI will demolish creating, researching, and even what we would consider thinking. Is our paradox that the more advanced our technology becomes, the less capable the average human becomes? The dystopian vision of AI overlords controlling a dim and incompetent humanity doesn’t seem completely outrageous. More like a logical endpoint of our technological trajectory?

  3. “Artificial Intelligence” is called artificial because it’s not real intelligence. Thus, it’s perfect for use in Washington where real intelligence is in short supply.

