I have a love-hate relationship with technology in general and artificial intelligence (AI) in particular. Or I should say, a hate-love relationship, because I find myself hating these things by default and loving them only out of necessity. Like a relationship being maintained only out of convenience.
I’m scared of AI for the same reason everyone else has been scared of science since Frankenstein was written, which is that the grand experiment that promised so many good things will instead go horribly wrong. That HAL-9000 or Siri or Alexa will refuse my reasonable requests. That Skynet or OpenAI or SpaceX will morph from a successful corporation into the Singularity/Arnold Schwarzenegger.
Those are the scenarios where technology has simply grown more powerful than us and escaped our grasp. Of course, there is also the interior aspect of it, the way that technology shapes our own outlook and behavior, like the mental health problems caused by overconsumption of social media, which has been a popular concern at least since WALL-E came out in 2008.
But those are dangers most of us are at least conscious of. They are a dime a dozen in movies and books. My worry is that the overall takeaway is reduced to: Technology can be either good or bad, and so we just have to be careful. I think it goes way deeper than that, although I’m still trying to figure it out myself.
My recent attention to this came from a book I read a few years ago by conservative author Rod Dreher called The Benedict Option, in which he presented his ideas for how Christians should live in a post-Christian world. Although not the main focus of the book, there is a section titled flatly “Technology Is Not Morally Neutral,” in which Dreher argues that technology is “a comprehensive worldview” born from “the sense, beginning with nominalism and emerging in the early modern era, that nature had no intrinsic meaning. It’s just stuff” (220). The material world became an incidental collection of atoms, the product of an arbitrary divine will, rather than a natural order imbued by God with meaning in relationship to man, as the Thomistic or Aristotelian conception had it. And it became something to be manipulated at will in order to demonstrate the potential of man. “To the technological mind,” Dreher writes, “questions of why we should, or should not, accept particular technological developments are hard to comprehend” (220).
Of course, as I’ve said, even our modern, technological worldview is already aware of the dangers of not questioning technology, as many a sci-fi book will attest. What I think is questioned far less often is the underlying philosophical, consequentialist assumption that the material world is just so much “stuff.” Despite all the literary prophecies of the coming technological dystopia, we seem incapable of finding principles to prevent it from arriving. When something like biological cloning or AI comes on the scene, all we can do is smile wryly (“It’s like in the movies!”) and hope for the best.
As I’ve hinted at, there are attempts at principled responses, given by Dreher and many other conservative Christian writers and thinkers I follow, ones that center around trying to recover an older sense of the natural order created by God or a greater awareness of the symbolic meaning of existence. And it’s not just conservatives or certain Christian groups. I have met people in academia and read of various environmentalist movements that go so far as to call into question “anthropocentrism,” or the bias we have toward human flourishing versus animal or environmental flourishing. While I disagree with their rejection of humanism, I think these ideas represent an understandable impulse to recover a sense of meaning and purpose behind the material world. James Cameron’s Avatar movies might be the most popular expression of this trend, as they paint a world that is interconnected and transcendent, containing an orchestra of creatures that ultimately point to and partake in Mother Earth, or Pandora as it were.
But these ideas, conservative or liberal, Christian or non-Christian, medieval or pantheist, seem to be rather esoteric and removed from everyday experience. Meanwhile, tech moves forward, undisturbed by our petty philosophical or religious or literary misgivings, as Bill Maher recently indicated in an interview with Elon Musk:
That has always been my view, is that, I was a history major, and when you study history, what you realize is that, you know, there’s the Great Man theory, and they talk about kings and princes and queens and presidents. It’s really the people in tech who change the world. They’re the people who deal the cards. Whether it’s fire or electricity, for good or bad, or the cotton gin or the iPhone or the atom bomb, those are the cards, and the rest of us just play it.
Later in that interview, the two men talk about the need for regulation of AI, which is probably good, but both are operating under the fundamental assumption that technology represents in many ways the most important endeavor in society. Everything else is just talk. Technology deals the cards. For consequentialist or practical reasons, it should be regulated, sure, but look at all the good things it can do. In some sense, it is the most real thing. The world is neither the enchanted one of Avatar nor the created order of Thomas Aquinas nor yet the Force-full world of Star Wars, but rather the universe of the starship Enterprise, with its 3D-printed food and a holodeck to make up for the tactile world that has been lost. The way out of the bad parts of technology is to boldly use it only for good, guided by the ethics of the rational Federation.
I’m skeptical of this purely practical approach, because I don’t think it draws very clear parameters around what should or should not be done. As long as something seems good in our eyes, no matter how unnatural it may be or how much we don’t like it, we can’t find a good reason to be against it. Should we make glow-in-the-dark dogs or clone our pets or let AI write blog posts for us? Sure, I guess, as long as it “doesn’t hurt anybody.” In this view, the world quickly becomes a funhouse of material particles to play with.
While I’m skeptical of welcoming technology as long as it simply “doesn’t hurt anyone,” I wrestle with the fact that my own life so clearly depends on it. As a type 1 diabetic with a muscle-wasting neuromuscular disorder, I have depended on the latest medical breakthroughs my entire life. I get around in a wheelchair, I stay connected to an insulin pump without which I would die within 24-48 hours, I receive gene-splicing medical treatment for SMA, and I’ve taken many rounds of life-saving antibiotics over the years. Not to mention a career change to web development that is probably the only reason I can still physically participate in the job market.
But the event that made me think the most about my need for technology, and AI in particular, occurred in the fall of 2013. I was living on my own at the time, with the help of a rotating schedule of paid attendants. I would (foolishly) always let the overnight attendant leave before the morning attendant came, while I remained asleep in bed. One morning I messed up and didn’t have anyone scheduled to come that day. My overnight attendant left as usual, and I woke up alone in my apartment, stuck in my bed, realizing immediately that no one would be looking for me until my evening attendant came 12 hours later.
An angel must have been watching over me, because I easily could have experienced excruciating pressure pain due to my inability to move, but I was in just the right position to lie comfortably for 12 hours. My blood sugar got dangerously low due to diabetes and not being able to eat, but I never passed out.
I woke up that morning alone, stuck, afraid, and unable to yell loudly enough to get help from the passersby whose footsteps I heard go by my apartment window throughout the day. One of my first thoughts was to say “Hey, Siri” to my phone lying a few yards away on my computer desk. But Siri didn’t answer. I kept calling for Siri anyway, until I was sobbing uncontrollably.
It would be another year before Apple introduced a feature that allowed users to activate Siri by voice, without having to press a button. That feature and a myriad of other voice controls now available for iPhone have revolutionized my life and the lives of many others. Siri is as much AI as anything else. If only that feature had come out a year earlier, I would have been spared that nightmarish experience.
The world of voice and speech technology seems to have exploded in the last decade. My mind was blown the other day when my friend needed to transcribe and translate a recording of a meeting in Spanish, and I was able to do it for him automatically, in five minutes or less, with Microsoft Word. What would have been days of tedious labor was reduced to a short proofreading task, especially considering both of us are disabled and cannot efficiently use a keyboard.
When it comes to disability, the smallest advancements in technology often have the most far-ranging effects on daily life. I can’t find the post, but Jeffrey Zeldman once wrote about how the mere existence of credit and debit cards made it possible for some disabled people who can’t handle physical cash (like me) to go to stores and restaurants on their own and pay for things themselves, something everyone else takes for granted.
When I worked at MSU’s student newspaper in 2007, they had only recently, perhaps less than a decade prior, switched from the old method of literally cutting and pasting articles together and photographing them for the presses to computers running InDesign. I don’t know if I could have done that job otherwise.
The expansion of distance learning and remote work over the past two decades, and especially since COVID, has been for most people a convenient way to balance family/school/work at best and a raging annoyance at worst. I’d be willing to bet that for many people shut in by disability, it has exponentially increased opportunities beyond their wildest dreams. That has definitely been the case for me.
And yet, I grew increasingly frustrated and infuriated by the prolonged lockdowns of 2020-2021. I’m worried about the serious privacy issues with big tech. The more disabled I become and the more my world is mediated through a screen, the more I miss real life.
So here I am, as reliant on technology as ever but with deep misgivings that I find hard to put into words. All I can say is, it’s more than just “we have to be careful to use technology only for good.” It has something to do with the way the creation and consumption of technology both reflect and advance a view of the world that’s more flawed than we might like to think, and in which we are all entrenched whether we like it or not, whether we are conservative or liberal, atheist or religious, luddite or futurist. We cannot be so foolish as to think we are somehow above the technological apocalypse coming for all of us, yet it would also be wrong to just sit back and be complacent.
In a YouTube video I linked to above on technology and Christianity, the Orthodox artist Jonathan Pageau says that an ancient Jewish retelling of the story of Noah and the flood traces the corruption of mankind to demons who taught men the various crafts, or you might say, technological innovations. After learning the arts of manipulating the world to human advantage, humans quickly descended into godlessness and were destroyed by God via the natural world they had learned to dominate. Yet God permits a few people, namely Noah and his family, to survive by way of one of those very same technologies, that is, shipbuilding. Maybe that presents some hope for us, some assurance that technology can be developed safely after all, and just might save us, God willing, from the very world it has helped to create.