How digital technology is destroying our freedom

There’s a whole genre of literature called “technological utopianism.” It’s an old idea, but it re-emerged in the early days of the internet. The core belief is that the world will become happier and freer as science and technology develop.
The role of the internet and social media in everything from the spread of terrorist propaganda to the rise of authoritarianism has dampened much of the enthusiasm about technology, but the spirit of techno-utopianism lives on, especially in places like Silicon Valley.
Douglas Rushkoff, a media theorist at Queens College in New York, is the latest to push back against the notion that technology is driving social progress. His new book, Team Human, argues that digital technology in particular is eroding human freedom and destroying communities.

We’re social creatures, Rushkoff writes in his book, yet we live in a consumer democracy that restricts human connection and stokes “whatever appetites guarantee the greatest profit.” If we want to re-establish a sense of community in this digital world, he argues, we’ll have to become conscious users of our technology — not “passive objects” as we are now.
But what does that mean in practical terms? Technology is everywhere, and we’re all more or less dependent upon it, so how do we escape the pitfalls?
I spoke to Rushkoff about this and much more. I wanted to know why he thinks the revolutionary potential of the internet was destroyed by commerce, why the tools that ought to liberate us often imprison us instead, and what we can do to restore a sense of connection in a world of alienating technologies.
A lightly edited transcript of our conversation follows.


Sean Illing

You write in the book that our society is being threatened by a “vast antihuman infrastructure that undermines our ability to connect.” What does that mean?

Douglas Rushkoff

What I mean is that before digital technology, we were already on the brink of extreme alienation. We already had an economic system that was starting to burn itself out, which was built increasingly on constant consumption and the exploitation of labor.
And then digital technology came later in the century and offered an opportunity to do things differently. It offered the possibility of retrieving a common space and a way for people to share and connect. It was a chance to build an economy that wasn’t based purely on the extraction of resources and capital.
But that’s not what happened. Instead, digital technology was used to double down on industrialism. And industrialism was always about getting the human being out of the equation. It was about assembly lines and automation and separating workers from the value they’re creating. It was about business owners paying their workers less money and gaining more control over them at the same time.
Once we decided that digital technology would be used in further service of that, in further service of extracting value from labor and manipulating consumers into buying stuff they don’t need or into behaviors that aren’t in their best interests, we created a worse monster than we had before.

So, I’m not saying that we invented this anti-human agenda with digital technology, but that we’re using digital technology to perpetuate an anti-human agenda that was already there.

Sean Illing

At some point, technology ceased to be a tool to help us get what we want and instead became the only thing we actually want. We stopped using it and it started using us.

Douglas Rushkoff

I think that’s basically right. In some ways, we’re all hostage to our technologies, or we’re simply at the mercy of this system. We’re being steamrolled by our devices, and the result is a kind of emotional slavery. And we know that billions of dollars are going into applying everything, every nasty trick we know about behavioral finance, to the digital realm.

This is what I mean when I call digital technology “anti-human.” If we were using digital and behavioral technologies to help people eat better or not smoke, then at least we could be arguing that it’s intended to help people. When we’re using technology to get people to revert to their most reptilian impulses, to get them to buy stuff they don’t need or to react angrily to stories, we’re in deep trouble.
Sean Illing

You argue that using technology in this way, and really, we’re talking about algorithms here, effectively destroys human autonomy. Can you lay out the case you make in the book?

Douglas Rushkoff

The easiest way to understand what’s happening is to think of something like autotuning. Autotuning works by quantizing the human voice into particular, “correct” notes. Without getting too artsy here, I’ll just say this process shaves off the weird peculiarities that make humans human. It strips the human expression of its unique weirdness. It obliterates what makes us different from a machine, which makes life different from plastic.
In a more practical sense, the way it works with individuals is you go on a platform like Facebook, and Facebook is using data from your past to dump you into a statistical bucket. Once they know what bucket you’re in, they do everything to keep you in that bucket and to make you behave in ways that are more consistent with all the things about that bucket.
So, if they know there’s an 80 percent chance you’ll go on a diet next month based on your search habits, then they’ll start peppering your newsfeed with articles and stories that are designed to get you to really go on that diet. So, you’ll see stories of people getting too fat or whatever. And that’s to get you to behave more consistently with your statistical profile.

Sean Illing

The algorithm thing is tricky for me. On the one hand, algorithms are making our lives easier by predicting what we want and giving it to us. On the other hand, our wants are so manipulated, so curated, that at some point it’s no longer a meaningful choice and the algorithms are just doing our thinking for us.

Douglas Rushkoff

And what if you don’t want anything at all? That’s the thing: That’s not one of the choices you have online. So, in that sense, they’re not even giving us what we want. They’re trying to trigger whatever they can get us to want. It’s about stoking consumption, about convincing us that we need another gadget, another toy, another device that will make us happy.

Sean Illing

And part of your argument is that these forces are turning us into atoms of consumption and consequently eroding our connections to other people.

Douglas Rushkoff

Right, and again, the roots of this go back way before digital technology emerged. TV and consumer advertising want us to be unsatisfied and disconnected from other people so that we look to products to fill that void. And products can never fill that void, which is great for the marketer because then we’ll keep buying stuff to fill an ever-expanding void.
So digital technology comes along and, rather than trying to replace our human connections with product purchases, it mediates our human connections in ways that make them less satisfying. So, if you engage with someone, certainly via text and email, you’re never going to get fulfilled. If you engage with them even on Skype or video, you don’t get the same rapport.
When you engage with someone in real life, the oxytocin rushes through your blood when you see their pupils getting bigger and their breathing rate syncing up with yours. These are painstakingly evolved mechanisms for achieving social harmony. And we’re losing them by spending all our time buying shit on Amazon or poring over our newsfeeds.

Sean Illing

Can we overcome the anti-human agenda embedded in our technology without also overturning the civilization that produced those technologies? Because we seem to be stuck in this paradigm. And the values of that paradigm are reflected in our technologies, so it’s difficult to get rid of one without getting rid of the other.

Douglas Rushkoff

It is, isn’t it? I mean, what do we go back to? People have been doing shitty things to each other since the beginning of time. The thing is, we’ve never quite had the capacity to destroy ourselves like we do today. And we’re doing it in this slow, cigarette-like fashion by gradually eroding our connections to one another. This is what makes our technological threats so insidious.
But I think each of us, in our daily lives, can experience a different view of each other and humanity. There is a moment on the subway when you see a person and make eye contact and smile for a second. And I know right now it’s the exception rather than the rule. And you’re just like, “Oh, I’m never going to see that person again in my life, but there was something there, something real.” And it feels exhilarating.
We have to build on these experiences because it’s the only way to engender a real community over time. I’ve written previous books about how we need systemic changes to corporations and government regulations and tax policies and all that. But this book is more about the way individuals experience the world, and why we need to stop seeing the object of the game as trying to earn enough money to insulate ourselves from reality, and instead realize that that’s not even possible.
I guess what I’m saying is that we’ve pushed far enough in this direction with mechanism and capitalism and industrial progress, and I believe we’ll be at the beginning of a path toward something better if we start engaging with one another again and looking at each other and talking in real ways.

Sean Illing

Is capitalism the fundamental problem here? Because this seems to be baked into the argument you’re making, and it’s the primary force driving all these technologies.

Douglas Rushkoff

Capitalism is the actual problem. But it’s not a matter of getting rid of capitalism so much as balancing capitalism with some other “ism.” Capitalism was originally a way of getting funds to businesses that needed them to start up, and now businesses and people are serving capital. It’s fine to capitalize a company. It’s not fine to surrender an entire civilization to one financial principle.

Sean Illing

I guess this is where I have to push back a little because I’m not sure capitalism can coexist with the sort of morality or ideology you’re after here. Capitalism only works in a society drunk on consumerism, and consumerism is unavoidably “anti-human” in the way you’ve defined the term here.

Douglas Rushkoff

It’s a fair point. Capitalism is the closest “ism” to mechanistic world domination. It views human beings as resources to be exploited, not served. It’s inherently cannibalistic in that way. And it allows us to convince ourselves that we’re actually advancing civilization when, in reality, we’re destroying the very things that make civilization possible in the first place.

Sean Illing

I keep wondering what we’re supposed to do. These digital technologies have uprooted us from each other, from our communities, and yet it feels unavoidable. How do we participate in this system without reinforcing it? How do we escape the machine?

Douglas Rushkoff

Slowly. There are a lot of people who don’t want to believe that reconnecting with other people in real life will make a difference. And this sort of nihilism is part of the problem. If you really want to believe that you can’t make a difference, if you really want to believe that connecting with other people in the real world is going to have no effect on the real world, then you’re welcome to believe that. But you’re wrong. You’re absolutely wrong.
Every connection with other people is an opportunity for conspiracy. We have to believe that. We have to believe that gathering together locally can make a difference. If we stop believing that, then we’re truly lost. Then humanity doesn’t matter anymore. But I don’t buy that. I think every classroom is an opportunity for conspiracy. Every town meeting, every street corner. I think it happens that way.

Sean Illing

Technology is going to progress whether we want it to or not. So how do we fold the human values we need into the technologies we’re clearly going to keep building?

Douglas Rushkoff

Well, one way is if they are owned by the workers. I do believe that platform cooperatives offer an alternative to these platform monopolies. If the workers own the company, then they’re going to care about how the company functions. Especially if it’s local to them. It’s going to matter in that way.
And then it comes down to whether or not people are willing to stand up for what it is that they believe in. When Google’s workers walk out because they don’t like the company’s China policy, that’s a good sign. That’s the workers standing up saying, “No, fuck this, we won’t be complicit.” We need more of that.

Sean Illing

Does that mean you’re optimistic about the future?

Douglas Rushkoff

I don’t know if I’m optimistic, but I do feel hopeful. I think there is a strong possibility that we can avert the collapse of our civilization. But it starts with everyday people waking up and taking a stand.
