We need to talk about techie tunnel vision

Last year, the powerful US data company Palantir filed documents for an initial public offering. Included was a remarkable letter to investors from Alex Karp, the CEO, that is worth remembering now.

“Our society has effectively outsourced the building of software that makes our world possible to a small group of engineers in an isolated corner of the country,” he wrote. “The question is whether we also want to outsource the adjudication of some of the most consequential moral and philosophical questions of our time.”

Karp added, “The engineering elite in Silicon Valley may know more than most about building software. But they do not know more about how society should be organized or what justice requires.” To put it more bluntly, techies might be brilliant and clever at what they do, but that doesn’t make them qualified to organise our lives. It was a striking statement from someone who is himself an ultra techie and whose company’s extensive military and intelligence links have sparked controversy.

It is a salutary sentiment for at least two reasons. First — and most obviously — a new wave of controversy has erupted around the company formerly known as Facebook, after the whistleblower Frances Haugen released documents suggesting that the social media giant ignored internal warnings about the social harm created by its products. CEO Mark Zuckerberg has diverted attention by rebranding the company Meta and plans to invest billions in creating a “metaverse”, which he says will be built in an ethical way.

However, the revelations about Facebook do not inspire confidence in his ability to do this. Not least because most people haven’t the foggiest idea what a metaverse might look like, let alone how anyone might code it. The temptation here is that, yet again, we will “outsource” the key decisions to the “engineering elite”, as Karp calls them.

The second, potentially more serious issue is artificial intelligence. This autumn, an impressive trio of writers — Eric Schmidt, former CEO of Google, Henry Kissinger, former US secretary of state, and Daniel Huttenlocher, an AI professor — launched The Age of AI, a book that warns that “as AI’s role in defining and shaping the ‘information space’ grows, its role becomes more difficult to anticipate [and] the prospects for free society, even free will, may be altered”. 

Yet the development of AI mostly remains in the hands of those “engineers in an isolated corner of the country”. And most of us seem happy to outsource the decisions to them, since we have little idea of what is involved.

Don’t get me wrong; I do not hate the idea of AI. On the contrary, this tech can be an extraordinary force for good, helping doctors to screen for disease, say, or investors to scrutinise corporate balance sheets for risk. Last week, a senior Facebook official insisted to me that AI can also be a powerful weapon to fight misinformation, since it can scan an unimaginably vast amount of data, hunting for abusive posts and removing them.

It is also entirely possible to believe that, on occasion, engineers are better placed to make decisions about the use of AI than the wider public or politicians, given that the latter tend to show a woeful sense of statistics and probability, as the cognitive psychologist Steven Pinker points out in his new book Rationality.

Take self-driving cars. If these kill a small number of passengers, the knee-jerk reaction of most politicians — driven by the public — might be to ban such cars. An engineer might retort that humans actually kill far more humans on roads, making it “rational” to embrace AI-driven cars, even with the inevitable risks.

But in other cases, the engineers may not get it right; they can be blind to social context or mores precisely because they see life through the rigid lens of tech. As the anthropologist JA English-Lueck has observed in a study of Silicon Valley: “In a community of technological producers . . . technology itself [becomes] the lens through which the world is seen and defined.” In this environment, she argues, “‘useful’, ‘efficient’ and ‘good’ merge into a single moral concept”. This, incidentally, is why Karp’s comment seems so important now.

The good news is that people in his position are finally prepared to talk about it. The even better news is that there are experiments under way to combat techie tunnel vision. In Silicon Valley, for instance, Big Tech companies are hiring social scientists. Other innovation hubs show promising signs too. In Canberra, Genevieve Bell, a former vice-president at Intel, has launched an AI institute that blends social science and computer science. These initiatives aim to combine AI with what I call “anthropological intelligence” — a second type of “AI” that provides a sense of social context.

The bad news is that such initiatives remain modest, and there is still extreme information asymmetry between the engineers and everyone else. What is needed is an army of cultural translators who will fight our tendency to mentally outsource the issues to engineering elites. Maybe tech innovators such as Karp and Schmidt could use some of their vast wealth to fund this.

Follow Gillian on Twitter @gilliantett and email her at gillian.tett@ft.com
