More or less a typical TED Talk: a space rocket, a drum kit, Einstein, Darwin, etc. Photo credit: Steve Jurvetson, “Reflecting on the Joy of Rockets,” TED Talk, March 10, 2007.

…an inefficient democracy is always preferable to an efficient dictatorship.

Evgeny Morozov, To Save Everything, Click Here (2013).

In the first installment of my “Tedd Talk,” which I called “The Darkening of Silicon Valley,” I sought to draw attention to a certain body of tech industry introspection and related reporting published across 2017, which seemed to herald the dramatic dimming of Silicon Valley’s signature internet-centrist, techno-utopian solutionism.

The basic point of this piece, published here on InDarkTimes four days before the breaking of the Facebook–Cambridge Analytica story, and largely inspired by Evgeny Morozov, was to show how a certain naïve and utopian optimism about the ability of tech solutions to banish our societal problems—presumably by transforming our user experience—has definitively “jumped the shark.”

I contended that this occurred on the day in late 2017 when executives from the tech giants were brought before Congress to account for their role and responsibility in Russian interference in the 2016 presidential election.

Beyond registering the steady drumbeat of hapless mea culpas coming from executives of the large internet and OEM platforms both before and after that day, however, I also sought to marry this news digest with a contemporaneous set of dire warnings about the reach and significance of Big Data algorithms across all sorts of economic sectors and dimensions of human flourishing in modern societies.

To this end, I foregrounded some remarks by Elon Musk before the National Governors Association (“AI is a fundamental risk to the existence of human civilization”) and former financial industry “quant” Cathy O’Neil’s account of the growing shadow of toxic feedback loops in the “big data economy” in her book “Weapons of Math Destruction.” I concluded by saying that what united the two was a shared and urgent concern about the dangers of handing over our human future, as an open site for cultural and political decisions, to the vagaries of wholly cybernetic functions (algorithmic decision-making).


At the very start of the prior piece, however, I didn’t just say that my intention was to show that Silicon Valley had jumped the shark—I said that I wanted to characterize “the meaning of Silicon Valley as a comprehensive phenomenon—cultural, socio-political, and economic—now that it has definitively jumped the shark.” Having identified the idea of Silicon Valley as such with the utopia of technological solutionism, I thought that it should be possible (post-shark-jumping, of course) to specify a set of its generally dystopian characteristics and go on to hazard a very specific interpretation of the culturally significant idea that Silicon Valley represents.

As will become clearer below and also especially in the final installment, the interpretation that I want to offer here, taken from somewhere entirely outside the insular, self-referential and self-congratulatory technology business milieu, is that the best way to think about Silicon Valley’s solutionist dystopia is in terms of what I wish to call the “hyper-capitalist dictatorship over needs.”

Morozov: Not All Bugs are Bugs; Some Bugs are Features

Before taking this heretical step (defining Silicon Valley by means of concepts that are not its own), it is useful to take a closer look at aspects of Evgeny Morozov’s critique of solutionism in To Save Everything, Click Here.

To properly understand Click Here, one has to grasp the primary emotional register of Morozov’s 2013 book, which is a very distinctive sort of exasperation. Consistently across the work, Morozov reserves his greatest ire for the aspect of “drinking one’s own Kool-Aid” that always seems to be on display when Silicon Valley types open their mouths. “In this book,” he writes in the introduction, “I want to question both the means and the ends of Silicon Valley’s latest quest to solve problems.” Where Silicon Valley always directs our attention to solutions, Morozov is resolute in continually returning to the scene of the problems themselves. “…in our political, personal, and public lives, not all bugs are bugs; some bugs are features.” Or this: “…how problems are composed is every bit as important as how problems are resolved.” Or finally, “…inefficiency, ambiguity, and opacity—whether in politics or in everyday life—are not necessarily problematic.”

Throughout the book, Morozov details examples in various domains that expose the dangerous reductionism at the heart of tech solutionism, and the various unintended consequences and side effects that flow from it. There is the notion that everything gets better when performed socially, including even throwing out the trash (via something called the BinCam app). Is it really a good idea to have an app where people can monitor one another’s recycling compliance online? Do we really want to introduce game incentives into something that was previously motivated primarily by people’s sense of obligation to others?


There is also the assumption that tech solutions that allow for decentralized decision-making are better than those that rely on traditional centralized or hierarchical decision-making. Morozov calls out Steven Johnson’s book “Future Perfect” for asserting that Kickstarter should replace the NEA in funding the arts. But this assertion, Morozov points out, relies on a callow internet-centrism—“the putative values of the internet, whether it be openness or participation, become prized yardsticks for assessing every field of human endeavor, regardless of its own goals or standards.”

The fact that Kickstarter offers a more efficient platform for some project to raise money (bypassing the bureaucrats) doesn’t mean that it will yield better and more innovative art—it may just as easily fund the trivial (cat videos, etc.). Viral mass support for popular projects might not be the best thing for the arts overall, especially since it isn’t clear that this sort of populist appeal results in anything like procedural justice (PR is easily used to game the system), and it’s less likely that new histories of World War I (not popular) or exposés of the oil industry (which need deep-pocketed legal support) are going to get made.

In another especially fascinating chapter, Morozov makes similar arguments about openness and transparency, values that have still not been sufficiently reflected upon, even after the 2016 election. Arguing that the value of internet-enabled transparency is not absolute, Morozov gives examples to show (already in 2013) that openness and transparency do not necessarily lead to a more vibrant and responsible civic life.

Bouncing, Highlighting, Shading, and Snowing

The overall point is that information systems which provide access to aggregate data also mediate that data, and so are more like houses of mirrors than glass houses. A shallow, optimistic solutionism underestimates the total set of consequences and effects. Morozov discusses the phenomenon called “bouncing,” where information collected for one purpose is used for a different purpose in another place, where it gets framed differently. He gives the example of “eightmapping,” where, say, you visit sites like La Raza, Planned Parenthood, or the Council on American-Islamic Relations, and then your home address and/or your place of business gets publicized by right-wing goons.

He also discusses the related phenomena of “highlighting” and “shading,” where information gets played up and/or virally propagated in such a way that some parts of it are given an inappropriate prominence (highlighted) to the detriment of other parts that get played down (shaded), leading to unfair or unbalanced reputational harm. He also talks about “snowing,” where a knee-jerk commitment to openness and transparency takes on the character of “the audit society” and leads to so much information being made available that nobody (who is not a bot) can make any real sense of anything anymore. Data sets nowadays are increasingly ends in themselves; by this I mean that we need not have questions to ask of them or specific purposes for them. Quants study the data using sophisticated programs that find patterns, and those patterns yield proxy data that is useful for some sort of marketing-related purpose.

With respect to the rest of us (real people who think of ourselves as some sort of agents of our own fate), Morozov writes, it might be the case that “…we should not look to new ways to expose people to every nook and cranny of the decision-making process, as a solution to people’s negative views of government.” In the end, there are right varieties of transparency, which lead to the promotion of fairness in society and contribute to effective and accountable institutions, and there are wrong varieties, which can lead to “populism, thwarted deliberation, and increased discrimination.”

Morozov: How to Break Politics by Fixing It

Morozov’s great pageant of case examples remains a fascinating read. Naturally, some of them have held up better than others. For example, he pans the notion that social media will lead to wrenching social transformations (if only it were so). However, in my view the thing that is of most enduring value in the book is the thread, running throughout, that connects the critique of Silicon Valley’s solutionism to an overall hyper-capitalist reductionism concerning the human condition.

Tech solutionism, per Morozov, must be seen both to augment and to diminish our human reality. It does so by making our distinctly human needs ever more responsive to the law of supply and demand. In the abstract, creating greater efficiencies is all well and good in the domain of consumer marketing—but it’s another thing, as Morozov points out repeatedly, when this mentality, which is really about using tech to disrupt markets for fun and profit, infiltrates our politics and other spheres of non-market-oriented human value.

Solutionists, Morozov writes, “do not understand that politicians are not like inflatable mattresses or hairdryers that can be easily ranked…as we are wont to do with Amazon purchases.” Tech solutionist interventions in our politics, whether intentional or arriving as side effects, all have the same basic flaw, per Morozov: they treat politics like commercial markets. “The Amazon customer is someone who prizes immediate payoff and isn’t oriented toward making sacrifices in the name of others. Try telling the shopper that not all of his or her desires can be satisfied, because someone else has equally compelling interests and those have to be taken into account as well,” Morozov says. The market simply doesn’t work that way. The problem is not with the tech solutions per se; it has to do with exhaustively recognizing human beings everywhere and in every way as market actors.


The danger here is that where we import a pervasive consumer mentality into spheres of life that generally operate otherwise, under the guise of improving efficiency, we end up with “resentful citizens whose expectations aren’t met, and who don’t develop the concern for the public good” that, according to Morozov and Catherine Needham, “must be the foundation of democratic engagement and support for public services.” The result of all of this is the recasting of “deeply political life altering issues” in such a way that the ability of communities and polities to advocate for their specific needs satisfaction is effectively cut off. This is what Morozov means when he says, “what works in Palo Alto doesn’t necessarily work in Penang.” He is not pointing out, like some sort of Silicon Valley marketing guru, something that gets labeled in the industry as a “localization problem.” Rather, it is my contention that he is pointing toward what I am going to call “the hyper-capitalist dictatorship over needs.”

The same notion can also be seen just by looking at the Wiktionary definition of “solutionism.” Two related definitions are provided. The first is “the belief that all difficulties have benign solutions, often of a technocratic nature.” The second is “the providing of a solution or solutions to a customer or client, sometimes before a problem has been identified.” In these two definitions, taken together, we can see where tech utopian optimism meets solutionism: the complexity of human problems is downplayed in favor of the marketing of technology that has already decided for us, at some level, what the problem is, in order to solve it (in a way that successfully productizes and thus monetizes it).

If this is still too abstract, just consider the dread you feel anytime you are confronted with a customer service telephone tree. The company you are calling is proud of all the options they have for you to press on your phone keypad. There are twenty-six of them! This shows their commitment to customer service. They have twenty-six ways to meet the needs you might have that they care to meet. Why then are you swearing at the recorded messages? Maybe it has something to do with the fact that you don’t get to decide what your needs are. But don’t worry…just keep blurting “operator!” “operator!” At some point someone is sure to pick up, rather than send you back to the beginning of the list of sanctioned options.

Elon Musk’s Allegory of the Strawberries, Revisited

In the preceding sections of this post, I have attempted to leverage Evgeny Morozov’s critique of tech solutionism in “To Save Everything, Click Here” in order to draw a first sketch of Silicon Valley’s distinctive dystopia, what I want to call “the hyper-capitalist dictatorship over needs.” However, there were some specific reasons why, in the first installment, I elected to marry this account of “tech solutionism, social media, and political polarization” with a complementary account of the growing awareness of the dangers represented by Big Data analytics as such.

This parallel section of the prior post was thus mostly about unpacking Elon Musk’s recent dire warning that “AI is a fundamental risk to the existence of human civilization.” I relied on science fiction writer and essayist Ted Chiang’s discussion of Musk’s remarks to say that “the threat comes from the alignment of the spirit of Silicon Valley entrepreneurial capitalism, big data and machine learning technology, and the immense wealth, power, and reach of the large tech companies.”

I concluded by saying that the thread connecting Musk’s “allegory of the strawberries” (where a strawberry-production-optimizing AI covers the whole earth with strawberry fields) and Cathy O’Neil’s warning about ubiquitous algorithmic decision-making is a common concern over the dangers of handing over the future, as an open site for human cultural and political decisions, to the vagaries of wholly cybernetic functions deployed in the service of the goals of advanced corporate capitalism.

Before moving on to a direct characterization of the Silicon Valley dystopia as the hyper-capitalist dictatorship over needs in the final two installments of this post, I would like to offer a short excursus that brings Musk’s concern about the dangers of “the destruction of human civilization as an unintended side effect” up to the same level of reflection as was hopefully accomplished here with the material from Evgeny Morozov’s To Save Everything, Click Here.

The Strawberry-Picking AI’s Lack of Insight

In his December 28th, 2017 BuzzFeed article, Ted Chiang explains what is likely meant by Elon Musk’s comment that “AI could bring about the extinction of humanity purely as an unintended side effect.”

In psychology, Chiang writes, “…the term insight is used to describe a recognition of one’s own condition…it describes the ability to recognize patterns in one’s own behavior. It’s an example of metacognition, or thinking about one’s own thinking…” Such insight, Chiang adds, is what Musk’s strawberry-picking AI lacks.

Further characterizing algorithmic decision-making’s lack of insight, Chiang says that what AIs do not and cannot do is take a step back and ask “…whether their current course of action is really a good idea.” When you marry super-intelligent machines with no-holds-barred capitalism, therefore, you get what I am calling hyper-capitalism. The resulting hyper-capitalism “actively erodes this capacity in people, by demanding that they replace their own judgement of what good means with whatever the market decides.”

Some Reflections on the Rise of Cybernetic Functions

Reading Ted Chiang on Elon Musk reminded me of something written by one of my old teachers at the New School for Social Research, the late, great Hans Jonas. Jonas died early in 1993 at age 89. I actually attended his final lecture in late ’92, where he ‘said goodbye’ in the way that only an existential philosopher can (let’s just say everybody ends up really moved, but nobody cries). Jonas was one of those intellects who is hard to pigeonhole, generally writing untimely books and essays that have had a habit of hanging around and being influential more than half a century later. He wrote pioneering works on Gnostic religion, bioethics, philosophy and technology, deep ecology and existential philosophy among other things.

In 1953, however, Hans Jonas was mostly thinking about the 1782 steam engine of James Watt, and in particular Watt’s “flyball governor,” an auxiliary device that Jonas regarded as the first “negative feedback, self-regulating control mechanism.” Jonas was writing a paper called “A Critique of Cybernetics” where, along with the rise of automatic control mechanisms in the production and application of power, he was as interested in the language of cybernetic explanation as he was in what he called “the mathematics and technology of communication engineering and automatic control.”


The beauty of the flyball governor’s self-regulation, Jonas says, “is in the fact that the machine performs it as part of the output to be controlled, and through the very excess or deficiency which are the objects of its corrective action.” When functioning properly, he continues, “it will keep the performance around a mean value by reacting alternatively to the plus or minus departures from it.” In 1868, Jonas tells us, none other than James Clerk Maxwell gave the first theoretical account of such a mechanism in a paper he read before the Royal Society, called “On Governors.” Finally, in 1948, Norbert Wiener of MIT gave the name “cybernetics” to the study of such functions (from the Greek kybernetes, meaning helmsman or pilot).
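For readers who like to see a mechanism in miniature, here is a minimal sketch (in Python, with invented numbers—not a model of any actual engine) of the kind of negative feedback Jonas is describing: the corrective action is driven entirely by the measured excess or deficiency, and it keeps the output hovering around a mean value.

```python
# A minimal sketch of negative-feedback regulation, in the spirit of
# Watt's flyball governor. All numbers are illustrative.

SET_POINT = 100.0   # desired engine speed (arbitrary units)
GAIN = 0.5          # how strongly we correct departures from the mean
LOAD_DRAG = 5.0     # constant disturbance pulling the speed down

speed = 80.0        # starting speed
for step in range(20):
    error = SET_POINT - speed        # the "excess or deficiency"
    correction = GAIN * error        # corrective action proportional to error
    speed += correction - LOAD_DRAG  # feedback fights the disturbance
    print(f"step {step:2d}: speed = {speed:6.2f}")

# The speed settles near (not exactly at) the set point: classic
# negative feedback keeping performance "around a mean value."
```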

Since the dominant aspect of the early industrial revolution was power engineering, Jonas points out, the significance of Watt’s governor was simply in its ability to assure the steady running of the engine. But by the time that Jonas was writing in the early 1950s, it had already become clear that the significance of automatic control went well beyond the mere production and application of power. While the first phase of the industrial revolution was all about replacing human bodies in the provision of the moving force, the rise of servo-mechanisms such as thermostats, self-correcting steering in ships, the automatic firing of anti-aircraft guns, target-seeking torpedoes, automatic telephone exchanges, and the first computers were all about the superseding of human higher functions at later stages of industrialization.


But as mentioned before, Jonas’ primary concern isn’t so much the advent and significance of control theory as the rise of a “technical literature devoted specifically to cybernetics.” Jonas is primarily interested in the emerging application of cybernetical explanations per se. While there has always been a strong tendency, Jonas writes, to interpret human functions in terms of the artifacts that take their place, and artifacts in terms of the replaced functions, “…such analogies were left mainly to the imaginative writer and played no part in science”—the power engine is a slaving giant; the human or animal body is a fuel-burning power machine, etc.

But with the burgeoning of a new cybernetical discourse, cybernetical explanations and accounts of human behavior, thought processes, and culture were suddenly becoming ubiquitous. Reading this stuff, Jonas is confused and concerned. In order to try to get a grip on the cybernetical concepts of purpose and teleology, he scans the above-mentioned scientific literature.

In one case, two examples are presented as similar. One concerns a servo-mechanism, the other a neurological disturbance. In the first example, a feedback mechanism is described where the feedback is inadequately damped and results in over-corrections (thereby becoming, in effect, positive instead of negative). Basically, the description is of a machine designed to hit a moving target that ends up overshooting in ever larger oscillations, and thus misses the goal.

In the second example, Jonas writes, “…a patient with a brain injury (purpose tremor) is asked to carry a glass of water from the table to his mouth,” and “the hand carrying the glass will execute a series of oscillatory motions of increasing amplitude, so that the water spills, and the purpose will not be fulfilled.”
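To make the machine half of this comparison concrete, here is a small illustrative sketch (again in Python, with invented parameters): when the corrective gain is excessive and undamped, each correction overshoots the mark, and the departures from the target grow rather than shrink.

```python
# An illustrative sketch of the over-corrected feedback loop described
# above: with too much gain and no damping, each correction more than
# undoes the error, so the oscillations grow in amplitude.

TARGET = 0.0
GAIN = 2.5          # excessive gain: each correction overshoots the error

position = 1.0      # initial departure from the target
for step in range(10):
    error = TARGET - position
    position += GAIN * error   # over-correction: lands past the target
    print(f"step {step}: position = {position:+.2f}")

# Each pass lands farther from the target than the last (-1.50, +2.25,
# -3.38, ...): feedback meant to be negative behaves, in effect, as
# positive feedback, and the goal is never reached.
```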

Jonas wants to know precisely what is going on when it is suggested that these two cases are somehow similar. The patient, Jonas says firmly, “…himself wills to bring the glass to his mouth, that is, he wants it there. This end, motivating the action from the start, is intrinsic.” In the case of the machine, “missing the goal means missing our goal, the goal for which it has been designed, namely, by us, having none itself.”

Following this, there is a very nuanced discussion about what it means for an action to have a purpose. Philosophy buffs will read in these lines a series of reverberations: Platonic, Aristotelian, Cartesian, Kantian. Resolute that having a purpose means something like having an intrinsic goal, Jonas says that while a clock is designed with a purpose, “the mechanism is orderly, but not purposive. There is no specific final condition toward which the movement of the clock strives.”

Hans Jonas and the Target-Seeking Torpedo

But what about other sorts of machines, ones that are more cybernetical? As we said in the beginning, cybernetics derives from the Greek word for helmsman or pilot—so what about the purposiveness of target-seeking torpedoes?

The torpedo performs compensatory action with respect to the target, it is true. But while this negative-feedback behavior may appear goal directed, it is still not properly teleological, even if it is, as such, “functionally similar to the effector/receptor equipment, the capacity for motility, and perception in motivated animal behavior.” The fact that the target-seeking torpedo is self-adaptive and comprises a whole of parts that are designed to work together to achieve this result, still does not make the torpedo’s behavior purposive.

Despite the cybernetical contention to the contrary, the whole of parts does not constitute a purpose, because “…it cannot be said to have an identity that can be the bearer of purpose, the subject of action, and the maker of decisions.” Jonas did not live to take on the example of an unmanned drone, where the target-seeking torpedo now has a pilot as well. But even in such a case, it should be clear that the pilot is not, strictly speaking, a part of the drone mechanism. “The feedback of a receptor/effector system,” Jonas says, “…only lends itself to purposive action if and when interposed between them there is a will or interest or concern.”

The whole cybernetic doctrine of teleological behavior, Jonas concludes, is reducible to a perhaps willful confusion of “serving a purpose” with “having a purpose.” And while there is some value to be had, Jonas concedes, in looking at the sensor/effector combination in animals as representing a feedback pattern that conforms to the model evolved in cybernetics, the model nevertheless falls short with respect to one key aspect. Living things, Jonas reminds us, “…are creatures of need. Only living things have needs and act on needs.”

On Handing Over Our Human Future to Cybernetic Functions

This rather long take on Hans Jonas’ reflections on the rise of cybernetic explanations in the scientific literature of the early 1950s was sparked by Ted Chiang’s comments on the dangers of techno-capitalist cybernetic functionality and its pervasive lack of insight or metacognition. It is my strong conviction that Hans Jonas’ unease about slipping easily back and forth between “serving a purpose” and “having a purpose” comes from the same place as the Musk–Chiang concern about the growing power of artificial intelligence and algorithmic decision-making in what until now have been properly human domains.

Silicon Valley’s Dystopia: The Hyper-Capitalist Dictatorship Over Needs

The two sides of the analysis offered here—the story of the Silicon Valley solutionists’ reductionism concerning the human condition, and the story of the dangers of technological or cybernetic (algorithmic) capitalism—are offered as two sides of the same coin. Together they are meant to expose the essential ways in which high technology capitalism is increasingly limiting our ability as human beings to advocate, both individually and collectively, for our needs satisfaction. In this sense I have sought to present these joined narratives as together containing the basic elements and conditions of what I want to call the emergent Silicon Valley dystopia, the “hyper-capitalist dictatorship over needs.”

In the third installment of this post, entitled “Silicon Valley Rant Part III: Hyper-Capitalist Tech Solutionism & Human Needs,” I renew my critique of tech solutionism’s effects on democracy, and turn to a consideration of theories of human needs as a condition for talking about tech solutionism as a “dictatorship over needs.”

In the final installment of this post, called “Silicon Valley’s Emerging Dystopia: The Hyper-Capitalist Dictatorship over Needs,” I rely upon the early work of the post-Marxist philosopher Agnes Heller to offer a specific interpretation of the telos of Silicon Valley’s technological solutionism.