
Technology has an interface problem

How has the interface of consumer tech become the thing that legitimizes any and all human experience?

[Illustration: Daniel Salo for Fast Company]

By Zachary Kaiser · 6 minute read

In 2020, Amazon launched its first piece of wearable technology, the Halo. Like an Apple Watch or Fitbit, the Halo could count your steps, track your sleeping habits, and guide your workouts. But it also came with a particularly personal feature Amazon called “tone,” which the company touted on its website by urging potential users to “see how you sound to your partner” or “see how you sound to your waiter.” By listening to the pitch, rhythm, and intensity of a user’s voice, the Halo purportedly could infer a variety of emotional states, including “worried,” “hopeful,” “bored,” and “affectionate.”

[Images: Amazon]

The Halo’s tech was equal parts impressive and problematic. Being able to correctly identify someone’s mood or tone of voice across an infinite variety of social and cultural contexts requires some machine-learning prowess, as well as a combination of voice recognition, natural language processing, and audio spectrogram analysis on the fly. At the same time, the premise is disturbing: What if how you sound doesn’t match up with how you feel? Will you have to modulate your expression to adhere to some kind of computational norm?
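To make that on-the-fly analysis a little more concrete, here is a minimal, entirely hypothetical sketch of what such a pipeline could look like: reduce a voice clip to a spectrogram, extract coarse pitch and intensity statistics, and map them onto the emotion labels Amazon’s marketing uses. Nothing here reflects Amazon’s actual implementation; the features, weights, and classifier are stand-ins for illustration only.

```python
# Hypothetical sketch of a spectrogram-based "tone" pipeline.
# None of this reflects Amazon's actual implementation.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

EMOTIONS = ["worried", "hopeful", "bored", "affectionate"]  # labels from Amazon's marketing

def tone_features(path: str) -> np.ndarray:
    """Reduce a (mono) voice clip to a few coarse acoustic statistics."""
    sr, audio = wavfile.read(path)           # assumes a mono WAV file
    audio = audio.astype(np.float64)
    freqs, _times, spec = spectrogram(audio, fs=sr)
    intensity = spec.sum(axis=0)             # overall loudness per time frame
    pitch = freqs[spec.argmax(axis=0)]       # dominant frequency per time frame
    # "pitch, rhythm, and intensity" collapsed into four summary numbers
    return np.array([intensity.mean(), intensity.std(), pitch.mean(), pitch.std()])

def classify_tone(features: np.ndarray, weights: np.ndarray) -> dict:
    """Map acoustic features to emotion 'probabilities' via a toy linear model."""
    logits = weights @ features              # weights would come from a trained model
    probs = np.exp(logits - logits.max())    # numerically stable softmax
    probs /= probs.sum()
    return dict(zip(EMOTIONS, probs))
```

Even this toy version exposes the interpretive leap the product asks users to accept: a handful of summary statistics stands in for “how you sound,” and a set of trained weights decides what counts as affectionate.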

[Image: courtesy of the author]

The Halo has not been a smash hit, especially compared with the Apple Watch or Fitbit. But its underlying premise—that emotion, like every other aspect of everyday human experience, is a quantifiable piece of data—extends far beyond its own product line. San Francisco-based Feel Therapeutics is “decoding mental health” with its wearable and companion platform, while Sonde Health transforms “any device into a health monitoring device using your voice.” 

In a time when it feels hard to trust anything, interfaces—particularly those that look scientific (it’s data!)—can offer a comforting reprieve. People tend to think about technology in terms of what it does and how they use it, something akin to UX. The interface, or the “visual design,” is seen in many instances as a “skin” for the actual underlying tech, a means to an end and not central to the dynamic interplay between people and the function of technology. I think this is an oversight. It ignores the role of the interface in circumscribing thought and action through concealment, obfuscation, and aesthetics. 

In the case of a product like the Halo, the complexity of its computational infrastructures—in other words, the algorithms, training data, machine-learning inputs and weighted scores, as well as the very fact that none of this is as cut and dried as it appears in the interface—is never shown to users. Instead, they see emojis indicating particular emotions along with numbers indicating the percentage of “significant moments” in which this emotion was present. 
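As a hypothetical illustration of that flattening, consider how thousands of uncertain per-moment scores might be collapsed into the single emoji-and-percentage summary a user actually sees. The threshold, labels, and logic below are invented for the sketch; Amazon has not published how the Halo computes its summaries.

```python
# Hypothetical illustration of the concealment described above: rich,
# uncertain per-moment model output in, one emoji and a percentage out.
EMOJI = {"worried": "😟", "hopeful": "🙂", "bored": "😐", "affectionate": "🥰"}

def render_summary(moment_scores: list, threshold: float = 0.6) -> str:
    """moment_scores: one dict of {emotion: probability} per analyzed moment."""
    # Keep only "significant" moments, per an opaque, product-defined cutoff.
    significant = [max(m, key=m.get) for m in moment_scores
                   if max(m.values()) >= threshold]
    if not significant:
        return "No significant moments detected."
    top = max(set(significant), key=significant.count)  # most frequent top emotion
    share = 100 * significant.count(top) / len(significant)
    return f"{EMOJI[top]} {top}: {share:.0f}% of significant moments"
```

Every judgment call in that reduction (the cutoff, the tie-breaking, what counts as “significant”) disappears behind the emoji.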

It’s at the interface that users can follow Amazon’s insistent prompts to see how they sound. The interface is what makes the Halo seem useful, and it is where the use of the technology actually occurs. This exemplifies so much of emerging consumer tech, which relies on the interface to produce the appearance of utility that will ensure further use. It is the form of the interface that enables the function of the technology. Remember that Amazon tells us to “see how you sound.” In other words, you can’t actually know how you sound until you see it. Today, the interface has become the thing that legitimizes any and all human experience, even emotion.

I’d argue that the interface itself is the ideological bulwark of capitalist technology writ large. The interface serves to naturalize—in Roland Barthes’s parlance—the ideologies, biases, histories, etc., that are embedded within our technologies. The interface is the place where users actually interact with those biases. It legitimizes the ideas and ideals designed into a technology and circulates them in society.

Why does this matter? Well, one of the big-picture reasons for me, and a major motivation for writing my book, is that the interfaces to our most-often-used digital products and services support dangerous, neoliberal ideas about people and society.

Under neoliberalism, individual people are held responsible for problems that they themselves did not necessarily produce. If, for instance, you’re a single mom driving for Uber at night and teaching first graders during the day, it’s actually your fault that you don’t have enough free time to exercise and eat right, and so it’s your problem that your health insurance premiums are going up, or that you can’t get enough life insurance. This mindset ignores the fact that teachers should actually be paid a living wage, and that everyone deserves access to healthcare, healthy food, and enough time to be active.

The interfaces of so many consumer technologies support a strangely familiar individualism—one that tells us that we should seek to optimize ourselves and that seeing ourselves as computers is the best way to achieve this optimization. The way that interfaces circulate this idea through society has all sorts of pernicious effects.


Not only do interfaces make the ideas embedded in consumer technologies appear legitimate, but then users have to legitimize themselves on the terms of those very technologies. You’re not “real” until the numbers prove it. You haven’t exercised enough until your watch’s interface tells you that you have. You aren’t happy until your emotion-tracking platform tells you that you “sound” happy. On an individual scale, this is bad enough. But because this view has its roots in neoliberalism, interfaces to digital technologies become weaponized to further atomize us and entrench the status quo more deeply. 

Consumer tech like the Amazon Halo, which seems so convenience-enhancing to everyday people, can further erode the social solidarity that would otherwise endanger capitalism’s grip on society. When we feel as though all problems are individual problems, we lose the political will to make change; we don’t feel like we need to join labor unions or engage in actions of solidarity with others. We think instead that a hackathon can solve hunger, when what’s necessary is degrowth and a radical redistribution of wealth. And when health is seen as an individual problem, an app is seen as the solution: rather than insisting that every human being deserves to be healthy and to have access to universal healthcare, we’re told to take advantage of the app instead.

Interfaces do this ideological work through their aesthetic (what they show), as well as through what they hide. Technology interfaces that purport to help us understand ourselves—ranging from Sonde Health’s emotion tracking app to the tools used by universities to evaluate faculty “impact”—are designed to look scientific. The authority conferred upon something when it appears scientific, write designers Jessica Helfand and William Drenttel, is “a false authority, particularly because we buy into the form so unquestioningly.”

When designers make things look scientific, they appeal to the supposed ideological neutrality of science, which, of course, is nonexistent. The interface’s power to legitimize certain ideas about people and society also derives from what it hides. Imagine if, for example, the Halo showed users the computational gymnastics required for it to “understand” a user’s tone and emotional state, as explained in its patent application for one of its underlying technologies. Would users be less likely to believe they can actually be understood through computing?

To do this would be anathema to the business goals of companies like Amazon, for which frictionless usability is central to short-term profitability and long-term monopolistic aspirations. While I sincerely believe in showing people what lies beneath the proverbial tip of the iceberg, such an intervention would be, like other technological interventions by artists and activists, consigned to the domain of “art” or “critical design.” And even if an intervention that required some sort of artificial intelligence or machine-learning transparency were to be instituted by regulatory bodies (yeah, right), the impenetrability of these systems would make any effort at “transparency” useless from a general consumer perspective.

Way back in the 1970s, the philosopher Ivan Illich advocated for a democratic determination of design criteria for any and all tools used in a particular society. Today, however, we are bludgeoned with “innovation,” none of which we acceded to, but to which we have become subject as it achieves dominance through marketing that plays on our fears or aspirations.

Contesting the power of the interface is essential to combating this trend, which means, at the very least, taking interfaces less seriously. But perhaps we can use a critical engagement with the interface to build technologies that undermine capital’s power and work toward a socialization of technology itself.
