Who Is We?

Illustration: Jim Cooke (GMG)

This past December, the New York Times ran a perfectly anodyne post promoting a new episode of The Daily, its popular podcast, and found itself mildly embarrassed. The problem wasn’t the actual substance of what it had published, a straightforward accounting of how the national-security establishment’s fixation on Arabs and Muslims had blinded it to the threat posed by right-wing terrorism, but the presentation: Someone had headlined the post “The Rise of Right-Wing Extremism, and How We Missed It.” Who exactly, any number of people asked, reasonably, is “we”?

The Times, ever sensitive to charges that its journalism expresses the narrow perspective of a provincial and self-interested elite—that its coverage is by and for a discernible “we”—quickly amended the headline, and probably very few people have thought about the episode since; the world moves quickly and there is a lot by which to be irritated. Readers put off by the presumptuousness inherent in the headline were certainly onto something, though, and not just in the sense that any definition of “we” extending much beyond the likes of Daily host Michael Barbaro would bring in many people who did not miss the rise of right-wing extremism (among them those targeted by violent misogynists, anti-Semites, and white supremacists), but in the broader sense that any time someone uses the word “we” without specifically defining who they mean, they should be treated with skepticism, if not hostility.

“We” is useful, depending on who’s using it and why, for any number of purposes. Writers use it to, among other things, attribute their own beliefs to others, whether to make them seem more widely held than they are or to make it seem more understandable that they hold them; closely circumscribe a group comprising the right-thinking and respectable and leave those who disagree with them outside it; claim that their failure to do or notice something was shared by all; construct a straw consensus for the purpose of assailing it; deflect blame from themselves or their institutions to Society as such; and elevate particular concerns to the level of universal ones, obviating the distinction between the individual and the collective. When someone uses it, you can presume they’re wrong, or lying.


Contact the Special Projects Desk

This post was produced by the Special Projects Desk of Gizmodo Media. Know something we should know? Email us at tips@gizmodomedia.com, or contact us securely using SecureDrop.

The Times sub-editor who wrote the objectionable headline that embarrassed the paper was doing something less egregious, in one sense, than what the typical “we”-user does, in that they were in fact identifying a discrete group to which the reader could be presumed to belong. In this case, it was the national-security bureaucracy, which, unlike the Times, in fact missed the rise of right-wing extremism; the use of the plural in the headline was an invitation to the reader to identify with that bureaucracy, and with the class that produces the Times and for which it’s produced, and to identify all their interests as coterminous. In another, more important sense, that precise set of presumptions—that the reader and the Times are at one with the national-security bureaucracy, that the interests of the one necessarily reflect onto the others, and that the sort of people who did not miss the rise of right-wing extremism are outside the circle of mutual affinities here—is exactly the problem. The use of language always reflects values, and the values expressed here were wanting. The Times was embarrassed because, unusually, people noticed.

If you were looking for a place to assign blame for nosism—the use of the first-person plural to refer to oneself—you might start with God (“Let us make man in our image, after our likeness,” etc.) or with papal practice, or with Henry II, who is credited with having introduced the practice of English monarchs referring to themselves in the majestic plural in the 12th century; in the end, none of this is convincing. Without getting into the intricacies of theology and Hebrew grammar, for instance, it’s quite possible that God is a trinitarian, or using an intensifier to boast about His greatness in a way that seems more understandable coming from the creator of space and time than from a middle-tier magazine columnist, while popes and monarchs are not using the plural to refer to themselves alone but to themselves and God and/or the polities for which they speak, which may be arrogant, but displays a different sort of presumptuousness than that of a writer claiming to have divined the thoughts of everyone reading them. Even then, all of this is archaic: God hasn’t said anything in a long time, popes have generally passed over the “we” business since John Paul II, and even British monarchs aren’t all that likely to abuse the royal we. Victoria may have famously said, “We are not amused,” but if she did she was probably referring to herself and the ladies of the court, not herself and the realm, or herself and God; unlike Margaret Thatcher, queens keep an eye on the boundary between themselves and everything else.


The real locus of infection is probably the newspaper. In 1877, on hearing that President Rutherford B. Hayes had been using the first-person plural, Senator Roscoe Conkling sniffed that only three sorts of people used “we” when they meant “I”: emperors, editors, and men with tapeworms. This was a new version of an old joke, and the first, so far as could be determined by a thorough investigation into who among the many people to whom it’s been attributed actually came up with it, to mention editorialists.


In their defense, editors who refer to themselves in the plural when endorsing someone for water commissioner or holding forth on The Syrian Question are, in theory, writing not just on behalf of their institution but on behalf of the editorial board, a group of actual people who, in theory, sit around debating what their boss and his friends would like them to write before assigning someone to write it; the joke is still pretty good, though, in part because of how the editorial “we” came to spread through the rest of the paper. Conkling’s crack came two years after Anthony Trollope published The Way We Live Now; just over a year ago, the Times published Home: The Best of The New York Times Home Section: The Way We Live Now, a glossy selection of home-improvement pornography in which who the “we” are who live in such a way as to end up in the Times’s Home section goes without saying. In the years in between, terrible things happened and half the writers in America, perhaps inspired by the “what we talk about when we talk about” headline construction, became populations in themselves, thousands of entire worlds in themselves pecking away at thousands of keyboards.

Just over the last while, I’ve learned, to my surprise, and among many other things, that I tend not to think of Aretha Franklin as an artist of bravado and nerve and daring; that I am Julia Salazar, and that if I dislike her, I also made her; that I assign greater meaning to college football than it is equipped to handle and impart skills and powers to its leaders that are greater than they deserve, and always have; and that I await Marvel movies not simply because they’re all I have left, but because I am the one I have been waiting for. Perhaps all of this is down to an accident of birth; as an American, I’m given to understand, I shunt aside anything that smacks of moral impropriety, denying my ravenous appetites even while indulging them.


It isn’t hard to see how writers end up making this sort of claim. If you have a general statement to make about a large group of people—Habitual “we”-users are mostly either lazy, thoughtless writers who don’t think through the actual implications of what they’re saying or shameless narcissists, say—several alternatives are available to you. You can prove it; this takes a lot of work, though, and some things which seem pretty clearly to be true can’t practically be proved. You can not make the assertion, on the principle that saying things you can’t prove are true is generally to be avoided; this is usually a good idea, but of course the world would be worse off if no one ever offered a theory based on nothing stronger than their own observations and sense of things. You can make the assertion on your own authority, admitting that you can’t prove it but arguing that it accords with what you see and your sense of how the world works; this is dangerous, though, because you might be wrong and someone might say so. You can, finally, use the first-person plural, which offers the advantage of asserting something you can’t prove without the disadvantage of leaving you accountable for it.


Who, after all, can counter a statement like We use “we” too much—we’re lazy and thoughtless and don’t think through the actual implications of what we’re saying. We’re shamelessly narcissistic? It sounds true enough—it’s an argument against interest, with one of the accused leveling the charge—and, perhaps more importantly, it offers nothing and no one in particular to argue against or prove wrong. It has the shape of a statement without exactly being one. Followed to its logical end, this sort of thing results in writers seeming unable to distinguish between the world and themselves.

Three years ago, for no good reason I can remember, I read a review of Drake’s Views by Caitlin White that remains every bit as bewildering to me now as it was then. White starts by inveighing against the idea of consensus on behalf of some completely indeterminate group of people (“When every single review of an album reads the same way, it begins to feel like we’ve lost something. Hell, maybe we’ve gained something, and it’s that we’re not afraid to call out our superstars on their bullshit.”) and then spins off into a dizzying, spectacular array of “we” usage:

Why can’t Drake be a character too? Or rather, why is he the character upon whom we’re most eager to pass judgement? Is it because he reminds us too much of ourselves? Is it because his raps hew the closest to our own introspective self-importance and we don’t want to hear it? We casually brush off lyrics far more specific than every slippery instance of selfishness that Drake openly cops to–and we even fight battles in court that rappers not be equated to their lyrics, like these lines are the facts of their lives. Almost every review turns around and does just that to Views, which, fair enough. I do that too. I recently wrote about re-listening to Weezer in the wake of their newest album, and grappling with the inherent sexism that I missed as a teenaged fan. But Weezer has been pulling this shit for nearly twenty years and people are barely even starting to embark on criticism, while Drake is directly between the crosshairs. Probably because we all sit watching those three dots. It’s easier to critique him for it than look at our own behavior.


If you stare at this long enough, you can just discern the shape of a claim: Unnamed reviewers, White says, are inaccurately treating Drake, a persona, and Aubrey Graham, whose persona it is, as one and the same, interfering with their ability to criticize or even appreciate his (their?) music—which, fair enough. It takes work to figure this out because the basic adversarial dynamic, with “I” over here and “they” over there, is buried under a whole lot of mud, with “I” and “they” subsumed into “we,” so that by criticizing other people, the writer is criticizing herself. Why is she taking the blame for doing something other people are doing, which she thinks they shouldn’t and says she isn’t? Perhaps because by including herself among the people who are doing this she can avoid criticizing anyone specifically, or even pointing out any examples of anyone doing the thing she says they’re doing; who knows.

Looming over the mud here, though, is a towering edifice whose sheer scale, awesome as it is at a slight remove, one can really only appreciate once one gets close enough for it to block out everything else within sight. If White has previously dissolved all boundaries between herself and others, so that she is culpable for what they do, here the operation works in reverse, so that what she does is done by everyone.

Drake made an album about being Drake right now, and yeah, it’s the Drakest shit you’ve ever heard. But what does that sentence even mean? It means this man, a superstar, has communicated an image of who we think he is to such an extent that we use his own name to describe him. Shit, we use his name to describe ourselves.


The lines between the interior and the exterior, and between the writer and the outside world, have been completely erased in a singular act of transcendence. It is incredible to behold.

Most of the time, nosism doesn’t involve anything as psychedelic as a Drake review suggesting that everyone reading it has done something probably only the writer has ever done. A more typical instance would be something like Anne Helen Petersen’s recent essay in BuzzFeed on how burnout became the millennial condition, an examination of why the writer finds herself unable to complete tasks like getting her knives sharpened that quickly arrives at a reasonable conclusion (“Why am I burned out? Because I’ve internalized the idea that I should be working all the time. Why have I internalized that idea? Because everything and everyone in my life has reinforced it—explicitly and implicitly—since I was young”) and then writes it onto a population of 75 million people, largely through the use of false plurals. In 8,000 words, the essay uses “we” about 80 times and “our” about 50, often enough that you can more or less synopsize the entire thing by stringing a few instances together:

It’s the millennial condition. It’s our base temperature. It’s our background music. It’s the way things are. It’s our lives … How, in other words, can I optimize myself to get those mundane tasks done and theoretically cure my burnout? As millennials have aged into our thirties, that’s the question we keep asking … We all know what we see on Facebook or Instagram isn’t “real,” but that doesn’t mean we don’t judge ourselves against it … We put up with companies treating us poorly because we don’t see another option. We don’t quit. We internalize that we’re not striving hard enough. And we get a second gig … We use Fresh Direct and Amazon because the time they save allows us to do more work … Burnout isn’t a place to visit and come back from; it’s our permanent residence … We are beginning to understand what ails us, and it’s not something an oxygen facial or a treadmill desk can fix ...


All of this seems true enough on some level (I certainly relate) and it’s at least possible that one could, through rigorous empirical study, prove some of it. You could establish that burnout, however defined, is the way things are for millennials; that they judge themselves against what they see on social media; that they read poor treatment from employers as evidence that they’re not working hard enough, and so passively accept poor treatment, etc. It would, though, be difficult, and as is, it’s a run of unproven and/or unprovable assertions to which there’s no way to assign a truth value.

This is, interestingly, not at all the case for either class of statement if you substitute singular pronouns, or ones referring to tightly defined groups, for the broad, collective ones. If one were to write, “Burnout is my condition. It’s my base temperature. It’s my background music. It’s the way things are. It’s my life,” it would, assuming it were true, be an accurate statement, attested to by the most authoritative possible source. The same would be true if one were to write, “My friends and I are beginning to understand what ails us, and it’s not something an oxygen facial or a treadmill desk can fix.” With specific enough descriptors, one could considerably widen the scope of who “we” are here without saying anything that isn’t true.


In the story as published, though, the scope is being widened as much as possible, with the characteristics and experiences of a subset of unknown size being ascribed to the entire set. You can fit a lot in the vast space between the specific and accurate statements that might have been made and the universalizing and inaccurate ones that were—the two-thirds or so of millennials who didn’t attend college, for instance, many and perhaps most of whom probably don’t especially relate to the concerns on which the piece centers, or the members of marginalized groups who often have a very different experience of burnout than the one Petersen describes. (You could certainly fit the fifth or so of U.S. millennials who don’t use Facebook, or the two-fifths who don’t use Instagram.) They seem not to quite have fit in the story, or in its concept of what a millennial is. The use of language can tell you a lot about what, and who, matters.

Whatever the problems are with universalizing the experience of a circumscribed group, it has the significant merit of involving a group, and thus something outside the writer’s immediate experience. The worst uses of the first-person plural don’t even do that, instead measuring the dimensions of the inside of the writer’s head and presenting them as those of the cosmos. It’s so common you might not even notice it; it’s easy to believe the people doing it might not, either.


Often enough, this is harmless if slightly absurd, as when New York’s Will Leitch, a prodigious we-user, positions himself as the tribune and voice of all sports fans, uniquely able not just to speak for them but to read what is happening in their interior lives, and submits that we don’t actually much care about whether athletes win or lose. (“We don’t drag these guys nearly as much as we used to: We’ve gone from obsessing over goats to obsessing over GOATs. We forgive so much more quickly now … as we learn more and more about our athletes and the world they inhabit, we’re growing more comfortable with winning, and losing, not actually being everything.”) At other times it’s somewhat less harmless, as when Leitch, at the end of a piece lambasting Barstool Sports in which he admits to having avoided doing so in the past because he didn’t want to provoke their sociopathic followers, declares, “It’s time to stop kidding ourselves.”


Exactly who has been kidding themselves goes unmentioned (I wonder if the journalists who have been subjected to the worst, most vicious kinds of racist and misogynist harassment for doing their jobs and not avoiding writing about Barstool—among them those at Deadspin, where Leitch and I both used to work—have been), but a neat flattening effect is produced: The person admitting he’s done something wrong gets the benefit of confession while somewhat lessening the degree of his guilt by noting just how many other people, possibly up to and including everyone, are complicit in the same wrongdoing.

Recently, the Times’s Wesley Morris demonstrated just how far one can go along these lines, and just how far they can take you toward the actively harmful, in an essay about Leaving Neverland and the renewed salience of the child-molestation accusations that defined Michael Jackson over the last 20 years of his life. At points, Morris comes close to explicitly interrogating his own inability or unwillingness to reconcile the widely known truth about Jackson with what Jackson meant to him; every time he does, he pulls back, ascribing his own inner conflict to Society:

The story was that Jackson never molested anybody. And we stuck to it, and it stuck to him. And the question now, of course, is what do we do? It’s the question of our #MeToo times: If we believe the accusers (and I believe Wade and James), what do we do with the art? With Jackson, what can we do? [...] Jackson provided us an early occasion to ask the question about the art without ever realizing it was being asked. We simply lived with it, with the possibility of his guilt, and the many compartments we make to contain everything he was: the conscientious enthusiasm for and the comedy of him, the tragedy he so obviously represents.


That this isn’t true (survey data show that four in five Americans didn’t stick to the story, and, more basically, anyone with the least familiarity with the issue knows that Jackson was canceled before canceling was anything anyone did) doesn’t seem to especially matter, any more than does the responsibility to describe the world as it is: to account for truths easily accessed via a search engine, to reckon with the existence of the many people who realized more than 25 years ago just what questions the accusations against Jackson raised, to observe a distinction between people who did not passively approve of child abuse and those who did. What matters is that the external world does nothing more than reflect what happened (and didn’t) in the writer’s mind.

You can most easily locate the danger of the ill-defined or undefined “we” precisely here, at the point where it begins to blot out the world. Solipsism can be funny, and its expression in the plural a mere tic or a bit of bad practice; it can also be so all-encompassing as to amount to a denial that there is an objective reality to be apprehended. All the strange and beautiful things in the world begin to dissolve as the singular expands toward the scale of the collective. So do all the reasons one might be wrong, all the ways things might seem to others, and the things you don’t know—everything that marks a difference between one thing and another. Let it go long enough and you’re left with the person saying a thing and little else, until in the end there is only an “I” in the shape of an “us,” unable to hear what anyone else is saying if not unaware anyone is there, with no way to ask and no interest in asking the simple question that could and should be asked any time the word is used: Who the fuck is we?