Americans watch roughly two hundred billion hours of TV every year. That represents about two thousand Wikipedia projects’ worth of free time annually. Even tiny subsets of this time are enormous: we spend roughly a hundred million hours every weekend just watching commercials. This is a pretty big surplus. People who ask “Where do they find the time?” about those who work on Wikipedia don’t understand how tiny that entire project is, relative to the aggregate free time we all possess. One thing that makes the current age remarkable is that we can now treat free time as a general social asset that can be harnessed for large, communally created projects, rather than as a set of individual minutes to be whiled away one person at a time.
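The back-of-the-envelope arithmetic behind these figures is easy to check. The sketch below is purely illustrative; the round numbers are the estimates given above, including the assumption that building all of Wikipedia took on the order of one hundred million hours of human effort:

```python
# Back-of-the-envelope check of the figures above (all rough estimates, not exact data).
US_TV_HOURS_PER_YEAR = 200e9    # ~200 billion hours of TV watched annually
WIKIPEDIA_TOTAL_HOURS = 100e6   # assumed: ~100 million hours to build all of Wikipedia

wikipedias_per_year = US_TV_HOURS_PER_YEAR / WIKIPEDIA_TOTAL_HOURS
print(f"Annual TV time equals roughly {wikipedias_per_year:,.0f} Wikipedia projects")

# Commercials alone: ~100 million hours per weekend is itself one Wikipedia's worth.
AD_HOURS_PER_WEEKEND = 100e6
print(f"One weekend of ad-watching = {AD_HOURS_PER_WEEKEND / WIKIPEDIA_TOTAL_HOURS:.0f} Wikipedia(s)")
```

On these assumptions, a single weekend of commercial-watching is one entire Wikipedia; a year of TV is two thousand of them.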
Society never really knows what to do with any surplus at first. (That’s what makes it a surplus.) For most of the time when we’ve had a truly large-scale surplus in free time—billions and then trillions of hours a year—we’ve spent it consuming television, because we judged that use of time to be better than the available alternatives. Sure, we could have played outdoors or read books or made music with our friends, but we mostly didn’t, because the thresholds to those activities were too high, compared to just sitting and watching. Life in the developed world includes a lot of passive participation: at work we’re office drones, at home we’re couch potatoes. The pattern is easy enough to explain by assuming we’ve wanted to be passive participants more than we wanted other things. This story has been, in the last several decades, pretty plausible; a lot of evidence certainly supported this view, and not a lot contradicted it.
But now, for the first time in the history of television, some cohorts of young people are watching TV less than their elders. Several population studies—of high school students, broadband users, YouTube users—have noticed the change, and their basic observation is always the same: young populations with access to fast, interactive media are shifting their behavior away from media that presupposes pure consumption. Even when they watch video online, seemingly a pure analog to TV, they have opportunities to comment on the material, to share it with their friends, to label, rate, or rank it, and of course, to discuss it with other viewers around the world. As Dan Hill noted in a much-cited online essay, “Why Lost Is Genuinely New Media,” the viewers of that show weren’t just viewers—they collaboratively created a compendium of material related to that show called (what else?) Lostpedia. Even when they are engaged in watching TV, in other words, many members of the networked population are engaged with one another, and this engagement correlates with behaviors other than passive consumption.
The choices leading to reduced TV consumption are at once tiny and enormous. The tiny choices are individual; someone simply decides to spend the next hour talking to friends or playing a game or creating something instead of just watching. The enormous choices are collective ones, an accumulation of those tiny choices by the millions; the cumulative shift toward participation across a whole population enables the creation of a Wikipedia. The television industry has been shocked to see alternative uses of free time, especially among young people, because the idea that watching TV was the best use of free time, as ratified by the viewers, has been such a stable feature of society for so long. …
Johannes Gutenberg, a printer in Mainz, in present-day Germany, introduced movable type to the world in the middle of the fifteenth century. Printing presses were already in use, but they were slow and laborious to operate, because a carving had to be made of the full text of each page. Gutenberg realized that if you made carvings of individual letters instead, you could arrange them into any words you liked. These carved letters—type—could be moved around to make new pages, and the type could be set in a fraction of the time that it would take to carve an entire page from scratch.
Movable type introduced something else to the intellectual landscape of Europe: an abundance of books. Prior to Gutenberg, there just weren’t that many books. A single scribe, working alone with a quill and ink and a pile of vellum, could make a copy of a book, but the process was agonizingly slow, which kept the output of scribal copying small and its price high. At the end of the fifteenth century, a scribe could produce a single copy of a five-hundred-page book for roughly thirty florins, while the Ripoli Press would, for roughly the same price, print more than three hundred copies of the same book. Because copies were so laborious to make, most scribal capacity was given over to producing additional copies of extant works. In the thirteenth century Saint Bonaventure, a Franciscan monk, described four ways a person could make books: copy a work whole, copy from several works at once, copy an existing work with his own additions, or write out some of his own work with additions from elsewhere. Each of these categories had its own name, like scribe or author, but Bonaventure does not seem to have considered—and certainly didn’t describe—the possibility of anyone creating a wholly original work. In this period, very few books existed, and a good number of them were copies of the Bible, so the idea of bookmaking centered on re-creating and recombining existing words far more than on producing novel ones.
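The price comparison implies a collapse in per-copy cost of more than two orders of magnitude. A quick calculation, using only the round figures quoted above, makes the scale of the change concrete:

```python
# Per-copy cost of a five-hundred-page book, from the round figures above.
SCRIBE_PRICE_FLORINS = 30   # price of one scribal copy
PRESS_PRICE_FLORINS = 30    # roughly the same price at the Ripoli Press...
PRESS_COPIES = 300          # ...but for more than three hundred copies

scribe_cost_per_copy = SCRIBE_PRICE_FLORINS / 1
press_cost_per_copy = PRESS_PRICE_FLORINS / PRESS_COPIES

print(f"Printing cut the per-copy cost by a factor of about "
      f"{scribe_cost_per_copy / press_cost_per_copy:.0f}")
```

A three-hundredfold drop in the cost of a copy is what turned books from treasures to be reproduced into commodities whose contents could afford to be new.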
Movable type removed that bottleneck, and the first thing the growing cadre of European printers did was to print more Bibles—lots more Bibles. Printers began publishing Bibles translated into vulgar languages—contemporary languages other than Latin—because priests wanted them, not just as a convenience but as a matter of doctrine. Then they began putting out new editions of works by Aristotle, Galen, Virgil, and others that had survived from antiquity. And still the presses could produce more. The next move by the printers was at once simple and astonishing: print lots of new stuff. Prior to movable type, much of the literature available in Europe had been in Latin and was at least a millennium old. And then in a historical eyeblink, books started appearing in local languages, books whose text was months rather than centuries old, books that were, in aggregate, diverse, contemporary, and vulgar. (Indeed, the word novel comes from this period, when newness of content was itself new.)
This radical solution to spare capacity—produce books that no one had ever read before—created new problems, chiefly financial risk. If a printer produced copies of a new book and no one wanted to read it, he’d lose the resources that went into creating it. If he did that enough times, he’d be out of business. Printers reproducing Bibles or the works of Aristotle never had to worry that people might not want their wares, but anyone who wanted to produce a novel book faced this risk. How did printers manage that risk?
Their answer was to make the people who bore the risk—the printers—responsible for the quality of the books as well. There’s no obvious reason why people who are good at running a printing press should also be good at deciding which books are worth printing. But a printing press is expensive, requiring a professional staff to keep it running, and because the material has to be produced in advance of demand for it, the economics of the printing press put the risk at the site of production. Indeed, shouldering the possibility that a book might be unpopular marks the transition from printers (who made copies of hallowed works) to publishers (who took on the risk of novelty).
A lot of new kinds of media have emerged since Gutenberg: images and sounds were encoded onto objects, from photographic plates to music CDs; electromagnetic waves were harnessed to create radio and TV. All these subsequent revolutions, as different as they were, still shared the core of Gutenberg economics: enormous investment costs. It’s expensive to own the means of production, whether it is a printing press or a TV tower, which makes novelty a fundamentally high-risk operation. If it’s expensive to own and manage the means of production or if it requires a staff, you’re in a world of Gutenberg economics. And wherever you have Gutenberg economics, whether you are a Venetian publisher or a Hollywood producer, you’re going to have fifteenth-century risk management as well, where the producers have to decide what’s good before showing it to the audience. In that world, the one we all lived in until just a few years ago, almost all media was produced by “the media.” …
In an environment so stable that getting TV over a wire instead of via antennae counted as an upheaval, it’s a real shock to see the appearance of a medium that lets anyone in the world make an unlimited number of perfect copies of something they created for free. Equally surprising is the fact that the medium mixes broadcast and conversational patterns so thoroughly that there is no obvious gulf between them. The bundle of concepts tied to the word media is unraveling. We need a new conception for the word, one that dispenses with the connotations of “something produced by professionals for consumption by amateurs.” Here’s mine: media is the connective tissue of society. Media is how you know when and where your friend’s birthday party is. Media is how you know what’s happening in Tehran, who’s in charge in Tegucigalpa, or the price of tea in China. Media is how you know what your colleague named her baby. Media is how you know why Kierkegaard disagreed with Hegel. Media is how you know where your next meeting is. Media is how you know about anything more than ten yards away. All these things used to be separated into public media (like visual or print communications made by a small group of professionals) and personal media (like letters and phone calls made by ordinary citizens). Now those two modes have fused.
The internet is the first public medium to have post-Gutenberg economics. You don’t need to understand anything about its plumbing to appreciate how different it is from any form of media in the previous five hundred years. Since all the data is digital (expressed as numbers), there is no such thing as a copy anymore. Every piece of data, whether an e-mailed love letter or a boring corporate presentation, is identical to every other version of the same piece of data.
You can see this reflected in common parlance. No one ever says, Give me a copy of your phone number. Your phone number is the same number for everybody, and since data is made of numbers, the data is the same for everybody. Because of this curious property of numbers, the old distinction between copying tools for professionals and those for amateurs—printing presses that make high-quality versions for the pros, copy machines for the rest of us—is over. Everyone has access to a medium that makes versions so identical that the old distinction between originals and copies has given way to an unlimited number of equally perfect versions.
Moreover, the means of digital production are symmetrical. A television station is a hugely expensive and complex site designed to send signals, while a television is a relatively simple device for receiving those signals. When someone buys a TV, the number of consumers goes up by one, but the number of producers stays the same. On the other hand, when someone buys a computer or a mobile phone, the number of consumers and producers both increase by one. Talent remains unequally distributed, but the raw ability to make and to share is now widely distributed and getting wider every year.
Digital networks are increasing the fluidity of all media. The old choice between one-way public media (like books and movies) and two-way private media (like the phone) has now expanded to include a third option: two-way media that operates on a scale from private to public. Conversations among groups can now be carried out in the same media environments as broadcasts. This new option bridges the two older options of broadcast and communications media. All media can now slide from one to the other. A book can stimulate public discussion in a thousand places at once. An e-mail conversation can be published by its participants. An essay intended for public consumption can anchor a private argument, parts of which later become public. We move from public to private and back again in ways that weren’t possible in an era when public and private media, like the radio and the telephone, used different devices and different networks.
And finally the new media involves a change in economics. With the internet, everyone pays for it, and then everyone gets to use it. Instead of having one company own and operate the whole system, the internet is just a set of agreements about how to move data between two points. Anyone who abides by these agreements, from an individual working from a mobile phone to a huge company, can be a full-fledged member of the network. The infrastructure isn’t owned by the producers of the content: it’s accessible to everyone who pays to use the network, regardless of how they use it. This shift to post-Gutenberg economics, with its interchangeably perfect versions and conversational capabilities, with its symmetrical production and low costs, provides the means for much of the generous, social, and creative behavior we’re seeing. …
Consider the denizens of FanFiction.net, the community of people who write new stories set in the imagined worlds of existing fictional works. The most fecund of these communities is the one writing stories set in the Harry Potter universe—FanFiction.net hosts more than half a million Potter stories (and still more appear on sites like FictionAlley.org and HarryPotterFanFiction.com). Hundreds of the stories run to over one hundred thousand words, roughly the length of J. K. Rowling’s original novels. FanFiction.net doesn’t just aggregate stories; it hosts a community in constant conversation with itself. If “thank you” is the coin of the realm among Grobanites, attention is the coin for fan fiction; the plea to “please read and review my story” is so common it has been shortened to “R&R.”
Like all communities, the world of fan fiction sometimes gets roiled by violations of its cultural norms. In the Harry Potter community, a fanfic author with the pen name of Cassandra Claire was accused of copying passages into her fan fiction from two books by the fantasy author Pamela Dean. It may seem odd that a group of people publicly engaging in wholesale copyright violation are concerned with plagiarism, but they are, and deeply so. Failure to give credit where credit is due is the crime in this community, a violation not of property rights but of deeply held ethical norms about credit. Some fan fiction writers even use a “legal” disclaimer at the beginning of their works, with “legal” in quotes because the disclaimers read like this, misspellings and all:
“Disclaimer: I don’t own these characters, but I do own their personalities? [grin]… kind of? I dunno. But anyway, JK Rowling is amazing.”
“Disclaimer: i do not own harry potter this is purely a fan written story.”
“Disclaimer: Harry Potter Universe in not mine, just Dana Cresswell is ”
“Disclaimer: I do not own Harry Potter or any of the other characters … I just am borrowing them!”
Lawyers would laugh till coffee came out their noses at the idea that writers can legally borrow other writers’ characters, that fan fiction is a special class of creativity, or that writers can own new characters or plots in existing fictional universes without the permission of those who created those universes. Even the writers of the disclaimers are unsure about them, like the author who claims to “kind of” own the personalities of characters Rowling invented. Like children staging a wedding, the disclaimers mimic an existing form of obligation while remaining legally inert. They aren’t worthless, but their worth lies elsewhere.
The internal logic of the fanfic community becomes clearer in light of the other charge leveled at Cassandra Claire; she was accused of profiteering, which, in the culture of fan fiction, means trying to make money from her fanfic. This was held up as still more evidence that she was impure of heart. Fanfic disclaimers express the logic of giving public credit (“JK Rowling is amazing”), albeit in the language of ownership. This is a “two worlds” view of creative acts. The world of money, where Rowling lives, is the one where creators are paid for their work. Fan fiction authors by definition do not inhabit this world, and more important, they rarely aspire to inhabit it. Instead, they often choose to work in the world of affection, where the goal is to be recognized by others for doing something creative within a particular fictional universe. A robust communal infrastructure is essential to that mutual recognition. Indeed, one of the most lamented effects of l’affaire Claire was that it created a schism in the Harry Potter fanfic community.
Seen in this light, it doesn’t matter whether the fan fiction authors understand that what they are doing is illegal. By publicly disavowing ownership of JK Rowling’s work—something that was never in dispute—they are demonstrating their respect for the source of material that is now integrated into their imagination. They are also carving out a practical distinction between the world of money and the world of love, because even though that distinction is meaningless in a court of law, it is meaningful to them. Purity of motivation inside the community matters more than legality of action outside it.
If you had a few spare weeks to kill, you could spend them reading various public utterances on mailing lists, blogs, social networks, wikis, bulletin boards, and every other place online where an individual can, with three minutes of typing and one press of a button, make his thoughts globally available. And if you tried it, you’d get exhausted without coming anywhere close to exhausting what’s out there. Indeed, you’d be outstripped by the desire of the world’s participants to avail themselves of these newly public channels. No matter how much time you devoted to reading, watching, and listening, the world’s amateurs would, in that same period, produce more material—vastly more—than you’d have taken in. By the end of 2009 an average of twenty-four hours of video were being uploaded onto YouTube every minute; Twitter receives close to three hundred million words a day.
When you see people acting in ways you don’t understand, you may ask rhetorically, Why are they behaving that way? A better question is this: Is their behavior rewarding a desire for autonomy, or for competence? Is it rewarding their desire to feel connected or generous? If the answer to any of those questions is yes, you may have your explanation. If the answer to more than one of those questions is yes, you probably do. …
When we want something to happen, and it’s more complex than one person can accomplish alone, we need a group to do it. There are many ways to get groups to undertake big or complex activities, but for large-scale, long-lived tasks, the primary mechanisms have been twofold. The first is the private sector, where a task will get done when the group to do it can be assembled and paid for less than their output will fetch in the market. (This is the world of the firm; it is how most cars are built.) The second is the public sector, where employment comes with an obligation to work together on tasks that are of high perceived value, even if they are not compensated in the market. (This is the world of government and nonprofits; it is how most roads are built.) The single most heated political debate in the last century was how best to balance the competing values of those two modes. The result, after the collapse of Communism as the maximum case for a pure public option and after the rise of the welfare state tempered the idea of a pure market, has been a convergence to a broad center, with different mixes of public and private creation in different places.
There is a third mechanism for group production, though, one that operates outside managed organizations and the market. Social production is the creation of value by a group for its members, using neither price signals nor managerial oversight to coordinate participants’ efforts. (This is the world of friends and family; it is how most picnics happen.) Social production was not included in the heated political debates of the twentieth century, because the things people could produce for one another using their free time and working without markets or managers were limited.
Two things happened to end that consensus. First, behavioral economics upended the idea that humans always determine value rationally, the way competitive markets do. In fact, we aren’t rational, we are “predictably irrational” (to use the title of Dan Ariely’s wonderful 2008 book on behavioral economics), and markets turn out to be a special case, effective only under tightly controlled conditions. As with the Ultimatum Game, the default human behavior relies on mutual regard for other participants, even when there’s money to be made. Second, the emergence of a medium that makes group coordination cheap and widespread has caused many of the old limits on social production to recede.
This is the mechanism of production that Harvard law professor Yochai Benkler has called “commons-based peer production,” work that is jointly owned or accessed by its participants, and created by people operating as peers, without a managerial hierarchy. The inclusion of millions of new participants in our media environment has expanded the scale and scope of such production dramatically. Where markets and managers have been the preeminent mechanisms for large-scale creation, we can now add this form of social production as a way to take on such tasks, linking our aggregate free time to tasks we find interesting, important, or urgent, using media that now provides opportunities for this kind of production. This increase in our ability to create things together, to pool our free time and particular talents into something useful, is one of the great new opportunities of the age, one that changes the behaviors of people who take advantage of it. …
The choice we face is this: out of the mass of our shared cognitive surplus, we can create an Invisible University—many Invisible Colleges doing the hard work of creating many kinds of public and civic value—or we can settle for Invisible High School, where we get lolcats but no open source software, fan fiction but no improvement in medical research. The Invisible High School is already widespread, and our ability to participate in ways that reward personal or communal value is in no imminent danger. Following Gary Kamiya’s observation about the ease of getting what we want, we can always use the internet today to find something entertaining to read, watch, or listen to.
Creating real public or civic value, though, requires more than posting funny pictures. It requires commitment and hard work from a core group of participants. It also requires that these groups be self-governing and that they submit to constraints that help them ignore distracting and entertaining material and stay focused on some sophisticated task. Getting an Invisible University means mastering the art of creating groups that commit themselves to working together outside existing market and managerial structures, in order to create opportunities for planet-scale sharing. This work is not easy, and it never goes smoothly. Because we are hopelessly committed to both individual satisfaction and group effectiveness, groups devoted to public or civic value are rarely permanent; to last, they need to acquire a culture that rewards their members for doing that hard work. It takes this kind of group effort to get what we need, not just what we want; understanding how to create and maintain such groups is one of the great challenges of our era.