The People’s Platform: Taking Back Power and Culture in the Digital Age
Astra Taylor
The internet has been hailed as an unprecedented democratising force, a place where everyone can participate. But how true is this? Dismantling the techno-utopian vision, ‘The People’s Platform’ argues that for all our “tweeting” and “sharing,” the internet in fact reflects and amplifies real-world inequalities as much as it reduces them. Online, just as off-line, attention accrues to those who already have plenty of it. What we have seen so far, Astra Taylor argues, has been not a revolution but a rearrangement. A handful of giants like Amazon, Apple, Google and Facebook are our gatekeepers. And the worst habits of the old media model – the pressure to seek easy celebrity – have proliferated. When culture is “free,” creative work has diminishing value and advertising fuels the system. We can do better, Taylor insists. The online world does offer a unique opportunity, but a democratic culture that supports the diverse and lasting will not spring up from technology alone. If we want the internet to be a people’s platform, we will have to make it so.
CONTENTS
Cover
Title Page
Preface
1. A Peasant’s Kingdom
2. For Love or Money
3. What We Want
4. Unequal Uptake
5. The Double Anchor
6. Drawing a Line
Conclusion
Notes
Index
Acknowledgments
About the Author
Also by Astra Taylor
Copyright
About the Publisher
PREFACE
When I was twelve years old, while most of my peers were playing outside, I hunkered down in my family’s den, consumed by the project of making my own magazine. Obsessed with animal rights and environmentalism, I imagined my publication as a homemade corrective to corporate culture, a place where other kids could learn the truth that Saturday morning cartoons, big-budget movies, and advertisements for “Happy Meals” hid from them. I wrangled my friends into writing for it (I know it’s hard to believe I had any), used desktop publishing software to design it, and was thrilled that the father of one of my conspirators managed a local Kinko’s, which meant we could make copies at a steep discount. Every couple of months my parents drove me to the handful of bookstores and food co-ops in Athens, Georgia, where I eagerly asked the proprietors if I could give them the latest issue, convinced that when enough young people read my cri de coeur the world would change.
It was a strange way to spend one’s preadolescence. But equally strange, now, is to think of how much work I had to do to get it into readers’ hands once everything was written and edited. That’s how it went back in the early nineties: each precious copy could be accounted for, either given to a friend, handed out on a street corner, shelved at a local store, or mailed to the few dozen subscribers I managed to amass. And I, with access to a computer, a printer, and ample professional copiers, had it pretty easy compared to those who had walked a similar road just decades before me: a veteran political organizer told me how he and his friends had to sell blood in order to raise the funds to buy a mimeograph machine so they could make a newsletter in the early sixties.
When I was working on my magazine I had only vague inklings that the Internet even existed. Today any kid with a smartphone and a message has the potential to reach more people with the push of a button than I did during two years of self-publishing. New technologies have opened up previously unimaginable avenues for self-expression and exposure to information, and each passing year has only made it easier to spread the word.
In many respects, my adult work as an independent filmmaker has been motivated by the same concerns as my childhood hobby: frustration with the mainstream media. So many subjects I cared about were being ignored; so many worthwhile stories went uncovered. I picked up a camera to fill in the gap, producing various documentaries focused on social justice and directing two features about philosophy. On the side I’ve written articles and essays for the independent press, covering topics including disability rights and alternative education. When Occupy Wall Street took off in the fall of 2011, I became one of the coeditors of a movement broadsheet called the Occupy! Gazette, five crowd-funded issues in total, which my cohorts and I gave away for free on the Web and in print.
I’m a prime candidate, in other words, for cheering on the revolution that is purportedly being ushered in by the Internet. The digital transformation has been hailed as the great cultural leveler, putting the tools of creation and dissemination in everyone’s hands and wresting control from long-established institutions and actors. Due to its remarkable architecture, the Internet facilitates creativity and communication in unprecedented ways. Each of us is now our own broadcaster; we are no longer passive consumers but active producers. Unlike the one-way, top-down transmission of radio or television and even records and books, we finally have a medium through which everyone’s voice can supposedly be heard.
To all of this I shout an enthusiastic hurrah. Progressives like myself have spent decades decrying mass culture and denouncing big media. Since 1944, when Max Horkheimer and Theodor Adorno published their influential essay “The Culture Industry: Enlightenment as Mass Deception,” critics have sounded the alarm about powerful corporate interests distorting our culture and drowning out democracy in pursuit of profit.
But while heirs to this tradition continue to worry about commercialism and media consolidation, there is now a countervailing tendency to assume that the Internet, by revolutionizing our media system, has rendered such concerns moot. In a digital world, the number of channels is theoretically infinite, and no one can tell anyone what to consume. We are the ultimate deciders, fully in charge of our media destinies, choosing what to look at, actively seeking and clicking instead of having our consumption foisted upon us by a cabal of corporate executives.
As a consequence of the Internet, it is assumed that traditional gatekeepers will crumble and middlemen will wither. The new orthodoxy envisions the Web as a kind of Robin Hood, stealing audience and influence away from the big and giving to the small. Networked technologies will put professionals and amateurs on an even playing field, or even give the latter an advantage. Artists and writers will thrive without institutional backing, able to reach their audiences directly. A golden age of sharing and collaboration will be ushered in, modeled on Wikipedia and open source software.
In many wonderful ways this is the world we have been waiting for. So what’s the catch? In some crucial respects the standard assumptions about the Internet’s inevitable effects have misled us. New technologies have undoubtedly removed barriers to entry, yet, as I will show, cultural democracy remains elusive. While it’s true that anyone with an Internet connection can speak online, that doesn’t mean our megaphones blast our messages at the same volume. Online, some speak louder than others. There are the followed and the followers. As should be obvious to anyone with an e-mail account, the Internet, though open to all, is hardly an egalitarian or noncommercial paradise, even if you bracket all the porn and shopping sites.
To understand why the most idealistic predictions about how the Internet would transform cultural production and distribution, upending the balance of power in the process, have not come to pass, we need to look critically at the current state of our media system. Too often, instead, we celebrate a rosy vision of what our new, networked tools theoretically make possible or the changes they will hypothetically unleash. What’s more, we need to look ahead and recognize the forces that are shaping the development and implementation of technology—economic forces in particular.
Writing critically about technological and cultural transformation means proceeding with caution. Writers often fall into one of two camps, the cheerleaders of progress at any cost and the prophets of doom who condemn change, lamenting all they imagine will be lost. This pattern long precedes us. In 1829, around the time advances in locomotion and telegraphy inspired a generation to speak rapturously of the “annihilation of space and time,” Thomas Carlyle, the Victorian era’s most irascible and esteemed man of letters, published a sweeping indictment of what he called the Mechanical Age.
Everywhere Carlyle saw new contraptions replacing time-honored techniques—there were machines to drive humans to work faster or replace them altogether—and he was indignant: “We war with rude Nature; and, by our resistless engines, come off always victorious, and loaded with spoils.” Yet the spoils of this war, he anxiously observed, were not evenly distributed. While some raced to the top, others ate dust. Wealth had “gathered itself more and more into masses, strangely altering the old relations, and increasing the distance between the rich and the poor.” More worrisome still, mechanism was encroaching on the inner self. “Not the external and physical alone is now managed by machinery, but the internal and spiritual also,” he warned. “Men are grown mechanical in head and in heart, as well as in hand,” a shift he imagined would make us not wiser but worse off.
Two years later, Timothy Walker, a young American with a career in law ahead of him, wrote a vigorous rebuttal entitled “Defense of Mechanical Philosophy.” Where Carlyle feared the mechanical metaphor making society over in its image, Walker welcomed such a shift, dismissing Carlyle as a vaporizing mystic. Mechanism, in Walker’s judgment, has caused no injury, only advantage. Where mountains stood obstructing, mechanism flattened them. Where the ocean divided, mechanism stepped across. “The horse is to be unharnessed, because he is too slow; and the ox is to be unyoked, because he is too weak. Machines are to perform the drudgery of man, while he is to look on in self-complacent ease.” Where, Walker asked, is the wrong in any of this?
Carlyle, Walker observed, feared “that mind will become subjected to the laws of matter; that physical science will be built up on the ruins of our spiritual nature; that in our rage for machinery, we shall ourselves become machines.” On the contrary, Walker argued, machines would free our minds by freeing our bodies from tedious labor, thus permitting all of humankind to become “philosophers, poets, and votaries of art.” That “large numbers” of people had been thrown out of work as a consequence of technological change is but a “temporary inconvenience,” Walker assured his readers—a mere misstep on mechanism’s “triumphant march.”
Today, most pronouncements concerning the impact of technology on our culture, democracy, and work resound with Carlyle’s and Walker’s sentiments, their well-articulated insights worn down into twenty-first-century sound bites. The argument about the impact of the Internet is relentlessly binary, techno-optimists facing off against techno-skeptics. Will the digital transformation liberate humanity or tether us with virtual chains? Do communicative technologies fire our imaginations or dull our senses? Do social media nurture community or intensify our isolation, expand our intellectual faculties or wither our capacity for reflection, make us better citizens or more efficient consumers? Have we become a nation of skimmers, staying in the shallows of incessant stimulation, or are we evolving into expert synthesizers and multitaskers, smarter than ever before? Are those who lose their jobs due to technological change deserving of our sympathy or our scorn (“adapt or die,” as the saying goes)? Is that utopia on the horizon or dystopia around the bend?
These questions are important, but the way they are framed tends to make technology too central, granting agency to tools while sidestepping the thorny issue of the larger social structures in which we and our technologies are embedded. The current obsession with the neurological repercussions of technology—what the Internet is doing to our brains, our supposedly shrinking attention spans, whether video games improve coordination and reflexes, how constant communication may be addictive, whether Google is making us stupid—is a prime example. This focus ignores the business imperatives that accelerate media consumption and the market forces that encourage compulsive online engagement.
Yet there is one point on which the cheerleaders and the naysayers agree: we are living at a time of profound rupture—something utterly unprecedented and incomparable. All connections to the past have been rent asunder by the power of the network, the proliferation of smartphones, tablets, and Google Glass, the rise of big data, and the dawning of digital abundance. Social media and memes will remake reality—for better or for worse. My view, on the other hand, is that there is as much continuity as change in our new world, for good and for ill.
Many of the problems that plagued our media system before the Internet was widely adopted have carried over into the digital domain—consolidation, centralization, and commercialism—and will continue to shape it. Networked technologies do not resolve the contradictions between art and commerce, but rather make commercialism less visible and more pervasive. The Internet does not close the distance between hits and flops, stars and the rest of us, but rather magnifies the gap, eroding the middle space between the very popular and virtually unknown. And there is no guarantee that the lucky few who find success in the winner-take-all economy online are more diverse, authentic, or compelling than those who succeeded under the old system.
Despite the exciting opportunities the Internet offers, we are witnessing not a leveling of the cultural playing field, but a rearrangement, with new winners and losers. In the place of Hollywood moguls, for example, we now have Silicon Valley tycoons (or, more precisely, we have Hollywood moguls and Silicon Valley tycoons). The pressure to be quick, to appeal to the broadest possible public, to be sensational, to seek easy celebrity, to be attractive to corporate sponsors—these forces multiply online where every click can be measured, every piece of data mined, every view marketed against. Originality and depth eat away at profits online, where faster fortunes are made by aggregating work done by others, attracting eyeballs and ad revenue as a result.
Indeed, the advertising industry is flourishing as never before. In a world where creative work holds diminishing value, where culture is “free,” and where fields like journalism are in crisis, advertising dollars provide the unacknowledged lifeblood of the digital economy. Moreover, the constant upgrading of devices, operating systems, and Web sites; the move toward “walled gardens” and cloud computing; the creep of algorithms and automation into every corner of our lives; the trend toward filtering and personalization; the lack of diversity; the privacy violations: all these developments are driven largely by commercial incentives. Corporate power and the quest for profit are as fundamental to new media as old. From a certain angle, the emerging order looks suspiciously like the old one.
In fact, the phrase “new media” is something of a misnomer because it implies that the old media are on their way out, as though at the final stage of some natural, evolutionary process. Contrary to all the talk of dinosaurs, this is more a period of adaptation than extinction. Instead of distinct old and new media, what we have is a complex cultural ecosystem that spans the analog and digital, encompassing physical places and online spaces, material objects and digital copies, fleshy bodies and virtual identities.
In that ecosystem, the online and off-line are not discrete realms, contrary to a perspective that has suffused writing about the Internet since the word “cyberspace” was in vogue.
You might be reading this book off a page or screen—a screen that is part of a gadget made of plastic and metal and silicon, the existence of which puts a wrench into any fantasy of a purely ethereal exchange. All bits eventually butt up against atoms; even information must be carried along by something, by stuff.
I am not trying to deny the transformative nature of the Internet, but rather to recognize that we’ve lived with it long enough to ask tough questions.
Thankfully, this is already beginning to happen. Over the course of writing this book, the public conversation about the Internet and the technology industry has shifted significantly.
There have been revelations about the existence of a sprawling international surveillance infrastructure, uncompetitive business and exploitative labor practices, and shady political lobbying initiatives, all of which have made major technology firms the subjects of increasing scrutiny from academics, commentators, activists, and even government officials in the United States and abroad.
People are beginning to recognize that Silicon Valley platitudes about “changing the world” and maxims like “don’t be evil” are not enough to ensure that some of the biggest corporations on Earth will behave well. The risk, however, is that we will respond to troubling disclosures and other disappointments with cynicism and resignation when what we need is clearheaded and rigorous inquiry into the obstacles that have stalled some of the positive changes the Internet was supposed to usher in.
First and foremost, we need to rethink how power operates in a post-broadcast era. It was easy, under the old-media model, to point the finger at television executives and newspaper editors (and even book publishers) and the way they shaped the cultural and social landscape from on high. In a networked age, things are far more ambiguous, yet new-media thinking, with its radical sheen and easy talk of revolution, ignores these nuances. The state is painted largely as a source of problematic authority, while private enterprise is given a free pass; democracy, fuzzily defined, is attained through “sharing,” “collaboration,” “innovation,” and “disruption.”
In fact, wealth and power are shifting to those who control the platforms on which all of us create, consume, and connect. The companies that provide these and related services are quickly becoming the Disneys of the digital world—monoliths hungry for quarterly profits, answerable to their shareholders, not to us, their users, and more influential, more ubiquitous, and more insinuated into the fabric of our everyday lives than Mickey Mouse ever was. As such they pose a whole new set of challenges to the health of our culture.
Right now we have very little to guide us as we attempt to think through these predicaments. We are at a loss, in part, because we have wholly adopted the language and vision offered up by Silicon Valley executives and the new-media boosters who promote their interests. They foresee a marketplace of ideas powered by profit-driven companies who will provide us with platforms to creatively express ourselves and on which the most deserving and popular will succeed.
They speak about openness, transparency, and participation, and these terms now define our highest ideals, our conception of what is good and desirable, for the future of media in a networked age. But these ideals are not sufficient if we want to build a more democratic and durable digital culture. Openness, in particular, is not necessarily progressive. While the Internet creates space for many voices, the openness of the Web reflects and even amplifies real-world inequities as often as it ameliorates them.
I’ve tried hard to avoid the Manichean view of technology, which assumes either that the Internet will save us or that it is leading us astray, that it is making us stupid or making us smart, that things are black or white. The truth is subtler: technology alone cannot deliver the cultural transformation we have been waiting for; instead, we need to first understand and then address the underlying social and economic forces that shape it. Only then can we make good on the unprecedented opportunity the Internet offers and begin to make the ideal of a more inclusive and equitable culture a reality. If we want the Internet to truly be a people’s platform, we will have to work to make it so.
1
A PEASANT’S KINGDOM
I moved to New York City in 1999 just in time to see the dot-com dream come crashing down. I saw high-profile start-ups empty out their spacious lofts, the once ebullient spaces vacant and echoing; there were pink-slip parties where content providers, designers, and managers gathered for one last night of revelry. Although I barely felt the aftershocks that rippled through the economy when the bubble burst, plenty of others were left thoroughly shaken. In San Francisco the boom’s rising rents pushed out the poor and working class, as well as those who had chosen voluntary poverty by devoting themselves to social service or creative experimentation. Almost overnight, the tech companies disappeared, the office space and luxury condos vacated, jilting the city and its inhabitants despite the irreversible accommodations that had been made on behalf of the start-ups. Some estimate that 450,000 jobs were lost in the Bay Area alone.
As the economist Doug Henwood has pointed out, a kind of amnesia blots out the dot-com era, blurring it like a bad hangover. It seems so long ago: before tragedy struck lower Manhattan, before the wars in Afghanistan and Iraq started, before George W. Bush and then Barack Obama took office, before the economy collapsed a second time. When the rare backward glance is cast, the period is usually dismissed as an anomaly, an embarrassing by-product of irrational exuberance and excess, an aberrational event that gets chalked up to collective folly (the crazy business schemes, the utopian bombast, the stock market fever), but “never as something emerging from the innards of American economic machinery,” to use Henwood’s phrase.
At the time of the boom, however, the prevailing myth was that the machinery had been forever changed. “Technological innovation,” Alan Greenspan marveled, had instigated a new phase of productivity and growth that was “not just a cyclical phenomenon or a statistical aberration, but … a more deep-seated, still developing, shift in our economic landscape.” Everyone would be getting richer, forever. (Income polarization was actually increasing at the time, the already affluent becoming ever more so while wages for most U.S. workers stagnated at levels below 1970s standards.)
The wonders of computing meant skyrocketing productivity, plentiful jobs, and the end of recessions. The combination of the Internet and IPOs (initial public offerings) had flattened hierarchies, computer programming jobs were reconceived as hip, and information was officially more important than matter (bits, boosters liked to say, had triumphed over atoms). A new economy was upon us.
Despite the hype, the new economy was never that novel. With some exceptions, the Internet companies that fueled the late nineties fervor were mostly about taking material from the off-line world and simply posting it online or buying and selling rather ordinary goods, like pet food or diapers, and prompting Internet users to behave like conventional customers. Due to changes in law and growing public enthusiasm for high-risk investing, the amount of money available to venture capital funds ballooned from $12 billion in 1996 to $106 billion in 2000, leading many doomed ideas to be propped up by speculative backing. Massive sums were committed to enterprises that replicated efforts: multiple sites specialized in selling toys or beauty supplies or home improvement products, and most of them flopped. Barring notable anomalies like Amazon and eBay, online shopping failed to meet inflated expectations. The Web was declared a wasteland and investments dried up, but not before many venture capitalists and executives profited handsomely, soaking up underwriting fees from IPOs or exercising their options before stocks went under.
Although the new economy evaporated, the experience set the stage for a second bubble and cemented a relationship between technology and the market that shapes our digital lives to this day.
As business and technology writer Sarah Lacy explains in her breathless account of Silicon Valley’s recent rebirth, Once You’re Lucky, Twice You’re Good, a few discerning entrepreneurs extracted a lesson from the bust that they applied to new endeavors with aplomb after the turn of the millennium: the heart of the Internet experience was not e-commerce but e-mail, that is to say, connecting and communicating with other people as opposed to consuming goods that could easily be bought at a store down the street. Out of that insight rose the new wave of social media companies that would be christened Web 2.0.
The story Lacy tells is a familiar one to those who paid attention back in the day: ambition and acquisitions, entrepreneurs and IPOs. “Winning Is Everything” is the title of one chapter; “Fuck the Sweater-Vests” another. You’d think it was the nineties all over again, except that this time around the protagonists aspired to market valuations in the billions, not millions. Lacy admires the entrepreneurs all the more for their hubris; they are phoenixes, visionaries who emerged unscathed from the inferno, who walked on burning coals to get ahead. After the bust, the dot-coms and venture capitalists were “easy targets,” blamed for being “silly, greedy, wasteful, irrelevant,” Lacy writes. The “jokes and quips” from the “cynics” cut deep, making it that much harder for wannabe Web barons “to build themselves back up again.” But build themselves back up a handful of them did, heading to the one place insulated against the downturn, Silicon Valley. “The Valley was still awash in cash and smart people,” says Lacy. “Everyone was just scared to use them.”
Web 2.0 was the logical consequence of the Internet going mainstream, weaving itself into everyday life and presenting new opportunities as millions of people rushed online. The “human need to connect” is “a far more powerful use of the Web than for something like buying a book online,” Lacy writes, recounting the evolution of companies like Facebook, LinkedIn, Twitter, and the now beleaguered Digg. “That’s why these sites are frequently described as addictive … everyone is addicted to validations and human connections.”
Instead of the old start-up model, which tried to sell us things, the new one trades on our sociability—our likes and desires, our observations and curiosities, our relationships and networks—which is mined, analyzed, and monetized. To put it another way, Web 2.0 is not about users buying products; rather, users are the product. We are what companies like Google and Facebook sell to advertisers. Of course, social media have made a new kind of engagement possible, but they have also generated a handful of enormous companies that profit off the creations and interactions of others. What is social networking if not the commercialization of the once unprofitable art of conversation? That, in a nutshell, is Web 2.0: content is no longer king, as the digital sages like to say; connections are.
Though no longer the popular buzzword it once was, “Web 2.0” remains relevant, its key tenets incorporated not just by social networking sites but by virtually all cultural production and distribution, from journalism to film and music. As traditional institutions go under—consider the independent book, record, and video stores that have gone out of business—they are being replaced by a small number of online giants—Amazon, iTunes, Netflix, and so on—that are better positioned to survey and track users. These behemoths “harness collective intelligence,” as the process has been described, to sell people goods and services directly or indirectly. “The key to media in the twenty-first century may be who has the most knowledge of audience behavior, not who produces the most popular content,” Tom Rosenstiel, the director of the Pew Research Center’s Project for Excellence in Journalism, explained.
Understanding what sites people visit, what content they view, what products they buy and even their geographic coordinates will allow advertisers to better target individual consumers. And more of that knowledge will reside with technology companies than with content producers. Google, for instance, will know much more about each user than will the proprietor of any one news site. It can track users’ online behavior through its Android software on mobile phones, its Google Chrome Web browser, its search engine and its new tablet software. The ability to target users is why Apple wants to control the audience data that goes through the iPad. And the company that may come to know the most about you is Facebook, with which users freely share what they like, where they go and who their friends are.
For those who desire to create art and culture—or “content,” to use that horrible, flattening word—the shift is significant. More and more of the money circulating online is being soaked up by technology companies, with only a trickle making its way to creators or the institutions that directly support them. In 2010 publishers of articles and videos received around twenty cents of each dollar advertisers spent on their sites, down from almost a whole dollar in 2003.
Cultural products are increasingly valuable only insofar as they serve as a kind of “signal generator” from which data can be mined. The real profits flow not to the people who fill the platforms where audiences congregate and communicate—the content creators—but to those who own them.
The original dot-com bubble’s promise was first and foremost about money. Champions of the new economy conceded that the digital tide would inevitably lift some boats higher than others, but they commonly assumed that everyone would get a boost from the virtual effervescence. A lucky minority would work at a company that was acquired or went public and spend the rest of their days relaxing on the beach, but the prevailing image had each individual getting in on the action, even if it was just by trading stocks online.
After the bubble popped, the dream of a collective Internet-enabled payday faded. The new crop of Internet titans never bothered to issue such empty promises to the masses. The secret of Web 2.0 economics, as Lacy emphasizes, is getting people to create content without demanding compensation, whether by contributing code, testing services, or sharing everything from personal photos to restaurant reviews. “A great Web 2.0 site needs a mob of people who use it, love it, and live by it—and convince their friends and family to do the same,” Lacy writes. “Mobs will devote more time to a site they love than to their jobs. They’ll frequently build the site for the founders for free.” These sites exist only because of unpaid labor, the millions of minions toiling to fill the coffers of a fortunate few.
Spelling this out, Lacy is not accusatory but admiring—awestruck, even. When she writes that “social networking, media, and user-generated content sites tap into—and exploit—core human emotions,” it’s with fealty appropriate to a fiefdom. As such, her book inadvertently provides a perfect exposé of the hypocrisy lurking behind so much social media rhetoric. The story she tells, after all, is about nothing so much as fortune seeking, yet the question of compensating those who contribute to popular Web sites, when it arises, is quickly brushed aside. The “mobs” receive something “far greater than money,” Lacy writes, offering up the now-standard rationalization for the inequity: entertainment, self-expression, and validation.
This time around, no one’s claiming the market will be democratized—instead, the promise is that culture will be. We will “create” and “connect” and the entrepreneurs will keep the cash.
This arrangement has been called “digital sharecropping.”
Instead of the production or distribution of culture being concentrated in the hands of the few, it is the economic value of culture that is hoarded. A small group, positioned to capture the value of the network, benefits disproportionately from a collective effort. The owners of social networking sites may be forbidden from selling songs, photos, or reviews posted by individual users, for example, but the companies themselves, including user content, might be turned over for a hefty sum: hundreds of millions for Bebo and Myspace and Goodreads, one billion or more for Instagram and Tumblr. The mammoth archive of videos displayed on YouTube and bought by Google was less a priceless treasure to be preserved than a vehicle for ads. These platforms succeed because of an almost unfathomable economy of scale; each search brings revenue from targeted advertising and fodder for the data miners: each mouse click is a trickle in the flood.
Over the last few years, there has been an intermittent but spirited debate about the ethics of this economic relationship. When Flickr was sold to Yahoo!, popular bloggers asked whether the site should compensate those who provided the most viewed photographs; when the Huffington Post was acquired by AOL for $315 million, many of the thousands of people who had been blogging for free were aghast, and some even started a boycott; when Facebook announced its upcoming IPO, journalists speculated about what the company, ethically, owed its users, the source of its enormous valuation.
The same holds for a multitude of sites: Twitter wouldn’t be worth billions if people didn’t tweet, Yelp would be useless without freely provided reviews, Snapchat nothing without chatters. The people who spend their time sharing videos with friends, rating products, or writing assessments of their recent excursion to the coffee shop—are they the users or the used?
The Internet, it has been noted, is a strange amalgamation of playground and factory, a place where amusement and labor overlap in confusing ways. We may enjoy using social media, while also experiencing them as obligatory; more and more jobs require employees to cultivate an online presence, and social networking sites are often the first place an employer turns when considering a potential hire. Some academics call this phenomenon “playbor,” an awkward coinage that tries to get at the strange way “sexual desire, boredom, friendship” become “fodder for speculative profit” online, to quote media scholar Trebor Scholz.
Others use the term “social factory” to describe Web 2.0, envisioning it as a machine that subsumes our leisure, transforming lazy clicks into cash. “Participation is the oil of the digital economy,” as Scholz is fond of saying. The more we comment and share, the more we rate and like, the more economic value is accumulated by those who control the platforms on which our interactions take place.
Taking this argument one step further, a frustrated minority have complained that we are living in a world of “digital feudalism,” where sites like Facebook and Tumblr offer up land for content providers to work while platform owners expropriate value with impunity and, if you read the fine print, stake unprecedented claim over users’ creations.
“By turn, we are the heroic commoners feeding revolutions in the Middle East and, at the same time, ‘modern serfs’ working on Mark Zuckerberg’s and other digital plantations,” Marina Gorbis of the Institute for the Future has written. “We, the armies of digital peasants, scramble for subsistence in digital manor economies, lucky to receive scraps of ad dollars here and there, but mostly getting by, sometimes happily, on social rewards—fun, social connections, online reputations. But when the commons are sold or traded on Wall Street, the vast disparities between us, the peasants, and them, the lords, become more obvious and more objectionable.”
Computer scientist turned techno-skeptic Jaron Lanier has staked out the most extreme position in relation to those he calls the “lords of the computing clouds,” arguing that the only way to counteract this feudal structure is to institute a system of nano-payments, a market mechanism by which individuals are rewarded for every bit of private information gleaned by the network (an interesting thought experiment, though his proposed solution may well lead to worse outcomes than the situation we have now, given the twisted incentives it entails).
New-media cheerleaders take a different view.
Consider the poet laureate of digital capitalism, Kevin Kelly, cofounder of Wired magazine and longtime technology commentator. What critics see as feudalism and exploitation, he argued in a widely circulated essay, is in fact the emergence of a new cooperative ethos, a resurgence of collectivism—though not the kind your grandfather worried about. “The frantic global rush to connect everyone to everyone, all the time, is quietly giving rise to a revised version of socialism,” Kelly raves, pointing to sites like Wikipedia, YouTube, and Yelp.
Instead of gathering on collective farms, we gather in collective worlds. Instead of state factories, we have desktop factories connected to virtual co-ops. Instead of sharing drill bits, picks, and shovels, we share apps, scripts, and APIs. Instead of faceless politburos, we have faceless meritocracies, where the only thing that matters is getting things done. Instead of national production, we have peer production. Instead of government rations and subsidies, we have a bounty of free goods.
Kelly reassures his readers that the people who run this emerging economy are not left-wing in any traditional sense. They are “more likely to be libertarians than commie pinkos,” he explains. “Thus, digital socialism can be viewed as a third way that renders irrelevant the old debates,” transcending the conflict between “free-market individualism and centralized authority.” Behold, then, the majesty of digital communitarianism: it’s socialism without the state, without the working class, and, best of all, without having to share the wealth.
The sensational language is easy to mock, but this basic outlook is widespread among new-media enthusiasts. Attend any technology conference or read any book about social media or Web 2.0, whether by academics or business gurus, and the same conflation of communal spirit and capitalist spunk will be impressed upon you. The historian Fred Turner traces this phenomenon back to 1968, when a small band of California outsiders founded the Whole Earth Catalog; then, in 1985, the Whole Earth ’Lectronic Link, or WELL, the prototype of online communities; and eventually Wired.
This group performed the remarkable feat of transforming computers from enablers of stodgy government administration to countercultural cutting edge, from implements of technocratic experts to machines that empower everyday people. They “reconfigured the status of information and information technologies,” Turner explains, by contending that these new tools would tear down bureaucracy, enhance individual consciousness, and help build a new collaborative society.
These prophets of the networked age—led by the WELL’s Stewart Brand and including Kelly and many other still-influential figures—moved effortlessly from the hacker fringe to the upper echelon of the Global Business Network, all while retaining their radical patina.
Thus, in 1984 Macintosh could run an ad picturing Karl Marx with the tagline, “It was about time a capitalist started a revolution”—and so it continues today. The online sphere inspires incessant talk of gift economies and public-spiritedness and democracy, but commercialism and privatization and inequality lurk beneath the surface.
This contradiction is captured in a single word: “open,” a concept capacious enough to contain both the communal and capitalistic impulses central to Web 2.0 while being thankfully free of any socialist connotations. New-media thinkers have claimed openness as the appropriate utopian ideal for our time, and the concept has caught on. The term is now applied to everything from education to culture to politics and government. Broadly speaking, in tech circles, open systems—like the Internet itself—are always good, while closed systems—like the classic broadcast model—are bad. Open is Google and Wi-Fi, decentralization and entrepreneurialism, the United States and Wikipedia. Closed equals Hollywood and cable television, central planning and entrenched industry, China and the Encyclopaedia Britannica. However imprecisely the terms are applied, the dichotomy of open versus closed (sometimes presented as freedom versus control) provides the conceptual framework that increasingly underpins much of the current thinking about technology, media, and culture.
The fetish for openness can be traced back to the foundational myths of the Internet as a wild, uncontrollable realm. In 1996 John Perry Barlow, the former Grateful Dead lyricist and cattle rancher turned techno-utopian firebrand, released an influential manifesto, “A Declaration of the Independence of Cyberspace,” from Davos, Switzerland, during the World Economic Forum, the annual meeting of the world’s business elite. (“Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone … You have no sovereignty where we gather.”) Almost twenty years later, these sentiments were echoed by Google’s Eric Schmidt and the State Department’s Jared Cohen, who partnered to write The New Digital Age: “The Internet is the largest experiment involving anarchy in history,” they insist. It is “the world’s largest ungoverned space,” one “not truly bound by terrestrial laws.”
While openness has many virtues, it is also undeniably ambiguous. Is open a means or an end? What is open and to whom? Mark Zuckerberg said he designed Facebook because he wanted to make the world more “open and connected,” but his company does everything it can to keep users within its confines and exclusively retains the data they emit. Yet this vagueness is hardly a surprise given the history of the term, which was originally imported from software production: the designation “open source” was invented to rebrand free software as business friendly, foregrounding efficiency and economic benefits (open as in open markets) over ethical concerns (the freedom of free software).
In keeping with this transformation, openness is often invoked in a way that evades discussions of ownership and equity, highlighting individual agency over commercial might and ignoring underlying power imbalances.
In the 2012 “open issue” of Google’s online magazine Think Quarterly, phrases like “open access to information” and “open for business” appear side by side, purposely blurring participation and profit seeking. One article on the way “smart brands” are adapting to the digital world insists that as a consequence of the open Web, “consumers have more power than ever,” while also outlining the ways “the web gives marketers a 24/7 focus group of the world,” unleashing a flood of “indispensable” data that inform “strategic planning and project development.” Both groups are supposedly “empowered” by new technology, but the first gets to comment on products while the latter boosts their bottom line.
By insisting that openness is the key to success, whether you are a multinational corporation or a lone individual, today’s digital gurus gloss over the difference between humans and businesses, ignoring the latter’s structural advantages: true, “open” markets in some ways serve consumers’ buying interests, but the more open people’s lives are, the more easily they can be tracked and exploited by private interests.
But as the technology writer Rob Horning has observed, “The connections between people are not uniformly reciprocal.” Some are positioned to make profitable use of what they glean from the network; others are more likely to be taken advantage of, giving up valuable information and reaping few benefits. “Networks,” Horning writes, “allow for co-optation as much as cooperation.”
Under the rubric of open versus closed, the paramount concern is access and whether people can utilize a resource or platform without seeking permission first. This is how Google and Wikipedia wind up in the same camp, even though one is a multibillion-dollar advertising-funded business and the other is supported by a nonprofit foundation. Both are considered “open” because they are accessible, even though they operate in very different ways. Given that we share noncommercial projects on commercial platforms all the time online, the distinction between commercial and noncommercial has been muddled; meanwhile “private” and “public” no longer refer to types of ownership but ways of being, a setting on a social media stream. This suits new-media partisans, who insist that the “old debates” between market and the state, capital and government, are officially behind us. “If communism vs. capitalism was the struggle of the twentieth century,” law professor and open culture activist Lawrence Lessig writes, “then control vs. freedom will be the debate of the twenty-first century.”
No doubt, there is much to be said for open systems, as many have shown elsewhere.
The heart of the Internet is arguably the end-to-end principle (the idea that the network should be kept as flexible, unrestricted, and open to a variety of potential uses as possible). From this principle to the freely shared technical protocols and code that Tim Berners-Lee used to create the World Wide Web, we have open standards to thank for the astonishing growth of the online public sphere and the fact that anyone can participate without seeking permission first.