“Each day men sell little pieces of themselves in order to try to buy them back each night and week end with the coin of fun,” wrote Mills, despairing of a cycle that splits us in two: an at-work self and an at-play self, the person who produces for money and the person who produces for love.
New-media thinkers believe social production and amateurism transcend the old problem of alienated labor by allowing us to work for love, not money, but in fact the unremunerated future they anticipate will only deepen a split that many desperately desire to reconcile.
Innovation and invention were expected to bring about humankind’s inevitable release from alienated labor. The economist John Maynard Keynes once predicted that the four-hour workday was close at hand and that technical improvements in manufacturing would allow ample time for people to focus on “the art of life itself.” Into the 1960s experts agonized over the possibility of a “crisis of leisure time,” which they prophesied would sweep the country—a crisis precipitated not by a lack of time off but by an excess of it.
In 1967, testimony before a Senate subcommittee indicated that “by 1985 people could be working just 22 hours a week or 27 weeks a year or could retire at 38.” Over the ensuing decades countless people have predicted that machines would facilitate the “end of work” by automating drudgery and freeing humans to perform labor they enjoy (“Let the robots take the jobs, and let them help us dream up new work that matters,” concludes one Wired cover story rehashing this old idea).
New-media thinkers do not pretend this future has come to pass, but in Cognitive Surplus Clay Shirky presents what can be read as a contemporary variation on this old theme, explaining how the cumulative free time of the world’s educated population—an estimated trillion hours a year—is being funneled into creative, collaborative projects online.
Free time, Shirky claims, is something we have in growing abundance thanks to two factors: steadily increasing prosperity and a decline in television viewing. The Web, he argues, challenges us to stop thinking of time as “individual minutes to be whiled away” and to imagine it, instead, as a “social asset that can be harnessed.”
Projects like Wikipedia, message boards, and the latest viral memes are creative paradigms for a new age: entertaining, inclusive, easy to make, and efficient—the accumulation of tidbits of attention from thousands of people around the world. Much of the art and culture of the future, he wagers, will be produced in a similar manner, by pooling together spare moments spent online. Our efforts shall be aggregated, all the virtual crumbs combining to make a cake. As this surplus is deployed, institutions will be supplanted.
Shirky’s contributions reveal not how far we’ve progressed in pursuit of “the art of life” but how much ground has been lost since Keynes, how our sense of what’s possible has been circumscribed despite the development of new, networked wonders. Today’s popular visionary imagines us hunched over our computers with a few idle minutes to spare, our collective clicks supposed to substitute for what was once the promise of personal creative development—the freedom to think, feel, create, and act with the whole of one’s being.
In addition to other problematic aspects of his argument, Shirky’s two foundational assertions—that television watching is down and that free time has increased over recent decades—are both unfounded. Despite competition from the Internet, television viewing has generally risen over recent years, with the average American taking in nearly five hours of video each day, 98 percent through a traditional TV set. “Americans,” a 2012 Nielsen report states, “are not turning off.”
According to economists, with the exception of those who suffer from under- and unemployment, work hours have actually risen. Those lucky enough to be fully employed are, in fact, suffering from “time impoverishment.” Today the average citizen works longer hours for less money than he or she once did, putting in an extra four and a half weeks a year compared to 1979. Married couples with children are on the job an extra 413 hours, or an extra ten weeks a year, combined.
To rub salt in the wound, the United States is the only industrialized nation where employers are not required by law to provide workers any paid vacation time.
The reason prophecies like Keynes’s never came to pass is obvious but too often overlooked: new technologies do not emerge in a vacuum free of social, political, and economic influences. Context is all-important. On their own, labor-saving machines, however ingenious, are not enough to bring about a society of abundance and leisure, as the Luddites who destroyed the power looms set to replace them over two centuries ago knew all too well. If we want to see the fruits of technological innovation widely shared, conscious effort and political struggle will be required. Ultimately, outcomes are shaped as much by the wider circumstances in which new technologies operate as by the capabilities of the technologies themselves.
Baumol and Bowen, for example, made their rosy predictions against the backdrop of a social consensus now in tatters. When they wrote their report in the sixties, the prevailing economic orthodoxy said that both prosperity and risk should be broadly spread. Health care, housing, and higher education were more accessible to more people than they had ever been. Bolstered by a strong labor movement, workers enjoyed low unemployment and wages that were high by today’s standards. There was talk of shortened workweeks and a guaranteed annual income for all. As a consequence of these conditions, men and women felt emboldened to demand more than just a stable, well-compensated job; they wanted work that was also engaging and gratifying.
In the fifties and sixties, this wish manifested in multiple ways, taking aim at the status quo from within and without. First came books like The Organization Man and The Lonely Crowd, which voiced widespread anxieties about the erosion of individuality, inwardness, and agency within the modern workplace. Company men revolted against the “rat race.” Conformity was inveighed against, mindless acquiescence condemned, and affluence denounced as an anesthetic to authentic experience. Those who stood poised to inherit a gray flannel suit chafed against its constraints. By 1972 blue-collar workers were fed up too: wildcat strikers at auto factories protested the monotony of the assembly line. The advances of technology did not, in the end, liberate the worker from drudgery but rather further empowered those who owned the machines. By the end of the 1970s, as former labor secretary Robert Reich explains,
a wave of new technologies (air cargo, container ships and terminals, satellite communications and, later, the Internet) had radically reduced the costs of outsourcing jobs abroad. Other new technologies (automated machinery, computers, and ever more sophisticated software applications) took over many other jobs (remember bank tellers? telephone operators? service station attendants?). By the ’80s, any job requiring that the same steps be performed repeatedly was disappearing—going over there or into software.
At the same time the ideal of a “postindustrial society” offered the alluring promise of work in a world in which goods were less important than services. Over time, phrases like “information economy,” “immaterial labor,” “knowledge workers,” and “creative class” slipped into everyday speech. Mental labor would replace the menial; stifling corporate conventions would give way to diversity and free expression; flexible employment would allow workers to shape their own lives.
These prognostications, too, were not to be. Instead the increase of shareholder influence in the corporate sector accelerated the demand for ever-higher returns on investment and shorter turnaround. Dismissing stability as a refusal to innovate (or rather, to cut costs), business leaders cast aspersions on the steadying tenets of the first half of the twentieth century, including social provisions and job security. Instead of lifetime employment, the new system valorized adaptability, mobility, and risk; in the place of full-time employment, there were temporary contracts and freelance instability. In this context, the wish for expressive, worthwhile work, the desire to combine employment and purpose, took on a perverse form.
New-media thinkers, with their appetite for disintermediation and creative destruction, implicitly endorse and advance this transformation. The crumbling and hollowing out of established cultural institutions, from record labels to universities, and the liberation of individuals from their grip is a fantasy that animates discussions of amateurism. New technologies are hailed for enabling us to “organize without organizations,” which are condemned as rigid and suffocating and antithetical to the open architecture of the Internet.
However, past experience shows that the receding of institutions does not necessarily make space for a more authentic, egalitarian existence: if work and life have been made more flexible, people have also become unmoored, blown about by the winds of the market; if old hierarchies and divisions have been overthrown, the price has been greater economic inequality and instability; if the new system emphasizes potential and novelty, past achievement and experience have been discounted; if life has become less predictable and predetermined, it has also become more precarious as liability has shifted from business and government to the individual. It turns out that what we need is not to eliminate institutions but to reinvent them, to make them more democratic, accountable, inclusive, and just.
More than anyone else, urbanist Richard Florida, author of The Rise of the Creative Class, has built his career as a flag-bearer for the idea that individual ingenuity can fill the void left by declining institutions. Like new-media thinkers, with whom he shares a boundless admiration for all things high tech and Silicon Valley, he shuns “organizational or institutional directives” while embracing the values of meritocracy and openness. In Florida’s optimistic view, the demise of career stability has unleashed creativity and eliminated alienation in the workplace. “To some degree, Karl Marx had it partly right when he foresaw that the workers would someday control the means of production,” Florida declares. “This is now beginning to happen, although not as Marx thought it would, with the proletariat rising to take over factories. Rather, more workers than ever control the means of production, because it is inside their heads; they are the means of production.”
Welcome to what Florida calls the “information-and-idea-based economy,” a place where “people have come to accept that they’re on their own—that the traditional sources of security and entitlement no longer exist, or even matter.” Where earlier visionaries prophesied a world in which increased leisure allowed all human beings the well-being and security to freely cultivate their creative instincts, the apostles of the creative class collapse labor into leisure and exploitation into self-expression, and they arrogate creativity to serve corporate ends.
“Capitalism has also expanded its reach to capture the talents of heretofore excluded groups of eccentrics and nonconformists,” Florida writes. “In doing so, it has pulled off yet another astonishing mutation: taking people who would once have been bizarre mavericks operating at the bohemian fringe and setting them at the very heart of the process of innovation and economic growth.” According to Florida’s theory, the more creative types colorfully dot an urban landscape, the greater a city’s “Bohemian Index” and the higher the likelihood of the city’s economic success.
It’s all part of what he calls the “Big Morph”—“the resolution of the centuries-old tension between two value systems: the Protestant work ethic and the Bohemian ethic” into a new “creative ethos.” The Protestant ethic treats work as a duty; the Bohemian ethic, he says, is hedonistic. Profit seeking and pleasure seeking have united, the industrialist and the bon vivant have become one. “Highbrow and lowbrow, alternative and mainstream, work and play, CEO and hipster are all morphing together today,” Florida enthuses.
What kind of labor is it, exactly, that people will perform in this inspired Shangri-la? Florida’s popular essays point the way: he applauds a “teenage sales rep re-conceiving a Vonage display” as a stunning example of creative ingenuity harnessed for economic success; later he announces, anecdotally, that an “overwhelming” number of students would prefer to work “lower-paying temporary jobs in a hair salon” than “good, high-paying jobs in a machine tool factory.” Cosmetology is “more psychologically rewarding, creative work,” he explains.
It’s tempting to dismiss such a broad definition of creativity as out of touch, but Florida’s declarations illuminate an important trend and one that helped set the terms for the ascension of amateurism. It is not that creative work has suddenly become abundant, as Florida would have us believe; we have not all become Mozarts on the floor of some big-box store, Frida Kahlos at the hair salon. Rather, the point is that the psychology of creativity has become increasingly useful to the economy. The disposition of the artist is ever more in demand. The ethos of the autonomous creator has been repurposed to serve as a seductive facade for a capricious system and adopted as an identity by those who are trying to make their way within it.
Thus the ideal worker matches the traditional profile of the enthusiastic virtuoso: an individual who is versatile and rootless, inventive and adaptable; who self-motivates and works long hours, tapping internal and external resources; who is open to reinvention, emphasizing potential and promise as opposed to past achievements; who loves the work so much that he or she would do it no matter what, and so expects little compensation or commitment in return—amateurs and interns, for example.
The “free” credo promoted by writers such as Chris Anderson and other new-media thinkers has helped lodge a new rung on an ever-lengthening educational and career ladder: the now obligatory internship. Like artists and culture makers of all stripes, interns are said to be “entrepreneurs” and “free agents” investing in their “personal brands.” “The position of interns is not unlike that of many young journalists, musicians, and filmmakers who are now expected to do online work for no pay as a way to boost their portfolios,” writes Ross Perlin, author of the excellent book Intern Nation. “If getting attention and building a reputation online are often seen as more valuable than immediate ‘monetization,’ the same theory is being propounded for internships in the analog world—with exposure, contacts, and references advanced as the prerequisite, or even plausible alternative, to making money.”
As Perlin documents in vivid detail, capitalizing on desperate résumé-building college students and postgraduates exacerbates inequality. Who but the relatively well off can afford to take a job that doesn’t pay? Those who lack financial means are either shut out of opportunities or forced to support themselves with loans, going into debt for the privilege of working for free.
Creativity is invoked time and again to justify low wages and job insecurity. Across all sectors of the economy, responsibility for socially valuable work, from journalism to teaching and beyond, is being off-loaded onto individuals as institutions retreat from obligations to support efforts that aren’t immediately or immensely profitable. The Chronicle of Higher Education urges graduate students to imagine themselves as artists, to better prepare for the possibility of impoverishment when tenure-track jobs fail to materialize: “We must think of graduate school as more like choosing to go to New York to become a painter or deciding to travel to Hollywood to become an actor. Those arts-based careers have always married hope and desperation into a tense relationship.”
In a similar vein, NPR reports that the “temp-worker lifestyle” is a kind of “performance art,” a statement that conjures a fearless entertainer mid-tightrope or an acrobat hurtling toward the next trapeze without a safety net—a thrilling image, especially to employers who would prefer not to provide benefits.
The romantic stereotype of the struggling artist is familiar to the musician Marc Ribot, a legendary figure on the New York jazz scene who has worked with Marianne Faithfull, Elvis Costello, John Zorn, Tom Waits, Alison Krauss, Robert Plant, and even Elton John. Ribot tells me he had an epiphany watching a “great but lousy” made-for-TV movie about Apple computers. As he tells it, two exhausted employees are complaining about working eighteen-hour days with no weekends when an actor playing Steve Jobs tells them to suck it up—they’re not regular workers at a stodgy company like IBM but artists.
“In other words art was the new model for this form of labor,” Ribot says, explaining his insight. “The model they chose is musicians, like Bruce Springsteen staying up all night to get that perfect track. Their life does not resemble their parents’ life working at IBM from nine to five, and certainly doesn’t resemble their parents’ pay structures—it’s all back end, no front end. All transfer of risk to the worker.” (In 2011 Apple Store workers upset over pay disparities were told, “Money shouldn’t be an issue when you’re employed at Apple. Working at Apple should be viewed as an experience.”)
In Ribot’s field this means the more uncertain part of the business—the actual writing, recording, and promoting of music—is increasingly “outsourced” to individuals while big companies dominate arenas that are more likely to be profitable, like concert sales and distribution (Ticketmaster, Amazon, iTunes, and Google Play, none of which invests in music but reaps rewards from its release). “That technological change is upon us is undeniable and irreversible,” Ribot wrote about the challenges musicians face as a consequence of digitization. “It will probably not spell the end of music as a commodity, although it may change drastically who is profiting off whose music. Whether these changes will create a positive future for producers or consumers of music depends on whether musicians can organize the legal and collective struggle necessary to ensure that those who profit off music in any form pay the people who make it.”
Ribot quotes John Lennon: “You think you’re so clever and classless and free.” Americans in general like to think of themselves as having transcended economic categories and hierarchies, Ribot says, and artists are no exception. During the Great Depression artists briefly began to think of themselves as workers and to organize as such, amassing social and political power with some success, but today it’s more popular to speak of artists as entrepreneurs or brands, designations that further obscure the issue of labor and exploitation by comparing individual artists to corporate entities or sole proprietors of small businesses.
If artists are fortunate enough to earn money from their art, they tend to receive percentages, fees, or royalties rather than wages; they play “gigs” or do “projects” rather than hold steady jobs, which means their work doesn’t fit the standard breakdown of boss and worker. They also spend a lot of time on the road rather than rooted in one place, which makes it harder to organize and advocate for their rights.
What’s missing, as Ribot sees it, is a way to understand how the economy has evolved away from the old industrial model and how value is extracted within the new order. “I think that people, not just musicians, need to do an analysis so they stop asking the question, ‘Who is my legal employer?’ and start asking, ‘Who works, who creates things that people need, and who profits from it?’” These questions, Ribot wagers, could be the first step to understanding the model of freelance, flexible labor that has become increasingly dominant across all sectors of the economy, not just in creative fields.
We are told that a war is being waged between the decaying institutions of the off-line world and emerging digital dynamos, between closed industrial systems and open networked ones, between professionals who cling to the past and amateurs who represent the future. The cheerleaders of technological disruption are not alone in their hyperbole. Champions of the old order also talk in terms that reinforce a seemingly unbridgeable divide.
Unpaid amateurs have been likened to monkeys with typewriters, gate-crashing the cultural conversation without having been vetted by an official credentialing authority or given the approval of an established institution. “The professional is being replaced by the amateur, the lexicographer by the layperson, the Harvard professor by the unschooled populace,” warns Andrew Keen, who remains obstinately oblivious to the failings of the professionally produced mass culture he defends.
The Internet is decried as a province of know-nothing narcissists motivated by a juvenile desire for fame and fortune, a virtual backwater of vulgarity and phoniness. Jaron Lanier, the technologist turned skeptic, has taken aim at what he calls “digital Maoism” and the ascendance of the “hive mind.” Social media, as Lanier sees it, demean rather than elevate us, emphasizing the machine over the human, the crowd over the individual, the partial over the integral. The problem is not just that Web 2.0 erodes professionalism but, more fundamentally, that it threatens originality and autonomy.
Outrage has taken hold on both sides. But the lines in the sand are not as neatly drawn as the two camps maintain. Wikipedia, considered the ultimate example of amateur triumph as well as the cause of endless hand-wringing, hardly hails the “death of the expert” (a common claim by both those who love the site and those who despise it). While it is true that anyone can contribute to the encyclopedia, entries must have references, and many of the sources referenced qualify as professional. Most entries boast citations of academic articles, traditional books, and news stories. Similarly, social production does not take place entirely outside the mainstream. Up to 85 percent of the open source Linux developers said to be paradigmatic of this new age of volunteerism are, in fact, employees of large corporations that depend on nonproprietary software.
More generally, there is little evidence that the Internet has precipitated a mass rejection of more traditionally produced fare. What we are witnessing is a convergence, not a coup. Peer-to-peer sites—estimated to take up half the Internet’s bandwidth—are overwhelmingly used to distribute traditional commercial content, namely mainstream movies and music. People gather on message boards to comment on their favorite television shows, which they download or stream online. The most popular videos on YouTube, year after year, are the product of conglomerate record labels, not bedroom inventions. Some of the most visited sites are corporate productions like CNN. Most links circulated on social media are professionally produced. The challenge is to understand how power and influence are distributed within this mongrel space where professional and amateur combine.
Consider, for a moment, Clay Shirky, whose back-flap biography boasts corporate consulting gigs with Nokia, News Corp, BP, the U.S. Navy, Lego, and others. Shirky embodies the strange mix of technological utopianism and business opportunism common to many Internet entrepreneurs and commentators, a combination of populist rhetoric and unrepentant commercialism. Many of amateurism’s loudest advocates are also business apologists, claiming to promote cultural democracy while actually advising corporations on how to seize “collaboration and self-organization as powerful new levers to cut costs” in order to “discover the true dividends of collective capability and genius” and “usher their organizations into the twenty-first century.”
The grassroots rhetoric of networked amateurism has been harnessed to corporate strategy, continuing a nefarious tradition. Since the 1970s populist outrage has been yoked to free-market ideology by those who exploit cultural grievances to shore up their power and influence, directing public animus away from economic elites and toward cultural ones, away from plutocrats and toward professionals. But it doesn’t follow that criticizing “professionals” or “experts” or “cultural elites” means that we are striking a blow against the real powers; and when we uphold amateur creativity, we are not necessarily resolving the deeper problems of entrenched privilege or the irresistible imperative of profit. Where online platforms are concerned, our digital pastimes can sometimes promote positive social change and sometimes hasten the transfer of wealth to Silicon Valley billionaires.
Even well-intentioned celebration of networked amateurism has the potential to obscure the way money still circulates. That’s the problem with PressPausePlay, a slick documentary about the digital revolution that premiered at a leading American film festival. The directors examine the ways new tools have sparked a creative overhaul by allowing everyone to participate—or at least everyone who owns the latest Apple products. That many of the liberated media makers featured in the movie turn out to work in advertising and promotion, like celebrity business writer Seth Godin, who boasts of his ability to turn his books into bestsellers by harnessing the power of the Web, underscores how the hype around the cultural upheaval sparked by connective technologies easily slides from making to marketing. While the filmmakers pay tribute to DIY principles and praise the empowering potential of digital tools unavailable a decade ago, they make little mention of the fact that the telecommunications giant Ericsson provided half of the movie’s seven-hundred-thousand-dollar budget and promotional support.
We should be skeptical of the narrative of democratization by technology alone. The promotion of Internet-enabled amateurism is a lazy substitute for real equality of opportunity. More deeply, it’s a symptom of the retreat over the past half century from the ideals of meaningful work, free time, and shared prosperity—an agenda that entailed enlisting technological innovation for the welfare of each person, not just the enrichment of the few.
Instead of devising truly liberating ways to harness machines to remake the economy, whether by designing satisfying jobs or through the social provision of a basic income to everyone regardless of work status, we have Amazon employees toiling on the warehouse floor for eleven dollars an hour and Google contract workers who get fired after a year so they don’t have to be brought on full-time. Cutting-edge new-media companies valued in the tens of billions retain employees numbering in the lowly thousands, and everyone else is out of luck. At the same time, they hoard their record-setting profits, sitting on mountains of cash instead of investing it in ways that would benefit us all.
The zeal for amateurism looks less emancipatory—as much necessity as choice—when you consider the crisis of rising educational costs, indebtedness, and high unemployment, all while the top 1 percent captures an ever-growing portion of the surplus generated by increased productivity. (Though productivity has risen 23 percent since 2000, real hourly pay has effectively stagnated.)
The consequences are particularly stark for young people: between 1984 and 2009, the median net worth for householders under thirty-five was down 68 percent while rising 42 percent for those over sixty-five.
Many are delaying starting families of their own and moving back in with Mom and Dad.
Our society’s increasing dependence on free labor—online and off—is immoral in this light. The celebration of networked amateurism—and of social production and the cognitive surplus—glosses over the question of who benefits from our uncompensated participation online. Though some internships are enjoyable and useful, the real beneficiary of this arrangement is corporate America, which reaps the equivalent of a two-billion-dollar annual subsidy.
And many of the digital platforms to which we contribute are highly profitable entities, run not for love but for money.
Creative people have historically been encouraged to ignore economic issues and maintain indifference to matters like money and salaries. Many of us believe that art and culture should not succumb to the dictates of the market, and one way to do this is to act as though the market doesn’t exist, to devise a shield to deflect its distorting influence, and uphold the lack of compensation as virtuous. This stance can provide vital breathing room, but it can also perpetuate inequality. “I consistently come across people valiantly trying to defy an economic class into which they were born,” Richard Florida writes. “This is particularly true of the young descendants of the truly wealthy—the capitalist class—who frequently describe themselves as just ‘ordinary’ creative people working on music, film or intellectual endeavors of one sort or another.”
How valiant to deny the importance of money when it is had in abundance. “Economic power is first and foremost a power to keep necessity at arm’s length,” the French sociologist Pierre Bourdieu observed. Especially, it seems, the necessity of talking honestly about economics.
Those who applaud social production and networked amateurism, the colorful cacophony that is the Internet, and the creative capacities of everyday people to produce entertaining and enlightening things online are right to marvel. There is amazing inventiveness, boundless talent and ability, and overwhelming generosity on display. Where they go wrong is in thinking that the Internet is an egalitarian, let alone revolutionary, platform for our self-expression and development, or that being able to shout into the digital torrent is adequate for democracy.
The struggle between amateurs and professionals is, fundamentally, a distraction. The tragedy for all of us is that we find ourselves in a world where the qualities that define professional work—stability, social purpose, autonomy, and intrinsic and extrinsic rewards—are scarce. “In part, the blame falls on the corporate elite,” Barbara Ehrenreich wrote back in 1989, “which demands ever more bankers and lawyers, on the one hand, and low-paid helots on the other.” These low-paid helots are now unpaid interns and networked amateurs. The rub is that over the intervening years we have somehow deceived ourselves into believing that this state of insecurity and inequity is a form of liberation.
3
WHAT WE WANT
Today it is standard wisdom that a whole new kind of person lives in our midst, the digital native—“2.0 people,” as the novelist Zadie Smith dubbed them. Exalted by techno-enthusiasts for being hyper-connected and sociable, technically savvy and novelty seeking—and chastised by techno-skeptics for those very same traits—this new generation and its predecessors are supposedly separated by a gulf that is immense and unbridgeable. Self-appointed experts tell us that “today’s students are no longer the people our educational system was designed to teach”; they “experience friendship” and “relate to information differently” than all who came before.
Reflecting on this strange new species, the skeptics are inclined to agree. “The cyber-revolution is bringing about a different magnitude of change, one that marks a massive discontinuity,” warns the literary critic Sven Birkerts. “Pre-Digital Man has more in common with his counterpart in the agora than he will with a Digital Native of the year 2050.” It is not just cultural or social references that divide the natives from their pre-digital counterparts, but “core phenomenological understandings.” Their very modes of perception and sense making, of experiencing the world and interpreting it, Birkerts claims, are simply incomprehensible to their elders. They are different creatures altogether.
The tech-enthusiasts make a similarly extreme case for total generational divergence, idolizing digital natives with fervor and ebullience equal and opposite to Birkerts’s unease. These natives, born and raised in networked waters, surf shamelessly, with no need for privacy or solitude. As described by Nick Bilton in his book I Live in the Future and Here’s How It Works, digital natives prefer media in “bytes” and “snacks” as opposed to full “meals”—defined as the sort of lengthy article one might find in the New Yorker magazine. Digital natives believe “immediacy trumps quality.”
They “unabashedly create and share content—any type of content,” and, unlike digital immigrants, they never suffer from information overload. People who have grown up online also do not read the news. Or rather, we are told, for them the news is whatever their friends deem interesting, not what some organization or authoritative source says is significant. “This is the way I navigate today as well,” Bilton, a technology writer for the New York Times, proudly declares. “If the news is important, it will find me.”
(Notably, Bilton’s assertion was contradicted by a Harvard study that found eighteen- to twenty-nine-year-olds still prefer to get their political news from established newspapers, print or digital, rather than from the social media streams of their friends.)