Library@Kendriya Vidyalaya Pattom

Where Minds meet and Ideas pop up!

Finding good information on the internet

Have you heard about the highly endangered tree octopus of the forests of the Pacific Northwest? "Ridiculous," you say? But I found a whole webpage devoted to saving them, so they must exist, right? This website has been used to intentionally mislead students (see news article) as part of a scientific study, and it points out a real problem our society faces in the digital information age: anyone can put anything on the internet.[Ed. this sentence edited for clarification]

The internet empowers us to educate ourselves and make more informed choices and decisions without leaving our couches. But if we believe everything we find on the internet, we are likely to wind up making some very poor decisions. In this new digital information age, how do we keep from being misinformed? As a skeptical environmental research scientist and educator, I have picked up a few tricks that anyone can use to find and select high-quality information from the internet.

#1 Don’t be scared of scientific papers

Aren’t scientific papers too hard to read?

While scientific papers may be filled with difficult-to-understand jargon, anyone can scan them and get the gist (pdf); just skip the confusing words or look them up. If you are still confused and it is important that you understand completely, ask for help from someone trained in the field of interest. For instance, if it is a medical paper, you might take the article to a physician; but not all doctors have the necessary level of scientific literacy to understand the article, and you may need a second opinion.

You might also contact a medical researcher you know or can find using the internet and ask for their help in understanding what the article means. You could also take advantage of the power of social media and ask questions on Twitter, adding hashtags for science or medical topics. You can even find and follow individual scientists in particular fields of interest on Twitter using sciencepond.com.

Additionally, certain websites may allow you to ask questions directly of health professionals or scientists. Alternatively, you may be able to email the lead author of the study directly. Their email address is usually listed on the publication, and they are often thrilled to have someone interested in their research.

Why use scientific papers?

Scientific papers are the best source of information on the internet. These papers use rigorous experimental and statistical methods that improve the validity of the evidence presented within them. For instance, scientific studies often allow the clear identification of cause and effect because only one factor is varied at a time, so that the effect of that single factor can be determined. Additionally, these studies often repeat the tests many times and randomize specific aspects of the experimental design, which helps reduce the likelihood that the results were due to chance or unknown factors.

Scientific papers are also peer-reviewed. Peer review is a process by which other scientists in the field are asked to anonymously and critically evaluate the research presented within a paper. The editor ultimately makes the decision on whether or not to publish the article, based on feedback from reviewers and his or her own critique.

This usually ensures that the information published in these papers has been obtained through rigorous, high-quality techniques and that the authors do not make overly speculative conclusions that do not match the data. However, this is not always true. With the proliferation of peer-reviewed journals, erroneous or poorly conducted research sometimes gets published. Even high-profile journals sometimes make mistakes, but with a few techniques, you can make sure these publications don’t trick you.

How do I find appropriate scientific papers?

Google Scholar allows anyone to search directly for scientific papers. This is an incredibly powerful tool. When I am trying to find information on a topic that is really important, I use several simple tricks to ensure that I am not misled by a single erroneous publication.

First, I add the word "review" to my search. Review articles generally consider most, if not all, studies published on a topic and summarize them, often weighing the trade-offs between these studies and giving the authors’ opinion about the relative weight of the evidence.

Second, if no recent review exists, I scan the top 10-50 articles and make a list, conducting my own mini-review of the topic. How do I read so many studies, you ask? I don’t. I read the titles and abstracts first. Abstracts are brief, simplified summaries of the information contained within an article. Scanning these abstracts is a handy way to quickly examine a large collection of papers for their key points.

Another trick I use to obtain the best information is to obtain the most recent information. Google Scholar gives you an option to narrow your search to a range of years. A drop-down box allows you to specify how far back you want to search for articles. If you narrow your search to the last one, two, or five years, you are likely to get the most recent papers, which often present the best summaries of the current understanding of the topic.
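If you are comfortable with a little scripting, these same tricks can be bundled into a small helper. The Python sketch below simply assembles a Google Scholar results URL from a topic, an optional "review" keyword, and a year range; the as_ylo and as_yhi parameter names are an assumption based on how Scholar's result URLs commonly look, not a documented API, so treat this as an illustration rather than a guaranteed recipe.

from urllib.parse import urlencode

# A minimal sketch, not an official Google Scholar API: it just builds a
# results URL that mirrors the manual search tricks described above.
def scholar_search_url(topic, year_from=None, year_to=None, review_only=False):
    """Build a Google Scholar search URL for a topic.

    Adding the word "review" biases results toward review articles;
    the year arguments narrow the search to recent papers.
    """
    query = topic + " review" if review_only else topic
    params = {"q": query}
    if year_from:
        params["as_ylo"] = year_from  # earliest year (assumed parameter name)
    if year_to:
        params["as_yhi"] = year_to    # latest year (assumed parameter name)
    return "https://scholar.google.com/scholar?" + urlencode(params)

# Example: look for recent review articles on a topic of interest.
print(scholar_search_url("caffeine and cognition", year_from=2020, review_only=True))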

In general, abstracts of papers are available to everyone. This is not true for the full papers. Some journals or specific papers are "open-access" and available to everyone, and these are becoming more common, but many papers are still not publicly available. Google Scholar will give you links to the papers that are available in full online, but some will be unavailable with this tool. However, there are many other methods of obtaining these papers. For instance, you can ask for the paper on Twitter using the hashtag #icanhazpdf, ask for it on FriendFeed, respectfully email the author and ask for a copy (all three of these methods ultimately result in one person emailing the paper to one other person, thus not violating copyright), or go to the nearest university library.

#2 Not all websites are created equal

Can I find good information on regular websites?

Using websites is like talking to friends. While we all have friends that are truly knowledgeable and can be very helpful, we all also have friends who are often wrong, but never in doubt. Some of our friends may tell us things that have mixtures of truth and fiction at the same time. Most people know to be skeptical of what some of their friends tell them and we should hold websites to the same standards. However, there are a few guidelines that can help you to separate the informative from the misleading.

Commercial websites that are trying to sell a product cannot be trusted to tell you the truth about issues related to that product. You may be able to gain an understanding of a product and compare products using these sites, but any claims should be highly suspect. Never forget that the website’s goal is to sell you something, not to educate you.

Websites sponsored by particular commercial, political, or other entities should be viewed with suspicion as well. Like commercial websites, these sites also have a goal of "selling" you something, it just may be a particular belief rather than a physical product. Check the "about us" page to see who sponsors the website. You are more likely to get high-quality information from websites affiliated with universities, government agencies (but not politicians), and some news organizations or non-profit associations devoted to public education.

Beyond these categories, we enter the realm of news and magazine websites, blogs, and Wikipedia. All of these reporting platforms are valuable and can be trustworthy. Many respectable news and magazine websites now employ bloggers, hired because they are good journalists, with editors in some cases reviewing blog posts prior to publication. So blogs may be of equal or higher quality than many paper publications.

However, there is more variability among unaffiliated individual bloggers. Commonly, no one reviews the material in these blogs prior to posting. However, readers may comment on the posts, and these comments can be informative. Among science blogs, individuals and organizations who regularly produce high-quality work are now being recognized and included in science blog aggregators like ResearchBlogging.org, Scienceblogging.org, and ScienceSeeker.org. Many of these blogs can be regarded as trustworthy sources of information and often provide better coverage than the mainstream media (see this post).

Wikipedia can also be very valuable. Although Wikipedia can be edited by anyone, it has a large group of active moderators and editors who remove erroneous information and protect some controversial pages from being altered too often or too rapidly. Additionally, Wikipedia now flags pages that have fewer citations or might have poorer-quality information with warning messages at the top of the page. Multiple approaches work together to ensure the overall accuracy of Wikipedia pages. Even before many of these policies were implemented, Wikipedia was found to be roughly comparable in quality to Encyclopedia Britannica (see this study (pdf)). Thus, in its current form, Wikipedia is a fairly trustworthy source of information. Some individual pages may still warrant caution, but for these, you will get a warning right at the top.

#3 Checking the facts

How do I know if a particular piece of information is valid?

In general, webpages with more citations and links, or from more unbiased, reputable organizations, will provide you with better information. However, it is also a good idea to try spot-checking some of what is on the more suspicious websites, like commercial sites or some individual blogs, with other trusted webpages, or by using Google scholar or a fact-checking website.

Fact-checking websites claim to deliver unbiased evaluations of recent internet rumors or statements by politicians. A couple of my favorites are factcheck.org and snopes.com. These sites are a great way to check whether that email claiming to raise money for a sick child or offering strange advice is a fake. You can also put your questions to experts on Twitter or other social media websites. The more independent sources you have that all say the same thing, the more confident you can be in the validity of that information.

See also:

http://www.cnn.com/2008/HEALTH/02/21/ep.web.sites/index.html

http://gethelp.library.upenn.edu/guides/tutorials/webliteracy/

http://askabiologist.asu.edu/

Note: many thanks to Bora Zivkovic for his helpful suggestions.

*

About the Author: Kevin McCluney is a postdoctoral research associate in the School of Life Sciences at Arizona State University, where he also received his PhD in biology in 2010. McCluney conducts research on rivers and streamside animal communities and how water structures these communities. Through a series of peer-reviewed publications, he has shown that many food webs are really water webs, with animals essentially drinking their food rather than eating it. McCluney has also taught college-level classes for seven years and mentored students from 6th grade through the graduate level in their efforts to conduct independent research projects, with students winning many local and international awards and publishing peer-reviewed papers.

Filed under: Article of the Week

THE INFORMATION

How the Internet gets inside us.

by Adam Gopnik

When the first Harry Potter book appeared, in 1997, it was just a year before the universal search engine Google was launched. And so Hermione Granger, that charming grind, still goes to the Hogwarts library and spends hours and hours working her way through the stacks, finding out what a basilisk is or how to make a love potion. The idea that a wizard in training might have, instead, a magic pad where she could inscribe a name and in half a second have an avalanche of news stories, scholarly articles, books, and images (including images she shouldn’t be looking at) was a Quidditch broom too far. Now, having been stuck with the library shtick, she has to go on working the stacks in the Harry Potter movies, while the kids who have since come of age nudge their parents. “Why is she doing that?” they whisper. “Why doesn’t she just Google it?”

That the reality of machines can outpace the imagination of magic, and in so short a time, does tend to lend weight to the claim that the technological shifts in communication we’re living with are unprecedented. It isn’t just that we’ve lived one technological revolution among many; it’s that our technological revolution is the big social revolution that we live with. The past twenty years have seen a revolution less in morals, which have remained mostly static, than in means: you could already say “fuck” on HBO back in the eighties; the change has been our ability to tweet or IM or text it. The set subject of our novelists is information; the set obsession of our dons is what it does to our intelligence.

The scale of the transformation is such that an ever-expanding literature has emerged to censure or celebrate it. A series of books explaining why books no longer matter is a paradox that Chesterton would have found implausible, yet there they are, and they come in the typical flavors: the eulogistic, the alarmed, the sober, and the gleeful. When the electric toaster was invented, there were, no doubt, books that said that the toaster would open up horizons for breakfast undreamed of in the days of burning bread over an open flame; books that told you that the toaster would bring an end to the days of creative breakfast, since our children, growing up with uniformly sliced bread, made to fit a single opening, would never know what a loaf of their own was like; and books that told you that sometimes the toaster would make breakfast better and sometimes it would make breakfast worse, and that the cost for finding this out would be the price of the book you’d just bought.

All three kinds appear among the new books about the Internet: call them the Never-Betters, the Better-Nevers, and the Ever-Wasers. The Never-Betters believe that we’re on the brink of a new utopia, where information will be free and democratic, news will be made from the bottom up, love will reign, and cookies will bake themselves. The Better-Nevers think that we would have been better off if the whole thing had never happened, that the world that is coming to an end is superior to the one that is taking its place, and that, at a minimum, books and magazines create private space for minds in ways that twenty-second bursts of information don’t. The Ever-Wasers insist that at any moment in modernity something like this is going on, and that a new way of organizing data and connecting users is always thrilling to some and chilling to others—that something like this is going on is exactly what makes it a modern moment. One’s hopes rest with the Never-Betters; one’s head with the Ever-Wasers; and one’s heart? Well, twenty or so books in, one’s heart tends to move toward the Better-Nevers, and then bounce back toward someplace that looks more like home.

Among the Never-Betters, the N.Y.U. professor Clay Shirky—the author of “Cognitive Surplus” and many articles and blog posts proclaiming the coming of the digital millennium—is the breeziest and seemingly most self-confident. “Seemingly,” because there is an element of overdone provocation in his stuff (So people aren’t reading Tolstoy? Well, Tolstoy sucks) that suggests something a little nervous going on underneath. Shirky believes that we are on the crest of an ever-surging wave of democratized information: the Gutenberg printing press produced the Reformation, which produced the Scientific Revolution, which produced the Enlightenment, which produced the Internet, each move more liberating than the one before. Though it may take a little time, the new connective technology, by joining people together in new communities and in new ways, is bound to make for more freedom. It’s the Wired version of Whig history: ever better, onward and upward, progress unstopped. In John Brockman’s anthology “Is the Internet Changing the Way You Think?,” the evolutionary psychologist John Tooby shares the excitement—“We see all around us transformations in the making that will rival or exceed the printing revolution”—and makes the same extended parallel to Gutenberg: “Printing ignited the previously wasted intellectual potential of huge segments of the population. . . . Freedom of thought and speech—where they exist—were unforeseen offspring of the printing press.”

Shirky’s and Tooby’s version of Never-Betterism has its excitements, but the history it uses seems to have been taken from the back of a cereal box. The idea, for instance, that the printing press rapidly gave birth to a new order of information, democratic and bottom-up, is a cruel cartoon of the truth. If the printing press did propel the Reformation, one of the biggest ideas it propelled was Luther’s newly invented absolutist anti-Semitism. And what followed the Reformation wasn’t the Enlightenment, a new era of openness and freely disseminated knowledge. What followed the Reformation was, actually, the Counter-Reformation, which used the same means—i.e., printed books—to spread ideas about what jerks the reformers were, and unleashed a hundred years of religious warfare. In the seventeen-fifties, more than two centuries later, Voltaire was still writing in a book about the horrors of those other books that urged burning men alive in auto-da-fé. Buried in Tooby’s little parenthetical—“where they exist”—are millions of human bodies. If ideas of democracy and freedom emerged at the end of the printing-press era, it wasn’t by some technological logic but because of parallel inventions, like the ideas of limited government and religious tolerance, very hard won from history.

Of course, if you stretch out the time scale enough, and are sufficiently casual about causes, you can give the printing press credit for anything you like. But all the media of modern consciousness—from the printing press to radio and the movies—were used just as readily by authoritarian reactionaries, and then by modern totalitarians, to reduce liberty and enforce conformity as they ever were by libertarians to expand it. As Andrew Pettegree shows in his fine new study, “The Book in the Renaissance,” the mainstay of the printing revolution in seventeenth-century Europe was not dissident pamphlets but royal edicts, printed by the thousand: almost all the new media of that day were working, in essence, for kinglouis.gov.

Even later, full-fledged totalitarian societies didn’t burn books. They burned some books, while keeping the printing presses running off such quantities that by the mid-fifties Stalin was said to have more books in print than Agatha Christie. (Recall that in “1984” Winston’s girlfriend works for the Big Brother publishing house.) If you’re going to give the printed book, or any other machine-made thing, credit for all the good things that have happened, you have to hold it accountable for the bad stuff, too. The Internet may make for more freedom a hundred years from now, but there’s no historical law that says it has to.

Many of the more knowing Never-Betters turn for cheer not to messy history and mixed-up politics but to psychology—to the actual expansion of our minds. The argument, advanced in Andy Clark’s “Supersizing the Mind” and in Robert K. Logan’s “The Sixth Language,” begins with the claim that cognition is not a little processing program that takes place inside your head, Robby the Robot style. It is a constant flow of information, memory, plans, and physical movements, in which as much thinking goes on out there as in here. If television produced the global village, the Internet produces the global psyche: everyone keyed in like a neuron, so that to the eyes of a watching Martian we are really part of a single planetary brain. Contraptions don’t change consciousness; contraptions are part of consciousness. We may not act better than we used to, but we sure think differently than we did.

Cognitive entanglement, after all, is the rule of life. My memories and my wife’s intermingle. When I can’t recall a name or a date, I don’t look it up; I just ask her. Our machines, in this way, become our substitute spouses and plug-in companions. Jerry Seinfeld said that the public library was everyone’s pathetic friend, giving up its books at a casual request and asking you only to please return them in a month or so. Google is really the world’s Thurber wife: smiling patiently and smugly as she explains what the difference is between eulogy and elegy and what the best route is to that little diner outside Hackensack. The new age is one in which we have a know-it-all spouse at our fingertips.

But, if cognitive entanglement exists, so does cognitive exasperation. Husbands and wives deny each other’s memories as much as they depend on them. That’s fine until it really counts (say, in divorce court). In a practical, immediate way, one sees the limits of the so-called “extended mind” clearly in the mob-made Wikipedia, the perfect product of that new vast, supersized cognition: when there’s easy agreement, it’s fine, and when there’s widespread disagreement on values or facts, as with, say, the origins of capitalism, it’s fine, too; you get both sides. The trouble comes when one side is right and the other side is wrong and doesn’t know it. The Shakespeare authorship page and the Shroud of Turin page are scenes of constant conflict and are packed with unreliable information. Creationists crowd cyberspace every bit as effectively as evolutionists, and extend their minds just as fully. Our trouble is not the over-all absence of smartness but the intractable power of pure stupidity, and no machine, or mind, seems extended enough to cure that.

The books by the Better-Nevers are more moving than those by the Never-Betters for the same reason that Thomas Gray was at his best in that graveyard: loss is always the great poetic subject. Nicholas Carr, in “The Shallows,” William Powers, in “Hamlet’s BlackBerry,” and Sherry Turkle, in “Alone Together,” all bear intimate witness to a sense that the newfound land, the ever-present BlackBerry-and-instant-message world, is one whose price, paid in frayed nerves and lost reading hours and broken attention, is hardly worth the gains it gives us. “The medium does matter,” Carr has written. “As a technology, a book focuses our attention, isolates us from the myriad distractions that fill our everyday lives. A networked computer does precisely the opposite. It is designed to scatter our attention. . . . Knowing that the depth of our thought is tied directly to the intensity of our attentiveness, it’s hard not to conclude that as we adapt to the intellectual environment of the Net our thinking becomes shallower.”

These three Better-Nevers have slightly different stories to tell. Carr is most concerned about the way the Internet breaks down our capacity for reflective thought. His testimony about how this happened in his own life is plangent and familiar, but he addles it a bit by insisting that the real damage is being done at the neurological level, that our children are having their brains altered by too much instant messaging and the like. This sounds impressive but turns out to be redundant. Of course the changes are in their brains; where else would they be? It’s the equivalent of saying that playing football doesn’t just affect a kid’s fitness; it changes the muscle tone that creates his ability to throw and catch footballs.

Powers’s reflections are more family-centered and practical. He recounts, very touchingly, stories of family life broken up by the eternal consultation of smartphones and computer monitors:

Somebody excuses themselves for a bathroom visit or a glass of water and doesn’t return. Five minutes later, another of us exits on a similarly mundane excuse along the lines of “I have to check something.”. . . Where have all the humans gone? To their screens of course. Where they always go these days. The digital crowd has a way of elbowing its way into everything, to the point where a family can’t sit in a room together for half an hour without somebody, or everybody, peeling off. . . . As I watched the Vanishing Family Trick unfold, and played my own part in it, I sometimes felt as if love itself, or the acts of heart and mind that constitute love, were being leached out of the house by our screens.

He then surveys seven Wise Men—Plato, Thoreau, Seneca, the usual gang—who have something to tell us about solitude and the virtues of inner space, all of it sound enough, though he tends to overlook the significant point that these worthies were not entirely in favor of the kinds of liberties that we now take for granted and that made the new dispensation possible. (He knows that Seneca instructed the Emperor Nero, but sticks in a footnote to insist that the bad, fiddling-while-Rome-burned Nero asserted himself only after he fired the philosopher and started to act like an Internet addict.)

Similarly, Nicholas Carr cites Martin Heidegger for having seen, in the mid-fifties, that new technologies would break the meditational space on which Western wisdoms depend. Since Heidegger had not long before walked straight out of his own meditational space into the arms of the Nazis, it’s hard to have much nostalgia for this version of the past. One feels the same doubts when Sherry Turkle, in “Alone Together,” her touching plaint about the destruction of the old intimacy-reading culture by the new remote-connection-Internet culture, cites studies that show a dramatic decline in empathy among college students, who apparently are “far less likely to say that it is valuable to put oneself in the place of others or to try and understand their feelings.” What is to be done? Other Better-Nevers point to research that’s supposed to show that people who read novels develop exceptional empathy. But if reading a lot of novels gave you exceptional empathy, university English departments should be filled with the most compassionate and generous-minded of souls, and, so far, they are not.

One of the things that John Brockman’s collection on the Internet and the mind illustrates is that when people struggle to describe the state that the Internet puts them in they arrive at a remarkably familiar picture of disassociation and fragmentation. Life was once whole, continuous, stable; now it is fragmented, multi-part, shimmering around us, unstable and impossible to fix. The world becomes Keats’s “waking dream,” as the writer Kevin Kelly puts it.

The odd thing is that this complaint, though deeply felt by our contemporary Better-Nevers, is identical to Baudelaire’s perception about modern Paris in 1855, or Walter Benjamin’s about Berlin in 1930, or Marshall McLuhan’s in the face of three-channel television (and Canadian television, at that) in 1965. When department stores had Christmas windows with clockwork puppets, the world was going to pieces; when the city streets were filled with horse-drawn carriages running by bright-colored posters, you could no longer tell the real from the simulated; when people were listening to shellac 78s and looking at color newspaper supplements, the world had become a kaleidoscope of disassociated imagery; and when the broadcast air was filled with droning black-and-white images of men in suits reading news, all of life had become indistinguishable from your fantasies of it. It was Marx, not Steve Jobs, who said that the character of modern life is that everything falls apart.

We must, at some level, need this to be true, since we think it’s true about so many different kinds of things. We experience this sense of fracture so deeply that we ascribe it to machines that, viewed with retrospective detachment, don’t seem remotely capable of producing it. If all you have is a hammer, the saying goes, everything looks like a nail; and, if you think the world is broken, every machine looks like the hammer that broke it.

It is an intuition of this kind that moves the final school, the Ever-Wasers, when they consider the new digital age. A sense of vertiginous overload is the central experience of modernity, they say; at every moment, machines make new circuits for connection and circulation, as obvious-seeming as the postage stamps that let eighteenth-century scientists collaborate by mail, or as newfangled as the Wi-Fi connection that lets a sixteen-year-old in New York consult a tutor in Bangalore. Our new confusion is just the same old confusion.

Among Ever-Wasers, the Harvard historian Ann Blair may be the most ambitious. In her book “Too Much to Know: Managing Scholarly Information Before the Modern Age,” she makes the case that what we’re going through is like what others went through a very long while ago. Against the cartoon history of Shirky or Tooby, Blair argues that the sense of “information overload” was not the consequence of Gutenberg but already in place before printing began. She wants us to resist “trying to reduce the complex causal nexus behind the transition from Renaissance to Enlightenment to the impact of a technology or any particular set of ideas.” Anyway, the crucial revolution was not of print but of paper: “During the later Middle Ages a staggering growth in the production of manuscripts, facilitated by the use of paper, accompanied a great expansion of readers outside the monastic and scholastic contexts.” For that matter, our minds were altered less by books than by index slips. Activities that seem quite twenty-first century, she shows, began when people cut and pasted from one manuscript to another; made aggregated news in compendiums; passed around précis. “Early modern finding devices” were forced into existence: lists of authorities, lists of headings.

Everyone complained about what the new information technologies were doing to our minds. Everyone said that the flood of books produced a restless, fractured attention. Everyone complained that pamphlets and poems were breaking kids’ ability to concentrate, that big good handmade books were ignored, swept aside by printed works that, as Erasmus said, “are foolish, ignorant, malignant, libelous, mad.” The reader consulting a card catalogue in a library was living a revolution as momentous, and as disorienting, as our own. The book index was the search engine of its era, and needed to be explained at length to puzzled researchers—as, for that matter, did the Hermione-like idea of “looking things up.” That uniquely evil and necessary thing the comprehensive review of many different books on a related subject, with the necessary oversimplification of their ideas that it demanded, was already around in 1500, and already being accused of missing all the points. In the period when many of the big, classic books that we no longer have time to read were being written, the general complaint was that there wasn’t enough time to read big, classic books.

Blair’s and Pettegree’s work on the relation between minds and machines, and the combination of delight and despair we find in their collisions, leads you to a broader thought: at any given moment, our most complicated machine will be taken as a model of human intelligence, and whatever media kids favor will be identified as the cause of our stupidity. When there were automatic looms, the mind was like an automatic loom; and, since young people in the loom period liked novels, it was the cheap novel that was degrading our minds. When there were telephone exchanges, the mind was like a telephone exchange, and, in the same period, since the nickelodeon reigned, moving pictures were making us dumb. When mainframe computers arrived and television was what kids liked, the mind was like a mainframe and television was the engine of our idiocy. Some machine is always showing us Mind; some entertainment derived from the machine is always showing us Non-Mind.

Armed with such parallels, the Ever-Wasers smile condescendingly at the Better-Nevers and say, “Of course, some new machine is always ruining everything. We’ve all been here before.” But the Better-Nevers can say, in return, “What if the Internet is actually doing it?” The hypochondriac frets about this bump or that suspicious freckle and we laugh—but sooner or later one small bump, one jagged-edge freckle, will be the thing for certain. Worlds really do decline. “Oh, they always say that about the barbarians, but every generation has its barbarians, and every generation assimilates them,” one Roman reassured another when the Vandals were at the gates, and next thing you knew there wasn’t a hot bath or a good book for another thousand years.

And, if it was ever thus, how did it ever get to be thus in the first place? The digital world is new, and the real gains and losses of the Internet era are to be found not in altered neurons or empathy tests but in the small changes in mood, life, manners, feelings it creates—in the texture of the age. There is, for instance, a simple, spooky sense in which the Internet is just a loud and unlimited library in which we now live—as if one went to sleep every night in the college stacks, surrounded by pamphlets and polemics and possibilities. There is the sociology section, the science section, old sheet music and menus, and you can go to the periodicals room anytime and read old issues of the New Statesman. (And you can whisper loudly to a friend in the next carrel to get the hockey scores.) To see that that is so is at least to drain some of the melodrama from the subject. It is odd and new to be living in the library; but there isn’t anything odd and new about the library.

Yet surely having something wrapped right around your mind is different from having your mind wrapped tightly around something. What we live in is not the age of the extended mind but the age of the inverted self. The things that have usually lived in the darker recesses or mad corners of our mind—sexual obsessions and conspiracy theories, paranoid fixations and fetishes—are now out there: you click once and you can read about the Kennedy autopsy or the Nazi salute or hog-tied Swedish flight attendants. But things that were once external and subject to the social rules of caution and embarrassment—above all, our interactions with other people—are now easily internalized, made to feel like mere workings of the id left on its own. (I’ve felt this myself, writing anonymously on hockey forums: it is easy to say vile things about Gary Bettman, the commissioner of the N.H.L., with a feeling of glee rather than with a sober sense that what you’re saying should be tempered by a little truth and reflection.) Thus the limitless malice of Internet commenting: it’s not newly unleashed anger but what we all think in the first order, and have always in the past socially restrained if only thanks to the look on the listener’s face—the monstrous music that runs through our minds is now played out loud.

A social network is crucially different from a social circle, since the function of a social circle is to curb our appetites and of a network to extend them. Everything once inside is outside, a click away; much that used to be outside is inside, experienced in solitude. And so the peacefulness, the serenity that we feel away from the Internet, and which all the Better-Nevers rightly testify to, has less to do with being no longer harried by others than with being less oppressed by the force of your own inner life. Shut off your computer, and your self stops raging quite as much or quite as loud.

It is the wraparound presence, not the specific evils, of the machine that oppresses us. Simply reducing the machine’s presence will go a long way toward alleviating the disorder. Which points, in turn, to a dog-not-barking-in-the-nighttime detail that may be significant. In the Better-Never books, television isn’t scanted or ignored; it’s celebrated. When William Powers, in “Hamlet’s BlackBerry,” describes the deal his family makes to have an Unplugged Sunday, he tells us that the No Screens agreement doesn’t include television: “For us, television had always been a mostly communal experience, a way of coming together rather than pulling apart.” (“Can you please turn off your damn computer and come watch television with the rest of the family,” the dad now cries to the teen-ager.)

Yet everything that is said about the Internet’s destruction of “interiority” was said for decades about television, and just as loudly. Jerry Mander’s “Four Arguments for the Elimination of Television,” in the nineteen-seventies, turned on television’s addictive nature and its destruction of viewers’ inner lives; a little later, George Trow proposed that television produced the absence of context, the disintegration of the frame—the very things, in short, that the Internet is doing now. And Bill McKibben ended his book on television by comparing watching TV to watching ducks on a pond (advantage: ducks), in the same spirit in which Nicholas Carr leaves his computer screen to read “Walden.”

Now television is the harmless little fireplace over in the corner, where the family gathers to watch “Entourage.” TV isn’t just docile; it’s positively benevolent. This makes you think that what made television so evil back when it was evil was not its essence but its omnipresence. Once it is not everything, it can be merely something. The real demon in the machine is the tirelessness of the user. A meatless Monday has advantages over enforced vegetarianism, because it helps release the pressure on the food system without making undue demands on the eaters. In the same way, an unplugged Sunday is a better idea than turning off the Internet completely, since it demonstrates that we can get along just fine without the screens, if only for a day.

Hermione, stuck in the nineties, never did get her iPad, and will have to manage in the stacks. But perhaps the instrument of the new connected age was already in place in fantasy. For the Internet screen has always been like the palantír in Tolkien’s “Lord of the Rings”—the “seeing stone” that lets the wizards see the entire world. Its gift is great; the wizard can see it all. Its risk is real: evil things will register more vividly than the great mass of dull good. The peril isn’t that users lose their knowledge of the world. It’s that they can lose all sense of proportion. You can come to think that the armies of Mordor are not just vast and scary, which they are, but limitless and undefeatable, which they aren’t.

Thoughts are bigger than the things that deliver them. Our contraptions may shape our consciousness, but it is our consciousness that makes our credos, and we mostly live by those. Toast, as every breakfaster knows, isn’t really about the quality of the bread or how it’s sliced or even the toaster. For man cannot live by toast alone. It’s all about the butter. ♦


Read more: http://www.newyorker.com/arts/critics/atlarge/2011/02/14/110214crat_atlarge_gopnik

Filed under: Article of the Week

How the internet is changing language

‘To Google’ has become a universally understood verb and many countries are developing their own internet slang. But is the web changing language and is everyone up to speed?

The web is a hub of neologisms

In April 2010 the informal online banter of the internet-savvy collided with the traditional and austere language of the courtroom.

Christopher Poole, founder of anarchic image message board 4Chan, had been called to testify during the trial of the man accused of hacking into US politician Sarah Palin’s e-mail account.

During the questioning he was asked to define a catalogue of internet slang that would be familiar to many online, but which was seemingly lost on the lawyers.

At one point during the exchange, Mr Poole was asked to define "rickrolling".

"Rickroll is a meme or internet kind of trend that started on 4chan where users – it’s basically a bait and switch. Users link you to a video of Rick Astley performing Never Gonna Give You Up," said Mr Poole.

"And the term "rickroll" – you said it tries to make people go to a site where they think it is going be one thing, but it is a video of Rick Astley, right?," asked the lawyer.

"Yes."

"He was some kind of singer?"

"Yes."

"It’s a joke?"

"Yes."

The internet prank was just one of several terms including "lurker", "troll" and "caps" that Mr Poole was asked to explain to a seemingly baffled court.

But that is hardly a surprise, according to David Crystal, honorary professor of linguistics at the University of Bangor, who says that new colloquialisms spread like wildfire amongst groups on the net.

"The internet is an amazing medium for languages," he told BBC News.

"Language itself changes slowly but the internet has speeded up the process of those changes so you notice them more quickly."

People using word play to form groups and impress their peers is a fairly traditional activity, he added.

"It’s like any badge of ability, if you go to a local skatepark you see kids whose expertise is making a skateboard do wonderful things.

"Online you show how brilliant you are by manipulating the language of the internet."

Super slang

One example of this is evident in Ukraine, where a written variation of the national tongue has sprung up on internet blogs and message boards called "padronkavskiy zhargon" – in which words are spelled out phonetically.

It is often used to voice disapproval or anger towards another commentator, says Svitlana Pyrkalo, a producer at the BBC World Service Ukrainian Service.

Rickrolling is the redirection of a website address to a video of popstar Rick Astley from 1987

"Computer slang is developing pretty fast in Ukraine," she said.

The Mac and Linux communities even have their own word for people who prefer Microsoft Windows – віндузятники (vinduzyatnyky literally means "Windowers" but the "nyky" ending makes it derogatory).

"There are some original words with an unmistakably Ukrainian flavour," said Ms Pyrkalo.

The dreaded force-quit process of pressing ‘Control, Alt, Delete’ is known as Дуля (dulya).

"A dulya is an old-fashioned Ukrainian gesture using two fingers and a thumb – something similar to giving a finger in Anglo-Saxon cultures," she said.

"And you need three fingers to press the buttons. So it’s like telling somebody (a computer in this case) to get lost."

Word play

For English speakers there are cult websites devoted to cult dialects – "LOLcat", a phonetic and deliberately grammatically incorrect caption that accompanies a picture of a cat, and "Leetspeak", in which some letters are replaced by numbers – a habit that stems from programming culture.

LOLcats have become a 21st Century internet phenomenon
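As a toy illustration of the kind of letter-for-number substitution these dialects play with, here is a short Python sketch; the mapping is an informal example invented for this post, not any agreed Leetspeak standard.

# A toy sketch of Leetspeak-style substitution; the mapping below is an
# informal example, not an agreed standard.
LEET_MAP = str.maketrans({"a": "4", "e": "3", "i": "1", "o": "0", "s": "5", "t": "7"})

def to_leet(text):
    """Replace a handful of letters with look-alike digits."""
    return text.lower().translate(LEET_MAP)

print(to_leet("Leetspeak"))  # prints "l3375p34k"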

"There are about a dozen of these games cooked up by a crowd of geeks who, like anybody, play language games," said Professor Crystal.

"They are all clever little developments used by a very small number of people – thousands rather than millions. They are fashionable at the moment but will they be around in 50 years’ time? I would be very surprised."

For him, the efforts of those fluent in online tongues are admirable.

"They might not be reading Shakespeare and Dickens but they are reading and cooking up these amazing little games – and showing that they are very creative. I’m quite impressed with these movements."

Txt spk

One language change that has definitely been overhyped is so-called text speak, a mixture of often vowel-free abbreviations and acronyms, says Prof Crystal.

"People say that text messaging is a new language and that people are filling texts with abbreviations – but when you actually analyse it you find they’re not," he said.

In fact only 10% of the words in an average text are not written in full, he added.

They may be in the minority but acronyms seem to anger as many people as they delight.

Stephen Fry once blasted the acronym CCTV (closed circuit television) for being "such a bland, clumsy, rhythmically null and phonically forgettable word, if you can call it a word".

But his inelegant group of letters is one of many acronyms to earn a place in the Oxford English Dictionary (OED).

The secret of their success is their longevity.

"We need evidence that people are using a word over a period of time," said Fiona McPherson, senior editor in the new words group at the OED.

She says the group looks for evidence that a word has been in use for at least five years before it can earn its place in the dictionary.

Such evidence comes in the form of correspondence from the public and trawling through dated material to find out when a term first started appearing.

Hence TMI (Too Much Information) and WTF (you may wish to look that one up for yourself) are in, while OMG (Oh My God) has yet to be included in the quarterly dictionary updates.

"Some people get quite exercised and say, ‘do these things belong in our language?’," said Ms McPherson.

"But maybe this has always happened. TTFN [ta ta for now] is from the ITMA (It’s That Man Again) radio series in the 1940s."

Word thief

There is no doubt that technology has had a "significant impact" on language in the last 10 years, says Ms McPherson.

Some entirely new words, like the verb ‘to google’, meaning to look something up on a search engine, and the noun ‘app’, used to describe programmes for smartphones (not yet in the OED), have either been recently invented or come into popular use.

Website internetslang.com lists 5,090 English language acronyms in use.

But the hijacking of existing words and phrases is more common.

Ms McPherson points out that the phrase "social networking" debuted in the OED in 1973. Its definition – "the use or establishment of social networks or connections" – has only comparatively recently been linked to internet-based activities.

"These are words that have arisen out of the phenomenon rather than being technology words themselves," she added.

"Wireless in the 1950s meant a radio. It’s very rare to talk about a radio now as a wireless, unless you’re of a particular generation or trying to be ironic. The word has taken on a whole new significance."

For Prof Crystal it is still too early to fully evaluate the impact of technology on language.

"The whole phenomenon is very recent – the entire technology we’re talking about is only 20 years old as far as the popular mind is concerned."

Sometimes the worst thing that can happen to a word is that it becomes too mainstream, he argues.

"Remember a few years ago, West Indians started talking about ‘bling’. Then the white middle classes started talking about it and they stopped using it.

"That’s typical of slang – it happens with internet slang as well."

 

By Zoe Kleinman, Technology reporter, BBC News

Courtesy: http://www.bbc.co.uk/news/technology-10971949

Filed under: Article of the Week

Is the internet making us stupid?

How we seek breadth of information, and sacrifice depth

By Gary Marshall

http://www.techradar.com/news/internet/is-the-internet-making-us-stupid--673843

Since we came out of the caves, every new technology has been greeted with alarm and disdain.

When we invented fire, people moaned that we’d forget the art of making salads. When we invented the wheel, people moaned that we’d forget how to walk. And when we invented the internet, people moaned that we’d forget how to think.

The difference is, the internet moaners might be right. The 2008 report Information Behaviour of the Researcher of the Future, commissioned by the British Library and the Joint Information Systems Committee, found clear evidence of the negative effects of internet use.

"Deep log studies show that, from undergraduates to professors, people exhibit a strong tendency towards shallow, horizontal, ‘flicking’ behaviour in digital libraries. Society is dumbing down."

If that’s true, things are only going to get worse. The endless amusements of the internet are no longer limited to desktop PCs. Thanks to smartphones, we’re online whenever we’re out and about, too – and convergence means we’ll soon be tweeting from our TVs. So what is browsing doing to our brains?

Pavlov’s blogs

For all our fancy shoes and flat-screen iMacs, it turns out that we’re not that different from Pavlov’s dogs: we race from link to link because our brains have been conditioned to associate novelty with pleasure.

The more we do, the faster we think; the faster we think, the better we feel about ourselves and about the world around us.

In a series of experiments conducted at Harvard and Princeton universities, people were asked to think as quickly as possible by brainstorming ideas, speed-reading things on computer screens or watching video clips on fast-forward.

As Scientific American reports, "Results suggested that thinking fast made participants feel more elated, creative and, to a lesser degree, energetic and powerful. Activities that promote fast thinking, then, such as whipping through an easy crossword puzzle or brainstorming quickly about an idea, can boost energy and mood," says psychologist Emily Pronin, the study’s lead author.

Pronin and her colleagues suggest that we may associate fast thinking with being in a good mood, and that "thinking quickly may unleash the brain’s novelty-loving dopamine system, which is involved in sensations of pleasure and reward".

MMMM… DOPAMINE: Dopamine is a neurotransmitter that makes you feel good

Dopamine is a neurotransmitter, a chemical that’s released whenever we do anything pleasurable such as enjoy food, have sex or take drugs. It’s long been implicated in various forms of addiction and may explain why some people are so keen on risky behaviour such as extreme sports or high-stakes business decisions. It could be the reason why we’re constantly distracted.

Dr Gary Small is a professor of psychiatry at the UCLA Semel Institute, director of the Memory and Aging Research Center and the UCLA Center on Aging, and the author of iBrain: Surviving the Technological Alteration of the Modern Mind. As he explains, what many of us do on our PCs isn’t multitasking. It’s something rather different, which he calls Partial Continuous Attention.

 

GARY SMALL: Dr Gary Small, UCLA, says searching online is a form of brain exercise

"With Partial Continuous Attention or PCA you’re scanning the environment, looking for new bits of information that might tweak your dopamine reward system and be more exciting [than what you’re doing]," he says.

Dr Small and his colleagues at UCLA have found positive results from using technology, particularly with older people. As Dr Small puts it, "Searching online may be a simple form of brain exercise that might be employed to enhance cognition in older adults." But there’s an important caveat.

"The problem is that it tends to create this staccato quality of thought, where you jump from idea to idea as you jump from site to site. You get a lot of breadth of information, but you sacrifice depth."

The British Library study focused purely on scholars – that is, people with an interest in the things they were researching – but even they had magpie minds. "The figures are instructive," the report says.

CIBER STUDY: The British Library’s CIBER study found that short attention spans weren’t just for kids

"Around 60 per cent of e-journal users view no more than three pages and a majority (up to 65 per cent) never return … It’s clear that users are not reading online in the traditional sense, indeed there are signs that new forms of reading are emerging as users ‘power browse’ horizontally through titles, contents pages and abstracts going for quick wins. It almost seems that they go online to avoid reading in the traditional sense."

The British Library study revealed another concern: "The speed of young people’s internet searching indicates that little time is spent in evaluating information, either for relevance, accuracy or authority. Researchers have similarly found young people give a consistent lack of attention to the issue of authority. In one study, many teenagers thought if a site was indexed by Yahoo it had to be authoritative."

Good tech, bad tech

So are we raising a generation of internet-addled kids with zero attention spans? Perhaps not. A study of 3,001 English and Scottish schoolchildren by the National Literacy Trust found that children who blog or post on social networks "have higher literacy levels and greater confidence in writing", with 61 per cent of bloggers and 56 per cent of social networkers claiming to be "good or very good at writing" compared to 47 per cent of non-blogging, non-networking children. "Pupils who write online are more likely to write short stories, letters, song lyrics or a diary," it reports.

ANOTHER STUDY: The National Literacy Trust found that children who write blogs and get involved in social networking tend to be more literate and more likely to write for fun

Technology isn’t good or bad; it just is. When we use it wisely it improves our lives, and the very distractions that ruin our attention span also make us amazingly good at juggling massive amounts of information.

"That’s why we love it and use it," Dr Small says, "because it really enhances our lives … for the most part it’s not going to harm us as far as we know, but I do think there are these subtler effects to which some people are more sensitive.

"Some people do have problems, some people are addicted, and some people find it interferes with their lives. The issue is: how do we maximise the benefits and avoid some of the potential risks?"

 

Courtesy: Techradar.com

Filed under: Article of the Week

50 things that are being killed by the internet

The internet has wrought huge changes on our lives – both positive and negative – in the fifteen years since its use became widespread.

By Matthew Moore
04 Sep 2009

Courtesy: www.telegraph.co.uk

The web is changing the way we work, play and think. Photo: REUTERS

Tasks that once took days can be completed in seconds, while traditions and skills that emerged over centuries have been made all but redundant.

The internet is no respecter of reputations: innocent people have seen their lives ruined by viral clips distributed on the same World Wide Web used by activists to highlight injustices and bring down oppressive regimes.

Below we have compiled – in no particular order – 50 things that are in the process of being killed off by the web, from products and business models to life experiences and habits. We’ve also thrown in a few things that have suffered at the hands of other modern networking gadgets, specifically mobile phones and GPS systems.

Do you agree with our selections? What other examples can you think of? Please post your comments at the bottom of the story – we hope to include the best suggestions in a fuller list.

1) The art of polite disagreement
While the inane spats of YouTube commenters may not be representative, the internet has certainly sharpened the tone of debate. The most raucous sections of the blogworld seem incapable of accepting sincerely held differences of opinion; all opponents must have "agendas".

2) Fear that you are the only person unmoved by a celebrity’s death
Twitter has become a clearing-house for jokes about dead famous people. Tasteless, but an antidote to the "fans in mourning" mawkishness that otherwise predominates.

3) Listening to an album all the way through
The single is one of the unlikely beneficiaries of the internet – a development which can be looked at in two ways. There’s no longer any need to endure eight tracks of filler for a couple of decent tunes, but will "album albums" like Radiohead’s Amnesiac get the widespread hearing they deserve?

4) Sarah Palin
Her train wreck interviews with CBS’s Katie Couric were watched and re-watched millions of times on the internet, cementing the Republican vice-presidential candidate’s reputation as a politician out of her depth. Palin’s uncomfortable relationship with the web continues; she has threatened to sue bloggers who republish rumours about the state of her marriage.

5) Punctuality
Before mobile phones, people actually had to keep their appointments and turn up to the pub on time. Texting friends to warn them of your tardiness five minutes before you are due to meet has become one of the throwaway rudenesses of the connected age.

6) Ceefax/Teletext
All sports fans of a certain age can tell you their favourite Ceefax pages (p341 for Test match scores, p312 for football transfer gossip), but the service’s clunking graphics and four-paragraph articles have dated badly. ITV announced earlier this year that it was planning to pull Teletext, its version.

7) Adolescent nerves at first porn purchase
The ubiquity of free, hard-core pornography on the web has put an end to one of the most dreaded rites of passage for teenage boys – buying dirty magazines. Why tremble in the WHSmiths queue when you can download mountains of filth for free in your bedroom? The trend also threatens the future of "porn in the woods" – the grotty pages of Razzle and Penthouse that scatter the fringes of provincial towns and villages.

8) Telephone directories
You can find Fly Fishing by J R Hartley on Amazon.

9) The myth of cat intelligence
The proudest household pets are now the illiterate butts of caption-based jokes. Icanhasreputashunback?

10) Watches
Scrabbling around in your pocket to dig out a phone may not be as elegant as glancing at a watch, but it saves splashing out on two gadgets.

11) Music stores
In a world where people don’t want to pay anything for music, charging them £16.99 for 12 songs in a flimsy plastic case is no business model.

12) Letter writing/pen pals
Email is quicker, cheaper and more convenient; receiving a handwritten letter from a friend has become a rare, even nostalgic, pleasure. As a result, formal valedictions like "Yours faithfully" are being replaced by "Best" and "Thanks".

13) Memory
When almost any fact, no matter how obscure, can be dug up within seconds through Google and Wikipedia, there is less value attached to the "mere" storage and retrieval of knowledge. What becomes important is how you use it – the internet age rewards creativity.

14) Dead time
When was the last time you spent an hour mulling the world out of a window, or rereading a favourite book? The internet’s draw on our attention is relentless and increasingly difficult to resist.

15) Photo albums and slide shows
Facebook, Flickr and printing sites like Snapfish are how we share our photos. Earlier this year Kodak announced that it was discontinuing its Kodachrome slide film because of lack of demand.

16) Hoaxes and conspiracy theories
The internet is often dismissed as awash with cranks, but it has proved far more potent at debunking conspiracy theories than perpetuating them. The excellent Snopes.com continues to deliver the final, sober, word on urban legends.

17) Watching television together
On-demand television, from the iPlayer in Britain to Hulu in the US, allows relatives and colleagues to watch the same programmes at different times, undermining what had been one of the medium’s most attractive cultural appeals – the shared experience. Appointment-to-view television, if it exists at all, seems confined to sport and live reality shows.

18) Authoritative reference works
We still crave reliable information, but generally aren’t willing to pay for it.

19) The Innovations catalogue
Preposterous as its household gadgets may have been, the Innovations catalogue was always a diverting read. The magazine ceased printing in 2003, and its web presence is depressingly bland.

20) Order forms in the back pages of books
Amazon’s "Customers who bought this item also bought…" service seems the closest web equivalent.

21) Delayed knowledge of sporting results
When was the last time you bought a newspaper to find out who won the match, rather than for comment and analysis? There’s no need to fall silent for James Alexander Gordon on the way home from the game when everyone in the car has an iPhone.

22) Enforceable copyright
The record companies, film studios and news agencies are fighting back, but can the floodgates ever be closed?

23) Reading telegrams at weddings
Quoting from a wad of email printouts doesn’t have the same magic.

24) Dogging
Websites may have helped spread the word about dogging, but the internet offers a myriad of more convenient ways to organise no-strings sex with strangers. None of these involve spending the evening in a lay-by near Aylesbury.

25) Aren’t they dead? Aren’t they gay?
Wikipedia allows us to confirm or disprove almost any celebrity rumour instantly. Only at festivals with no Wi-Fi signals can the gullible be tricked into believing that David Hasselhoff has passed away.

26) Holiday news ignorance
Glancing at the front pages after landing back at Heathrow used to be a thrilling experience – had anyone died? Was the government still standing? Now it takes a stern soul to resist the temptation to check the headlines at least once while you’re away.

27) Knowing telephone numbers off by heart
After typing the digits into your contacts book, you need never look at them again.

28) Respect for doctors and other professionals
The proliferation of health websites has undermined the status of GPs, whose diagnoses are now challenged by patients armed with printouts.

29) The mystery of foreign languages
Sites like Babelfish offer instant, good-enough translations of dozens of languages – but kill their beauty and rhythm.

30) Geographical knowledge
With GPS systems spreading from cars to smartphones, knowing the way from A to B is a less prized skill. Just ask the London taxi drivers who spent years learning The Knowledge but are now undercut by minicabs.

31) Privacy
We may attack governments for the spread of surveillance culture, but users of social media websites make more information about themselves available than Big Brother could ever have hoped to obtain by covert means.

32) Chuck Norris’s reputation
The absurdly heroic boasts on Chuck Norris Facts may be affectionate, but will anyone take him seriously again?

33) Pencil cricket
An old-fashioned schoolboy diversion swept away by the Stick Cricket behemoth.

34) Mainstream media
The Seattle Post-Intelligencer and Rocky Mountain News in the US have already folded, and the UK’s Observer may follow. Free news and the migration of advertising to the web threaten the basic business models of almost all media organisations.

35) Concentration
What with tabbing between Gmail, Twitter, Facebook and Google News, it’s a wonder anyone gets their work done. A disturbing trend captured by the wonderful XKCD webcomic.

36) Mr Tombe’s dignity (originally "Mr Alifi’s dignity"; see the correction below)
Twenty years ago, if you were a Sudanese man who was forced to marry a goat after having sex with it, you’d take solace that news of your shame would be unlikely to spread beyond the neighbouring villages. Unfortunately for Mr Alifi, his indiscretion came in the digital age – and became one of the first viral news stories.
As pointed out in the comments, Mr Alifi was just the goat’s owner. It was another man, Mr Tombe, who actually did the deed. Apologies and thanks to readers for drawing attention to the error. (#51 Unchallenged journalistic inaccuracy?)

37) Personal reinvention
How can you forge a new identity at university when your Facebook is plastered with photos of the "old" you?

38) Viktor Yanukovych
The Orange Revolution in Ukraine was organised by a cabal of students and young activists who exploited the power of the web to mobilise resistance against the old regime, and sweep Viktor Yushchenko to power.

39) The insurance ring-round
Their adverts may grate, but insurance comparison websites have killed one of the most tedious annual chores.

40) Undiscovered artists
Posting paintings to deviantART and Flickr – or poems to writebuzz – could not be easier. So now the garret-dwellers have no excuses.

41) The usefulness of reference pages at the front of diaries
If anyone still digs out their diaries to check what time zone Lisbon is in, or how many litres there are to a gallon, we don’t know them.

42) The nervous thrill of the reunion
You’ve spent the past five years tracking their weight-gain on Facebook, so meeting up with your first love doesn’t pack the emotional punch it once did.

43) Solitaire
The original computer timewaster has been superseded by the more alluring temptations of the web. Ditto Minesweeper.

44) Trust in Nigerian businessmen and princes
Some gift horses should have their mouths very closely inspected.

45) Prostitute calling cards/ kerb crawling
Sex can be marketed more cheaply, safely and efficiently on the web than the street corner.

46) Staggered product/film releases
Companies are becoming increasingly draconian in their anti-piracy measures, but are finally beginning to appreciate that forcing British consumers to wait six months to hand over their money is not a smart business plan.

47) Footnotes
Made superfluous by the link, although Wikipedia is fighting a brave rearguard action.

48) Grand National trips to the bookmaker
Having a little flutter is much more fun when you don’t have to wade through a shop of drunks and ne’er-do-wells.

49) Fanzines
Blogs and fansites offer greater freedom and community interaction than paper fanzines, and can be read by many more people.

50) Your lunchbreak
Did you leave your desk today? Or snaffle a sandwich while sending a few personal emails and checking the price of a week in Istanbul?

 

Filed under: Article of the Week,

Internet Turns 40

A simple data exchange sparked the age of the Internet.

[Image: a rudimentary 1969 concept sketch of ARPANET]

Sept. 2, 2009 — Forty years ago today, two computers at the University of California, Los Angeles, exchanged meaningless data in the first test of the Advanced Research Projects Agency Network (ARPANET), an experimental military network.

This exchange would plant the seed for what would become the most advanced communications network in all of human history: the Internet.

The above photograph features a rudimentary concept sketch of ARPANET illustrated in 1969. By 1970, ARPANET had connected the two coasts of the United States. Three years later, the network went global.

The 1970s also ushered in e-mail and the TCP/IP communications protocols, which allowed multiple networks to connect — and formed the Internet. The ’80s gave birth to an addressing system with suffixes like ".com" and ".org" in widespread use today.

The Internet didn’t become a household word until the ’90s, though, after a British physicist, Tim Berners-Lee, invented the Web, a subset of the Internet that makes it easier to link resources across disparate locations. Meanwhile, service providers like America Online connected millions of people for the first time.

In 2008, the world’s Internet population hit 1.5 billion. At the same time, China overtook the United States in the number of connected users.

No one could have predicted social networking or viral video. Nor could anyone have imagined the economic and political impact that resulted from that simple exchange four decades ago.

Source: Associated Press

Photo Credit: Getty Images

Filed under: Article of the Week,

Happy 20th Birthday, World Wide Web

CERN on March 13 celebrates the 20th anniversary of a proposal entitled, “Information Management: A Proposal,” by Tim Berners-Lee, which would become the blueprint for the World Wide Web.

By Larry Greenemeier in 60-Second Science Blog


Twenty years ago this month, a software consultant named Tim Berners-Lee at the European Organization for Nuclear Research (better known as CERN) hatched a plan for an open computer network to keep track of research at the particle physics laboratory in the suburbs of Geneva, Switzerland. Berners-Lee’s modestly titled “Information Management: A Proposal,” which he submitted to get a CERN grant, would become the blueprint for the World Wide Web.

The Web was not an overnight success. In fact, it took nearly two years before Berners-Lee—with help from CERN computer scientist Robert Cailliau and others—on Christmas Day 1990 set up the first successful communication between a Web browser and server via the Internet. This demonstration was followed by several more years of tireless lobbying by Berners-Lee, now 53, to convince professors, students, programmers and Internet enthusiasts to create more Web browsers and servers that would soon forever change the world of human communication.

On Friday March 13, Berners-Lee, Cailliau and other Web pioneers will gather at CERN to celebrate the 20th anniversary of that original proposal. To get the inside story on how the Web came to be, not to mention the man behind the idea, SciAm.com spoke with Scientific American editor Mark Fischetti, who in 1999 collaborated with Berners-Lee to write Weaving the Web: The Past, Present and Future of the World Wide Web by its Inventor, a seminal work that analyzed and commemorated Berners-Lee’s achievement a decade after the Web’s birth.

[An edited transcript of the interview follows.]

Why was the Web invented at CERN?
Tim Berners-Lee was a software consultant at CERN in the 1980s when he began writing Tangle, an application to help him keep track of CERN’s many scientists, projects and incompatible computers. Thousands of researchers would travel to CERN, do their experiments using their own computers (which they brought with them), and then go home to crunch the data. It was a major pain at CERN to accommodate the many incompatible computers, which also had to work with the CERN mainframe that actually ran the mammoth particle accelerators. Tim was responsible for helping everything and everyone work together. He thought it would be a whole lot simpler if the computers could swap their information directly, even though, at that time, computers didn’t communicate with one another.

March 2009 marks 20 years since Tim Berners-Lee first proposed a project that would become the World Wide Web. What inspired the larger vision?
He made the proposal to CERN management in March 1989 for funding and an official okay to use some of his time to work on this project. But in thinking about solving the incompatibility problem, he realized that it would be even more cool if the scientists, after they went back to their labs, could still share their data. They might even be able to run some of their experiments at CERN over a network from wherever they were located, if the distant CERN computers could talk over the Internet. The Internet itself is just a set of wires and a protocol for sending information over those wires. The Web would be an application that ran on the Internet. It just so happens that the Web turned out to be the killer app of all time. (Other Internet applications already existed, including File Transfer Protocol, or FTP, and e-mail.)

What were the key innovations that formed the Web? Who created them?
The three main innovations are HTTP (hypertext transfer protocol); URLs (universal resource locators, which Tim originally referred to as URIs, for universal resource identifiers); and HTML (hypertext markup language). HTTP allows you to click on a link and be brought to that document or Web page. URLs serve as an address for finding that document or page. And HTML gives you the ability to put links in documents and pages so they connect. Tim created all three of these pieces of software code from October to December of 1990.
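To make the three pieces concrete, here is a minimal sketch added for readers (it is an illustration, not code from the interview or from CERN): a URL names a page, HTTP fetches it, and the HTML that comes back carries the links tying it to other pages. The address is the historic info.cern.ch server mentioned in the facts list below; any reachable page would do.

# Minimal illustration of URL + HTTP + HTML working together (Python standard library only).
from html.parser import HTMLParser
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    """Collects the href targets of <a> tags found in an HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

url = "http://info.cern.ch/"            # URL: the address that names the document
with urlopen(url) as response:           # HTTP: the protocol that fetches it
    page = response.read().decode("utf-8", errors="replace")

collector = LinkCollector()              # HTML: the markup that carries the links
collector.feed(page)
print(collector.links)                   # the documents this page points to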

What’s the best analogy for explaining how the Web works?
Tim likens it to a market economy: anyone can trade with anyone else without having to go to a physical market square to do it. The traders just need to know the rules. The hardest thing for people to grasp about the Web is that it has no center; any computer (or node, in mathematical terms) can link to any other computer directly, without having to go through a central connection point. They just need to know the rules for communicating.

Berners-Lee accessed the first Web page, on the first Web server, using the first Web browser on Christmas Day 1990. Why did it take until 1993 before the public became aware of the creation?

Once Tim and Robert Cailliau established that the Web worked, they wanted to spread the word. After getting CERN to buy in, Tim spent 1991 flying around the world, meeting with people who were interested in hypertext and the Internet and encouraging them to create Web browsers to access what was a growing repository of information on Tim’s CERN computer. He also encouraged enthusiasts to start their own servers. From there, listservs helped spread the word; so did university computer science programs, which saw the coding of browsers and servers as a great way to get students to experiment. (One of the best known of these projects was headed by the University of Illinois’s Marc Andreessen, who would later transform his creation into the Netscape Web browser.) Tim began to get concerned, though, about universities and companies like Microsoft creating their own networks that might compete with the Web, or charging for content, which would violate his core principle: that everyone should be able to communicate freely with everyone else. To stop this from happening, he got management at CERN to release all of his source code under a general license so that any programmer anywhere could use it for free. He thought that if the whole world was building the Web together, no one company could take control of it.

What caused the Web to finally take off?
Tim designed the Web to be a social medium, first, rather than a technical one—a system that would connect people through their computers, and the grassroots building [of the Web] took off because of that. However, the general public didn’t really enter that picture until the mid-1990s, when companies like Netscape and AOL [America Online] commercialized browsers. These companies would snail mail free CDs with their browser software so people would get on the Web, hoping that once they got there, they would discover services the companies offered for a fee, such as e-mail.

Why did Berners-Lee abruptly leave CERN to begin the World Wide Web Consortium at the Massachusetts Institute of Technology in 1994, just as the Web began to rapidly expand?
At that point, the Web was clearly becoming a juggernaut, and commercial forces did indeed threaten those core principles. CERN was not in the business of overseeing Internet systems or applications—it existed to do high-energy physics experiments. Tim couldn’t be the caretaker and stay there, so he moved on to M.I.T.’s Laboratory for Computer Science, which became the host for a new World Wide Web Consortium, where Tim has been ever since.

What has most surprised him about the Web’s evolution?
What surprised Tim most is that for years people were so much more interested in simply browsing for and reading content rather than in creating it. His very first browser—WorldWideWeb—was actually both a browser and an editor. It let you write your own pages, post them online, and edit pages posted by others. But the commercial browsers didn’t offer editing capabilities. This frustrated him for a number of years. The whole point of the Web, to him, was not to just see information but to publish it, too. This didn’t really happen until blogs emerged, followed by sites like Facebook, where people can easily post content.

What does the future hold for the Web, given that the openness that Berners-Lee built into it is continually exploited by miscreants?
It’s hard to implement controls on the Web—because it was created in the ethos of the Internet—in that it’s totally open. But for Tim, confronting issues like privacy and protection of intellectual property is not a matter of a technical fix. First, you need a social fix. If the Web is open to good people, it’s open to bad people, too. The way you deal with security and other problems on the Web is the same way you deal with it in society: You need laws and social conventions that guide people’s behavior. Once those are developed, then the technical ways to implement them can be created.

Facts about web’s creation

First program by Tim Berners-Lee that attempted to link bits of data:
—Enquire, 1980, for Berners-Lee’s personal use as a software consultant at CERN; he later left and the code was lost

Second program:
—Tangle, 1984, when Berners-Lee returned, to help him keep track of CERN’s many scientists, projects and incompatible computers

Early names for the Web:
—Information Mesh, Mine of Information, The Information Mine (But Berners-Lee thought the acronym, TIM, was too egocentric!)

Computer the Web code was written on, and Web browser was designed on:
—NeXT, by NeXT, Inc., founded by Steve Jobs, who had started Apple Computer earlier and returned to it later

Programming language used:
—C

Time taken to write the code:
—Three months

First Web browser:
—Called WorldWideWeb; it could edit Web pages as well as access them; it worked only on the NeXT platform

First server address:
—nxoc01.cern.ch (NeXT, Online Controls, 1), with an alias of info.cern.ch

First full demonstration:
—Christmas Day 1990, operating over the Internet from Berners-Lee’s NeXT machine to the NeXT computer of his office partner and now Web co-developer, Robert Cailliau

Content of first Web page:
—The CERN phone directory

First U.S. Web server:
—April 1991, hosted by the Stanford University Linear Accelerator lab

Hits (pages viewed) on the info.cern.ch server:
August 1991: 100 a day
August 1992: 1,000 a day
August 1993: 10,000 a day

First Web browsers:
WorldWideWeb, December 1990, for the NeXT platform, by Berners-Lee
Erwise, April 1992, for Unix, by students at Helsinki University of Technology
Viola, May 1992, for Unix, by student Pei Wei at the University of California, Berkeley
Samba, summer 1992, for Macintosh, by Robert Cailliau at CERN, finished by intern Nicola Pellow

Notable early servers that showed the Web’s complex capabilities:
—1992, virtual museum of objects in the Vatican, by programmer Frans van Hoesel
—1992, virtual geographic maps, with pan and zoom, by Steve Putz at Xerox PARC

Courtesy: Scientific American

Berners-Lee returns to CERN to reminisce on the Web’s past and focus on its future

By Larry Greenemeier in 60-Second Science Blog

Computer scientists, engineers and journalists converged on the CERN particle physics lab in the suburbs of Geneva, Switzerland, today to pay homage to a piece of paper—several pieces of paper, actually—that together form Tim Berners-Lee’s March 1989 proposal that would come to be the blueprint for the World Wide Web.

Berners-Lee, the one-time CERN software consultant who went on to invent the Web and found the World Wide Web Consortium (W3C) at the Massachusetts Institute of Technology (M.I.T.), began his keynote today commemorating the 20th anniversary of his proposal with a copy of his now-famous document in hand. “I wrote it 20 years ago, 20 years ago nothing happened,” he said, referring to the seven months the proposal languished on his supervisor’s desk before, in September of that year, he was given money to buy some computers and pursue his idea. (For more coverage of the Web’s 20th anniversary, see Scientific American.com‘s in-depth report.)

The Web came to life on Christmas day 1990 and grew exponentially from that moment on. One of the reasons for its phenomenal success was Berners-Lee’s insistence that there be one Web for everyone to use, regardless of the type of computer, software program or documents they were using. For years, he says, he was concerned that the original Web would split into many specialized webs, such as one for academia and another for businesses. But in the end his vision prevailed. “Universality,” he said, “that was the rule, and it worked.”

In a speech that touched on the past but also emphasized the Web’s future (including its potential benefits as well as dangers), Berners-Lee pointed out that there are 100 billion Web pages today, roughly the same as the number of neurons in the human brain. The difference, he added, is that the number of pages grows as the Web ages, whereas the number of nerve cells shrinks as we get on in years.

“One of the dangers of celebrating anything is having people look back and [focus on] what we did,” he said today. “But the rate of creative new design on the Web is getting faster and faster. The Web is not done; it’s just the tip of the iceberg. I’m convinced that the things that are going to happen will rock the boat even more.”

One of the W3C’s priorities is promoting access to the Web via mobile devices, phones in particular. “There are more browsers on phones than on laptops, by a long shot,” Berners-Lee said. Another major priority is maintaining the universality that he had in mind when he envisioned the Web. “Eighty percent of people can’t access the Web,” he said, because, among other reasons, they can’t get a connection or the Web pages they access are written in a language they can’t read.

As the Web grows, so do concerns about the confidentiality of private information on it. The main worry is that a hacker will access and use personal information such as credit card, bank account or Social Security numbers. One possible way to avoid this, says Berners-Lee, is to specify how the data may be used. “As the data is moved around,” he said, “the appropriate use is tagged along with it.” The tag, or set of instructions accompanying the data, would prevent it from being misused.
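As a rough illustration of that idea (nothing here is an actual standard; the use categories and names are invented for the example), a tag listing the allowed uses could travel with the data, and software would check the tag before acting on it:

# Purely illustrative sketch of "the appropriate use is tagged along with the data".
from dataclasses import dataclass, field

@dataclass
class TaggedData:
    value: str
    allowed_uses: set = field(default_factory=set)  # the tag that travels with the data

def use(data: TaggedData, purpose: str) -> str:
    """Honour the tag: refuse any purpose the data was not released for."""
    if purpose not in data.allowed_uses:
        raise PermissionError(f"'{purpose}' is not an allowed use of this data")
    return data.value

card = TaggedData("4111-1111-1111-1111", allowed_uses={"payment"})
print(use(card, "payment"))   # permitted
# use(card, "marketing")      # would raise PermissionError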

This suggestion is part of Berners-Lee’s vision for a “Semantic Web” that would be easier to surf than the Web is today. In the Semantic Web, search engines would focus more on finding the information you’re looking for, rather than simply locating Web pages that might contain that information. One way to do this, he said, would be to change the way new data is added to the Web so that it can be immediately linked to other data, making it easier to find. W3C is working on a way to do this through its Linked Open Data project, a key component of the Semantic Web.

An example of how the Semantic Web would work, which the W3C has posted to its Web site: you could populate your Web-based appointment calendar (such as those offered by Google or Yahoo) with appointments as well as with other dated information to which you have access (such as bank statements and digital photos).
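A toy sketch may make the linked-data idea clearer. The snippet below is purely illustrative (real Linked Open Data uses RDF and real vocabularies; these identifiers are made up): once appointments, bank statements and photos from different sources all carry the same date property, a single query can gather everything for a given day onto one calendar.

# Illustrative (subject, property, value) triples from three imaginary sources.
from datetime import date

triples = [
    ("appt:dentist",      "schema:date",  date(2009, 3, 20)),
    ("bank:statement-07", "schema:date",  date(2009, 3, 20)),
    ("photo:img_0042",    "schema:date",  date(2009, 3, 13)),
    ("photo:img_0042",    "schema:place", "CERN Globe, Geneva"),
]

def items_on(day):
    """Return every subject whose date property matches the given day."""
    return [s for s, p, v in triples if p == "schema:date" and v == day]

print(items_on(date(2009, 3, 20)))  # ['appt:dentist', 'bank:statement-07']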

Image of CERN’s Globe building, where Tim Berners-Lee and others celebrated on March 13, by Jim Shank via Flickr

Courtesy: Scientific American


Filed under: Article of the Week,

Article of the week


Ripples at sea 

Damage to under-sea cables and disruption in Internet connectivity raise worrisome questions.

Once is happenstance, twice is coincidence, the third time is enemy action.

Goldfinger, the James Bond villain

R.K.Raghavan

Many of us literally worship the Internet for the amazing speed with which it responds when we either need information or want to communicate expeditiously with someone on a distant continent. It is a reliable friend who rarely lets us down in an emergency. This reputation for high dependability, however, gets a beating once in a while due to an intervention by Nature or by man’s own proclivity to abuse all of the world’s gifts to him.

We know that earthquakes often cause a problem to Internet connectivity. A case in point was the dislocation caused in parts of Asia in December 2006 by an undersea earthquake off the coast of Taiwan. This was taken in its stride as a natural phenomenon. Four recent incidents leading to widespread Internet disruption are, however, a greater cause for concern because there is no conclusive view yet with regard to what triggered them. They should provoke a renewed debate on how secure this medium is and how it can be protected from mischief.

Instances of outage

In the first case, the damage suffered by two under-sea cables in the Mediterranean on January 30 led to an unprecedented Internet failure in most of West Asia and parts of India, Sri Lanka and Pakistan. This happened off the coast of Alexandria, and the affected cables, through which nearly 90 per cent of the data traffic through the Suez flows, were within kilometres of each other, strengthening the surmise that a single event was responsible. Speculation was that a few cuts in the fibre-optic cables connecting Europe with Egypt led to the outage. It took a few days for the service to be restored to about 75 million people faced with loss of transmission.

Before those wedded to the Net could recover from the horror that this communication failure was, there was the report of a problem in two additional cables. These ran between an island in Qatar and another in the UAE. The disruption caused here was relatively low because one of the affected cables catered only to regional needs, and the other was just a redundant strand of fibre. There is no corroboration to a first report that Iran had been badly hit by this. In fact, very recent reports carried by The Economist (February 7, 2008) suggest otherwise. Whatever be the case, the two disruptions, coming close to each other, showed how fragile the Internet was.

A third happening was the going down of a cable between Qatar and the UAE on February 3. There has been no controversy surrounding the fourth, because here the operator himself took the network off because of a power failure.

What was the root cause of trouble in the first three incidents? There are several speculations, some rational and the others a little too wild for acceptance.

In the first case, an early report suggested that the damage to cables was from the anchors of ships passing through the waters in the area. A spokesperson of the company that owned the cable said that, for some unexplained reasons, ships here were asked to anchor at a spot different from the usual one on the day of the mishap, and this possibly accounted for the cuts seen on the cables. When the second incident took place two days later, experts were not all that sure that it was ships that were the villain. Actually, according to an Egyptian government spokesman, no ships were in the area at the time of the damage to the cable.

A theory that quickly started floating around hinted at sabotage, and a number of bloggers were active in propagating this. The needle of suspicion was on terrorists.

This is countered by some observers, who say that terrorists did not have much to gain from such an attack. Nor did they have the kind of equipment needed to cut the cables in question. These were at best surmises which we cannot wholly go by.

According to one observer, it was quite possible that the US Navy was active in the area, trying to tap the undersea fibre-optic cable for intelligence purposes. This is rejected by experts who claim that it is difficult to tap such cables because they do not leak radio frequency signals. Most of us are ignorant on the subject, and we have to meekly submit ourselves to be confused!

What is more persuasive, however, is the information furnished by Global Marine Systems (quoted again by Economist), a firm in the business of marine cable repairs, that damage to undersea cables is a common occurrence, and that in the Atlantic alone there were 50 instances last year.

These occurrences in West Asia cannot go undebated worldwide. Thousands of cables crisscross the oceans and provide the lifeline for modern communication.

Protecting them from routine maritime traffic is one thing, and guarding them from spy agencies and terrorists is an entirely different proposition. The logistics are forbidding.

This is somewhat analogous to the nagging question that anti-terrorist agencies keep wrestling with: How does one ensure that the huge containers that arrive in thousands from different parts of the globe at large ports can be scanned to eliminate the scope for introducing explosives and similar devices? Technology is improving but not as fast as law enforcement would wish. For terrorists, disrupting Internet connectivity is not such a great priority. But it still offers scope to throw modern routine into chaos and disarray, if not fear, their principal objective.

The writer is a former CBI Director who is currently Adviser (Security) to TCS Ltd.

Filed under: Article of the Week,


S. L. FAISAL
Librarian
Kendriya Vidyalaya (Shift-I)
Pattom
Thiruvananthapuram-695 004
Kerala India

Mail: librarykvpattom at gmail.com