Saturday 7 January 2006

Is the Internet Safe?

Posted by Emily Listiane at 08:57

From Legal Affairs:

The Internet was a digital backwater in 1988, and many people might assume that, while computer viruses are still an inconvenience, computers are vastly more secure than they were back in the primitive days when Morris set his worm loose. But although hundreds of millions of personal computers are now connected to the Internet and ostensibly protected by firewalls and antivirus software, our technological infrastructure is in fact less secure than it was in 1988. Because the current computing and networking environment is so sprawling and dynamic, and because its ever-more-powerful building blocks are owned and managed by regular citizens rather than technical experts, our vulnerability has increased substantially with the heightened dependence on the Internet by the public at large. Well-crafted worms and viruses routinely infect vast swaths of Net-connected personal computers. In January 2003, for instance, the Sapphire/Slammer worm attacked a particular kind of Microsoft server and infected 90 percent of those servers—around 120,000 servers in total—within 10 minutes. In August 2003, the "sobig.f" virus managed, within five days of its release, to account for approximately 70 percent of worldwide e-mail traffic; it deposited 23.2 million virus-laden e-mails on AOL's doorstep alone. In May 2004, a version of the Sasser worm infected more than half a million computers in three days. If any of these pieces of malware had been truly "mal"—for example, programmed to erase hard drives or to randomly transpose numbers inside spreadsheets or to add profanity at random intervals to Word documents found on infected computers—nothing would have stood in the way.

In the absence of a fundamental shift in current computing architecture or practices, most of us stand at the mercy of hackers whose predilections to create havoc have so far fallen short of their casually obtained capacities to ruin our PCs. In an era in which an out-of-the-box PC can be compromised within a minute of being connected to the Internet, such self-restraint is a thin reed on which to rest our security. It is plausible that in the next few years, Internet users will experience a September 11 moment—a system-wide infection that does more than create an upward blip in Internet data traffic or cause an ill-tempered PC to be restarted more often than usual.

How might such a crisis unfold? Suppose that a worm is released somewhere in Russia, exploiting security flaws in a commonly used web server and in a web browser found on both Mac and Windows platforms. The worm quickly spreads through two mechanisms. First, it randomly "knocks" on the doors of Internet-connected machines, immediately infecting the vulnerable web servers that answer. Unwitting consumers, using vulnerable web browsers, visit the infected servers, which infect users' computers. Compromised machines are completely open to instruction by the worm, and some worms ask the machines to remain in a holding pattern, awaiting further direction. Computers like this are known, appropriately enough, as "zombies." Imagine that our worm asks its zombies to look for other nearby machines to infect for a day or two and then tells the machines to erase their own hard drives at the stroke of midnight. (A smart virus would naturally adjust for time zones to make sure the collective crash took place at the same time around the globe.)
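
To make the scanning dynamic concrete, here is a toy simulation in Python of a random-scanning worm. Every parameter in it (the size of the address space, the fraction of vulnerable hosts, the scan rate) is invented for illustration and does not describe any real worm:

    import random

    ADDRESS_SPACE = 1_000_000      # hypothetical number of reachable addresses
    VULNERABLE_FRACTION = 0.05     # hypothetical share running the flawed server
    SCANS_PER_TICK = 10            # hypothetical probes per infected host per step

    vulnerable = set(random.sample(range(ADDRESS_SPACE),
                                   int(ADDRESS_SPACE * VULNERABLE_FRACTION)))
    infected = {next(iter(vulnerable))}   # a single initial infection

    for tick in range(1, 31):
        newly_infected = set()
        # each infected host "knocks" on a batch of random addresses
        for _ in range(len(infected) * SCANS_PER_TICK):
            target = random.randrange(ADDRESS_SPACE)
            if target in vulnerable and target not in infected:
                newly_infected.add(target)
        infected |= newly_infected
        print(f"tick {tick:2d}: {len(infected):6d} hosts infected")
        if len(infected) == len(vulnerable):
            break

Even with these modest made-up numbers, the infected count grows roughly exponentially until the vulnerable population is exhausted, which is why a real outbreak can saturate its target population in minutes rather than days.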

This is not science fiction. It is merely a reapplication of the template of the Morris episode, a template that has been replicated countless times. The Computer Emergency Response Team Coordination Center, formed in the wake of the Morris worm, took up the task of counting the number of security incidents each year. The increase in incidents since 1997 has been roughly geometric, doubling nearly every year through 2003. CERT/CC announced in 2004 that it would no longer keep track of the figure, since attacks had become so commonplace and widespread as to be indistinguishable from one another.

Combine one well-written worm of the sort that can evade firewalls and antivirus software with one truly malicious worm-writer, and we have the prospect of a networked meltdown that can blight cyberspace and spill over to the real world: no check-in at some airline counters; no overnight deliveries or other forms of package and letter distribution; the inability of payroll software to produce paychecks for millions of workers; the elimination, release, or nefarious alteration of vital records hosted at medical offices, schools, town halls, and other data repositories that cannot afford a full-time IT staff to perform backups and ward off technological demons.

...

A profoundly fortuitous convergence of historical factors has led us to today's marvelous status quo, and many of us (with a few well-known exceptions like record company CEOs and cyber-stalkees) have enjoyed the benefits of the generative Internet/PC grid while being at most inconvenienced by its drawbacks. Unfortunately, this quasi-utopia can't last. The explosive growth of the Internet, both in amount of usage and in the breadth of uses to which it can be put, means we now have plenty to lose if our connectivity goes seriously awry. The same generativity that fueled this growth poses the greatest threat to our connectivity. The remarkable speed with which new software from left field can achieve ubiquity means that well-crafted malware from left field can take down Net-connected PCs en masse. In short, our wonderful PCs are fundamentally vulnerable to a massive cyberattack.

To link to the Internet, online consumers have increasingly been using always-on broadband, and connecting with ever more powerful computers—computers that are therefore capable of creating far more mischief should they be compromised. For example, many viruses and worms now do more than propagate, even if they fall short of triggering PC hard drive erasure. Take, for instance, the transmission of spam. It is now commonplace to find viruses capable of turning a PC into its own Internet server, sending spam by the thousands or millions to e-mail addresses harvested from the hard disk of the machine itself or from randomized Web searches—all of this happening in the background, with the PC's owner noticing no difference in the machine's behavior.
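
To see how little sophistication the harvesting step requires, here is a minimal sketch; the directory path, the file pattern, and the address regex are assumptions made for illustration, not code taken from any actual virus:

    import re
    from pathlib import Path

    EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

    def harvest(root):
        """Collect every string that looks like an e-mail address under root."""
        found = set()
        for path in Path(root).rglob("*.txt"):   # illustrative: plain-text files only
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            found.update(EMAIL_RE.findall(text))
        return found

    print(len(harvest("./sample_documents")))    # hypothetical directory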

In an experiment conducted in the fall of 2003, a researcher named Luke Dudney connected to the Internet a PC that simulated running an "open proxy," a condition in which a PC acts to forward Internet traffic from others. Within nine hours, the computer had been found by spammers, who began attempting to send mail through it. In the 66 hours that followed, they requested that Dudney's computer send 229,468 individual messages to 3,360,181 would-be recipients. (Dudney's computer pretended to forward the spam, but threw it away.)
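
A modern reader can reproduce the flavor of that experiment in a few lines of code. The sketch below is not Dudney's setup, only a guess at its simplest form: it listens on a port, reads whatever a would-be spammer sends, and throws the data away while keeping a tally. The port number and logging format are assumptions:

    import socket
    import datetime

    HOST, PORT = "0.0.0.0", 8080   # hypothetical listening address

    def run_honeypot():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
            server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            server.bind((HOST, PORT))
            server.listen()
            while True:
                conn, addr = server.accept()
                received = 0
                with conn:
                    conn.settimeout(10)
                    try:
                        while chunk := conn.recv(4096):
                            received += len(chunk)   # count the bytes, then discard them
                    except socket.timeout:
                        pass
                print(f"{datetime.datetime.now().isoformat()} "
                      f"{addr[0]} sent {received} bytes (discarded)")

    run_honeypot()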

A massive set of always-on powerful PCs with high bandwidth run by unskilled users is a phenomenon new to the 21st century. Today's viruses are highly and near-instantly communicable, capable of sweeping through a substantial worldwide target population in a matter of hours. The symptoms may reveal themselves to users instantly, or the virus could spread for a while without demonstrating any symptoms, at the choice of the virus author. Even protected systems can fall prey to a widespread infection, since the propagation of a virus can disrupt network connectivity. Some viruses are programmed to attack specific network destinations by seeking to access them again and again. Such a "distributed denial-of-service" attack can disrupt access to all but the most well-connected and well-defended servers.
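
The arithmetic behind such an attack is depressingly simple. The figures below are hypothetical, chosen only to show how modest per-machine traffic adds up across a large pool of zombies:

    ZOMBIES = 100_000                  # hypothetical number of compromised PCs
    REQUESTS_PER_ZOMBIE_PER_SEC = 5    # easy to sustain on a broadband link
    SERVER_CAPACITY_PER_SEC = 20_000   # hypothetical requests/sec the target can serve

    attack_rate = ZOMBIES * REQUESTS_PER_ZOMBIE_PER_SEC
    print(f"incoming: {attack_rate:,} req/s vs capacity: {SERVER_CAPACITY_PER_SEC:,} req/s")
    print(f"overload factor: {attack_rate / SERVER_CAPACITY_PER_SEC:.0f}x")

Under these made-up numbers the target faces twenty-five times the load it can serve; only the best-provisioned and best-defended sites can absorb that kind of multiple.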


And the punchline of the article:

THE MODERN INTERNET IS AT A WATERSHED MOMENT. Its generativity, and that of the PC, has produced extraordinary progress in the development of information technology, which in turn has led to extraordinary progress in the development of forms of creative and political expression. Regulatory authorities have applauded this progress, but many are increasingly concerned by its excesses. To them, the experimentalist spirit that made the most of this generativity seems out of place now that millions of business and home users rely on the Internet and PCs to serve scores of functions vital to everyday life.

The challenge facing those interested in a vibrant global Internet is to maintain that experimentalist spirit in the face of these pressures.

One path leads to two Internets: a new, experimentalist one that would restart the generative cycle among a narrow set of researchers and hackers, and that would be invisible and inaccessible to ordinary consumers; and a mainstream Internet where little new would happen and existing technology firms would lock in and refine existing applications.

Another, more inviting path would try to maintain the fundamental generativity of the existing grid while solving the problems that tend to incite the enemies of the Internet free-for-all. It requires making the grid more secure—perhaps by making some of the activities to which regulators most object more regulable—while continuing to enable the rapid deployment of the sort of amateur programming that has made the Internet such a stunning success.

How might this be achieved? The starting point for preserving generativity in this new computing environment should be to refine the principle of "end-to-end neutrality." This notion, sacred to Internet architects, holds that the Internet's basic purpose is to indiscriminately route packets of data from point A to point Z, and that any added controls or "features" typically should be incorporated only at the edges of the network, not in the middle. Security, encryption, error checking—all these actions should be performed by smart PCs at the "ends" rather than by the network to which they connect. This is meant to preserve the flexibility of the network and maximum choice for its users.
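
The principle is easier to see in miniature. In the sketch below, the "network" does nothing but move bytes, while the two ends supply their own integrity check; the function names and the choice of SHA-256 are illustrative, not part of any standard:

    import hashlib

    def network_forward(packet):
        """A dumb network: routes bytes from point A to point Z without inspecting them."""
        return packet

    def send(payload):
        """Sender endpoint: attach a checksum so integrity is handled at the ends."""
        return hashlib.sha256(payload).digest() + payload

    def receive(packet):
        """Receiver endpoint: verify the checksum the network never looked at."""
        digest, payload = packet[:32], packet[32:]
        if hashlib.sha256(payload).digest() != digest:
            raise ValueError("payload corrupted in transit")
        return payload

    message = receive(network_forward(send(b"hello, end to end")))
    print(message.decode())

Because the forwarding layer stays ignorant of what it carries, the ends remain free to add, change, or drop features without asking the network's permission—which is exactly the flexibility the end-to-end argument is meant to preserve.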

...

Collaborative is the key word. What is needed at this point, above all else, is a 21st century international Manhattan Project which brings together people of good faith in government, academia, and the private sector for the purpose of shoring up the miraculous information technology grid that is too easy to take for granted and whose seeming self-maintenance has led us into an undue complacence. The group's charter would embrace the ethos of amateur innovation while being clear-eyed about the ways in which the research Internet and hobbyist PC of the 1970s and 1980s are straining under the pressures of serving as the world's information backbone.

The transition to a networking infrastructure that is more secure yet roughly as dynamic as the current one will not be smooth. A decentralized and, more important, exuberantly anarchic Internet does not readily lend itself to collective action. But the danger is real and growing. We can act now to correct the vulnerabilities and ensure that those who wish to contribute to the global information grid can continue to do so without having to occupy the privileged perches of established firms or powerful governments, or conduct themselves outside the law.

Or we can wait for disaster to strike and, in the time it takes to replace today's PCs with a 21st-century Mr. Coffee, lose the Internet as we know it.
