SCN: Security

Steve steve at advocate.net
Thu Aug 30 07:19:04 PDT 2001


x-no-archive: yes

=============================


Every security researcher and every Net user who happens to find a 
security flaw is vulnerable. The witness stand could be only a 
mouse-click away.  


(Damien Cave, Salon)---Brian K. West simply wanted to see how his 
company's advertisement would look in the online edition of the 
Poteau Daily News & Sun, his local Oklahoma newspaper. But while 
trying to create a mockup, he discovered a security flaw that let him 
put the ad on the actual home page of the newspaper. No password 
or permission was required. In fact, anyone with Microsoft's 
FrontPage -- a Web site development program used to create the 
newspaper's Web pages -- could go in and redesign at will, 
wreaking havoc on the home page's structure, color and text.  

West, a 24-year-old sales and support employee of a nearby 
Internet service provider, didn't put his ad on the page or make any 
of these changes. He downloaded some files, apparently to verify 
the hole, then called the newspaper's editor in chief to let him know 
that his Web site wasn't secure -- that anyone could get in and "edit 
your stories."  

But instead of thanking him, the suspicious editor contacted the 
police, setting in motion a chain of events that would lead to an 18-
month FBI investigation and an invitation to appear before a grand 
jury Sept. 5.  

In the community of hackers, details like these would ordinarily 
have earned West immediate treatment as a hero, a well-meaning 
altruist trapped by an undiscriminating justice system. Protests 
could have been scheduled, money raised. Like the 
recently indicted Russian programmer Dmitry Sklyarov, accused of 
illegally distributing code that unlocks electronic books, West might 
have become a poster child for reforms to laws that, according to 
critics, treat security research as a crime rather than a virtuous act 
of science.  

But even though charges have not yet been filed, West is not getting 
the hacker hero treatment. The reason? According to court 
documents, West didn't just warn the Poteau Daily News about the 
hole; among the items he downloaded were files containing source 
code and passwords for the proprietary software that the 
newspaper's editors used to post stories from remote locations. It 
was only a beta version, and it's not clear whether West knew what 
he was downloading, but because the newspaper bought the 
software from an Internet service provider that was a competitor to 
West's company, the act itself did much to tarnish West's "good 
Samaritan" image.  

So, instead of becoming an icon, a victim and a martyr, he's 
a lightning rod for debate. Hundreds of people have written to the 
U.S. attorney in charge of the case since Aug. 17, when an 
abbreviated version of West's story appeared on the geek news site 
LinuxFreak.org. And while the prosecutor and West's lawyers 
exchange responses to the public outcry -- the latest volley 
appeared last Friday -- heavyweights in the world of security don't 
know what to make of West's actions. Some, like Richard M. Smith, 
CTO of the Privacy Foundation, argue that West went too far, while 
others argue that West "is just a guy who found a flaw and tried to 
fix it," as cryptography expert Bruce Schneier puts it. Even if he 
poked around a bit, these defenders say, he shouldn't be treated 
like a criminal. "The punishment doesn't fit the crime," Schneier 
says.  

The debate itself is not new. It's been almost 20 years since 
hackers, geeks and lawmakers first started struggling with the 
question of how software vulnerabilities should be handled. Hackers 
-- as distinguished from crackers, who break and enter computer 
systems for purposes of profit or destruction -- have long argued 
that by pointing out security holes in software they are doing a 
public service. The companies that are the targets of hacker 
explorations, and the vendors of software found to be 
vulnerable, often disagree, seeing hacker activity as illegal 
trespassing or worse. It's a tension that is at the core of hacker life; 
one could even argue that the "public service" theory is, at least in 
part, a rationalization aimed at justifying the results of hacker 
curiosity.  

But even though the debate is old, the stakes keep rising. The laws 
as currently written are unfriendly to "unauthorized access," 
regardless of intent. The Digital Millennium Copyright Act 
(DMCA), passed in 1998, made it illegal to do so much as reveal 
how copyright controls can be circumvented, further upping the 
ante for those 
who like tinkering with other people's software. But while high-profile 
cases such as Sklyarov's and the DeCSS lawsuit wend their way 
through the courts, few experts in the technology community have 
offered clear alternatives that can be applied in the real world.  

There's still not an accepted set of guidelines for how people like 
West should proceed -- and that's "a serious problem," says 
Jennifer Granick, a San Francisco attorney who regularly defends 
hackers. Until consensus is reached -- which won't be easy, she 
says -- West's mistakes are destined to be repeated. Every security 
researcher and every Net user who happens to find a security flaw 
is vulnerable. The witness stand could be only a mouse-click away.  

Today's discussion of Internet security can be traced at least as far 
back as Robert Tappan Morris. In 1988, the 23-year-old doctoral 
student at Cornell released a 99-line program that ate its way 
through the Internet, propagating uncontrollably and slowing data 
transmission across the network nearly to a halt. In response to the 
unexpected shock, DARPA (the Defense Advanced Research 
Projects Agency), a federal agency that oversaw the Net, formed a 
group of experts who could coordinate responses to worms like 
Morris'.  

The group soon called itself CERT -- for Computer Emergency 
Response Team -- and the plan it came up with seemed simple. 
People were supposed to send information on vulnerabilities to the 
group; CERT would then verify that the hole existed and alert the 
vendor. Publishing only occurred once the vendor plugged the hole.  

CERT still maintains the procedure, but after a few years, people 
started to rebel. "There were three main complaints," writes 
Schneier in an essay on the issue of publicizing vulnerabilities. 
"First, CERT got a lot of vulnerabilities reported to it, and there were 
complaints about CERT being slow in verifying them. Second, the 
vendors were slow about fixing the vulnerabilities once CERT told 
them. And third, CERT was slow about publishing reports even after 
the fixes were implemented."  

Hackers who spotted vulnerabilities weren't the only ones unhappy 
with CERT's lack of speed. The larger community of computer 
scientists -- in particular, the systems administrators and 
security specialists responsible for keeping networks safe and 
reliable -- also chafed at the ponderous pace. By the time a 
vendor plugged a hole in its software, a great deal of mischief could 
already have occurred.  

Frustration with CERT led to what's now called "the full-disclosure 
movement" -- based on the hacker-friendly philosophy that more 
information is always better. Scott Chasin led the way, creating a 
mailing list in 1993 called Bugtraq that promised to publish 
vulnerabilities regardless of vendor response. Bugtraq's policies 
led to friction with software vendors. Not only do software 
companies detest the bad publicity associated with news reports 
announcing serious problems in their products, but they are 
also wont to argue that publicizing a breach before a fix is available 
is tantamount to inviting a horde of juvenile delinquents to rummage 
through your unlocked home.  

But "the environment at that time was such that vendors weren't 
making any patches," says Elias Levy, an early Bugtraq subscriber 
who has moderated the list since 1996. "So the focus was on how to 
fix software that companies weren't fixing."  

Only a few hundred people signed up at first. By 1996, the list 
had just 2,000 subscribers.  

But the messy dangers of security research hit home while Bugtraq 
was just getting started. In 1993, Randal Schwartz, an independent 
contractor working for Intel, decided to run a program that tested the 
strength of passwords on the company's network. The program 
(called Crack) found 48 "weak" passwords (words that would be easy 
to guess), but Schwartz was hardly rewarded for his vigilance. 
Instead, he became the target of a criminal investigation, at the 
direct request of his own employer. An indictment came down in 
1994, and in 1995 an Oregon judge sentenced him to 480 hours of 
community service, five years of probation, 90 days in jail and 
$68,471.45 in restitution. The Oregon Court of Appeals eventually 
suspended the jail time and reversed the restitution order, but 
upheld all the convictions.  

"I'm now a triple felon for merely wanting to help my main client of 
five years, by running a simple tool to gather evidence that another 
group within the company was not providing the minimum company-
mandated standard level of protection," Schwartz says. "This is 
crazy. All I wanted to do was help."  
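
For readers unfamiliar with how a tool like Crack operates: it 
hashes candidate dictionary words with the same algorithm the 
system uses to store passwords, then compares the results against 
the stored hashes; any match flags a weak password. A minimal 
Python sketch of the idea -- the hash function, wordlist and 
password file below are illustrative assumptions, not Crack's 
actual implementation, which targeted salted Unix crypt(3) hashes:

    import hashlib

    def digest(word):
        # Hash a candidate the way the system stores passwords.
        # SHA-256 is a stand-in here to keep the sketch portable.
        return hashlib.sha256(word.encode()).hexdigest()

    # Hypothetical password file: username -> stored hash.
    stored = {
        "alice": digest("password"),
        "bob": digest("h7#Qz!9r"),
    }

    # A short dictionary of easily guessed words.
    wordlist = ["password", "letmein", "qwerty", "123456"]

    def find_weak(stored_hashes, candidates):
        # Return (user, guess) pairs where a dictionary word
        # hashes to the same value as a stored password.
        return [(user, word)
                for user, h in stored_hashes.items()
                for word in candidates
                if digest(word) == h]

    print(find_weak(stored, wordlist))  # [('alice', 'password')]

The tool only confirms what any attacker with the password file 
could already work out; the legal question in Schwartz's case was 
whether running it without explicit authorization was itself a crime.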

Then, Internet mania struck. With millions coming online, dot-coms 
appearing out of thin air and Web-based services like Hotmail 
growing exponentially, the security environment radically changed. 
More holes appeared and more people found them. Today, Bugtraq 
counts 46,000 subscribers, many of them journalists who spread 
news of vulnerabilities to millions.  

The expanded attention at Bugtraq and other places on the Net has 
fueled the already heated debate. The discussion that had once 
taken place in the equivalent of a small theater has now moved into 
a cacophonous coliseum. Some maintain that those who exploit a 
vulnerability in order to prove that it exists are violating property 
rights. Others follow CERT's moderate stance, arguing that testing a 
hole is fine as long as the tester tells the vendor about the hole 
and keeps the vulnerability private.  

At the other end of the spectrum sit those who take a more 
libertarian line. They argue that ferreting out vulnerabilities -- by any 
means possible -- is the best way to keep them from forming in the 
future. Some diehards even declare that high-profile crackers like 
Kevin Mitnick -- the notorious computer expert who spent five years 
in jail for illegally accessing corporate networks -- should be lauded 
as heroes, cyber-investigators who showed the world how fragile 
networks could be.  

"These problems are complex and ambiguous," says Smith of the 
Privacy Foundation.  

"It's an extremely difficult issue," adds Schneier, echoing the 
sentiments of other security experts. "The more I look at it, the 
harder it seems to get."  

West's case sidesteps a few of these difficulties. He didn't attempt 
to publish the vulnerability at the Poteau Daily News, and, according 
to his lawyer, didn't intentionally copy valuable security software as 
Mitnick did.  

But his case is powerfully relevant. Experts say that his actions at 
the Poteau site -- from finding the hole to downloading a competitor's 
publishing software and a file containing the passwords and log-ins 
that offered access to that software -- reignite many of the difficult 
questions that the technology community and courts are still trying 
to answer.  

Does everyone have a right to look under the hood of every product 
they buy, of every Web site they can access? Once someone finds 
a possible vulnerability, must he or she inform whatever company 
might be affected by it? If someone exploits a vulnerability in order 
to verify that it exists, should the access be considered criminal, or 
does it depend on what is gained through the act of exploitation? Or, 
even more subjectively, does it depend on the intent of the hacker?  

Even before West discovered the Poteau Daily News flaw, he had 
some experience with such queries. A few months prior, he noticed 
that his bank's online services included his account number in the 
URL, so by plugging in other numbers, he could (and allegedly did) 
access other people's accounts. He never altered the accounts, 
and he told the bank about the flaw. The bank fixed it without 
calling the cops.  
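
The bank flaw West describes is the classic pattern of trusting an 
identifier supplied by the client: the server returns whatever 
account number appears in the URL instead of checking that it 
belongs to the authenticated user. A minimal sketch of the broken 
handler and its fix -- the names and data are hypothetical, since 
the article does not describe the bank's actual system:

    # Hypothetical account store: account number -> statement data.
    ACCOUNTS = {1001: "alice's statement", 1002: "bob's statement"}

    def view_account_vulnerable(account_id):
        # Broken: blindly trusts the account number from the URL.
        return ACCOUNTS.get(account_id)

    def view_account_fixed(account_id, session_accounts):
        # Fixed: the requested number must belong to the
        # logged-in user's session before anything is returned.
        if account_id not in session_accounts:
            raise PermissionError("not your account")
        return ACCOUNTS[account_id]

    # Changing the number in the URL exposed other customers' data:
    print(view_account_vulnerable(1002))     # anyone reads bob's data
    print(view_account_fixed(1001, {1001}))  # alice reads her own
    # view_account_fixed(1002, {1001}) would raise PermissionError.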

West could have been prosecuted for his bank discovery too, just 
as Randal Schwartz was. The courts haven't given any clear 
answers to the burning questions surrounding computer access, 
says lawyer Granick. Although other people have found holes and 
been prosecuted for accessing private files, and in some cases for 
extortion -- charges that arise when people demand money for 
information on how to patch a given hole -- few of these cases went 
to trial. Most were settled without a judge's decision. There are 
exceptions, such as the DeCSS case, in which the publisher of the 
magazine 2600 was enjoined from distributing code that decrypts 
DVDs. But for the most part, the courts haven't clarified the laws 
surrounding security, so enforcement tends to be subjective.  

"The whole concept of 'unauthorized access' is in question," Granick 
says. "There isn't enough case law to go on."  

So, in the absence of legal authority, can the ambiguities be 
eliminated, or at least diminished? Granick, Smith, Levy and other 
security experts suggest that a formal, accepted set of guidelines -- 
voted on and supported by the security industry -- would improve 
the situation.  

Granick argues that the resulting code of conduct should treat the 
Internet as an 
entity unto itself, rather than some kind of electronic home.  

"The problem lies with the notion of 'went in,'" she says. "There's a 
barrier to going into a house or store that doesn't make sense in a 
computer context. If you type something in and see something 
you're not supposed to see, it's not the same as walking into 
someone's house. It's more like walking by a window without the 
shades being drawn."  

Schwartz holds to a similar line. "There must be safe harbor for the 
people trying to help," he says, because otherwise holes will 
proliferate. When the law doesn't allow researchers the freedom to 
find and plug holes, bugs will go unreported; fear will keep the 
helpful away, leaving room for the intentionally malicious. 
"Everyone loses," he says. "And as the law currently stands, it's the 
whistleblowers (like me) that stand to lose the most."  

But others disagree with Granick's logic. Tony Morgan, co-owner of 
Cyberlink, the ISP that wrote the software West copied, argues that 
West didn't just see the vulnerability. "He exploited it," Morgan 
says. "Finding the hole wasn't wrong; I back the hackers and 
crackers on that. The illegal part is when someone takes or destroys 
something. We feel that [in West's case] the line was crossed."  

And Morgan -- who claims the software West downloaded could be 
sold for about $5,000 -- isn't the only one arguing that computers 
should be treated like offline property.  

"If you screw with a service [as opposed to a product], you're 
screwing with someone's property," says Levy of Bugtraq. "Most 
people who have been doing security research for a while wouldn't 
have done what Brian did. Most people would know that the first 
thing you should do is get a waiver to verify the vulnerability."  

On the other hand, the DMCA is also problematic precisely because 
it treats digital content as its own unique animal. While traditional 
copyright law allows people to, say, copy a book for a school 
project, the DMCA makes no room for such fair uses of digital 
content. Simply showing people how to unlock an electronic book, as 
Sklyarov is now discovering, becomes cause for imprisonment.  

People already think the Internet and other new technologies are 
more novel than they actually are, says Schneier. And because the 
general public errs on the side of fear rather than respect, he says, 
"the law needs to be technologically neutral."  

David Touretzky, a computer science professor at Carnegie Mellon 
who testified at the DeCSS trial, believes that new technologies 
should be treated like your local bank.  

"It's a place of business, open to the public," he says. "But not 
every inch is open to the public. Suppose I go wandering down the 
hall and walk into some guy's private office and walk over to the 
desk and take a look at the papers lying out in plain view. Am I guilty 
of breaking and entering? No. Am I trespassing? Well, yeah, but the 
building was open to the public."  

At this point, because he would be somewhere he wasn't supposed 
to be, "the bank would be right to ask me to leave, maybe even tell 
me never to come back again," he says. "But having me arrested for 
wandering into an office? Nah. That would be overkill."  

Still, with so many ideas swirling about, can a coherent set of 
guidelines ever form? At least one security expert -- Chris Wysopal, 
head of research and development at the security firm @Stake -- is 
making the attempt. But Wysopal, a former hacker who's known 
online as "Weld Pond," has just begun gathering industry input. 
Even though the Net would be better off "with a set of moral codes," 
says Schneier, the community probably won't come up with anything 
useful anytime soon.  

"The only way to do it is through case law," he says. "That's how we 
did it with phones and wiretaps, and that's how it will happen here." 
West should not be punished harshly for his mistakes, he says, but 
regardless, the case may actually improve the present security 
environment. The only problem, he adds, is that the law moves 
slowly.  

"It will take years to figure this out," Schneier says. "When the legal 
system hits Internet time, the results are a mess."  

Brian West probably agrees.  


Copyright 2001 Salon.com







