Security

Steve steve at advocate.net
Sun Jul 25 00:09:11 PDT 1999


x-no-archive: yes

=====================

The Mole In the Machine 

The Government promises better protection of top-secret information,
like the nuclear secrets leaked from Los Alamos National Laboratory.
But the truth is, there is no such thing as a secure computer
network.

Charles C. Mann
NY Times 7/25/99


Several years ago I visited Scott D. Yelich, the administrator of a
computer network, at his Santa Fe, N.M., apartment. For a
computer-systems manager, he had an apartment that was, perhaps,
nothing out of the ordinary. A pool table covered with Asian
fighting sticks dominated the living room. One bathroom featured a
six-foot python, alive, breathing and wrapped around the shower rod.
Much of a spare bedroom was filled by the administrator's personal
computer system, with two 21-inch monitors crowding a desk and
several big beige boxes wheezing in the closet. Yelich was trying to
help me understand that people can easily break into computers. He
explained how he'd discovered that someone who called himself Info
was breaking into the network he ran at the Santa Fe Institute, a
highfalutin physics think tank. Annoyed, Yelich had monitored the
intrusions, discovering that they seemed to have originated at a
small software firm in town. 

Soon he learned who owned the E-mail account that Info was misusing:
the software company president, who was also a researcher at S.F.I.
(Yelich made me promise not to reveal his name.) The president had
E-mail accounts on the networks at both places. 

"It's all simple, trivial stuff," Yelich said. Info had first broken
into the machine at the software company, then searched for the word
"password" in all E-mail files on the network. It turned out that
the president had E-mailed co-workers, asking them to check his
incoming messages while he was on vacation -- and had provided his
password. "Boom!" Yelich said. "There it is." 
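Info's trick amounted to nothing more than a keyword scan of stored mail. A minimal sketch of the idea in Python (the one-message-per-file mailbox layout and function name are illustrative, not a reconstruction of Info's actual tools):

```python
import re
from pathlib import Path

def find_password_mentions(mail_dir):
    """Scan every mail file under mail_dir for lines mentioning 'password'.

    Assumes a hypothetical layout of one plain-text message per file.
    Returns (filename, line) pairs showing exactly where credentials
    were written down -- useful to an intruder or an auditor alike.
    """
    hits = []
    for path in Path(mail_dir).rglob("*"):
        if not path.is_file():
            continue
        for line in path.read_text(errors="ignore").splitlines():
            if re.search(r"password", line, re.IGNORECASE):
                hits.append((path.name, line.strip()))
    return hits
```

On a Unix mail spool, `grep -ri password` does the same job in one command, which is roughly the level of sophistication the break-in required.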

Then he said: "I think this is something you'll like." He pointed to
a line of type on the monitor. The president also had an account at
Los Alamos National Laboratory. It had the same password as the
S.F.I. account ("Capital-D dumb," Yelich said). He scrolled down and
pointed to the screen: "Info followed him in there, too." Indeed,
Yelich's monitoring had recorded Info as he logged on to the lab's
system, typed in the president's user ID and password and played
around with what he found.

Info, as it turned out, was only a confused lower-middle-class kid
from Portland, Ore. -- the Federal Bureau of Investigation caught him
and didn't even press charges. But I found myself recalling him when
the news broke in March that the F.B.I. had discovered that Wen Ho
Lee, a
researcher at Los Alamos, had filled one of his computers with
top-secret files about nuclear-weapons design. Lee's machine was on
the network subverted by Info. 

Since then, Los Alamos has been vilified in Congress for its lack of
computer-system security. But computer-security professionals have
not joined the chorus. For them, the Los Alamos case simply
exemplifies the inherent difficulty of controlling the flow of
digital information -- not just in laboratories, but anywhere. 

The heart of these problems is that computer networks -- and the
Internet -- are extremely difficult to protect. "The only system that
is truly secure is one that is switched off and unplugged, locked in
a titanium safe, buried in a concrete vault on the bottom of the sea
and surrounded by very highly paid armed guards," says Eugene H.
Spafford, director of the Purdue Center for Education and Research in
Information Assurance and Security. "Even then I wouldn't bet on it."

Computers can be secured better only if users accept Draconian
restrictions that make their systems much harder to use -- which, as
a rule, users refuse to do. Alas, the risks of willful ignorance are
growing. Never have the methods of attack been so powerful and easy
to use. Never has the Internet made it possible to disseminate those
methods everywhere in minutes. And never have so many livelihoods --
and lives -- depended on systems so vulnerable to attack. "I can't
say that 1999 will be the year when a blowup happens," says A.
Padgett Peterson, chief of information security at Lockheed-Martin.
"But what I can say is that, compared to what is possible, what we
have seen so far has been almost benign in nature." 

Roughly 8,000 researchers are scattered about the 43-square-mile
campus of Los Alamos. Each and every one has a computer; many have
more than one; almost all are wired, one way or another, into a
laboratory network. Additional thousands of administrators and
support personnel have access to computers. So do the 3,000 visitors
who work temporarily at the laboratory each year. This year, 450 of
the employees are foreign citizens, including more than 30 from
Russia, China, North Korea and seven other "sensitive" countries.
Keeping track of what these thousands of people are up to is
difficult, to say the least. Los Alamos's efforts to do so illustrate
both the difficulty of the task and why users so often rebel against
their own security measures. 

To channel this river of zeroes and ones, Los Alamos -- like other
facilities with classified work -- gave scientists a "black" computer
for their secret material and a "white" computer for contacting the
outside world. Lee worked with 300 other researchers in the
weapon-design group, known melodramatically as the "X Division." All
the researchers had both machines chewing up real estate on their
desks. In theory, scientists are not supposed to copy work from one
machine to the other; in practice, the temptation is strong. 

"The military insists on reviewing all the software you put on your
black computer," says Jeffrey I. Schiller, network manager for the
Massachusetts Institute of Technology and a director of the security
group for the Internet Engineering Task Force, the ad hoc
international body that develops technical standards for the
Internet. Such a review process, invariably, is slow. "It guarantees
that their systems are up to three years out of date. You want to
write a report with the latest version of Microsoft Word on your
insecure computer, or with some piece of junk on your secure computer?"

The superior usefulness of the unclassified systems, Schiller
speculates, may be why the former Central Intelligence Agency chief
John Deutch was caught last year with top-secret files on his home
computer. "People load up the classified stuff on their white
computer, then delete it," Schiller says. "But you do that enough
times, you start to lose track." Eventually, the white machine
becomes gray. 

Military installations try to contain classified data by
"air-gapping" their networks. Recognizing that they cannot control
the flow of material within a network, they seek to fence off the
entire network -- maintaining a literal gap of air between the
computer and all connection to the outside world. That is more easily
said than done; computers can transmit information in multiple ways.
To air-gap its computers, the National Security Agency not only
controls modems, floppy disks, Zip drives, optical disks, CD-ROM's and
Ethernet cards, it installs "blacker boxes" -- special devices that
prevent data from leaking out through the power cable. 

That is not enough, says Simson Garfinkel, the author, with Eugene
Spafford, of "Practical Unix and Internet Security." "It's very
difficult to be sure data is ever actually taken off a hard drive,"
Garfinkel says. "You might think you could just wipe the disk but that
won't do it." The reason is that a six-gigabyte drive often has an
additional four gigabytes of storage. The excess, which is invisible
to the software, replaces "blocks" of storage capacity on the drive
when they begin to fail. As a block goes bad, the disk copies data
from the failing block to the reserve blocks. Disk-wiping software
typically erases only the roster of good blocks, ignoring hidden
blocks, which may still contain data. 
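The remapping Garfinkel describes can be captured in a toy model (the block counts and method names here are purely illustrative, not any real drive's firmware):

```python
class ToyDrive:
    """Toy model of a drive with hidden spare blocks.

    When a visible block 'goes bad', the firmware silently copies its
    contents to a spare block. A naive wipe that overwrites only the
    visible blocks never touches the retired copies.
    """

    def __init__(self, visible=8, spares=4):
        self.visible = ["" for _ in range(visible)]
        self.spares = ["" for _ in range(spares)]
        self.remap = {}  # visible block index -> spare block index

    def write(self, i, data):
        if i in self.remap:
            self.spares[self.remap[i]] = data
        else:
            self.visible[i] = data

    def fail_block(self, i):
        """Simulate block i going bad: firmware moves its data to a spare."""
        spare = len(self.remap)
        self.spares[spare] = self.visible[i]
        self.remap[i] = spare

    def naive_wipe(self):
        """The wipe-software's view: overwrite every *visible* block."""
        self.visible = ["" for _ in self.visible]

drive = ToyDrive()
drive.write(3, "weapon design notes")
drive.fail_block(3)   # firmware retires block 3; data moves to a spare
drive.naive_wipe()    # 'wipes the disk' -- but only the visible blocks
leftover = [s for s in drive.spares if s]  # the retired copy survives
```

The point of the sketch is that `naive_wipe` can run to completion, report success, and still leave the sensitive string sitting in a spare block the software cannot see.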

The military used to safeguard data by grinding up old disks,
Garfinkel says. Now that doesn't work anymore. Tiny pieces of hard
drives can contain sizable amounts of information. According to a
rough calculation by Garfinkel, a one-sixteenth-inch-square piece of
a 6-gigabyte hard drive can hold 750,000 bytes -- enough for a
300-page book. "A spy could remove a hard disk, grind it up and
smuggle out the data in little pieces like pocket lint," he says. 
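Garfinkel's figure is easy to sanity-check with back-of-the-envelope arithmetic (the total recording area below is an assumption chosen to match his numbers; real 1999 drive geometries varied):

```python
# Assume a 6-gigabyte drive records over roughly 31 square inches of
# platter surface in total (an illustrative figure for a multi-platter
# drive of the era).
drive_bytes = 6_000_000_000
recording_area_sq_in = 31.25            # assumed total recording surface

density = drive_bytes / recording_area_sq_in   # bytes per square inch
fragment_area = (1 / 16) ** 2                  # a 1/16-inch-square piece
fragment_bytes = density * fragment_area

pages = fragment_bytes / 2500   # assuming ~2,500 bytes of text per page
print(round(fragment_bytes))    # -> 750000
print(round(pages))             # -> 300
```

Under those assumptions the fragment holds 750,000 bytes, or about a 300-page book, matching the rough calculation in the text.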

Los Alamos has split off all machines with secret information from
all machines with access to the Internet, according to John Morrison,
deputy director of the lab's computing, information and
communications division. Since the controversy, when researchers have
access to both networks, the lab removes their floppy drives so they
cannot copy files from one network to the other. It is demanding that
researchers on classified projects communicate electronically only
over encrypted lines. It is telling people not only to turn off their
cell phones but to remove the batteries. It is doing enough, in fact,
to drive the users crazy. 

"In theory, cutting the cable works great," says Dan Wallach, a
security researcher at Rice University. "Nobody from outside can
attack you or steal your files. Except suddenly you can't do as much
stuff as you used to. And then people start feeling their security
system is a pain in the neck, so they take active steps to subvert
it." Laboratories are especially prone to such sabotage, because they
are filled with people who know a lot about computers. Eventually,
compromises are made, and the original problem comes back. 

"Suppose people at Los Alamos are collaborating with people in
Britain," says Jon R. David, an internationally recognized security
consultant who is the senior editor of the journal Computers and
Security. "They have to talk to somebody in the U.K. by computer, and
so the lab arranges this encrypted line, which promises to be secure.
But is it true? No. The information on the line is safe, because it's
encrypted, but everything else -- the machines at the other end, the
connection points -- can be attacked, and have been attacked
successfully." 

By coincidence, his point was precisely demonstrated to me last
spring. A man I know is an officer in Army intelligence. It is safe
to say that he is greatly concerned
with security in general, and with computer security in particular.
His facility, too, has two networks -- one classified, one very
classified -- with security precautions much like those being
installed at Los Alamos. 

In March, the computer virus Melissa flooded more than 100,000
computers across the world. Seizing users' machines, it E-mailed
copies of itself to the top 50 names in their address books (if they
used a particular Microsoft program), and so on, until hundreds of
networks were swamped by a mail storm of Melissa copies. 
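The arithmetic behind such a mail storm is stark. A worst-case sketch (the fan-out of 50 addresses is from the article; the assumption that every copy is opened and every address book holds fresh victims is a deliberate simplification):

```python
def melissa_growth(fanout=50, generations=4):
    """Cumulative infected machines, assuming every copy is opened and
    each address book yields `fanout` previously uninfected victims."""
    infected = 1   # patient zero
    wave = 1
    totals = []
    for _ in range(generations):
        wave *= fanout   # each newly infected machine mails `fanout` copies
        infected += wave
        totals.append(infected)
    return totals

print(melissa_growth())  # -> [51, 2551, 127551, 6377551]
```

Even in this crude model, three generations of forwarding already exceed the 100,000 machines Melissa actually reached; the real outbreak was slowed only because address books overlap and many copies go unopened.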

The Army intelligence man I contacted knew all about Melissa. On his
classified network -- his guarded, protected, sealed-off network --
he had received some 80 copies of the virus. 

A network's security problems, of course, are not only due to its
users. There are all the folks outside who are trying to break in. A
terminological note: computer cognoscenti use the word "crackers" to
refer to the vandals who bust into computer systems illicitly;
"hacker," in their lexicon, is reserved for expert programmers and
network administrators. Thus, Info was a cracker, not a hacker. The
people who disabled the Web sites of the F.B.I., the Departments of
Energy and the Interior, the Senate and the White House last May were
crackers. And when, soon after, the Pentagon took its Web site off
the air to shore up security, it feared being cracked, not hacked. 

These cracking incidents disturbed security researchers. Not because
the break-ins were unusual: crackers went after the U.S. military
more than 250,000 times in 1996, the most recent year for which
official estimates are available, and thousands -- perhaps tens of
thousands -- actually broke in. Nor were the attacks disturbing
because they succeeded in doing permanent damage. (They didn't.) Nor
were the crackers able to make off with classified data -- the
breached networks had none (in theory, anyway). What distressed
security researchers is how far these crackers got, considering that
they almost certainly labored under the handicap of having no idea of
what they were doing. 

Most crackers are what real hackers refer to disdainfully as "script
kiddies." Just as the great majority of Netscape or Explorer users
have no idea how the software puts Web images on their screens, the
great majority of crackers blindly try out scripts -- routines
written and posted on the Internet by the tiny number of crackers who
know something about programming. 

From a network administrator's point of view, the result is like
being besieged by a million monkeys randomly firing catapults -- some
of them, by virtue of their sheer numbers, are bound to get a hit.
"You see it all the time," Schiller says. "Someone slips in with some
incredibly sophisticated technique." 

Even as the number of script kiddies rises, the "malware" -- the
security maven's term for malicious software -- they use is becoming
more sophisticated, dangerous and rapidly disseminated. Last year,
Microsoft issued 20 software updates to stop break-ins. In the first
six months of this year, it released 22. Bruce Schneier, president of
Counterpane, a computer-security company in Minnesota, calls 1999 "a
pivotal year for malicious software" -- the year that problems really
have become serious.

The line was crossed first by Melissa, which caused such havoc that
it obscured the antics of an almost equally unpleasant virus known as
CIH or Chernobyl, so named because the virus activates on April 26, the
anniversary of the Chernobyl disaster. CIH infected as many as
600,000 computers in South Korea alone; damage estimates run up to $10
million. Then, in June, came the Explore worm. 

Technically, a worm is a program that crawls through networks,
automatically making and distributing copies of itself. (Viruses, by
contrast, need human help to propagate.) The Explore worm appeared in
a victim's E-mail in the guise of a note from a correspondent. 

Attached to the note was a file: zipped_files.exe. When recipients
open the file, they see an error message and assume nothing has
happened. In fact, the worm has effectively erased documents,
spreadsheets and presentations. In addition, the worm infects other
machines on the network, hitting even people who haven't recently used
their E-mail. 

To the relief of security experts, no one has yet combined the
unbelievably rapid spread of Melissa (sending out 50 copies of itself
at once) and the toxic actions of the Explore worm (trashing files).
But it might not be long before someone does, and Schneier believes
that the results may be catastrophic. "It's a really big deal," he
argued in a warning widely distributed over the Internet. With
cracking scripts posted on the Web by their inventors, Schneier
wrote, the new malware programs "don't propagate over weeks and
months; they propagate in seconds." 

When a cracker can figure out a new way of attack in the morning and
spread it across the world in the afternoon, users cannot possibly be
protected by, say, updating their antiviral software and security
patches every week. (And how many users do even that?) A new approach
will be needed, Schneier believes. Society will have to insist that
security is built into software from the beginning. 

"From a security point of view, the software in everyday use is
nonfunctional," says Rice University's Wallach. "There's an old joke.
If cars were like computers, they would go 300 m.p.h. and get a
hundred miles to the gallon and cost $50. Except that twice a month
someone a thousand miles away would be able to blow up the car,
killing everyone nearby." Believing there is no cry for better
security, software vendors simply ignore the issue, he says. 

A prime example, security experts say, is the current trend toward
blurring data and programs. In the past, because data -- letters,
charts, tables -- did not perform actions, it was impossible to
disguise malware as data. Now data are full of little programs. Some
of them make it possible to reach the Web or send E-mail from within a
document that otherwise would seem to have nothing to do with the
Internet. A result, Spafford says, is "a complete forest of
interconnections that are waiting to be exploited" by malware.
"Springing this kind of software on unknowing users is
unconscionable," he says. 

According to its Web site, Microsoft is developing "collaboration
data objects," which "allow easy access to E-mail systems embedded in
Microsoft Windows products." "Unless you understand exactly what is
being said, it is easy to gloss over," Peterson explains. "Does
anyone who buys Microsoft software actually want to be able to send
mail from Word, Excel, or an .exe file or is it something that is
only used by writers of malicious software?" 

Ultimately, Spafford thinks, all is not hopeless. A solution may lie
on the horizon: the Y2K problem. Like the asbestos lawyers involved
in the tobacco lawsuits, Y2K lawyers will start looking for new
targets. They will discover that many of society's most valued assets
have been entrusted to badly designed and unsafe software. "I tell
you," Spafford says, "they won't be able to believe their good
fortune." 

Back in Santa Fe, Yelich has been trying to secure his current
network against a cracker who calls himself "u4ea." Unable to break
in, u4ea has been overwhelming Yelich's machines with tidal waves of
incoming messages -- an attack that requires no skill but is nearly
impossible to guard against. For the time being, more people will end
up spending their time much like Yelich, trying to construct
barricades around their machines instead of using them. 

Copyright 1999 The New York Times Company 




