Shortly before covering security analyst Craig Ozancin's LinuxWorld Conference & Expo session on Linux security, I wandered into the Geek Bowl quiz competition in progress. Through an odd bit of synchronicity, the two events segued rather nicely: One of the many questions the panelists blew completely was "In the movie Tron, the lead character Flynn's voice was provided by what actor?"
I told Eric Raymond we'd have to send him back to the geek reeducation camps, for missing that one. But it helped put me in the proper frame of mind for a security panel -- once you correct the movie's minor flaw of depicting the wrong side as the heroes. In a nutshell, you (the system administrator) are in the villain's role in that computerist's classic, the Master Control Program. Your problem: How do you keep out Jeff Bridges (the outside attacker)?
Ozancin's talk dwelled at length on the methods and tools an attacker (a term he advocates over hacker) uses to select you as his target and worm his way in.
The attacker may also use specialized network-vulnerability scanners (see Resources): Nessus, the older SATAN and SAINT packages, Firewalk (which probes and identifies a network's firewall ruleset), or proprietary scanners such as Internet Security Systems' Internet Scanner and Axent Technologies' NetRecon -- as well as checking Websites on the target network for known-exploitable CGI scripts.
Or the attacker may skip the fancy network scanners and concentrate on stealing one of your passwords. In my experience, that is the bad guys' usual way in, and absurdly easy on most systems. If one of your users uses Telnet, (nonanonymous) FTP, or POP3 to reach your system remotely, the user's login name and password can be snagged with trivial effort at any point between the two machines. Alternatively, the malefactor may use as low-tech a means as shoulder surfing (watching the login as it's being typed in), or a variety of social engineering techniques. People are often astonishingly willing to give their passwords over the telephone to a stranger with a plausible reason for asking. Or they email passwords and other confidential data across the open Internet, ripe for interception.1 At the minimum, the attacker may telephone the firm to glean people's names and positions, or get that information from the company Webpages. He may then be able to predict valid usernames and try them with likely password combinations.
Then there are the truly embarrassing password techniques that amount to walking into an open, unguarded bank vault. There are still services that ship with default remote administrative passwords, as evidenced by Red Hat Software's recent Piranha gaffe, as well as sites reckless enough to use null passwords, the username as the password, or the username reversed (e.g., toor for the root account). Or the attacker may use remote techniques to read a copy of /etc/passwd (on systems without shadow passwords enabled). Many such past exploits have relied on insecure CGI scripts provided by default with Web servers that are also unnecessarily running with root authority. (The Apache Web server most commonly used on Linux no longer ships with either of those faults.)
Any attacker who can grab an unshadowed password file has hit the jackpot because he can then crack your passwords in private, at his leisure. That is done by automatically encrypting large lists of words in various permutations and comparing the crypted versions against the target password entries, looking for matches. The traditional tool for that task, crack, now has a next-generation replacement, John the Ripper, with better performance and a broader reach of target passwords. But the real clincher is the advent of distributed password-crackers such as mio-star, saltine-cracker, and slurpie, which can make entire networks of machines work cooperatively on cracking your password file via those dictionary attacks.
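The core of such a dictionary attack is simple enough to sketch in a few lines. The fragment below is illustrative only: it uses a salted SHA-256 from Python's hashlib as a stand-in for the classic crypt(3) DES scheme, and the sample usernames, salts, and wordlist are invented for the demonstration.

```python
import hashlib

def hash_password(password, salt):
    """Stand-in for crypt(3): a salted SHA-256 instead of the classic DES scheme."""
    return hashlib.sha256((salt + password).encode()).hexdigest()

def dictionary_attack(target_entries, wordlist):
    """Try each word (and a few simple permutations) against every target hash."""
    cracked = {}
    for user, (salt, target_hash) in target_entries.items():
        for word in wordlist:
            # Typical permutations an attacker tries: as-is, capitalized, reversed
            for candidate in (word, word.capitalize(), word[::-1]):
                if hash_password(candidate, salt) == target_hash:
                    cracked[user] = candidate
                    break
            if user in cracked:
                break
    return cracked

# Simulated unshadowed password file: user -> (salt, hash)
entries = {"alice": ("ab", hash_password("dragon", "ab")),
           "bob":   ("cd", hash_password("toor", "cd"))}
words = ["password", "dragon", "root"]
print(dictionary_attack(entries, words))
```

Real crackers such as John the Ripper add enormous rule sets of permutations and heavily optimized hashing, but the principle -- hash candidate words and compare against stolen entries -- is exactly this.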
Why all that firepower concentrated on cracking your password files? Because, once the attacker is on your machine, posing as a legitimate shell user, vastly greater avenues towards total control of your machine (root access) beckon: he can attempt that through manipulation of any of your system's privileged programs, instead of just those advertising remote network services. That is what I call Moen's First Law of Security: "It's easier to break in from the inside."
If the attacker is not able to pose as a legitimate user, then his avenues of attack are more limited but still numerous. Every month, security advisories about new holes in network software are issued, more often than not in the form of buffer overflows: examples of poor input validation that permit running attacker-specified code as if it were part of the program, abusing its authority. Some overflow-based attacks directly open shells or other direct access mechanisms for the attacker; others act more indirectly by yielding the contents of /etc/passwd or /etc/shadow, creating a new account, changing the password of an existing account, creating a custom .rhosts file, and so on.
However, regardless of whether your attacker entered via the front or back door, his next priority after gaining root access is to cover his tracks, preventing the administrator from noticing his presence and locking him out. He'll do that by sabotaging the system logs and accounting software, disabling any security-monitoring software, and installing trojan horse (trojaned) software to conceal his activities, gain additional intelligence, and create back doors in case he needs another way in.
The trojaned software usually includes replacement binaries for the genuine login, netstat, ps, ifconfig, du, df, ls, top, syslogd, tcpd, locate, and various servers run by the inetd superserver. The aim is to hide the attacker's tools, logs, and processes, so that they are invisible to the legitimate root user.
And tomorrow the world!
Some of those processes will be spy programs, running to capture login information entered by local users for remote systems elsewhere. Those will be logged and conveyed back to the attacker, giving him new targets. Some may be network sniffers, monitoring the traffic passing nearby, to or from other nearby machines, and likewise capturing private information for the bad guys. Those work by putting your network interface in promiscuous mode, in which the interface stops discarding traffic addressed to other machines. Some may be clandestine network services, such as file-swapping, that are useful for the attacker and his friends. Most distressing of all, some may be carrying out attacks on other systems. The older variety of those involved flooding distant machines with either normal or deliberately malformed network traffic (ping, ping of death, smurf, SYN flooding, teardrop, land, bonk), as a denial of service (DoS) attack. Then starting last year, the more-organized DDoS tools (trinoo, Tribe Flood Network, stacheldraht, Trank, and so on) came to sudden public attention when they were used to overwhelm popular Internet sites. The third-party, subverted machines (zombies) used to carry out those attacks appear to have been university machines, favored for their lax security and high Internet bandwidth, but your Linux hosts could be the attackers' next tools.
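Promiscuous mode is visible in an interface's flags word: on Linux, the kernel sets the IFF_PROMISC bit (0x100) while an interface is sniffing. A minimal sketch of such a check, using hard-coded sample flag values rather than live interfaces (on a real system the flags would come from the interface itself, e.g. via ifconfig or /sys/class/net):

```python
# Linux sets IFF_PROMISC (0x100) in an interface's flags while it is sniffing.
IFF_PROMISC = 0x100

def is_promiscuous(flags):
    """Return True if the interface flags word has the promiscuous bit set."""
    return bool(flags & IFF_PROMISC)

def check_interfaces(flag_table):
    """Report which interfaces (name -> flags) are in promiscuous mode."""
    return [name for name, flags in flag_table.items() if is_promiscuous(flags)]

# Sample values: 0x1003 = UP|BROADCAST|MULTICAST,
#                0x1103 = the same plus PROMISC -- a likely sniffer.
sample = {"eth0": 0x1003, "eth1": 0x1103}
print(check_interfaces(sample))
```

Remember, though, that a rootkit's trojaned ifconfig exists precisely to hide that flag from you, which is why checks like this are best run from known-good media.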
Even if your machines don't cause you that order of embarrassment, the other risks are equally grim: you can reveal confidential data with business and/or personal consequences, lose that data entirely, see it corrupted or sabotaged, be involved in wrongful or even criminal activity, lose access to your computing resources, and indirectly cause harm to your staff and business associates. Your Website can be defaced or modified, or visitors might be redirected by sabotaged company DNS servers to entirely different sites.
What would the Master Control Program do?
As Ozancin pointed out, to prevent, detect, and recover from such attacks, your first step is to spend some time thinking like an attacker. Spend some time exploring your network with Nessus, nmap2, and Firewalk, discovering its vulnerabilities as if you were an outsider peeking in. Set John the Ripper loose on your password files to discover any trivial-to-break passwords with which your users are damaging your security posture. Subscribe to the security-alert mailing list for your Linux distribution. Install one or more security-checking packages (LIDS, LogCheck, Tripwire, or HostSentry), or simply generate and store (off-system) MD5 checksums for all critical system files (see Resources).
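The checksum approach is easy to roll yourself. A minimal sketch using Python's hashlib, with a scratch file standing in for a real system binary (the filename here is invented for the demonstration; in practice you would snapshot files like /bin/login and store the baseline off-system):

```python
import hashlib

def md5sum(path):
    """Compute the MD5 digest of a file, reading in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot(paths):
    """Record baseline checksums; store the result off-system."""
    return {p: md5sum(p) for p in paths}

def verify(baseline):
    """Return the files whose current checksum differs from the baseline."""
    return [p for p, digest in baseline.items() if md5sum(p) != digest]

# Demonstration: a scratch file standing in for, say, /bin/login
with open("demo_login", "wb") as f:
    f.write(b"original binary contents")
base = snapshot(["demo_login"])

# ... later, an attacker trojans the file ...
with open("demo_login", "wb") as f:
    f.write(b"trojaned contents")
print(verify(base))  # the tampered file is flagged
```

The crucial detail is keeping the baseline (and ideally the checksum tool itself) somewhere the attacker cannot rewrite it, or the trojaned binaries will simply be re-blessed.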
Disable all network services you're not sure you need (if you're wrong, you'll find out), including those in /etc/inetd.conf, as well as the CGI scripts on Web servers. (Never place scripting executables such as the Perl interpreter in your CGI-BIN directory.) If you wish to leave the user-information service finger running, make sure it's not one that lists all logged-in users when someone runs finger @hostname (substituting your machine's name for hostname). Stay current on security-related revisions, especially for the network services you leave enabled. The foregoing measures are probably the second most valuable precautions you can take.
The most valuable measure involves password policies. You'll want to always use shadowed passwords. The utility pwconv will switch you over (populating /etc/shadow and removing all password hashes from /etc/passwd) if you aren't running shadowed already. That essentially eliminates the risk of an attacker grabbing your password hashes for offline cracking.
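It's easy to audit for this: after pwconv, every password field in /etc/passwd should be just "x", with the real hashes hidden in root-only /etc/shadow. A small sketch that scans passwd-format lines for exposed hashes (the sample entries below are invented):

```python
def unshadowed_accounts(passwd_lines):
    """Return accounts whose /etc/passwd entry still carries a real password hash.

    After pwconv, every password field should be 'x' (the hash lives in
    /etc/shadow, readable only by root); '*' and '' are also not crackable hashes.
    """
    exposed = []
    for line in passwd_lines:
        fields = line.strip().split(":")
        if len(fields) > 1 and fields[1] not in ("x", "*", ""):
            exposed.append(fields[0])
    return exposed

# Sample entries: root is properly shadowed, alice's hash is world-readable
sample = ["root:x:0:0:root:/root:/bin/bash",
          "alice:$1$ab$0123456789abcdef:1000:1000::/home/alice:/bin/bash"]
print(unshadowed_accounts(sample))
```

On a live system you would feed this the lines of /etc/passwd itself and expect an empty result.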
You'll also want to set a minimum password length. Most Linux distributions require a minimum of five or six characters, but Ozancin suggests raising that to a full eight. Since most Linux systems these days use the Pluggable Authentication Module (PAM) security architecture, the minimum length can usually be set easily in the /etc/pam.d/passwd configuration file.
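On a PAM-based distribution, such a policy might look something like the fragment below. This is illustrative only -- module names and options vary by distribution (pam_cracklib's minlen option, shown here, is one common mechanism of the era; check your own system's PAM documentation):

```
# /etc/pam.d/passwd (illustrative -- module names vary by distribution)
password required pam_cracklib.so retry=3 minlen=8
password required pam_pwdb.so use_authtok
```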
In addition, you should consider avoiding plaintext-password network services: the POP3, FTP, and Telnet daemons pose a special risk because their passwords pass unencrypted across the open network, sniffable by any machine along the way. SSH (the secure-shell suite) and stunnel can replace or protect the vulnerable protocols, and you can use SSL encryption for any sensitive Web-based information. For best protection, Ozancin recommends two-factor authentication3: supplementing the password with an additional mechanism such as a smart card or an encryption key pair. Users should be encouraged (and equipped) to encrypt any sensitive email using PGP or GNU Privacy Guard (see Resources). Also, given the possibility of network sniffing, use switched Ethernet (rather than shared hubs) wherever possible to isolate traffic and minimize what can be sniffed.
Ozancin wagged his finger at the sendmail mail transfer agent (MTA) as the cause of many past security exploits, including but not limited to buffer overflows. That is true but slightly unfair, as sendmail has a much longer history than most MTAs, and it has been clean for quite some time. However, his point about the program's slightly risky, monolithic design is well taken, and cautious sites may wish to adopt Postfix (which is open source licensed) or Qmail (which has an almost open source license) (see Resources).
Truly paranoid administrators may also want to run their Web server programs in a chroot (artificial root) environment as a precaution against buffer overflows, misbehaved CGI scripts, and the like. Ozancin warns that the minor security gain such a setup provides may not justify its administrative overhead.
As a security-tightening measure on individual Linux boxes, Ozancin recommends reviewing security-sensitive files, especially those installed set-UID or set-GID to run with the root user's authority (or equivalent). He recommends making security-sensitive files unreadable by ordinary users and removing the SUID/SGID bits where they are not needed. I can say from personal experience that this recommendation must be approached with caution: keep good records of what you change, as you may find things unexpectedly breaking from such efforts to tighten security post-installation.
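Finding the set-ID files to review is straightforward: the bits live in each file's mode word. A minimal sketch using Python's os and stat modules, demonstrated on a scratch file (on a real system you would walk the whole filesystem, e.g. with os.walk from "/", and diff the results against your records over time):

```python
import os
import stat

def find_setid(paths):
    """List files with the set-UID or set-GID bit -- candidates for review."""
    flagged = []
    for path in paths:
        mode = os.stat(path).st_mode
        if mode & (stat.S_ISUID | stat.S_ISGID):
            flagged.append(path)
    return flagged

# Demonstration on a scratch file standing in for a real set-UID binary
with open("demo_tool", "w") as f:
    f.write("#!/bin/sh\n")
os.chmod("demo_tool", 0o4755)  # set-UID bit, as carried by e.g. /usr/bin/passwd
print(find_setid(["demo_tool"]))
```

The traditional one-liner for the same job is find / -perm +6000 -type f, but keeping the output under version control is what makes the review repeatable.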
Ozancin also mentioned the possibility of dedicated monitoring hosts. One such machine might be a dedicated loghost to which the syslog daemons on your other machines report their operations. Ideally, that machine would have no network connection (as it would be a prime break-in target), and be reported to via null-modem serial cable only. The other type would be network-based intrusion detectors such as a machine running Marcus J. Ranum's Network Flight Recorder (see Resources), as opposed to host-based detectors such as Tripwire. I have my doubts about network-based intrusion detectors, as their ability to reassemble and analyze packets in real time is going to be strained on any reasonable underlying hardware, but they have their proponents.
Last, and lest we forget, Linux can be firewalled at several levels. Machines inside your network can be partially concealed through IP masquerading (the form of Network Address Translation most used in Linux). Basic filtering of the traffic allowed and disallowed at the network interfaces can be done (with Linux 2.2 kernels) using ipchains rules, perhaps building the firewall rulesets with Mason, and allowable traffic can be specified at the level of individual services through the TCP Wrappers control files /etc/hosts.allow and /etc/hosts.deny. The truly paranoid may elect to use the 2.4 kernels' Netfilter4 facility (adding stateful packet filtering) or a commercial application-level proxy gateway. And traffic between geographically separated company networks can be routed through a virtual private network (VPN) tunnel, instead of being exposed to the open Internet.
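A TCP Wrappers default-deny setup, for instance, takes only a few lines. The fragment below is a sketch -- the addresses and service names are invented examples, and the daemon names must match what your inetd actually invokes:

```
# /etc/hosts.deny -- deny everything not explicitly allowed
ALL: ALL

# /etc/hosts.allow -- then open only what you need
sshd:    192.168.1.          # the internal LAN (note the trailing dot)
in.ftpd: .example.com        # a hypothetical trusted domain
```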
Ozancin's talk was fairly comprehensive and typical of such security talks in its emphasis: it focused almost entirely on prevention and detection.
Prevention and detection are, of course, very good things, but ideally they should be part of a better-rounded effort at risk assessment and management. That should include damage reduction (what is at risk?), defense in depth (how can we avoid having all our eggs in one basket?), hardening (e.g., jumpering the SCSI drives read-only for some filesystems, and altering Ethernet hardware to make promiscuous mode impossible), identification of the attackers, and recovery from security incidents. Explicit security policies, security auditing, the design and testing of backup systems, automatic and manual log analysis, handling of dialup access, physical security for the network, the special problems posed by laptop users, security training and documentation, and disaster recovery and costing are necessary parts of such an effort.
After all, if the Master Control Program had only been better at risk management, the movie might have had a happy ending.
1. In accordance with Moen's Second Law of Security: "A system can be only as secure as the dumbest action it permits its dumbest user to perform."
2. Among nmap's interesting features is the ability to estimate your chances of successfully predicting TCP sequence numbers on a target system, a method described by Steven Bellovin in 1989 and reportedly used by Kevin Mitnick in 1994 to remotely take over security consultant Tsutomu Shimomura's Unix host. Ozancin used nmap to show that such guessing is thousands of times more difficult against a remote machine running a generic Linux 2.2 kernel than against one running MS Windows NT 4.0 with the latest service pack. nmap can also operate in decoyed mode, in which a high percentage of the probe packets purport to come from elsewhere entirely. One series of probes against Pentagon systems in December 1998 seemed to originate from addresses all over Russia, but is now thought to have involved a single university machine running nmap.
3. The rule of thumb in security is that you can maximize security through a three-factor approach: something you know, something you have, and something you are. Passwords typify something you know, smart cards are an example of something you have, and biometric scanning techniques exemplify something you are. The latter may seem science-fictional, but I saw a mouse device that incorporates a fingerprint reader at the August 2000 Stanford Cypherpunks meeting: they are being mass produced as you read this.
4. Linux's 2.4 kernel series is planned to provide for a capabilities model in which processes can reach only resources their roles require, but the exact way that should be done is still being hotly debated.
Dictionary attack tools:
Mail Transfer Agents:
Artificial root environments:
Setting up Linux firewalls: