Ozancin wagged his finger at the sendmail mail transfer agent (MTA) as
the cause of many past security exploits, including but not limited to
buffer overflows. That is true but slightly unfair, as sendmail has a
much longer history than most MTAs, and it has been clean for quite
some time. However, his point about the program's slightly risky,
monolithic design is well taken, and cautious sites may wish to adopt
Postfix (which is open source licensed) or Qmail (which has an almost
open source license) (see Resources).
Truly paranoid administrators may also want to run their Web server
programs in a chroot (artificial root) environment, as a precaution
against buffer overflows, misbehaved CGI scripts, and the like. Ozancin
warns that the minor security gain such a setup provides may not
justify its administrative hassles.
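As a sketch of the idea (mine, not Ozancin's), a launcher might enter
the jail and shed root privileges before the daemon touches the
network; the jail path and the unprivileged user/group IDs below are
assumptions:

    import os

    JAIL = "/var/jail/www"   # hypothetical jail, pre-populated with what the daemon needs

    os.chroot(JAIL)          # make the jail this process's root directory (requires root)
    os.chdir("/")            # move the working directory inside the jail
    os.setgroups([])         # drop supplementary group memberships
    os.setgid(65534)         # drop to an unprivileged group (nobody; assumed ID)
    os.setuid(65534)         # drop to an unprivileged user; root is now unrecoverable
    # ... exec the network-facing service here ...

Populating the jail with the binaries, libraries, and device nodes the
service needs is the tedious part, and is much of the administrative
cost in question.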
As a security-tightening measure on individual Linux boxes, Ozancin
recommends reviewing security-sensitive files, especially those
installed set-UID or set-GID to run with the root user's authority (or
equivalent). He recommends making security-sensitive files unreadable
by ordinary users and removing the SUID/SGID bits where they are not
needed. I can say from personal experience that that recommendation
must be approached with caution: keep good records of what you change,
as you may find things unexpectedly breaking from such efforts to
tighten security post-installation.
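A rough illustration of such a review (my own, not from the talk): walk
the filesystem and list every file carrying the set-UID or set-GID bit,
then check each against what the distribution actually requires:

    import os
    import stat

    for dirpath, dirnames, filenames in os.walk("/"):
        if dirpath == "/":
            # prune pseudo-filesystems that hold no real files
            dirnames[:] = [d for d in dirnames if d not in ("proc", "sys")]
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)          # lstat: do not follow symlinks
            except OSError:
                continue                     # unreadable or vanished; skip it
            if stat.S_ISREG(st.st_mode) and st.st_mode & (stat.S_ISUID | stat.S_ISGID):
                print(oct(stat.S_IMODE(st.st_mode)), path)

Files the list flags as unnecessary can then be stripped with chmod u-s
or chmod g-s, with each change recorded so it can be undone when
something breaks.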
Ozancin also mentioned the possibility of dedicated monitoring hosts.
One such machine might be a dedicated loghost to which the syslog
daemons on your other machines report their operations. Ideally, that
machine would have no network connection (as it would be a prime
break-in target), receiving log data only over a null-modem serial
cable. The
other type would be network-based intrusion-detectors such as a machine
running Marcus J. Ranum's Network Flight Recorder (see Resources), as
opposed to host-based detectors such as Tripwire. I have my doubts
about network-based intrusion detectors, as reassembling and analyzing
packets in real time will strain any reasonable underlying hardware,
but they have their proponents.
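For the loghost arrangement, the sending side is a one-line affair on
the stock Linux syslogd; the hostname and priority here are
illustrative:

    # /etc/syslog.conf fragment on each reporting machine:
    # forward everything at priority info or above to the central loghost.
    *.info          @loghost

The receiving machine's syslogd must be started with remote reception
enabled (sysklogd's -r switch).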
Last, and lest we forget, Linux can be firewalled at several levels.
Machines inside your network can be partially concealed through IP
masquerading (the form of Network Address Translation most used on
Linux). Basic filtering of allowed and disallowed traffic at the
network interfaces can be done (with Linux 2.2 kernels) using IP Chains
rules, perhaps building the firewall rule-sets with Mason, and
allowable traffic can be specified for individual services through the
TCP Wrappers control files /etc/hosts.allow and /etc/hosts.deny. The
truly paranoid may elect to use the 2.4 kernels' Netfilter[4] facility
(adding stateful packet filtering) or
a commercial application-level proxy gateway. And traffic between
geographically separated company networks can be routed through a
virtual private network (VPN) tunnel, instead of being exposed to the
public Internet.
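To make those layers concrete, here are illustrative fragments, not a
complete rule-set; the addresses and services are assumptions for a
small LAN on 192.168.1.0/24:

    # IP Chains (2.2 kernels): default-deny inbound, masquerade the LAN
    ipchains -P input DENY
    ipchains -A input -i lo -j ACCEPT
    ipchains -A input -p tcp -d 192.168.1.10 25 -j ACCEPT   # SMTP to the mail host
    ipchains -A forward -s 192.168.1.0/24 -j MASQ           # IP masquerading

    # TCP Wrappers: /etc/hosts.deny refuses everything by default...
    ALL: ALL
    # ...and /etc/hosts.allow re-admits named services from the LAN
    in.telnetd, in.ftpd: 192.168.1.

Both layers follow the same default-deny pattern: refuse everything,
then re-admit only named traffic.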
Ozancin's talk was fairly comprehensive and typical of such security
talks in its emphasis: it focused almost entirely on prevention and
detection.
Prevention and detection are, of course, very good things, but ideally
they should be part of a better-rounded effort at risk assessment and
management. That should include damage reduction (what is at risk?),
defense in depth (how can we avoid having all our eggs in one basket?),
hardening (e.g., jumpering the SCSI drives read-only for some
filesystems, and altering Ethernet hardware to make promiscuous mode
impossible), identification of the attackers, and recovery from
security incidents. Explicit security policies, security auditing, the
design and testing of backup systems, automatic and manual log
analysis, handling of dialup access, physical security for the network,
the special problems posed by laptop users, security training and
documentation, and disaster recovery and costing are necessary parts of
such an effort.
After all, if the Master Control Program had only been better at risk
management, the movie might have had a happy ending.
4. Linux's 2.4 kernel series is planned to provide a capabilities
model in which processes can reach only the resources their roles
require, but exactly how that should be done is still being hotly
debated.