September 06, 2006, 10:26 AM — Sadly, it is in the aftermath of a big network crash that some of us do our clearest thinking about what we should have done before the disaster struck. We might learn that, even with a powerful UPS, we can find ourselves in a situation in which a sudden loss of power leaves our computer center in the dark, our servers and networking equipment in unknown condition and us with a big problem to solve. How do we get our systems back online with some semblance of control and efficiency when we're not sure what we're starting with?
First, let's look at one scenario that could leave us in the dark. A loss of power that occurs when a UPS is in bypass (maintenance) mode will leave us nearly as unprotected as if we had no UPS at all. Alternatively, if our UPS runs out of battery power after a blackout has lasted for an hour or two, we could lose power abruptly. If we are not set up with automated shutdown software, a UPS may extend the time we have before our systems crash, but it will not prevent them from crashing.
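The core of any automated shutdown tool (apcupsd and NUT are common choices) is a simple policy decision: once we are on battery, begin a graceful shutdown while enough runtime remains to finish it cleanly. Here is a minimal sketch of that logic in Python; the function name, fields, and thresholds are illustrative assumptions, not any vendor's actual API.

```python
# Hypothetical sketch of the decision a UPS shutdown daemon makes.
# Thresholds are assumptions for illustration; real tools make them
# configurable (and read battery state from the UPS itself).

def should_shut_down(on_battery: bool, runtime_left_min: float,
                     battery_pct: float,
                     min_runtime_min: float = 5.0,
                     min_battery_pct: float = 20.0) -> bool:
    """Return True when a graceful shutdown should begin."""
    if not on_battery:
        return False  # utility power is present; nothing to do
    # Shut down while enough battery remains to finish cleanly.
    return (runtime_left_min <= min_runtime_min
            or battery_pct <= min_battery_pct)
```

The point of the two thresholds is that either one alone can lie: reported runtime estimates drift as batteries age, so a floor on battery percentage acts as a backstop.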
So, let's imagine that our UPS is in bypass mode when the building power takes a dive. In bypass mode, our systems run directly on utility power, with the UPS likely providing only voltage conditioning. In short, we were protected from power surges but little else when that tree shorted out the power line a mile or two outside of town.
When power returns, some of our systems will boot with ease. We may pause to appreciate that we converted some of our file systems to logging file systems because of their near immunity to corruption.
If we have some kind of network monitoring application installed, such as HP OpenView, we may be able to get a quick overview of the status of our computer center, but the accuracy of our view will depend on the sophistication of our software. A downed network switch might be easy to identify, or it might create the impression that scattered systems are down when, in actuality, they are merely inaccessible.
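That distinction (a dead switch versus merely unreachable hosts) comes down to topology: if every host behind one switch fails to answer, suspect the switch or its uplink; if only some do, suspect the hosts themselves. A rough sketch of that inference, assuming we maintain our own switch-to-host map (nothing a monitoring tool hands us in this form):

```python
# Infer likely failure points from ping results plus a topology map.
# The map of which hosts sit behind which switch is site knowledge
# we have to record ourselves; the status strings are illustrative.

def diagnose(topology: dict[str, list[str]],
             reachable: set[str]) -> dict[str, str]:
    """Map each switch to a likely status based on ping results."""
    status = {}
    for switch, hosts in topology.items():
        down = [h for h in hosts if h not in reachable]
        if not down:
            status[switch] = "ok"
        elif len(down) == len(hosts):
            # Everything behind this switch is dark: blame the switch.
            status[switch] = "switch (or uplink) likely down"
        else:
            status[switch] = f"hosts possibly down: {', '.join(down)}"
    return status
```

Even this crude pass turns a wall of red icons into a short list of places to send someone with a console cable.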
A good network recovery plan involves several questions: How do we quickly assess the damage? From what location or system do we gain access to the systems that we need to repair and/or reboot? How do we prioritize our systems and decide what to boot first?
Identifying Critical Resources
Critical resources vary from one site to another, but the services we are likely to want back online quickly include: DNS servers; NIS/NIS+ and LDAP servers; NFS servers and other forms of network storage; switches and routers, without which our ability to connect to many of our servers will be impaired; and console servers, which allow us to log into systems that may be waiting for a root password and an "fsck -y" before they will finish booting.
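One way to make that priority explicit, rather than something we reconstruct from memory at 3 a.m., is to keep it as a simple tiered table we can walk through (or script against). The tiers below follow the dependencies just described: network gear and consoles first, then name services, then storage, then everything that relies on them. The entries are examples for a hypothetical site.

```python
# Illustrative recovery-priority table for a hypothetical site.
# Lower tier number = bring up first; services within a tier can
# be handled in parallel.

BOOT_PRIORITY = {
    1: ["switches/routers", "console servers"],
    2: ["DNS", "NIS/NIS+", "LDAP"],
    3: ["NFS and other network storage"],
    4: ["application and user-facing servers"],
}

def recovery_order() -> list[str]:
    """Flatten the tiers into the order we would walk through them."""
    return [svc for tier in sorted(BOOT_PRIORITY)
            for svc in BOOT_PRIORITY[tier]]
```

The ordering matters more than the list itself: booting an application server before DNS and NFS are back often just means booting it twice.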