January 04, 2001, 4:09 PM — Keeping an eye on the performance of all the segments and carriers on today's networks can stress even the most nimble network manager. And forget trying to put together a legible report for management. Is there any relief in sight?
We think so. We found that Ganymede Software's Pegasus 2.1 provided invaluable information for solving a lingering performance problem involving systems located on our enterprise network and across the Internet in Chicago.
Pegasus 2.1 is an automated testing and reporting application that combines the Pegasus Network Monitor and the Pegasus Application Monitor. Building on technology first developed for the company's Chariot product, Pegasus supplies an enterprise management solution for network and application monitoring.
Pegasus 2.1 earns our World Class Award for its ability to provide valuable network and application performance statistics; its integration with other Ganymede products; and its strong performance, documentation and reporting abilities.
Duke University, where these tests were conducted, has been working with Pegasus from the initial development stage. We have also worked extensively with Chariot, which made it easier for us to understand and effectively deploy Pegasus 2.1.
The heart of Pegasus 2.1 is the Pegasus server. Installed on an NT Server or Workstation, the server provides a centralized repository for network and application statistics. It collates performance measurements gathered by the Network Performance Endpoints (NPE) and initiates alerts when thresholds are exceeded or net performance degrades.
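The collate-and-alert pattern the server follows can be sketched in a few lines. This is purely an illustration of the concept, not Pegasus code; every class, field, and threshold value here is a hypothetical stand-in.

```python
# Minimal sketch of threshold-based alerting on collected measurements,
# analogous to what the Pegasus server does with endpoint statistics.
# All names and values are illustrative, not Pegasus APIs.
from dataclasses import dataclass, field

@dataclass
class Measurement:
    connection: str          # endpoint pair, e.g. "db-server<->chicago"
    throughput_kbps: float   # measured throughput for one test run

@dataclass
class Monitor:
    threshold_kbps: float    # alert when throughput falls below this
    history: list = field(default_factory=list)
    alerts: list = field(default_factory=list)

    def record(self, m: Measurement) -> None:
        """Collate a new measurement; raise an alert on degradation."""
        self.history.append(m)
        if m.throughput_kbps < self.threshold_kbps:
            self.alerts.append(
                f"ALERT: {m.connection} throughput "
                f"{m.throughput_kbps:.0f}Kbps is below "
                f"{m.throughput_kbps and self.threshold_kbps:.0f}Kbps"
            )

mon = Monitor(threshold_kbps=500)
mon.record(Measurement("db-server<->chicago", 850))   # healthy run
mon.record(Measurement("db-server<->chicago", 320))   # degraded run
print(mon.alerts)
```

In the real product, the endpoints push their measurements up to the central server, which plays the role of the `Monitor` object above across every defined connection.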
Just such a degradation of network throughput gave us the opportunity to put Pegasus to the test in a real-life scenario. A research project was serving up image files through an Open Database Connectivity (ODBC) database on our campus. For security reasons, the ODBC database was protected by a proxy server at a different site from the database server.
Soon after the project became operational, some participants notified administrators that performance at several sites had degraded. This was of particular concern because one of the sites was the project's largest user, located at the University of Chicago. The project director asked us to provide long-term throughput and response-time statistics.
Our enterprise network is divided into two distinct topologies: one heavily bridged, the other heavily routed. The database and proxy servers were located on the bridged section of the network. We installed NPEs on systems on both sides of the network and at the University of Chicago. We then defined a series of Pegasus connections -- pairs of endpoints -- and selected a test script for each connection that resembled the application traffic -- specifically, one that sent files of many megabytes.
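The test setup amounts to pairing endpoints into connections and assigning each pair a script that mimics the application's traffic. The snippet below sketches that bookkeeping; the endpoint names and the `large_file_send` script name are hypothetical placeholders, not Pegasus identifiers.

```python
# Illustrative sketch of defining Pegasus-style connections: each
# connection is a (source, destination) endpoint pair with a test
# script chosen to resemble the application traffic. Names are
# hypothetical, not actual Pegasus endpoint or script names.
from itertools import product

bridged_side = ["db-server", "proxy-server"]      # bridged topology
remote_side = ["routed-client", "uchicago-client"] # routed side and Chicago

# One connection per endpoint pair, each running a script that sends
# multimegabyte files, like the image-serving application under test.
connections = [
    {"pair": pair, "script": "large_file_send"}
    for pair in product(bridged_side, remote_side)
]

for c in connections:
    print(c["pair"], "->", c["script"])
```

Matching the script to the real traffic matters: a small-transaction script would have measured latency well but missed the sustained-throughput behavior of multimegabyte image transfers.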