A fine steed, indeed

Keeping an eye on the performance of all the segments and carriers on today's networks can stress even the most nimble network manager. And forget trying to put together a legible report for management. Is there any relief in sight?

We think so. We found Ganymede Software's Pegasus 2.1 provided invaluable information for solving a lingering performance problem involving systems located on our enterprise network and across the Internet in Chicago.

Pegasus 2.1 is an automated testing and reporting application that combines the Pegasus Network Monitor and the Pegasus Application Monitor. Building on technology first developed for the company's Chariot product, Pegasus supplies an enterprise management solution for network and application monitoring.

Pegasus 2.1 earns our World Class Award for its ability to provide valuable network and application performance statistics; its integration with other Ganymede products; and its strong performance, documentation and reporting abilities.

Duke University, where these tests were conducted, has been working with Pegasus from the initial development stage. We have also worked extensively with Chariot, which made it easier for us to understand and effectively deploy Pegasus 2.1.

The heart of Pegasus 2.1 is the Pegasus server. Installed on an NT Server or Workstation, the server provides a centralized repository for network and application statistics. It collates performance measurements gathered by the Network Performance Endpoints (NPE) and initiates alerts when thresholds are exceeded or net performance degrades.
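Pegasus' alerting internals are Ganymede's own, but the idea is straightforward. The Python sketch below is our own illustration of a threshold check over collected measurements; the class, fields and threshold value are invented for the example, not taken from the product:

    # Illustration only -- not Ganymede code. Flags connections whose
    # measured throughput drops below a configured floor.
    from dataclasses import dataclass

    @dataclass
    class Measurement:
        connection: str          # endpoint pair, e.g. "proxy <-> database"
        throughput_kbit: float   # measured throughput, Kbit/sec

    def check_thresholds(samples, floor_kbit=500.0):
        return [f"ALERT: {m.connection} at {m.throughput_kbit:.0f} Kbit/s "
                f"(floor {floor_kbit:.0f})"
                for m in samples if m.throughput_kbit < floor_kbit]

    for alert in check_thresholds([Measurement("campus <-> Chicago", 1400.0),
                                   Measurement("proxy <-> database", 180.0)]):
        print(alert)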

Just such a degradation of network throughput gave us the opportunity to put Pegasus to the test in a real-life scenario. A research project was serving up image files through an Open Database Connectivity (ODBC) database on our campus. For security reasons, the ODBC database was protected by a proxy server at a different site from the database server.

Soon after the project became operational, some participants notified administrators that performance at several sites had degraded. This was of particular concern because one of those sites, at the University of Chicago, was the project's largest user. The project director asked us to provide long-term throughput and response-time statistics.

Our enterprise network is divided into two distinct topologies: one heavily bridged, the other heavily routed. The database and proxy servers were located on the bridged section of the network. We installed NPEs on systems on both sides of the network and at the University of Chicago. We then defined a series of Pegasus connections -- pairs of endpoints -- and selected a test script for each connection that resembled the application traffic -- specifically, one that sent files of many megabytes.
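Pegasus' scripts and wire protocol are proprietary, but the measurement underneath a "large file transfer" script is easy to picture. This Python sketch of ours times a bulk TCP transfer between two endpoints (here collapsed onto the loopback interface; the 4M-byte payload and addresses are illustrative):

    # Illustration only: time a bulk TCP transfer and derive throughput,
    # roughly what a "send files of many megabytes" test script measures.
    import socket, threading, time

    PAYLOAD = b"x" * (4 * 1024 * 1024)   # 4M-byte "file" (illustrative size)

    def drain(server):
        conn, _ = server.accept()
        while conn.recv(65536):          # read until the sender signals EOF
            pass
        conn.close()                     # closing tells the sender we are done

    server = socket.socket()
    server.bind(("127.0.0.1", 0))        # loopback stands in for a remote endpoint
    server.listen(1)
    threading.Thread(target=drain, args=(server,), daemon=True).start()

    client = socket.socket()
    client.connect(server.getsockname())
    start = time.monotonic()
    client.sendall(PAYLOAD)
    client.shutdown(socket.SHUT_WR)      # signal end of data
    client.recv(1)                       # wait for the receiver to finish draining
    elapsed = time.monotonic() - start
    client.close()

    print(f"throughput: {len(PAYLOAD) * 8 / elapsed / 1000:,.0f} Kbit/s")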

We configured Pegasus to provide us with a variety of reports, including Network Throughput Status and Network Throughput Summary. After 24 hours of gathering data from scripts that ran every 15 minutes, we had our answer. Throughput and response time to the Chicago site were fine. The problem was the section of the network between the proxy server and the database server.

The proxy server sat on a different section of the net, and network congestion between it and the database server was dragging its performance down. Users weren't aware of this; all they noticed was the slowdown. We moved the proxy server and database server to the same location and put them on an Ethernet switch. No more congestion.

These tests also demonstrated the power of the Pegasus reporting engine. The Pegasus server provides Web-based access to all reports, making report generation a hands-free task. You can also generate reports "on the fly." The product offers a wide variety of reports with various levels of information, from "Executive Overview" to in-depth statistics.

Effective use of Pegasus requires a good understanding of network protocols and topology. For instance, we installed endpoints on several systems that connected to the campus network through an asymmetric digital subscriber line (ADSL) service. During testing, Pegasus reported throughput significantly higher than the ADSL lines could deliver. Other testing tools confirmed that the Pegasus figures were inaccurate.

Using the Pegasus script editor, we examined our test script. Out of the box, Pegasus loops through each script only once to minimize the impact on both systems and network. Our ADSL service provider uses frame relay to trunk traffic to our campus. The frame network was buffering data, and with only one iteration, Pegasus didn't have enough data to generate accurate results.
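The fix was to raise the iteration count. The arithmetic below (our numbers; the buffer size, line rate and transfer size are assumed for illustration) shows why a single short transfer looks faster than the line: the carrier's buffer absorbs the first burst at memory speed, and only the remainder drains at the true rate:

    # Illustrative arithmetic: why one iteration overstates ADSL throughput
    # when an upstream frame relay buffer absorbs the first burst.
    BUFFER_KB = 256     # carrier buffer (assumed)
    LINE_KBPS = 90      # true line rate, Kbyte/sec (assumed)
    XFER_KB   = 512     # data moved per script iteration (assumed)

    def apparent_kbps(iterations):
        total_kb = XFER_KB * iterations
        seconds = max(total_kb - BUFFER_KB, 0) / LINE_KBPS   # buffered bytes look "free"
        return total_kb / seconds if seconds else float("inf")

    for n in (1, 4, 16):
        print(f"{n:2d} iteration(s): apparent {apparent_kbps(n):.0f} Kbyte/s "
              f"(true rate {LINE_KBPS})")

With more iterations, the buffered head of the transfer becomes a smaller share of the total, and the apparent rate converges on the true line rate.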

The only other glitch we discovered involved the reporting features. While Pegasus lets you define throughput measurement units as either K byte/sec or K bit/sec, these settings apply only to automatically generated reports. On-the-fly reports revert to the Pegasus default of K byte/sec, and the product provides no way to set a global preference.
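The distinction matters because the two units differ by a factor of eight, as this one-liner shows:

    kbyte_per_sec = 125.0
    print(f"{kbyte_per_sec} Kbyte/s = {kbyte_per_sec * 8:.0f} Kbit/s")   # 1 byte = 8 bits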

Nevertheless, we found Pegasus 2.1 to be one of the few "can't do without" tools in our kit.

Scorecard

Category         Weight   Score
Functionality    40%      9
Administration   35%      10
Performance      10%      9
Installation     10%      10
Documentation    5%       10
Total            100%     9.5

Note: Individual category scores are based on a scale of 1-10. Percentages are the weight given each category in determining the total score. The World Class Award goes to products that earn 9.0 or above on our scorecard.

Net Results

Ganymede Software

1100 Perimeter Park Drive, Suite 104

Morrisville, N.C. 27560

(919) 469-0997

Web site

Pricing: Application Monitor and Network Monitor start at $25,000 each

Pros:

Low-impact network monitoring and testing

Results easily verified and accurate

Wide variety of reports, from "Executive Overview" to in-depth statistics

Cons:

Requires detailed understanding of the underlying network for effective use

Manually initiated reports revert to "stock" parameters

This story, "A fine steed, indeed" was originally published by Network World.
