Performance monitoring and capacity planning

Unix Insider –

Performance monitoring and capacity planning are infamous subjects among managers of mainframe-based data centers. In brief, an Information Technology manager must decide how much computing capacity an organization needs and when it is needed, and must then explain why the "how much" estimate keeps changing.

Managers of distributed, client/server environments wrestle with this problem too, but they must also endure the added complexity of dealing with multiple sites.

In supporting the functions of capacity planning and performance monitoring, IT organizations are really assessing the service levels they provide, which are driven by customer expectations:

  • What service will IT provide?

    The customers define this.

  • What service can a given configuration provide?

    The IT organization answers this; the answers become the goals for user performance, factoring in budgetary restrictions.

As those associated with mainframe data centers know, this is not easy, and even old hands guess wrong. The difficulties arise for several reasons:

  • Predicting the future is hard.

    We are dealing with future hardware and software and how this vaporware may work together, future users and their requirements, and the organization's future mission. (Pop quiz: How much CPU capacity did you plan for your public Web server this year? Oh, you didn't even plan a Web server?)

  • Users can't predict the future either.

    Smart people always find new ways of using all of the computing resources available. (Pop quiz: Ask your marketing manager how many Web pages your organization will publish next April.)

  • Distributed client/server environments are more complex than mainframe data centers.

    There is now a wider variety of potential configurations, and each new generation of hardware and software will introduce new capabilities and costs. (Pop quiz: Ask your Webmaster what Java- and VRML-enabled Web pages mean to your organization's network load next April.)

  • The rate of change in new technologies continues to accelerate.

    Very often, by the time a person understands and feels comfortable with a new technology, it is obsolete. Both as institutions and as individuals, IT staffs must keep abreast of new technologies or risk being left behind. (Pop quiz: Did your IT department spearhead your Web server development and deployment?)

  • Organizations are complex.

    Complexity breeds specialization. Experts in one field are rarely experts in another. When planning computer and network use, it helps to speak the language of your constituency.

While some view capacity planning as an art form, you can approach it scientifically. Keep in mind that it is easier to show that a configuration will not support a specific service level than to prove that it will. For example, it is easy to determine that a system with a single disk drive cannot achieve a random-access throughput of 130 accesses per second if that one disk can handle only 65. However, a system with two disks (each of which can handle 65 accesses per second) may or may not handle the same load, because the bottleneck may not be in the disk subsystem.
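To make the arithmetic concrete, here is a minimal sketch in Python. The 65-accesses-per-second figure comes from the example above; everything else is assumed for illustration.

    # Ruling a configuration out is easy; proving it adequate is not.
    def disk_ceiling(num_disks, per_disk_rate=65):
        """Upper bound on random accesses/sec the disk subsystem can deliver."""
        return num_disks * per_disk_rate

    def provably_inadequate(required_rate, num_disks, per_disk_rate=65):
        """True if the disk subsystem alone shows the service level is unreachable."""
        return disk_ceiling(num_disks, per_disk_rate) < required_rate

    required = 130  # random accesses/sec demanded by the service level

    print(provably_inadequate(required, num_disks=1))  # True: one disk tops out at 65
    print(provably_inadequate(required, num_disks=2))  # False: two disks *might* cope,
                                                       # but the bottleneck may lie in
                                                       # the bus, CPU, or elsewhere

Note the asymmetry: a True result settles the question, while a False result only says the disks are not the proof of failure.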

Also, actual use follows Parkinson's Law and will eventually consume all available resources. (This is most distressing when customers devour an over-configured system by piling on unplanned tasks.)

To properly analyze a distributed environment, view it as a series of connected components (computers, peripherals, software, networks, etc.), and break these down into their individual, measurable parts. For example, client machines are composed of CPUs, memory, software, busses, and peripheral devices. Each of these can be monitored for its effect on the configuration's total performance. As the old saying about a chain being as strong as its weakest link suggests, a distributed environment's performance is often determined by its weakest component.

A distributed environment has to be viewed both from a low-level perspective, examining each system and subsystem, and from a high-level perspective, where the entire network of systems is considered. Where there used to be a data center made up of a single, large mainframe, there is now an entire network (or networks) making up the data center. Hence our phrase, "The network is the data center." This view is important both in preparing to install new systems and in tuning a system after it is installed. Most performance and tuning efforts are largely after-the-fact analyses of bottlenecks encountered once systems are installed. As for tuning a system (or a network), the process is really one of finding the weakest component and making it stronger, thus removing that bottleneck.
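As a rough illustration of this weakest-link view, the sketch below simply reports the busiest component as the likely bottleneck. The component names and utilization figures are hypothetical; real numbers would come from tools such as sar, iostat, vmstat, and a network analyzer.

    # Break the environment into measurable components and find the busiest one.
    utilization = {
        "server CPU":      62,  # percent busy
        "server memory":   55,  # percent of physical memory in use
        "disk subsystem":  91,  # percent busy on the hottest spindle
        "network segment": 38,  # percent of available bandwidth
        "client CPU":      24,
    }

    bottleneck, level = max(utilization.items(), key=lambda item: item[1])
    print("Likely bottleneck: %s at %d%% utilization" % (bottleneck, level))

    # Tuning means strengthening (or spreading the load on) that component and then
    # re-measuring, since relieving one bottleneck usually exposes the next.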

Configuring and capacity planning

This section provides information about how to configure a system for a database management environment. The concentration is on load characteristics and usage patterns, and on how that usage interacts with the machine architecture to affect end-user performance.

The information comes from several years of experience and research at Sun Microsystems and is hardly the final word. The authors acknowledge the research and testing by all of the IT staff at Sun Microsystems. We especially want to thank Rich Stehn for his research in this area. His writings form the basis of this column.


Configuring DBMS servers

Probably the most common single class of applications for client/server systems is database management systems (DBMSs). There are several popular DBMSs, each with quite different characteristics. Because of these differences, the following discussion is general. While it is almost impossible to determine exactly how many users a system will support, using a few basic strategies can help you make an informed prediction.

All DBMSs are different

Both database-oriented applications and DBMSs themselves vary widely in nature, and cannot be pegged as purely transaction- or data-intensive.

While there are several fundamental database architectures available today, most Unix users settle on the relational model from Oracle, Informix, Sybase, or Ingres.

Even with most systems operating under the same broad conceptual framework, there are architectural differences between the products. The most significant is the implementation of the DBMS itself, the two major classes being "2N" and "multithreaded."

The older "2N" implementations use a process on the server for each client, even if the client is running on a physically different system. Each client application therefore uses two processes, one on the server and one on the client (hence 2N processes for N clients).

Multithreaded applications are designed to avoid the extra expense of managing so many processes, and typically have a cluster of processes (from one to five) running on the server. These processes are multithreaded internally so that they service requests from multiple clients. Most major DBMS vendors use a multithreaded implementation, or are moving in that direction.
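A back-of-the-envelope sketch of the difference shows why the process count matters as the number of clients grows. The figures are assumptions for illustration, not vendor measurements.

    # Server-side process counts under the two architectures.
    def server_processes_2n(clients):
        return clients        # one dedicated server process per client

    def server_processes_multithreaded(clients, cluster=5):
        return cluster        # small fixed pool; threads fan out to clients

    for clients in (50, 200, 1000):
        print(clients,
              server_processes_2n(clients),
              server_processes_multithreaded(clients))

    # At 1,000 clients the 2N model carries roughly 1,000 server processes (plus the
    # 1,000 client processes, hence "2N"), versus a handful when multithreaded; that
    # gap is the process-management overhead described above.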

Given the diversity of applications, DBMS implementations, workloads, users, and requirements, any vendor willing to provide a definite answer to "How many users will this system support?" is either winging it or has made a detailed, in-depth analysis of an actual installation just like yours. It's easy to tell which type of answer you're getting.

Application load characterization

Characterizing the load generated by a database application is impossible without detailed information about what the application accomplishes. Even with such information, many assumptions must be made about how the DBMS itself will access data, how effective the DBMS's disk cache might be on specific transactions, or even what the mix of transactions might be. For now, the most helpful characterization of the load is a rough classification as "light," "medium," "heavy," or "very heavy." The heaviest common workloads are those associated with very large applications such as Oracle Financials.

The primary application class that falls into the "very heavy" category is decision support. Because of the diverse nature of decision support queries, it is very difficult for database administrators or the DBMS itself to provide extensive, useful optimization. Decision support queries stress the underlying system because they frequently involve multi-way joins.

Configuration guidelines for DBMS servers

Although configuration guidelines can (and will) be provided, their usefulness is drastically affected by application considerations. The efficiencies of the application and DBMS are much more important than the host machine configuration. There are literally hundreds of examples of small changes in applications or database schema making 100- or 1,000-fold (or more) improvements in performance.

For instance, a database select statement that requests one specific record may cause the DBMS to read one record from the table or every record in the table, depending on whether or not the table is indexed by the lookup key. Often a table must be indexed by more than one key (or set of keys) to accommodate the different access patterns generated by the applications. Careful indexing can have dramatic effects on total system performance. After systems are installed, it is worthwhile to monitor them to determine whether changes should be made to the database (even for internally developed or off-the-shelf third-party applications). It is often possible to improve the performance of an application by reorganizing the database, without changing anything in the application's source code.
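To illustrate how much data the two access paths touch, here is a small sketch in Python rather than SQL; the table contents and lookup key are invented for the example.

    # A lookup against an indexed key touches one record; the same request without
    # an index degenerates into a scan of every record.
    table = [{"id": i, "name": "cust%d" % i} for i in range(100000)]

    def select_without_index(key):
        # Full table scan: examine rows until the match is found.
        return next(row for row in table if row["id"] == key)

    # "Index" on the lookup key: one probe instead of up to 100,000 comparisons.
    index_on_id = {row["id"]: row for row in table}

    def select_with_index(key):
        return index_on_id[key]

    print(select_without_index(99999) == select_with_index(99999))  # True, but the
    # unindexed path made roughly 100,000 comparisons to get there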

Another consideration that receives little notice but often affects performance is internal lock contention. The DBMS must lock data against conflicting simultaneous access, and any other process that requires access to that data will be unable to proceed until the lock is released. A system may perform poorly simply because of an inefficient locking strategy.
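As a toy illustration (this is not how a DBMS implements locking, only a demonstration of how a coarse lock serializes concurrent work), consider:

    import threading
    import time

    def update_row(row_lock):
        with row_lock:        # conflicting access must wait here
            time.sleep(0.05)  # stand-in for the actual data modification

    def run(locks):
        start = time.time()
        threads = [threading.Thread(target=update_row, args=(lk,)) for lk in locks]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return time.time() - start

    coarse = threading.Lock()
    print("one coarse lock:", run([coarse] * 10))                          # about 0.5 s
    print("per-row locks:  ", run([threading.Lock() for _ in range(10)]))  # about 0.05 s

Ten updates queued behind a single lock take roughly ten times as long as ten updates protected by separate locks, which is the kind of slowdown an inefficient locking strategy produces.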

DBMSs offer many tunable parameters, some of which have dramatic effects on performance. The recommendations below assume applications and DBMSs that have already been tuned.

Configuration checklist

The following questions summarize the process of arriving at an accurate DBMS configuration:

  • Which DBMS is being used? Is it a "2N" or multithreaded implementation?

  • What is the raw size of the database?

  • What transaction processing monitors are being used (if any)?

  • Is it feasible to use a client/server configuration?

  • How many users will be active simultaneously?

  • What is the basic or dominant access pattern? Which queries dominate the load?

  • What is the indexing strategy? Which queries will be optimized by indexing (converted from serial access to random access) and which queries are required to be carried out as full or partial table scans?

  • Are there sufficient disk drives and SCSI host adapters configured to accommodate the anticipated access load? Are there separate disks for DBMS logs and archives? (A rough sizing sketch follows this checklist.)

  • Is there sufficient disk storage capacity to accommodate the raw data, the indexes, the temporary table spaces, as well as room for data growth?

  • Are sufficient processors configured to handle the anticipated users?

  • Is a dedicated network between client and server systems required?

  • Is the anticipated backup policy consistent with the type, number, and SCSI location of the backup devices?
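The disk, storage, and processor questions in the checklist lend themselves to rough arithmetic. The sketch below is only a starting point; every constant in it (accesses per disk, index and temporary-space overhead, users per processor) is an assumption to be replaced with measured figures or vendor guidance for the actual DBMS and hardware.

    def disks_needed(peak_accesses_per_sec, per_disk_rate=65):
        """Spindles required to sustain the anticipated random-access load."""
        return -(-peak_accesses_per_sec // per_disk_rate)  # ceiling division

    def storage_needed_gb(raw_gb, index_factor=0.5, temp_factor=0.3, growth_factor=0.5):
        """Raw data plus indexes, temporary table space, and headroom for growth."""
        return raw_gb * (1 + index_factor + temp_factor) * (1 + growth_factor)

    def processors_needed(active_users, users_per_cpu=30):
        """Very rough CPU count for a given number of simultaneously active users."""
        return -(-active_users // users_per_cpu)

    print(disks_needed(520))       # 8 spindles for 520 accesses/sec at 65 per disk
    print(storage_needed_gb(20))   # 54 GB for a 20-GB raw database under these factors
    print(processors_needed(150))  # 5 processors for 150 active users at 30 per CPU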

Client/server considerations

Most DBMS applications consist of three logical parts:

  • A user interface
  • Application processing
  • A back-end DBMS service provider.

The user interface and application processing are usually combined in a single binary, although some elaborate applications now provide multithreaded front-end processing decoupled from presentation services. Often, the back-end DBMS server is run on a dedicated computer to minimize contention for resources.

When it is feasible to do so, using the client/server model to separate the front-end processing and presentation services from the DBMS provider usually provides a substantial improvement in total system performance. This lets the critical resource, the DBMS provider, operate unimpeded on its host system. This is particularly true for systems dominated by presentation activity, such as driving hundreds or thousands of terminals.

The opposite of the client/server model is "timesharing." Timesharing usually delivers higher performance only when the presentation requirements are very light or when the concurrent user load is light. Applications with forms-based presentation are rarely, if ever, light.

