Computerworld –
Datacenter Server wants to be your enterprise operating system -- or at least Microsoft Corp. wants it to be.
With features like 32-processor symmetric multiprocessing support, load balancing and clustering capabilities, the newest version of Windows looks good on paper. But it takes more than good specifications to gain entry to the data center.
Computerworld's Robert L. Mitchell recently spoke with Peter Conway, director of Windows enterprise server marketing at Microsoft. They discussed how Datacenter stacks up against the competition and how it proposes to meet the demands of data center managers.
Q: Why should corporate IT bring Datacenter Server into the data center?
A: We really want our customers to understand what a serious commitment Microsoft is making to them through this product. We're making a significant investment in the relationship with those customers and our joint partners. We understand that it's the combination that makes this happen. Making a significant business commitment . . . is very different [from] what Microsoft has done previously. It's a huge sea change for us.
Q: In the data center, IT managers say they're concerned with maintaining stable applications over a long period of time. Traditionally, Microsoft has forced people to re-evaluate their applications and system hardware every time the company comes out with a new operating system revision. Will this happen with Datacenter Server?
A: No. We will continue to encourage our customers to move forward and upgrade. We, of course, want to be the most competitive platform, but customers have requirements to stay on versions for extended periods of time, and we've made allowances for that.
For example, our [reseller] partners have an extended window of support . . . for the base platform. They have a minimum five-year support requirement. For those relationships that [go] beyond that five-year window, we will work against the business needs of [those] customers.
Q: Who are the early adopters of Datacenter Server, and what types of applications are they running?
A: They are in very aggressive businesses. The financial sector, ASPs [application service providers], dot-coms. I am working with about 30 [customers], and each is looking for cost advantages and competitive advantages relative to their competition. Most are looking for very fast deployments and lots of flexibility because they're unsure of how their businesses will evolve.
Q: Microsoft says it has made special efforts to ensure that Datacenter Server is reliable, yet at least some IT managers may be hesitant to bring Windows into the data center, citing a legacy of reliability issues surrounding Windows. What is Microsoft doing to change that perception?
A: We are delivering a combination of product, services, support and partners to execute on the promise of reliability. Your question actually reflects our problem, in that Windows is prevalent in the industry. So, as a result, even if we have 100 customers running at over [99.99%] business availability, we still get hurt by the one customer that's not running at that level.
Before we stand up and pound our chest and say how reliable we are, our customers have to say it first. We will make some noise once we have customers speaking on our behalf. We need to let them speak for us.
Q: What's the compelling business case for building a new data center application on top of Datacenter Server?
A: The compelling value is time to deployment. The availability of solutions enables our customers to get to deployment a heck of a lot faster on the Datacenter platform than on other platforms.
Q: Is there a business case for migrating applications to Datacenter Server from midrange systems from companies like Sun Microsystems and IBM?
A: These days, with the integration of the legacy environment with new classes of Web-based applications, there is a massive amount of innovation going on. Many customers in a proprietary environment lose flexibility over time in that they're tied to a particular [hardware] supplier. Then they get involved in a forklift remove-and-replace of the hardware, and they've got to port the application to another environment.
Datacenter offers high degrees of reliability and scalability and [the] whole choice and flexibility changes. The scalability is of great interest from a features standpoint, because [customers] can run much larger database applications with some guaranteed business-level commitments from the [resellers] to the customers.
Q: But migrating from, say, Solaris to Datacenter would require a forklift upgrade and a port of the applications as well, wouldn't it?
A: Not necessarily. We have an interoperability framework. We are serious about protecting our customers' existing investments and have a variety of products which offer interoperability and migration of applications between Unix and the Datacenter environments at the network, application, data and management levels.
For example, Microsoft Services for Unix provides a set of interoperability components that make it easy to integrate Windows 2000 operating systems into existing Unix-based environments. Microsoft Interix provides a robust environment that allows organizations to run Unix-based applications and scripts on Windows 2000 directly.
Q: Datacenter can support symmetric multiprocessing with as many as 32 processors. Some Unix systems already support 64 processors or more. Do you see this as a competitive disadvantage for Microsoft?
A: Not at all. It's not how many processors you support but how well you support your customer's complete requirements. I'd rather have 32 fast industry-standard processors than 64 proprietary slow ones. I think I have the competitive advantage. Feedback from customers, so far, has been very positive.
Q: In the data center, administrators may use operating system partitioning to run multiple applications on the same machine. Some systems allow dynamic reconfiguration of processor and memory resources between those partitions without rebooting. Does Datacenter Server support this?
A: Datacenter Server includes the Process Control tool, a new job object management tool that allows the administrator to dynamically control the allocation of system processors and memory to specific processes and process groups in each operating system image, without rebooting the operating system.
Our partners who manufacture and distribute the server hardware platforms have differing degrees of hardware partitioning, which allows multiple instances of Windows 2000 Datacenter Server to run on a single instance of the server platform.
Q: Traditionally, the most mission-critical, high-end applications in the data center haven't run on Microsoft products. And in some cases, Microsoft has been locked out of the midrange business in the data center, as well, because companies don't want the hassle of integrating and supporting an additional operating system. How can Datacenter Server accommodate that?
A: We've invested significantly in interoperability products and technology at multiple levels.
At the lowest level, we chose TCP/IP as the base connectivity. We support [Microsoft Message Queuing], the [IBM] MQSeries protocols, [Network File System] protocols and NetWare file and print protocols. We support SNA Interconnect and hosting up through LU6.2 terminal sessions.
We think we're the best application platform out there. You can build it faster on our platform than you can on an alternative architecture.
That doesn't even begin to address the third parties. We expect that all the leading branded databases will be on Datacenter. So we think we have a pretty compelling offering to go into the data center.
Q: What changes will Whistler, the pending new version of Datacenter, bring to Datacenter Server this year?
A: There will be some modest additions in terms of features and capabilities, but the data center is a place where continuity is almost more important than lots of new features all the time. So I wouldn't want to set an expectation that it's a major server release with lots of new features.