Order Number: BA554-90022
OpenVMS Cluster availability, scalability, and system management benefits are highly dependent on configurations, applications, and operating environments. This guide provides suggestions and guidelines to help you maximize these benefits.
Revision/Update Information: This manual supersedes Guidelines for OpenVMS Cluster Configurations Version 7.3-2.
OpenVMS Version 8.4 for Integrity servers
OpenVMS Alpha Version 8.4
Palo Alto, California
© Copyright 2010 Hewlett-Packard Development Company, L.P.
Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license.
The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Intel and Itanium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.
Microsoft, Windows, and Windows NT are US registered trademarks of Microsoft Corporation.
OSF, OSF/1, and Motif are trademarks of The Open Group in the US and other countries.
Printed in the US
The HP OpenVMS documentation set is available on CD-ROM.
This document can help you design an OpenVMS Cluster configuration to suit your business, application, and computing needs.
It provides information to help you choose systems, interconnects, storage devices, and software. It can also help you combine these components to achieve high availability, scalability, performance, and ease of system management.
This manual applies only to combinations of Integrity server systems and Alpha systems. For combinations of Alpha and VAX systems, see the previous version of this manual.
This document is for people who purchase or recommend the purchase of OpenVMS Cluster products and for people who configure OpenVMS Cluster systems. It assumes a basic understanding of computers and OpenVMS Cluster concepts.
OpenVMS Cluster systems are designed to act as a single virtual system, even though they are made up of many components and features, as shown in Figure 1.
Figure 1 OpenVMS Cluster System Components and Features
Understanding the components and features of an OpenVMS Cluster configuration can help you to get the most out of your cluster. Table 1 shows how this guide is organized to explain these cluster concepts.
|Read...||Chapter Title||So that you can...|
|Chapter 1||Overview of OpenVMS Cluster System Configurations||Understand OpenVMS Cluster hardware, software, and general concepts|
|Chapter 2||Determining Business and Application Requirements||Learn to analyze your business and application needs and how they apply to your cluster|
|Chapter 3||Choosing OpenVMS Cluster Systems||Understand your computer requirements and make appropriate choices|
|Chapter 4||Choosing OpenVMS Cluster Interconnects||Learn about cluster interconnects and make appropriate choices|
|Chapter 5||Choosing OpenVMS Cluster Storage Subsystems||Learn to analyze your storage requirements and make appropriate choices|
|Chapter 6||Configuring Multiple Paths to SCSI and Fibre Channel Storage||Learn how to configure multiple paths to storage using Parallel SCSI or Fibre Channel interconnects, thereby increasing availability|
|Chapter 7||Configuring Fibre Channel as an OpenVMS Cluster Storage Interconnect||Learn how to configure an OpenVMS Cluster with Fibre Channel as a storage interconnect|
|Chapter 8||Configuring OpenVMS Clusters for Availability||Understand how to increase the availability of a cluster system|
|Chapter 9||Configuring OpenVMS Clusters for Scalability||Learn how to expand an OpenVMS Cluster system in all of its dimensions, while understanding the tradeoffs|
|Chapter 10||OpenVMS Cluster System Management Strategies||Understand and deal effectively with some of the issues involved in managing an OpenVMS Cluster system|
|Appendix A||SCSI as an OpenVMS Cluster Interconnect||Configure multiple hosts and storage on a single SCSI bus so that multiple hosts can share access to SCSI devices directly|
|Appendix B||MEMORY CHANNEL Technical Summary||Learn why, when, and how to use the MEMORY CHANNEL interconnect|
|Appendix C||Multiple-Site OpenVMS Clusters||Understand the benefits, the configuration options and requirements, and the management of multiple-site OpenVMS Cluster systems|
For additional information on the topics covered in this manual, see the following documents:
For additional information about HP OpenVMS products and services, see:
HP welcomes your comments on this manual. Please send your comments or suggestions to:
For information about how to order additional documentation, see:
VMScluster systems are now referred to as OpenVMS Cluster systems. Unless otherwise specified, references to OpenVMS Clusters or clusters in this document are synonymous with VMSclusters.
In this manual, every use of DECwindows and DECwindows Motif refers to DECwindows Motif for OpenVMS software.
The following conventions may be used in this manual:
|Ctrl/ x||A sequence such as Ctrl/ x indicates that you must hold down the key labeled Ctrl while you press another key or a pointing device button.|
|PF1 x||A sequence such as PF1 x indicates that you must first press and release the key labeled PF1 and then press and release another key or a pointing device button.|
|Key name||In examples, a key name enclosed in a box indicates that you press a key on the keyboard. (In text, a key name is not enclosed in a box.) In the HTML version of this document, this convention appears as brackets, rather than a box.|
|...||A horizontal ellipsis in examples indicates one of the following possibilities: additional optional arguments in a statement have been omitted; the preceding item or items can be repeated one or more times; or additional parameters, values, or other information can be entered.|
|bold type||Bold type represents the introduction of a new term. It also represents the name of an argument, an attribute, or a reason.|
|italic type||Italic type indicates important information, complete titles of manuals, or variables. Variables include information that varies in system output (Internal error number), in command lines (/PRODUCER= name), and in command parameters in text (where dd represents the predefined code for the device type).|
|UPPERCASE TYPE||Uppercase type indicates a command, the name of a routine, the name of a file, or the abbreviation for a system privilege.|
|Example||This typeface indicates code examples, command examples, and interactive screen displays. In text, this type also identifies URLs, UNIX commands and pathnames, PC-based commands and folders, and certain elements of the C programming language.|
This chapter contains information about OpenVMS Cluster hardware and software components, as well as general configuration rules.
1.1 OpenVMS Cluster Configurations
An OpenVMS Cluster system is a group of OpenVMS systems, storage subsystems, interconnects, and software that work together as one virtual system. An OpenVMS Cluster system can be homogeneous; that is, all systems are of the same architecture (Integrity servers based on the Intel Itanium architecture, Alpha, or VAX), all running OpenVMS. An OpenVMS Cluster can also be heterogeneous; that is, a combination of two architectures with all systems running OpenVMS. The two valid combinations are Alpha and VAX, or Alpha and Integrity server systems.
In an OpenVMS Cluster system, each system:
In addition, an OpenVMS Cluster system is managed as a single entity.
In a heterogeneous cluster, only one architecture is supported per system disk and per system boot block.
Table 1-1 shows the benefits that an OpenVMS Cluster system offers.
|Resource sharing||Multiple systems can access the same storage devices, so that users can share files clusterwide. You can also distribute applications, batch, and print-job processing across multiple systems. Jobs that access shared resources can execute on any system.|
|Availability||Data and applications remain available during scheduled or unscheduled downtime of individual systems. A variety of configurations provide many levels of availability up to and including disaster-tolerant operation.|
|Flexibility||OpenVMS Cluster computing environments offer compatible hardware and software across a wide price and performance range.|
|Scalability||You can add processing and storage resources without disturbing the rest of the system. The full range of systems, from high-end multiprocessor systems to smaller workstations, can be interconnected and easily reconfigured to meet growing needs. You control the level of performance and availability as you expand.|
|Ease of management||OpenVMS Cluster management is efficient and secure. Because you manage an OpenVMS Cluster as a single system, many tasks need to be performed only once. OpenVMS Clusters automatically balance user, batch, and print work loads.|
|Open systems||Adherence to IEEE, POSIX, OSF/1, Motif, OSF DCE, ANSI SQL, and TCP/IP standards provides OpenVMS Cluster systems with application portability and interoperability.|
An OpenVMS Cluster system comprises many hardware components, such as systems, interconnects, adapters, storage subsystems, and peripheral devices. Table 1-2 describes these components and provides examples. See the Software Product Description for the complete list of supported components.
A cabinet that contains one or more processors, memory, and input/output (I/O) adapters that act as a single processing body.
Reference: See Chapter 3 for more information about OpenVMS systems.
|OpenVMS Cluster systems can contain any supported Integrity server, Alpha, or VAX system.|
The hardware connection between OpenVMS Cluster nodes over which the nodes communicate.
Reference: See Chapter 4 for more information about OpenVMS Cluster interconnects.
An OpenVMS Cluster system can have one or more of the following interconnects:
Devices on which data is stored and the optional controllers that manage the devices.
Reference: See Chapter 5 for more information about OpenVMS storage subsystems.
Storage subsystems can include:
Devices that connect nodes in an OpenVMS Cluster to interconnects.
Reference: See Chapter 4 for more information about adapters.
The adapters used on Peripheral Component Interconnect (PCI) and PCI-Express (PCIe) systems include the following:
OpenVMS Cluster system software can be divided into the following types:
The operating system manages proper operation of hardware and software components and resources.
Table 1-3 describes the operating system components necessary for OpenVMS Cluster operations. All of these components are enabled by an OpenVMS operating system license with an OpenVMS Cluster license.
|Record Management Services (RMS) and OpenVMS file system||Provide shared read and write access to files on disks and tapes in an OpenVMS Cluster environment.|
|Clusterwide process services||Enable clusterwide operation of OpenVMS commands, such as SHOW SYSTEM and SHOW USERS, as well as the ability to create and delete processes clusterwide.|
|Distributed Lock Manager||Synchronizes access by many users to shared resources.|
|Distributed Job Controller||Enables clusterwide sharing of batch and print queues, which optimizes the use of these resources.|
|Connection Manager||Controls the membership and quorum of the OpenVMS Cluster members.|
|SCS (System Communications Services)||Implements OpenVMS Cluster communications between nodes using the OpenVMS System Communications Architecture (SCA).|
|MSCP server||Makes locally connected disks to which it has direct access available to other systems in the OpenVMS Cluster.|
|TMSCP server||Makes locally connected tapes to which it has direct access available to other systems in the OpenVMS Cluster.|
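The quorum scheme maintained by the Connection Manager can be illustrated with a short sketch. OpenVMS derives the cluster quorum from the EXPECTED_VOTES system parameter as (EXPECTED_VOTES + 2) / 2, truncated to an integer, and suspends cluster activity if the votes contributed by current members fall below that value. The following Python fragment is only an illustration of the arithmetic, not HP code; the actual computation is performed inside the Connection Manager.

```python
# Illustrative sketch (not HP code): how cluster quorum is derived from
# the EXPECTED_VOTES system parameter in an OpenVMS Cluster.
# Quorum = (EXPECTED_VOTES + 2) // 2, using integer (truncating) division.

def quorum(expected_votes: int) -> int:
    """Return the quorum value for a given EXPECTED_VOTES setting."""
    return (expected_votes + 2) // 2

def has_quorum(current_votes: int, expected_votes: int) -> bool:
    """True if the current members' votes are enough to keep the cluster active."""
    return current_votes >= quorum(expected_votes)

# A three-node cluster in which each node contributes one vote:
print(quorum(3))         # quorum is 2
print(has_quorum(2, 3))  # two surviving voting members keep the cluster active
print(has_quorum(1, 3))  # a lone surviving member must suspend activity
```

The sketch shows why a cluster of three voting members survives the loss of one member but not two: quorum stays at 2, so a single remaining vote is insufficient.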
Figure 1-1 shows how the hardware and operating system components fit together in a typical OpenVMS Cluster system.
Figure 1-1 Hardware and Operating System Components
Not all interconnects are supported on all three architectures of OpenVMS. The CI, DSSI, and FDDI interconnects are supported on Alpha and VAX systems. Memory Channel and ATM interconnects are supported only on Alpha systems.