HP Serviceguard has over 80,000 licenses implemented worldwide. It allows up to 16 HP servers, called nodes, to be configured in a cluster. A node in a Serviceguard cluster can also be a physical partition, called an nPartition, or a virtual partition, called a vPar. Each node should have redundant LAN connections: at a minimum, one LAN connection used exclusively to carry heartbeat messages, and another that provides redundancy for heartbeat messages and can also be used to connect clients to the cluster.
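A redundant heartbeat layout like the one described above is declared per node in the cluster's ASCII configuration file. The node name, interface names, and addresses below are hypothetical; a minimal sketch:

```
NODE_NAME node1
  NETWORK_INTERFACE lan0
  HEARTBEAT_IP 192.168.1.1      # dedicated heartbeat LAN
  NETWORK_INTERFACE lan1
  HEARTBEAT_IP 10.10.1.1        # carries heartbeat and client traffic
```

Listing an interface with a HEARTBEAT_IP entry tells Serviceguard to send cluster heartbeats over it; giving every node at least two such interfaces removes the heartbeat network as a single point of failure.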
Serviceguard allows you to create high availability clusters of HP or HP Integrity servers or a mixture of both; see the release notes for your version for details and restrictions. A high availability computer system allows application services to continue in spite of a hardware or software failure.
Highly available systems protect users from software failures as well as from failure of a system processing unit (SPU), disk, or local area network (LAN) component. In the event that one component fails, the redundant component takes over. Serviceguard and other high availability subsystems coordinate the transfer between components. A Serviceguard cluster is a networked grouping of HP or HP Integrity servers (or both), known as nodes, having sufficient redundancy of software and hardware that a single point of failure will not significantly disrupt service.
A package groups application services (individual HP-UX processes) together. There are three types of packages: failover packages, system multi-node packages, and multi-node packages.
The typical high availability package is a failover package. It is usually configured to run on several nodes in the cluster, but runs on only one at a time. If a service, node, network, or other package resource fails on the node where the package is running, Serviceguard can automatically transfer control of the package to another cluster node, allowing services to remain available with minimal interruption. There are also packages that run on several cluster nodes at once and do not fail over.
These are called system multi-node packages and multi-node packages. A system multi-node package must run on all nodes that are active in the cluster. If it fails on one active node, that node halts. System multi-node packages are supported only for HP-supplied applications. A multi-node package can be configured to run on one or more cluster nodes. It is considered UP as long as it is running on any of its configured nodes. Each package has a separate group of disks associated with it, containing data needed by the package's applications, and a mirror copy of the data.
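The package type is declared in the package configuration file. In the modular package format, a minimal failover package definition might look like the following sketch (the package and node names are hypothetical):

```
package_name    pkg1
package_type    failover        # or multi_node / system_multi_node
node_name       node1           # primary node
node_name       node2           # adoptive node
```

The order of the node_name entries matters for a failover package: the first listed node is the primary node, and the remaining entries are adoptive nodes in order of preference.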
Note that both nodes are physically connected to both groups of mirrored disks. In this example, however, only one node at a time may access the data for a given group of disks. In the figure, node 1 is shown with exclusive access to the top two disks (solid line), and node 2 is shown as connected without access to the top disks (dotted line). Similarly, node 2 is shown with exclusive access to the bottom two disks (solid line), and node 1 is shown as connected without access to the bottom disks (dotted line).
Mirror copies of data provide redundancy in case of disk failures. In addition, a total of four data buses are shown for the disks that are connected to node 1 and node 2. Note that the network hardware is cabled to provide redundant LAN interfaces on each node. Any host system running in a Serviceguard cluster is called an active node. Under normal conditions, a fully operating Serviceguard cluster monitors the health of the cluster's components on all its active nodes.
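On HP-UX, the mirror copies described above are typically built with LVM and the MirrorDisk/UX product. The device files and volume group name below are illustrative, not prescriptive; a sketch of adding a mirror to an existing logical volume:

```shell
# Add a second physical volume to the volume group (device names are examples)
pvcreate /dev/rdsk/c1t2d0
vgextend /dev/vg01 /dev/dsk/c1t2d0

# Create one mirror copy of an existing logical volume (requires MirrorDisk/UX)
lvextend -m 1 /dev/vg01/lvol1 /dev/dsk/c1t2d0
```

With one mirror copy on a disk attached to a separate bus, a single disk or bus failure leaves the data accessible through the surviving copy.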
Most Serviceguard packages are failover packages. When you configure a failover package, you specify which active node will be the primary node, where the package will start, and one or more other nodes, called adoptive nodes, that can also run the package. If the package fails on the primary node, Serviceguard transfers it to an adoptive node; after this transfer, the failover package typically remains on the adoptive node as long as the adoptive node continues running.
If you wish, however, you can configure the package to return to its primary node as soon as the primary node comes back online. Alternatively, you may manually transfer control of the package back to the primary node at the appropriate time. In order to remove all single points of failure from the cluster, you should provide as many separate power circuits as needed to prevent a single point of failure of your nodes, disks and disk mirrors.
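The choice between automatic and manual failback described above is made in the package configuration file. A sketch using the modular-format parameters (the comments reflect the behavior described in the text):

```
failover_policy   configured_node   # start on the first available node, in node_name order
failback_policy   automatic         # return to the primary node when it comes back online
                                    # use 'manual' to transfer the package back by hand
```

Automatic failback causes a second service interruption when the primary node rejoins, so manual failback is often chosen so the transfer can be scheduled for a quiet period.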
Each power circuit should be protected by an uninterruptible power source. Serviceguard also works with the Event Monitoring Service (EMS), which lets you monitor and detect failures that are not directly handled by Serviceguard. HP recommends such products; in conjunction with Serviceguard they provide the highest degree of availability.
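An EMS-monitored resource can be made a package dependency in the package configuration file, so that Serviceguard fails the package over when the resource leaves its "up" state. The resource path and polling interval below are illustrative assumptions; a sketch using the legacy-format parameters:

```
RESOURCE_NAME              /net/interfaces/lan/status/lan0
RESOURCE_POLLING_INTERVAL  60
RESOURCE_UP_VALUE          = up
```

Serviceguard polls the named EMS resource at the given interval and treats any value other than the configured up-value as a package resource failure.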
I will just show the configuration steps. I am not going to discuss theoretical concepts such as why someone would use a cluster or what a single point of failure actually is; there are plenty of discussions of those topics on the internet. During the configuration I felt the lack of a well-documented, step-by-step configuration guide.
The following sequence is very important. If the RAC volume groups are unknown at this time, however, the cluster should be configured minimally with a lock volume group. By now, the cluster lock volume group should have been created. The Oracle kernel now handles global cache management transparently. The necessary changes have to be made in this file for the cluster: for example, change the ClusterName, and adjust the heartbeat interval and node timeout to prevent unexpected failovers due to GCM traffic.
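On the Serviceguard side, the cluster lock volume group and the timing parameters mentioned above appear in the cluster ASCII configuration file. The names and values below are illustrative only (legacy-format timing values are in microseconds); a sketch:

```
CLUSTER_NAME           cluster1
FIRST_CLUSTER_LOCK_VG  /dev/vglock
HEARTBEAT_INTERVAL     1000000    # 1 second between heartbeats
NODE_TIMEOUT           4000000    # 4 seconds; raising this reduces spurious failovers
```

The lock volume group acts as a tie-breaker when exactly half the nodes can communicate, which is why a two-node cluster must be configured with one. Increasing NODE_TIMEOUT trades slower failure detection for tolerance of transient network load, such as heavy cache-management traffic.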