Creating initial resource groups

Goal

Setting up useful resource groups is essential to making full and efficient use of cluster capabilities. Following the steps below, you create resource groups that set aside specific hosts for management duties and divide the remainder of your hosts based on maximum memory.

Description

You have created your consumer tree based on your business needs and added most of your hosts to your cluster, but you have not yet set up an extensive resource plan, created new resource groups, or modified the default resource groups. You are preparing to customize the plan for your applications and want to divide your hosts by memory, because you expect to run varied workloads: some require not less than 1000 MB of maximum memory, while others require very little memory at all. You want to ensure the following for your workload:
  • They have access to hosts with the necessary amount of maximum memory

  • They have no need to wait for appropriate hosts to become available

  • The workload that requires very little memory does not get hosts with a large maximum memory

At a glance

  1. Plan your groups

  2. Check the ManagementHosts resource group

  3. Review and modify the master host candidate list

  4. Create new dynamic resource groups

  5. Create a new resource group by host names

  6. Assign the new resource groups to a consumer

  7. Modify your resource plan for new resource groups

  8. How to grow: Advanced resource groups

Plan your groups

Resource groups overview

Resource groups are logical groups of hosts. Resource groups provide a simple way of organizing and grouping resources (hosts) for convenience; instead of creating policies for individual resources, you can create and apply them to an entire group. Groups can be made of resources that satisfy a specific static requirement in terms of OS, memory, swap space, CPU factor, and so on, or that are explicitly listed by name.

The cluster administrator can define multiple resource groups, assign them to consumers, and configure a distinct resource plan for each group. For example:

  • Define multiple resource groups: A major benefit in defining resource groups is the flexibility to group your resources based on attributes that you specify. For example, if you run workload or use applications that need a Linux OS with not less than 1000 MB of maximum memory, then you can create a resource group that only includes resources meeting those requirements.

    Note:

    No hosts should overlap between resource groups. Resource groups are used to plan resource distribution in your resource plan. Having overlaps causes the hosts to be double-counted (or more) in the resource plan, resulting in recurring under-allocation of some consumers.

  • Configure a resource plan based on individual resource groups: Tailoring the resource plan for each resource group requires you to complete several steps. These include adding the resource group to each desired top-level consumer (thereby making the resource group available for other sub-consumers within the branch), along with configuring ownership, enabling lending/borrowing, specifying share limits and share ratio, and assigning a consumer rank within the resource plan.

Resource groups generally fall into one of three categories:

  • Resource groups that include compute hosts with certain identifiable attributes a consumer may require in a requested resource (for example, resources with large amounts of memory; considered “dynamic”—new hosts added to the cluster that meet the requirements are automatically added to the resource group)

  • Resource groups that only include certain compute hosts (for example, so that specified resources are accessed by approved consumers; considered “static”—any new hosts added to the cluster have to be manually added to the resource group)

  • Resource groups that encompass management hosts only (reserved for running services, not a distributed workload; for example, the out-of-the-box “ManagementHosts” group)

Resource groups are either specified by host name or by resource requirement using the select string.
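
For example, the dynamic resource groups you create later in this tutorial use a requirement string such as select(!mg && maxmem > 1000), which matches every non-management host with more than 1000 MB of maximum memory.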

By default, EGO comes configured with three resource groups: InternalResourceGroup, ManagementHosts, and ComputeHosts. InternalResourceGroup and ManagementHosts should be left untouched, but ComputeHosts can be kept, modified, or deleted as required.

Note:

Requirements for resource-aware allocation policies (where only certain resources that meet specified requirements are allocated to a consumer) can be met by grouping resources with common features and configuring them as special resource groups with their own resource plans.

Gather the facts

You need to know which hosts you have reserved as management hosts. You identified these hosts as part of the installation and configuration process. If you want to select different management hosts from the ones you originally chose, you must uninstall and then reinstall EGO on the compute hosts that you now want to designate as management hosts (a management host requires the full package), and then run egoconfig mghost. The tag mg is assigned to the new management host to differentiate it from a compute host. The hosts you identify as management hosts are subsequently added to the ManagementHosts resource group.

Management hosts run the essential services that control and maintain your cluster and you therefore need powerful, stable computers that you can dedicate to management duties. Note that management hosts are expected to run only services, not to execute workload.

Ensure that you designate one of your management hosts as the master host, and another one or two hosts as failover candidates for the master (the number of failover candidates is up to you, and may depend on the size of your production cluster).

  1. Make a list of hosts that have been installed with the full package, and that have the tag mg assigned to them (from having run egoconfig mghost).

    You should be able to get a list from the person who installed your cluster.
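
    If you prefer to verify the list yourself, and assuming the egosh command-line tool is available on a host in your cluster, you can also list resources and check which hosts carry the mg tag (the exact output format varies by version). For example:

      egosh resource list -l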

  2. Review the list of management hosts.

    Ask yourself if these are your most trusted hosts with the reliability they need to be responsible for the entire cluster.

  3. (Optional) Remove any listed management hosts you do not trust.
    1. If you have configured automatic startup during your cluster setup, then run egoremoverc.sh.

      Doing this prevents automatic startup when the host reboots, which keeps the host from being re-added dynamically to the cluster.

    2. Run egoconfig unsetmghost to remove the host from the management host group.

      Running this command removes the host entry from ego.cluster.cluster_name.

    3. If the host is a master candidate, run egoconfig masterlist to remove the host from the failover order.
    4. Restart the master host so that the local host changes from a management host to a compute host and the cluster file is read again.
  4. (Optional) Designate different management hosts.
    1. For each Linux/UNIX host you wish to designate as a management host, including master candidates, do the following:
      1. Run the egoconfig mghost command:

        egoconfig mghost EGOshare

        where EGOshare is the shared directory that contains important files such as configuration files to support master host failover (once the egoconfig mghost command is run and the files are copied over).

        For example, egoconfig mghost /share/ego

        Note that the shared directory is the same for all management hosts.

      2. Set the environment on the local host so that EGO_CONFDIR gets set properly and the changes take effect.

        Doing this changes EGO_CONFDIR from a local to shared directory.
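
        For example, assuming a default installation directory of /opt/ego (adjust the path to match your installation), you might source the EGO environment script:

        . /opt/ego/profile.platform       (for sh or bash)
        source /opt/ego/cshrc.platform    (for csh or tcsh)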

      3. Restart the master host so that the cluster file gets read again.

    2. For each Windows host you wish to designate as a management host, including the master candidates, do the following:
      1. Run the egoconfig mghost command:

        egoconfig mghost EGOshare domain_name\user_name password

        where EGOshare is the shared directory that contains important files such as configuration files to support master host failover (once the egoconfig mghost command is run and the files are copied over), user_name is the egoadmin account, and password is the egoadmin password.

        For example, egoconfig mghost \\Hostx.mycompany.com\EGO\share mycompany.com\egoadmin mypasswd

        Note:

        The shared directory is the same for all management hosts. Also, be sure to use a fully qualified domain name.

      2. Restart the master host so that the cluster file gets read again.
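
    On either platform, one way to restart EGO on the master host, assuming the egosh command-line tool is available and you are logged on as the cluster administrator, is:

      egosh ego restart master_host_name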

You now have a list of hosts that you would like as management hosts. You use this list to check which hosts actually belong to the ManagementHosts resource group.

Recognize the default configurations

To help orient you, here is a list of the default resource groups and resource plan components you see and work with in the Platform Management Console:

  • Resource groups:

    • ComputeHosts (executes workload)

    • InternalResourceGroup (runs important EGO components and services)

    • ManagementHosts (runs important EGO components and services)

    In this tutorial, we work with the ComputeHosts resource group and create new resource groups.

  • Resource plan (the default resource group shown when you open the page is ComputeHosts):

    Only consumers registered to the selected resource group are shown. Select a different resource group to modify its corresponding resource plan.

    In this tutorial, we update the resource plan to include the new resource group you create.

Check the ManagementHosts resource group

You must be logged on to the Platform Management Console as a cluster administrator.

The ManagementHosts resource group is created during the installation and configuration process. Each time you install and configure the full package on a host, that host is statically added to the ManagementHosts resource group.

You need to ensure that the trusted hosts you identified in the section Gather the facts (above) are the same as the hosts that were configured to be management hosts.

  1. From the Platform Management Console, click Resources > Configure Resource Groups > Resource Groups.

    A list of all resource groups displays.

    By default, your resource groups are ComputeHosts, InternalResourceGroup, and ManagementHosts.

  2. From the list, click ManagementHosts.

    The properties for ManagementHosts display.

    CAUTION:

    Do not, under any circumstances, modify any of the ManagementHosts properties (except for the description). You could seriously damage your cluster.

  3. Note and compare the hosts listed in the Member hosts section at the bottom.

    The hosts that are members of the ManagementHosts resource group are listed here.

    Do these hosts match the list of hosts you made in the section Gather the facts? If not, contact the person in charge of installation and make sure each management host is configured properly.

    You need the exact host name(s) for the next topic.

You have made sure the hosts you want as management hosts belong to the ManagementHosts resource group. The installation and configuration matches your desired cluster setup.

Review and modify the master host candidate list

You must be logged on to the Platform Management Console as a cluster administrator.

Once you have reviewed your ManagementHosts resource group, you need to make sure your master host candidate list is correct.

  1. Select Cluster > Summary.

    A summary displays.

  2. Click Master Candidates.

    The master host is the first host in the list displayed in the right column. Other host names may be listed as candidates or as available hosts (right and left columns, respectively).

  3. Review master and candidates.

    The master host is the host listed first in the candidates column. All others under the candidate list should be eligible hosts that are also part of the ManagementHosts resource group.

    1. Check the host names against the list you made when you checked the ManagementHosts resource group.
    2. Use the controls to move hosts around. Add any hosts that you want as master candidates into the candidates column, in the order in which you want them to fail over.

      You cannot remove the master host.

Create new dynamic resource groups

You must be logged on to the Platform Management Console as a cluster administrator. You should not be running any workload while you perform this task because it involves removing an existing resource group.

When you delete a resource group, those hosts are no longer assigned to a consumer. Therefore, you should complete this task before changing your resource plan for the first time. If you have modified the resource plan and want to save those changes, export the resource plan before starting this task.

You can create resource groups that automatically place all your compute hosts in two (or more) different resource groups. You can split your hosts up this way if some of the applications or workload you plan to run on the Symphony cluster have distinct or important memory requirements.

You can logically group hosts into resource groups based on any criteria that you find important to the applications and workload you intend to run. For example, you may wish to distinguish hosts based on OS type or CPU factor.

  1. Select Resources > Configure Resource Groups > Resource Groups.

    A list of your existing resource groups displays.

    By default, your resource groups are ComputeHosts, InternalResourceGroup, and ManagementHosts.

    CAUTION:

    The InternalResourceGroup and ManagementHosts groups must never be deleted; they are special resource groups that contain hosts used for EGO services. The default ComputeHosts group (used by the out-of-the-box applications) can be modified or deleted as required.

  2. Select Global Actions > Create a Resource Group.

    The resource group properties display.

  3. Fill in the resource group properties.
    1. Type a name that describes the hosts that you are going to select for this group. In this example, we use “maxmem_high”.
    2. Do one of the following to define the number of slots per host:

      If the parameter EGO_ENABLE_SLOTS_EXPR=N is set in the ego.conf file, select 1 slot per CPU; otherwise, define the calculation for the number of slots based on the maximum memory on the host:

      1. Choose Number of slots per host is equal to.

      2. Select Maximum Memory from the resource list.

      3. Select / from the list of operators.

      4. Enter 500 in the text box.
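
      If you are not sure how this parameter is set, look in ego.conf for a line such as EGO_ENABLE_SLOTS_EXPR=Y or EGO_ENABLE_SLOTS_EXPR=N. With the calculation above, a host that reports 4000 MB of maximum memory is counted as 8 slots (4000 / 500), and a host that reports 2000 MB is counted as 4 slots.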

    3. Make sure the resource selection method is Dynamic (Requirements).
    4. Under Hosts to Show in List, select Hosts filtered by resource requirement.
    5. In the Resource Requirement String field, type select(!mg && maxmem > 1000).

      The select statement excludes any hosts belonging to the ManagementHosts resource group (!mg) and adds any non-management host that has a maximum memory of 1001 MB or more (maxmem > 1000).
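
      For example, a non-management compute host that reports a maximum memory of 2048 MB matches this string and is included, while a management host is excluded regardless of its memory because of the !mg clause.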

    6. Click Refresh Host List.

      In the Member hosts section, a list of any hosts (as found in the current cluster) that meet the requirements you specified with the select string is generated.

    7. Review the hosts in the member section and make any modifications you need to the select string until the member list is correct.

      Only hosts that currently match the requirements are displayed here. However, the list is dynamic. As you add hosts to the cluster that meet these requirements, they are automatically added to this resource group.

    8. Click Check for overlaps in the member hosts section to make sure the member hosts do not belong to any other resource groups.

      If you have overlaps, modify your selection string until overlaps no longer exist. Hosts must never overlap between resource groups. Having overlaps causes the hosts to be double-counted (or more) in the resource plan, resulting in recurring under-allocation of some consumers. The exception is with hosts listed in InternalResourceGroup—although all hosts in the cluster are listed here, they are not “double-counted” in the resource plan.

    9. Once you have no overlaps, click Create.
  4. Click Resource Groups again.

    A list of resource groups displays, including the maxmem_high group you just created.

  5. Create a second resource group.
    Note:

    You can skip this step and go to Create a new resource group by host names instead.

    Follow the same steps above with the following differences.

    1. Name the second resource group “maxmem_low”.
    2. Add the selection string select(!mg && !(maxmem > 1000)).

      This resource group is now made up of any compute host not belonging to the ManagementHosts resource group and excluding hosts you specified for the maxmem_high resource group.

      We recommend that you specify one resource group whose selection string excludes the requirements of all your other resource groups (specify using "not" (!)). That way, all your hosts fall into one resource group or another.

    Once you have also deleted the default ComputeHosts resource group (so that its hosts are not double-counted in the resource plan), you have split all your hosts, except those belonging to the ManagementHosts resource group, into two new groups: one made up of hosts with maximum memory over 1000 MB (maxmem_high) and one made up of all other hosts with maximum memory of 1000 MB or less (maxmem_low).

Create a new resource group by host names

If you did not create two resource groups in the previous task, or did not include all hosts in one of the two resource groups, you can now create a resource group by listing host names.

You must be logged on to the Platform Management Console.

You should have already added most of your hosts to the cluster.

Create a new resource group by host name to include any hosts that are not already included in a dynamic resource group.

Any new compute hosts that are later added to the cluster, and that you want to add to this resource group, must be manually added.

  1. In the Platform Management Console, click Resources > Configure Resource Groups > Resource Groups.
  2. From the Global Actions drop-down list, select Create a Resource Group.
  3. Identify the new resource group in the top section of the Properties page:
    1. Specify a resource group name.

      In this example, we use "my_static".

      Resource group names must consist of letters and numbers only (no spaces or special characters) and must be 64 characters or less.

    2. Include a description (max. 200 characters) of the resource group.
    3. Leave the default setting of 1 slot per CPU for Workload Slots (this defines how many slots per host you would like to have the system count; unless you are an advanced user, do not change this setting).
    4. For Resource Selection Method, select Static (List of Names).

      Static resource selection means that you are manually selecting specific hosts to belong to this resource group.

  4. Under Hosts to Show in List, select All hosts.

    A list of all hosts that belong to your cluster displays.

  5. Review the hosts found in your cluster:
    1. Click Member hosts to expand the section and review the hosts found in your cluster.
    2. Review your member hosts and select the hosts you want using the check boxes.

      If you select no member hosts, all hosts in your cluster are added to this resource group when you create it.

    3. Click Check for overlaps.

      If any hosts overlap, remove them from this resource group or remove them from the overlapping resource group.

      No hosts should overlap between resource groups. Resource groups are used to plan resource distribution in your resource plan. Having overlaps causes the hosts to be double-counted (or more) in the resource plan, resulting in recurring under-allocation of some consumers. The exception is with hosts listed in InternalResourceGroup—although all hosts in the cluster are listed here they are not “double-counted” in the resource plan.

  6. Click Create.

Assign the new resource groups to a consumer

You must have already created the consumers that you want.

You need to assign new resource groups to consumers.

  1. Click Consumers > Consumers & Plans > Consumers.
  2. Select a consumer to assign the new resource group to.
    • If you have already created your consumers by modifying the out-of-box structure, use the tree to locate and click the consumer to which you want to assign the new resource group.

    • If you have not modified the consumer tree, click SampleApplications from the consumer tree pane on the left to assign the new resource group to this consumer.

  3. Click Consumer Properties.
  4. Specify one or more resource groups that this consumer should have access to.
  5. Click Apply.

    The Consumer Properties page updates and your changes are saved.

Modify your resource plan for new resource groups

If you know that you intend to create more resource groups, do that first, even if you do not yet know all the details of those resource groups.

Any time you add, modify, or delete a resource group, you need to manage resource distribution for these resource groups using the resource plan.

  1. Click Consumers > Consumers & Plans > Resource Plan.
  2. Use the Resource Group drop-down menu to switch between resource groups and modify your resource plan details for each resource group.
    Note:

    Resource groups that do not yet have consumers assigned to them do not appear in the drop-down menu. Consumers must first be assigned from the Consumers & Plans > Consumers page.

    Never make any changes to the ManagementHosts resource group in the resource plan.

How to grow: Advanced resource groups

Now that you have basic resource groups (one for your management hosts and two or more for your compute hosts), you can begin to specialize further by splitting up one of the resource groups based on maximum memory.

For example, if you know that an application you run requires not only machines with 1001 MB of maximum memory or more, but also two or more CPUs, you can create a new resource group (and then modify the existing “maxmem_high” resource group) to make these specific resources available to any consumer. The new resource group “maxmemhighmultiCPU” would have the selection string:

select(!mg && maxmem > 1000 && ncpus>=2)

You would then modify the existing resource group “maxmem_high” to read:

select(!mg && !(ncpus>=2) && maxmem > 1000)

As a result, the maxmem_high group now contains only single-CPU hosts.
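
For example, under these two selection strings, a non-management host with 4 CPUs and 2048 MB of maximum memory falls into maxmemhighmultiCPU, a non-management host with 1 CPU and 2048 MB falls into maxmem_high, and a host with only 512 MB of maximum memory (regardless of CPU count) continues to match the maxmem_low string.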