lsf.licensescheduler

The lsf.licensescheduler file contains Platform LSF License Scheduler configuration information. All sections are required except the ProjectGroup and FeatureGroup sections, which are optional.

The command blparams displays configuration information from this file.

Changing lsf.licensescheduler configuration

After making any changes to lsf.licensescheduler, run the following commands:
  • bladmin reconfig to reconfigure bld

  • badmin mbdrestart to restart mbatchd

Parameters section

Description

Required. Defines License Scheduler configuration parameters.

Parameters section structure

The Parameters section begins and ends with the lines Begin Parameters and End Parameters. Each subsequent line describes one configuration parameter. Not all parameters are mandatory; optional parameters are identified as Optional in their descriptions.
Begin Parameters 
ADMIN=lsadmin 
HOSTS=hostA hostB hostC 
LMSTAT_PATH=/etc/flexlm/bin 
LM_STAT_INTERVAL=30 
PORT=9581 
End Parameters 

Parameters

  • ADMIN

  • AUTH

  • DISTRIBUTION_POLICY_VIOLATION_ACTION

  • ENABLE_INTERACTIVE

  • HOSTS

  • LIB_RECVTIMEOUT

  • LM_REMOVE_INTERVAL

  • LM_STAT_INTERVAL

  • LMSTAT_PATH

  • LS_DEBUG_BLD

  • LS_ENABLE_MAX_PREEMPT

  • LS_LOG_MASK

  • LS_MAX_TASKMAN_PREEMPT

  • LS_MAX_TASKMAN_SESSIONS

  • LS_PREEMPT_PEER

  • PORT

  • BLC_HEARTBEAT_FACTOR

ADMIN

Syntax

ADMIN=user_name ...

Description

Defines the License Scheduler administrator using a valid UNIX user account. You can specify multiple accounts.
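
For example, a minimal sketch assuming two hypothetical administrator accounts, lsadmin and licadmin:
ADMIN=lsadmin licadmin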

AUTH

Syntax

AUTH=Y

Description

Enables License Scheduler user authentication for projects associated with taskman jobs.

DISTRIBUTION_POLICY_VIOLATION_ACTION

Syntax

DISTRIBUTION_POLICY_VIOLATION_ACTION=(PERIOD reporting_period CMD reporting_command)

reporting_period

Specify the keyword PERIOD with a positive integer representing the interval (a multiple of LM_STAT_INTERVAL periods) at which License Scheduler checks for distribution policy violations.

reporting_command

Specify the keyword CMD with the directory path and command that License Scheduler runs when reporting a violation.

Description

Optional. Defines how License Scheduler handles distribution policy violations. Distribution policy violations are caused by non-LSF workloads; LSF License Scheduler explicitly follows its distribution policies.

License Scheduler reports a distribution policy violation when the total number of licenses given to the LSF workload, both free and in use, is less than the LSF workload distribution specified in WORKLOAD_DISTRIBUTION. If License Scheduler finds a distribution policy violation, it creates or overwrites the LSF_LOGDIR/bld.violation.service_domain_name.log file and runs the user command specified by the CMD keyword.

Example

The LicenseServer1 service domain has a total of 80 licenses, and its workload distribution and enforcement is configured as follows:
Begin Parameters
... 
DISTRIBUTION_POLICY_VIOLATION_ACTION=(PERIOD 5 CMD /bin/mycmd) 
... 
End Parameters
Begin Feature 
NAME=ApplicationX 
DISTRIBUTION=LicenseServer1(Lp1 1 Lp2 2)
WORKLOAD_DISTRIBUTION=LicenseServer1(LSF 8 NON_LSF 2) 
End Feature

According to this configuration, 80% of the available licenses, or 64 licenses, are available to the LSF workload. License Scheduler checks the service domain for a violation every five LM_STAT_INTERVAL periods, and runs the /bin/mycmd command if it finds a violation.

If the current LSF workload license usage is 50 and the number of free licenses is 10, the total number of licenses assigned to the LSF workload is 60. This is a violation of the workload distribution policy because this is less than the specified LSF workload distribution of 64 licenses.

ENABLE_INTERACTIVE

Syntax

ENABLE_INTERACTIVE=Y

Description

Optional. Globally enables one share of the licenses for interactive tasks.
Tip:

By default, ENABLE_INTERACTIVE is not set. License Scheduler allocates licenses equally to each cluster and does not distribute licenses for interactive tasks.

HOSTS

Syntax

HOSTS=host_name.domain_name ...

Description

Defines License Scheduler hosts, including License Scheduler candidate hosts.

Specify a fully qualified host name such as hostX.mycompany.com. You can omit the domain name if all your License Scheduler clients run in the same DNS domain.
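
For example, a sketch assuming two hypothetical candidate hosts in the mycompany.com domain:
HOSTS=hostA.mycompany.com hostB.mycompany.com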

LIB_RECVTIMEOUT

Syntax

LIB_RECVTIMEOUT=seconds

Description

Specifies a timeout value in seconds for communication between LSF License Scheduler and LSF.
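
For example, an illustrative value that allows up to 60 seconds for each communication:
LIB_RECVTIMEOUT=60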

Default

0 seconds

LM_REMOVE_INTERVAL

Syntax

LM_REMOVE_INTERVAL=seconds

Description

Specifies the minimum time a job must have a license checked out before lmremove can remove the license. lmremove causes lmgrd and vendor daemons to close the TCP connection with the application. They then retry the license checkout.
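
For example, an illustrative value that prevents lmremove from removing a license during the first five minutes after checkout:
LM_REMOVE_INTERVAL=300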

Default

180 seconds

LM_STAT_INTERVAL

Syntax

LM_STAT_INTERVAL=seconds

Description

Defines a time interval between calls that License Scheduler makes to collect license usage information from FLEXlm license management.

Default

60 seconds

LMSTAT_PATH

Syntax

LMSTAT_PATH=path

Description

Defines the full path to the location of the FLEXlm command lmstat.

LS_DEBUG_BLD

Syntax

LS_DEBUG_BLD=log_class

Description

Sets the debugging log class for the LSF License Scheduler bld daemon.

Specifies the log class filtering to be applied to bld. Messages belonging to the specified log class are recorded. Not all debug messages are controlled by log class.

LS_DEBUG_BLD sets the log class and is used in combination with LS_LOG_MASK, which sets the log level. For example:
LS_LOG_MASK=LOG_DEBUG
LS_DEBUG_BLD="LC_TRACE"
To specify multiple log classes, use a space-separated list enclosed in quotation marks. For example:
LS_DEBUG_BLD="LC_TRACE LC_COMM"

You need to restart the bld daemon after setting LS_DEBUG_BLD for your changes to take effect.

If you use the command bladmin blddebug to temporarily change this parameter without changing lsf.licensescheduler, you do not need to restart the daemons.

Valid values

Valid log classes are:
  • LC_AUTH - Log authentication messages

  • LC_COMM - Log communication messages

  • LC_FLEX - Log everything related to FLEX_STAT or FLEX_EXEC Macrovision APIs

  • LC_LICENSE - Log license management messages (LC_LICENCE is also supported for backward compatibility)

  • LC_PREEMPT - Log license preemption policy messages

  • LC_RESREQ - Log resource requirement messages

  • LC_TRACE - Log significant program walk steps

  • LC_XDR - Log everything transferred by XDR

Default

Not defined.

LS_ENABLE_MAX_PREEMPT

Syntax

LS_ENABLE_MAX_PREEMPT=Y

Description

Enables maximum preemption time checking for taskman jobs.

When LS_ENABLE_MAX_PREEMPT is disabled, preemption times for taskman jobs are not checked, regardless of the value of the LS_MAX_TASKMAN_PREEMPT parameter in lsf.licensescheduler and the MAX_JOB_PREEMPT parameter in lsb.queues, lsb.applications, or lsb.params.
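
For example, a sketch that enables the check and, together with LS_MAX_TASKMAN_PREEMPT, limits taskman jobs to an illustrative maximum of two preemptions:
LS_ENABLE_MAX_PREEMPT=Y
LS_MAX_TASKMAN_PREEMPT=2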

Default

N

LS_LOG_MASK

Syntax

LS_LOG_MASK=message_log_level

Description

Specifies the logging level of error messages for LSF License Scheduler daemons. If LS_LOG_MASK is not defined in lsf.licensescheduler, the value of LSF_LOG_MASK in lsf.conf is used. If neither LS_LOG_MASK nor LSF_LOG_MASK is defined, the default is LOG_WARNING.

For example:
LS_LOG_MASK=LOG_DEBUG
The log levels in order from highest to lowest are:
  • LOG_WARNING

  • LOG_DEBUG

  • LOG_DEBUG1

  • LOG_DEBUG2

  • LOG_DEBUG3

The most important License Scheduler log messages are at the LOG_WARNING level. Messages at the LOG_DEBUG level are only useful for debugging.

Although message log level implements similar functionality to UNIX syslog, there is no dependency on UNIX syslog. It works even if messages are being logged to files instead of syslog.

License Scheduler logs error messages in different levels so that you can choose to log all messages, or only log messages that are deemed critical. The level specified by LS_LOG_MASK determines which messages are recorded and which are discarded. All messages logged at the specified level or higher are recorded, while lower level messages are discarded.

For debugging purposes, the level LOG_DEBUG contains the fewest number of debugging messages and is used for basic debugging. The level LOG_DEBUG3 records all debugging messages, and can cause log files to grow very large; it is not often used. Most debugging is done at the level LOG_DEBUG2.

Default

LOG_WARNING

LS_MAX_TASKMAN_PREEMPT

Syntax

LS_MAX_TASKMAN_PREEMPT=integer

Description

Defines the maximum number of times taskman jobs can be preempted.

Maximum preemption time checking for all jobs is enabled by LS_ENABLE_MAX_PREEMPT.

Default

unlimited

LS_MAX_TASKMAN_SESSIONS

Syntax

LS_MAX_TASKMAN_SESSIONS=integer

Description

Defines the maximum number of taskman jobs that run simultaneously. This prevents system-wide performance issues that occur if there are a large number of taskman jobs running in License Scheduler.
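
For example, an illustrative limit of 40 concurrent taskman jobs:
LS_MAX_TASKMAN_SESSIONS=40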

LS_PREEMPT_PEER

Syntax

LS_PREEMPT_PEER=Y

Description

Enables bottom-up license token preemption in hierarchical project group configuration. License Scheduler attempts to preempt tokens from the closest projects in the hierarchy first. This balances token ownership from the bottom up.

Default

Not defined. Token preemption in hierarchical project groups is top down.

PORT

Syntax

PORT=integer

Description

Defines the TCP listening port used by License Scheduler hosts, including candidate License Scheduler hosts. Specify any non-privileged port number.

BLC_HEARTBEAT_FACTOR

Syntax

BLC_HEARTBEAT_FACTOR=integer

Description

Enables bld to detect blcollect failure. Defines the number of times that bld receives no response from a license collector daemon (blcollect) before bld resets the values for that collector to zero. Each license usage reported to bld by the collector is treated as a heartbeat.
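
For example, an illustrative value that tolerates four missed heartbeats before bld resets the values for that collector to zero:
BLC_HEARTBEAT_FACTOR=4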

Default

3

Clusters section

Description

Required. Lists the clusters that can use License Scheduler.

When configuring clusters for a WAN, the Clusters section of the master cluster must define its slave clusters.

Clusters section structure

The Clusters section begins and ends with the lines Begin Clusters and End Clusters. The second line is the column heading, CLUSTERS. Subsequent lines list participating clusters, one name per line:
Begin Clusters 
CLUSTERS 
cluster1 
cluster2
End Clusters 

CLUSTERS

Defines the name of each participating LSF cluster. Specify using one name per line.

ServiceDomain section

Description

Required. Defines License Scheduler service domains as groups of physical license server hosts that serve a specific network.

ServiceDomain section structure

Define a section for each License Scheduler service domain.

This example shows the structure of the section:

Begin ServiceDomain 
NAME=DesignCenterB 
LIC_SERVERS=((1888@hostD)(1888@hostE)) 
LIC_COLLECTOR=CenterB 
End ServiceDomain

Parameters

  • NAME

  • LIC_SERVERS

  • LIC_COLLECTOR

  • LM_STAT_INTERVAL

NAME

Defines the name of the service domain.

LIC_SERVERS

Syntax

LIC_SERVERS=([(host_name | port_number@host_name |(port_number@host_name port_number@host_name port_number@host_name))] ...)

Description

Defines the FLEXlm license server hosts that make up the License Scheduler service domain. For each FLEXlm license server host, specify the number of the port that FLEXlm uses, then the at symbol (@), then the name of the host. If FLEXlm uses the default port on a host, you can specify the host name without the port number. Put one set of parentheses around the list, and one more set of parentheses around each host, unless you have redundant servers (three hosts sharing one license file). If you have redundant servers, the parentheses enclose all three hosts.

Examples

  • One FLEXlm license server host:
    LIC_SERVERS=((1700@hostA))
  • Multiple FLEXlm license server hosts with unique license.dat files:
    LIC_SERVERS=((1700@hostA)(1700@hostB)(1700@hostC))
  • Redundant FLEXlm license server hosts sharing the same license.dat file:
    LIC_SERVERS=((1700@hostD 1700@hostE 1700@hostF))

LIC_COLLECTOR

Syntax

LIC_COLLECTOR=license_collector_name

Description

Optional. Defines a name for the license collector daemon (blcollect) to use in each service domain. blcollect collects license usage information from FLEXlm and passes it to the License Scheduler daemon (bld). It improves performance by allowing you to distribute license information queries on multiple hosts.

You can only specify one collector per service domain, but you can specify one collector to serve multiple service domains. Each time you run blcollect, you must specify the name of the collector for the service domain. You can use any name you want.
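
For example, a sketch (hypothetical service domain and host names) in which a single collector named CenterAll serves two service domains:
Begin ServiceDomain
NAME=DesignCenterB
LIC_SERVERS=((1888@hostD))
LIC_COLLECTOR=CenterAll
End ServiceDomain
Begin ServiceDomain
NAME=DesignCenterC
LIC_SERVERS=((1888@hostE))
LIC_COLLECTOR=CenterAll
End ServiceDomain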

Default

Undefined. The License Scheduler daemon uses one license collector daemon for the entire cluster.

LM_STAT_INTERVAL

Syntax

LM_STAT_INTERVAL=seconds

Description

Defines a time interval between calls that License Scheduler makes to collect license usage information from FLEXlm license management.

The value specified for a service domain overrides the global value defined in the Parameters section. Each service domain definition can specify a different value for this parameter.
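
For example, a sketch (hypothetical names) of a service domain that polls license usage more frequently than the global setting:
Begin ServiceDomain
NAME=DesignCenterA
LIC_SERVERS=((1700@hostA))
LM_STAT_INTERVAL=15
End ServiceDomain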

Default

Undefined: License Scheduler applies the global value.

Feature section

Description

Required. Defines license distribution policies.

Feature section structure

Define a section for each feature managed by License Scheduler.
Begin Feature 
NAME=vcs 
FLEX_NAME=vcs 
DISTRIBUTION=lanserver1 (Lp1 1 Lp2 4/6) 
lanserver2 (Lp3 1 Lp4 10/8)
wanserver (Lp1 1 Lp2 1 Lp3 1 Lp4 1) 
End Feature

Parameters

  • NAME

  • FLEX_NAME

  • DISTRIBUTION

  • ALLOCATION

  • GROUP

  • GROUP_DISTRIBUTION

  • LOCAL_TO

  • LS_FEATURE_PERCENTAGE

  • NON_SHARED_DISTRIBUTION

  • PREEMPT_RESERVE

  • SERVICE_DOMAINS

  • WORKLOAD_DISTRIBUTION

  • ENABLE_DYNAMIC_RUSAGE

  • DYNAMIC

  • LM_REMOVE_INTERVAL

  • ENABLE_MINJOB_PREEMPTION

NAME

Required. Defines the token name—the name used by License Scheduler and LSF to identify the license feature.

Normally, license token names should be the same as the FLEXlm Licensing feature names, as they represent the same license. However, LSF does not support names that start with a number, or names containing a dash or hyphen character (-), which may be used in the FLEXlm Licensing feature name.

FLEX_NAME

Optional. Defines the feature name—the name used by FLEXlm to identify the type of license. You only need to specify this parameter if the License Scheduler token name is not identical to the FLEXlm feature name.

FLEX_NAME allows the NAME parameter to be an alias of the FLEXlm feature name. For feature names that start with a number or contain a dash (-), you must set both NAME and FLEX_NAME, where FLEX_NAME is the actual FLEXlm Licensing feature name, and NAME is an arbitrary license token name you choose.

For example
Begin Feature 
FLEX_NAME=201-AppZ 
NAME=AppZ201 
DISTRIBUTION=LanServer1(Lp1 1 Lp2 1) 
End Feature

DISTRIBUTION

Syntax

DISTRIBUTION=[service_domain_name([project_name number_shares[/number_licenses_owned]] ... [default] )] ...

service_domain_name

Specify a License Scheduler service domain (described in the ServiceDomain section) that distributes the licenses.

project_name

Specify a License Scheduler project (described in the Projects section) that is allowed to use the licenses.

number_shares

Specify a positive integer representing the number of shares assigned to the project.

The number of shares assigned to a project is only meaningful when you compare it to the number assigned to other projects, or to the total number assigned by the service domain. The total number of shares is the sum of the shares assigned to each project.

number_licenses_owned

Optional. Specify a slash (/) and a positive integer representing the number of licenses that the project owns.

default

A reserved keyword that represents the default License Scheduler project if the job submission does not specify a project (bsub -Lp).

The default project includes all projects that are not defined in the Projects section of lsf.licensescheduler. Jobs that belong to projects defined in lsf.licensescheduler do not get a share of the tokens if the project is not explicitly listed in the distribution.

Description

Required if GROUP_DISTRIBUTION is not defined. Defines the distribution policies for the license. The name of each service domain is followed by its distribution policy, in parentheses. The distribution policy determines how the licenses available in each service domain are distributed among the clients.

The distribution policy is a space-separated list with each project name followed by its share assignment. The share assignment determines what fraction of available licenses is assigned to each project, in the event of competition between projects. Optionally, the share assignment is followed by a slash and the number of licenses owned by that project. License ownership enables a preemption policy. (In the event of competition between projects, projects that own licenses preempt jobs. Licenses are returned to the owner immediately.)

GROUP_DISTRIBUTION and DISTRIBUTION are mutually exclusive. If they are both defined in the same feature, the License Scheduler daemon returns an error and ignores this feature.

Examples

DISTRIBUTION=wanserver (Lp1 1 Lp2 1 Lp3 1 Lp4 1)
In this example, the service domain named wanserver shares licenses equally among four License Scheduler projects. If all projects are competing for a total of eight licenses, each project is entitled to two licenses at all times. If all projects are competing for only two licenses in total, each project is entitled to a license half the time.
DISTRIBUTION=lanserver1 (Lp1 1 Lp2 2/6)

In this example, the service domain named lanserver1 allows Lp1 to use one third of the available licenses and Lp2 can use two thirds of the licenses. However, Lp2 is always entitled to six licenses, and can preempt another project to get the licenses immediately if they are needed. If the projects are competing for a total of 12 licenses, Lp2 is entitled to eight licenses (six on demand, and two more as soon as they are free). If the projects are competing for only six licenses in total, Lp2 is entitled to all of them, and Lp1 can only use licenses when Lp2 does not need them.
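
The following sketch extends the previous example by adding the reserved keyword default, so that jobs submitted without bsub -Lp, and jobs belonging to projects not defined in the Projects section, share one token share:
DISTRIBUTION=lanserver1 (Lp1 1 Lp2 2/6 default 1)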

ALLOCATION

Syntax

ALLOCATION=[project_name(cluster_name [number_shares] ... )] ...

cluster_name

Specify LSF cluster names that licenses are to be allocated to.

project_name

Specify a License Scheduler project (described in the PROJECTS section) that is allowed to use the licenses.

number_shares

Specify a positive integer representing the number of shares assigned to the cluster.

The number of shares assigned to a cluster is only meaningful when you compare it to the number assigned to other clusters. The total number of shares is the sum of the shares assigned to each cluster.

Description

Defines the allocation of license features across clusters and between LSF jobs and non-LSF interactive jobs.

ALLOCATION ignores the global setting of the ENABLE_INTERACTIVE parameter because ALLOCATION is configured for the license feature.

You can configure the allocation of license shares to:
  • Change the share number between clusters for a feature

  • Limit the scope of license usage and change the share number between LSF jobs and interactive tasks for a feature

Tip:

To manage interactive (non-LSF) tasks in License Scheduler projects, you require the LSF Task Manager, taskman. The Task Manager utility is supported by License Scheduler. For more information about taskman, contact Platform.

Default

Undefined. If ENABLE_INTERACTIVE is not set, each cluster receives one share, and interactive tasks receive no shares.

Examples

Each example contains two clusters and 12 licenses of a specific feature.

Example 1

ALLOCATION is not configured. The ENABLE_INTERACTIVE parameter is not set.
Begin Parameters
 ... 
ENABLE_INTERACTIVE=n 
... 
End Parameters 
Begin Feature 
NAME=ApplicationX 
DISTRIBUTION=LicenseServer1 (Lp1 1) 
End Feature

Six licenses are allocated to each cluster. No licenses are allocated to interactive tasks.

Example 2

ALLOCATION is not configured. The ENABLE_INTERACTIVE parameter is set.
Begin Parameters
 ... 
ENABLE_INTERACTIVE=y
... 
End Parameters 
Begin Feature 
NAME=ApplicationX 
DISTRIBUTION=LicenseServer1 (Lp1 1) 
End Feature

Four licenses are allocated to each cluster. Four licenses are allocated to interactive tasks.

Example 3

In the following example, the ENABLE_INTERACTIVE parameter does not affect the ALLOCATION configuration of the feature.

ALLOCATION is configured. The ENABLE_INTERACTIVE parameter is set.
Begin Parameters
 ...
ENABLE_INTERACTIVE=y 
... 
End Parameters 
Begin Feature 
NAME=ApplicationY 
DISTRIBUTION=LicenseServer1 (Lp2 1)
ALLOCATION=Lp2(cluster1 1 cluster2 0 interactive 1) 
End Feature

The ENABLE_INTERACTIVE setting is ignored. Licenses are shared equally between cluster1 and interactive tasks. Six licenses of ApplicationY are allocated to cluster1. Six licenses are allocated to interactive tasks.

Example 4

In the following example, the ENABLE_INTERACTIVE parameter does not affect the ALLOCATION configuration of the feature.

ALLOCATION is configured. The ENABLE_INTERACTIVE parameter is not set.
Begin Parameters
 ...
ENABLE_INTERACTIVE=n 
... 
End Parameters 
Begin Feature 
NAME=ApplicationZ 
DISTRIBUTION=LicenseServer1 (Lp1 1) 
ALLOCATION=Lp1(cluster1 0 cluster2 1 interactive 2) 
End Feature

The ENABLE_INTERACTIVE setting is ignored. Four licenses of ApplicationZ are allocated to cluster2. Eight licenses are allocated to interactive tasks.

GROUP

Syntax

GROUP=[group_name(project_name... )] ...

group_name

Specify a name for a group of projects.

project_name

Specify a License Scheduler project (described in the PROJECTS section) that is allowed to use the licenses. The project must appear in the DISTRIBUTION.

A project should only belong to one group.

Description

Optional. Defines groups of projects and specifies the name of each group. The groups defined here are used for group preemption and replace single projects with group projects.

This parameter is ignored if GROUP_DISTRIBUTION is also defined.
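
For example, a sketch (hypothetical feature, service domain, and project names) that groups Lp1 and Lp2 for group preemption; both projects also appear in the DISTRIBUTION:
Begin Feature
NAME=AppY
DISTRIBUTION=LanServer(Lp1 1 Lp2 1 Lp3 2)
GROUP=groupA(Lp1 Lp2)
End Feature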

GROUP_DISTRIBUTION

Syntax

GROUP_DISTRIBUTION=top_level_hierarchy_name

top_level_hierarchy_name

Specify the name of the top level hierarchical group.

Description

Required if DISTRIBUTION is not defined. Defines the name of the hierarchical group containing the distribution policy attached to this feature.

GROUP_DISTRIBUTION and DISTRIBUTION are mutually exclusive. If they are both defined in the same feature, the License Scheduler daemon returns an error and ignores this feature.

If GROUP is also defined, it is ignored in favour of GROUP_DISTRIBUTION.

Example

The following example shows the GROUP_DISTRIBUTION parameter configured for hierarchical scheduling with the top-level hierarchical group named groups. The SERVICE_DOMAINS parameter defines a list of service domains that provide tokens for the group.
Begin Feature 
NAME = myjob2 
GROUP_DISTRIBUTION = groups 
SERVICE_DOMAINS = LanServer wanServer 
End Feature

LOCAL_TO

Syntax

LOCAL_TO=cluster_name | location_name(cluster_name [cluster_name ...])

Description

Configures token locality for the license feature. You must configure separate Feature sections for the same feature based on its locality. By default, if LOCAL_TO is not defined, the feature is available to all clients and is not restricted by geographical location. When LOCAL_TO is configured for a feature, License Scheduler treats license features served to different locations as different token names, and distributes the tokens to projects according to the distribution and allocation policies for the feature.

LOCAL_TO allows you to limit features from different service domains to specific clusters, so License Scheduler only grants tokens of a feature to jobs from clusters that are entitled to them.

For example, if your license servers restrict the serving of license tokens to specific geographical locations, use LOCAL_TO to specify the locality of a license token if any feature cannot be shared across all the locations. This avoids having to define different distribution and allocation policies for different service domains, and allows hierarchical group configurations.

License Scheduler manages features with different localities as different resources. Use blinfo and blstat to see the different resource information for the features depending on their cluster locality.

License features with different localities must be defined in different feature sections. The same Service Domain can appear only once in the configuration for a given license feature.

A configuration like LOCAL_TO=Site1(clusterA clusterB) configures the feature for more than one cluster.

A configuration like LOCAL_TO=clusterA configures locality for only one cluster. This is the same as LOCAL_TO=clusterA(clusterA).

Cluster names must be the names of clusters defined in the Clusters section of lsf.licensescheduler.

Examples

Begin Feature
NAME = hspice
DISTRIBUTION = SD1 (Lp1 1 Lp2 1)
LOCAL_TO = siteUS(clusterA clusterB)
End Feature
Begin Feature
NAME = hspice
DISTRIBUTION = SD2 (Lp1 1 Lp2 1)
LOCAL_TO = clusterA
End Feature
Begin Feature
NAME = hspice
DISTRIBUTION = SD3 (Lp1 1 Lp2 1) SD4 (Lp1 1 Lp2 1)
End Feature
Or use the hierarchical group configuration (GROUP_DISTRIBUTION):
Begin Feature
NAME = hspice
GROUP_DISTRIBUTION = group1
SERVICE_DOMAINS = SD1
LOCAL_TO = siteUS(clusterA clusterB)
End Feature
Begin Feature
NAME = hspice
GROUP_DISTRIBUTION = group1
SERVICE_DOMAINS = SD2
LOCAL_TO = clusterA
End Feature
Begin Feature
NAME = hspice
GROUP_DISTRIBUTION = group1
SERVICE_DOMAINS = SD3 SD4
End Feature

Default

Not defined. The feature is available to all clusters and interactive jobs, and is not restricted by cluster.

LS_FEATURE_PERCENTAGE

Syntax

LS_FEATURE_PERCENTAGE=Y | N

Description

Configures license ownership in percentages instead of absolute numbers. When not combined with hierarchical project groups, the percentage applies to DISTRIBUTION and NON_SHARED_DISTRIBUTION values only. When using hierarchical project groups, the percentage applies to the OWNERSHIP, LIMITS, and NON_SHARED values.

Example 1

Begin Feature
LS_FEATURE_PERCENTAGE = Y
DISTRIBUTION = LanServer (p1 1 p2 1 p3 1/20)
...
End Feature

The service domain LanServer shares licenses equally among three License Scheduler projects. Project p3 is always entitled to 20% of the total licenses, and can preempt another project to get the licenses immediately if they are needed.

Example 2

With LS_FEATURE_PERCENTAGE=Y in feature section and using hierarchical project groups:

Begin ProjectGroup
GROUP      SHARES    OWNERSHIP    LIMITS  NON_SHARED
(R (A p4))  (1  1)   ()         ()         ()
(A (B p3))  (1  1)     (- 10)     (- 20)     ()
(B (p1 p2)) (1  1)     (30 -)     ()       (- 5)
End ProjectGroup

Project p1 owns 30% of the total licenses and project p3 owns 10% of the total licenses. The LIMITS value for p3 is 20% of the total licenses, and the NON_SHARED value for p2 is 5%.

Default

N (Ownership is not configured with percentages but with absolute numbers.)

NON_SHARED_DISTRIBUTION

Syntax

NON_SHARED_DISTRIBUTION=service_domain_name ([project_name number_non_shared_licenses] ... ) ...

service_domain_name

Specify a License Scheduler service domain (described in the ServiceDomain section) that distributes the licenses.

project_name

Specify a License Scheduler project (described in the Projects section) that is allowed to use the licenses.

number_non_shared_licenses

Specify a positive integer representing the number of non-shared licenses that the project owns.

Description

Optional. Defines non-shared licenses. Non-shared licenses are not shared with other license projects. They are available only to that project.

Use blinfo -a to display NON_SHARED_DISTRIBUTION information.

Example

Begin Feature 
NAME=f1  # total 15 on LanServer and 15 on WanServer 
FLEX_NAME=VCS-RUNTIME 
DISTRIBUTION=LanServer(Lp1 4 Lp2 1) WanServer (Lp1 1 Lp2 1/3)
NON_SHARED_DISTRIBUTION=LanServer(Lp1 10) WanServer (Lp1 5 Lp2 3)
PREEMPT_RESERVE=Y 
End Feature
In this example:
  • 10 non-shared licenses are defined for the Lp1 project on LanServer

  • 5 non-shared licenses are defined for the Lp1 project on WanServer

  • 3 non-shared licenses are defined for the Lp2 project on WanServer

The remaining licenses are distributed as follows:
  • LanServer: The remaining five licenses (15-10=5) on LanServer are distributed to the Lp1 and Lp2 projects in a 4:1 ratio.

  • WanServer: The remaining seven licenses (15-5-3=7) on WanServer are distributed to the Lp1 and Lp2 projects in a 1:1 ratio. If Lp2 uses fewer than six licenses (3 non-shared + 3 owned), a job in the Lp2 project can preempt Lp1 jobs.

PREEMPT_LSF

Syntax

PREEMPT_LSF=Y

Description

Optional. With the flex grid interface integration installed, enables on-demand preemption of LSF jobs for important non-managed workload. This guarantees that important non-managed jobs do not fail because of lack of licenses.

Default

The LSF workload is not preemptable.

PREEMPT_RESERVE

Syntax

PREEMPT_RESERVE=Y

Description

Optional. Enables License Scheduler to preempt licenses that are either reserved or already in use by other projects. The number of jobs must be greater than the number of licenses owned.

Default

Y. Reserved licenses are preemptable.

SERVICE_DOMAINS

Syntax

SERVICE_DOMAINS=service_domain_name ...

service_domain_name

Specify the name of the service domain.

Description

Required if GROUP_DISTRIBUTION is defined. Specifies the service domains that provide tokens for this feature.

WORKLOAD_DISTRIBUTION

Syntax

WORKLOAD_DISTRIBUTION=[service_domain_name(LSF lsf_distribution [/enforced_distribution] NON_LSF non_lsf_distribution)] ...

service_domain_name

Specify a License Scheduler service domain (described in the ServiceDomain section) that distributes the licenses.

lsf_distribution

Specify the share of licenses dedicated to LSF workloads. The share of licenses dedicated to LSF workloads is a ratio of lsf_distribution:non_lsf_distribution.

enforced_distribution

Optional. Specify a slash (/) and a positive integer representing the enforced number of licenses.

non_lsf_distribution

Specify the share of licenses dedicated to non-LSF workloads. The share of licenses dedicated to non-LSF workloads is a ratio of non_lsf_distribution:lsf_distribution.

Description

Optional. Defines the distribution given to each LSF and non-LSF workload within the specified service domain.

Use blinfo -a to display WORKLOAD_DISTRIBUTION configuration.

Example 1

Begin Feature 
NAME=ApplicationX 
DISTRIBUTION=LicenseServer1(Lp1 1 Lp2 2)
WORKLOAD_DISTRIBUTION=LicenseServer1(LSF 8 NON_LSF 2) 
End Feature

On the LicenseServer1 domain, the available licenses are dedicated in a ratio of 8:2 for LSF and non-LSF workloads. This means that 80% of the available licenses are dedicated to the LSF workload, and 20% of the available licenses are dedicated to the non-LSF workload.

If LicenseServer1 has a total of 80 licenses, this configuration indicates that 64 licenses are dedicated to the LSF workload, and 16 licenses are dedicated to the non-LSF workload.

Example 2

Begin Feature 
NAME=ApplicationX 
DISTRIBUTION=LicenseServer1(Lp1 1 Lp2 2)
WORKLOAD_DISTRIBUTION=LicenseServer1(LSF 8/40 NON_LSF 2) 
End Feature

On the LicenseServer1 domain, the available licenses are dedicated in a ratio of 8:2 for LSF and non-LSF workloads, with an absolute maximum of 40 licenses dedicated to the LSF workload. This means that 80% of the available licenses, up to a maximum of 40, are dedicated to the LSF workload, and the remaining licenses are dedicated to the non-LSF workload.

If LicenseServer1 has a total of 40 licenses, this configuration indicates that 32 licenses are dedicated to the LSF workload, and eight licenses are dedicated to the non-LSF workload. However, if LicenseServer1 has a total of 80 licenses, only 40 licenses are dedicated to the LSF workload, and the remaining 40 licenses are dedicated to the non-LSF workload.

ENABLE_DYNAMIC_RUSAGE

Syntax

ENABLE_DYNAMIC_RUSAGE=Y

Description

Enforces license distribution policies for class-C license features.

When set, ENABLE_DYNAMIC_RUSAGE enables all class-C license checkouts to be considered managed checkout, instead of unmanaged (or OTHERS).
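
For example, a sketch (hypothetical feature and project names) that enables managed checkout accounting for a class-C feature:
Begin Feature
NAME=AppC
DISTRIBUTION=LanServer(Lp1 1 Lp2 1)
ENABLE_DYNAMIC_RUSAGE=Y
End Feature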

DYNAMIC

Syntax

DYNAMIC=Y

Description

If you specify DYNAMIC=Y, you must specify a duration in an rusage resource requirement for the feature. This enables License Scheduler to treat the license as a dynamic resource and prevents License Scheduler from scheduling tokens for the feature when they are not available, or reserving license tokens when they should actually be free.
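
For example, a sketch (hypothetical feature, project, and values) that declares the feature dynamic; the matching job submission must then specify a duration in its rusage string:
Begin Feature
NAME=AppD
DISTRIBUTION=LanServer(Lp1 1)
DYNAMIC=Y
End Feature
A job might then be submitted as:
bsub -Lp Lp1 -R "rusage[AppD=1:duration=10]" myjob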

LM_REMOVE_INTERVAL

Syntax

LM_REMOVE_INTERVAL=seconds

Description

Specifies the minimum time a job must have a license checked out before lmremove can remove the license. lmremove causes lmgrd and vendor daemons to close the TCP connection with the application. They then retry the license checkout.

The value specified for a feature overrides the global value defined in the Parameters section. Each feature definition can specify a different value for this parameter.

Default

Undefined: License Scheduler applies the global value.

ENABLE_MINJOB_PREEMPTION

Syntax

ENABLE_MINJOB_PREEMPTION=Y

Description

Minimizes the overall number of preempted jobs by enabling job list optimization. For example, for a job that requires 10 licenses, License Scheduler preempts one job that uses 10 or more licenses rather than 10 jobs that each use one license.
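
For example, a sketch (hypothetical feature and project names) that turns on job list optimization for a feature:
Begin Feature
NAME=AppE
DISTRIBUTION=LanServer(Lp1 1 Lp2 1)
ENABLE_MINJOB_PREEMPTION=Y
End Feature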

Default

Undefined: License Scheduler does not optimize the job list when selecting jobs to preempt.

FeatureGroup section

Description

Optional. Collects license features into groups. Put FeatureGroup sections after Feature sections in lsf.licensescheduler.

FeatureGroup section structure

The FeatureGroup section begins and ends with the lines Begin FeatureGroup and End FeatureGroup. Feature group definition consists of a unique name and a list of features contained in the feature group.

Example

Begin FeatureGroup
NAME = Synopsys
FEATURE_LIST = ASTRO VCS_Runtime_Net Hsim Hspice
End FeatureGroup
Begin FeatureGroup
NAME = Cadence
FEATURE_LIST = Encounter NCSim  NCVerilog
End FeatureGroup

Parameters

  • NAME

  • FEATURE_LIST

NAME

Required. Defines the name of the feature group. The name must be unique.

FEATURE_LIST

Required. Lists the license features contained in the feature group. The feature names in FEATURE_LIST must already be defined in Feature sections. Feature names cannot be repeated in the FEATURE_LIST of one feature group. The FEATURE_LIST cannot be empty. Different feature groups can have the same features in their FEATURE_LIST.

ProjectGroup section

Description

Optional. Defines the hierarchical relationships of projects.

The hierarchical groups can have multiple levels of grouping. You can configure a tree-like scheduling policy, with the leaves being the license projects that jobs can belong to. Each project group in the tree has a set of values, including shares, limits, ownership and non-shared, or exclusive, licenses.

Use blstat -G to view the hierarchical dynamic license information.

Use blinfo -G to view the hierarchical configuration.

ProjectGroup section structure

Define a section for each hierarchical group managed by License Scheduler.

The keywords GROUP, SHARES, OWNERSHIP, LIMITS, and NON_SHARED are required. The keyword PRIORITY is optional. Empty brackets are allowed only for OWNERSHIP, LIMITS, and PRIORITY. SHARES must be specified.

Begin ProjectGroup
GROUP          SHARES   OWNERSHIP  LIMITS  NON_SHARED  PRIORITY
(root(A B C))  (1 1 1)  ()         ()      ()          (3 2 -)
(A (P1 D))     (1 1)    ()         ()      ()          (3 5)
(B (P4 P5))    (1 1)    ()         ()      ()          ()
(C (P6 P7 P8)) (1 1 1)  ()         ()      ()          (8 3 0)
(D (P2 P3))    (1 1)    ()         ()      ()          (2 1)
End ProjectGroup

Parameters

  • GROUP

  • SHARES

  • OWNERSHIP

  • LIMITS

  • NON_SHARED

  • PRIORITY

  • DESCRIPTION

GROUP

Defines the project names in the hierarchical grouping and its relationships. Each entry specifies the name of the hierarchical group and its members.

For better readability, you should specify the projects in the order from the root to the leaves as in the example.

Specify the entry as follows:

(group (member ...))

SHARES

Required. Defines the shares assigned to the hierarchical group member projects. Specify the share for each member, separated by spaces, in the same order as listed in the GROUP column.

OWNERSHIP

Defines the level of ownership of the hierarchical group member projects. Specify the ownership for each member, separated by spaces, in the same order as listed in the GROUP column.

You can only define OWNERSHIP for hierarchical group member projects, not hierarchical groups. Do not define OWNERSHIP for the top level (root) project group. Ownership of a given internal node is the sum of the ownership of all child nodes it directly governs.

A dash (-) is equivalent to a zero, which means there are no owners of the projects. You can leave the parentheses empty () if desired.

Valid values

A positive integer between the NON_SHARED and LIMITS values defined for the specified hierarchical group.
  • If defined as less than NON_SHARED, OWNERSHIP is set to NON_SHARED.

  • If defined as greater than LIMITS, OWNERSHIP is set to LIMITS.

LIMITS

Defines the maximum number of licenses that can be used at any one time by the hierarchical group member projects. Specify the maximum number of licenses for each member, separated by spaces, in the same order as listed in the GROUP column.

A dash (-) is equivalent to INFINIT_INT, which means there is no maximum limit and the project group can use as many licenses as possible.

You can leave the parentheses empty () if desired.

NON_SHARED

Defines the number of licenses that the hierarchical group member projects use exclusively. Specify the number of licenses for each group or project, separated by spaces, in the same order as listed in the GROUP column.

A dash (-) is equivalent to a zero, which means there are no licenses that the hierarchical group member projects use exclusively.

Normally, the total number of non-shared licenses should be less than the total number of license tokens available. License tokens may not be available to project groups if the total non-shared licenses for all groups is greater than the number of shared tokens available.

For example, feature p4_4 is configured as follows, with a total of 4 tokens:
Begin Feature
NAME=p4_4  # total token value is 4
GROUP_DISTRIBUTION=final
SERVICE_DOMAINS=LanServer
End Feature
The correct configuration is:
GROUP            SHARES   OWNERSHIP   LIMITS   NON_SHARED
(final (G2 G1))  (1 1)    ()          ()       (2 0)
(G1 (AP2 AP1))   (1 1)    ()          ()       (1 1)

Valid values

Any positive integer up to the LIMITS value defined for the specified hierarchical group.

If defined as greater than LIMITS, NON_SHARED is set to LIMITS.

PRIORITY

Optional. Defines the priority assigned to the hierarchical group member projects. Specify the priority for each member, separated by spaces, in the same order as listed in the GROUP column.

“0” is the lowest priority, and a higher number specifies a higher priority. This column overrides the default behavior. Instead of preempting based on the accumulated inuse usage of each project, the projects are preempted according to the specified priority from lowest to highest.

By default, priorities are evaluated top down in the project group hierarchy. The priority of a given node is first decided by the priority of its parent groups. When two nodes have the same priority, priority is determined by the accumulated inuse usage of each project at the time the priorities are evaluated. Specify LS_PREEMPT_PEER=Y in the Parameters section to enable bottom-up license token preemption in hierarchical project group configuration.

A dash (-) is equivalent to a zero, which means there is no priority for the project. You can leave the parentheses empty () if desired.

Use blinfo -G to view hierarchical project group priority information.

Priority of default project

If not explicitly configured, the default project has the priority of 0. You can override this value by explicitly configuring the default project in Projects section with the chosen priority value.

DESCRIPTION

Optional. Description of the project group.

The text can include any characters, including white space. The text can be extended to multiple lines by ending the preceding line with a backslash (\). The maximum length for the text is 64 characters.

Use blinfo -G to view hierarchical project group description.

Projects section

Description

Required. Lists the License Scheduler projects.

Projects section structure

The Projects section begins and ends with the lines Begin Projects and End Projects. The second line consists of the required column heading PROJECTS and the optional column heading PRIORITY. Subsequent lines list participating projects, one name per line.

Examples

The following example lists the projects without defining the priority:
Begin Projects
PROJECTS
Lp1
Lp2
Lp3
Lp4
...
End Projects
The following example lists the projects and defines the priority of each project:
Begin Projects 
PROJECTS         PRIORITY 
Lp1              3 
Lp2              4 
Lp3              2 
Lp4              1 
default          0
... 
End Projects

Parameters

  • PROJECTS

  • PRIORITY

  • DESCRIPTION

PROJECTS

Defines the name of each participating project. Specify using one name per line.

PRIORITY

Optional. Defines the priority for each project, where “0” is the lowest priority and a higher number specifies a higher priority. This column overrides the default behavior. Instead of preempting projects in the order they are listed under PROJECTS based on the accumulated inuse usage of each project, License Scheduler preempts projects according to the specified priority, from lowest to highest.

When two projects are configured with the same priority number, the project listed first has higher priority, similar to LSF queues.

Use blinfo -Lp to view project priority information.

Priority of default project

If not explicitly configured, the default project has the priority of 0. You can override this value by explicitly configuring the default project in Projects section with the chosen priority value.

DESCRIPTION

Optional. Description of the project.

The text can include any characters, including white space. The text can be extended to multiple lines by ending the preceding line with a backslash (\). The maximum length for the text is 64 characters.

Use blinfo -Lp to view the project description.

Automatic time-based configuration

Variable configuration is used to automatically change LSF License Scheduler license token distribution policy configuration based on time windows. You define automatic configuration changes in lsf.licensescheduler by using if-else constructs and time expressions in the Feature section. After you change the file, check the configuration with the bladmin ckconfig command, and reconfigure License Scheduler with the bladmin reconfig command.

The expressions are evaluated by License Scheduler every 10 minutes based on the bld start time. When an expression evaluates true, License Scheduler dynamically changes the configuration based on the associated configuration statements. Reconfiguration is done in real time without restarting bld, providing continuous system availability.

Example

Begin Feature
NAME = f1 
#if time(5:16:30-1:8:30 20:00-8:30)
DISTRIBUTION=Lan(P1 2/5  P2 1)
#elif time(3:8:30-3:18:30)
DISTRIBUTION=Lan(P3 1)
#else
DISTRIBUTION=Lan(P1 1 P2 2/5)
#endif
End Feature