com.sodius.mdw.core.eval
Interface EvaluationContext


public interface EvaluationContext

Provides services that help to build model transformers/generators.

An instance of EvaluationContext is always accessible from templates and scripts through the context variable.

This interface is not intended to be implemented by clients.
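For example, a script may use the context variable to report progress and launch a generation directly. A minimal sketch (the template name "myPackage.myTemplate" is purely illustrative):

context.getProgressMonitor().subTask("Generating files");
context.generate("myPackage.myTemplate", null);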


Method Summary
 void cancelEvaluation()
          Provides an easy way to interrupt the evaluation.
 GeneratedCode createGeneratedCode(String templateName, List<?> arguments)
          Evaluates the specified text template and returns the generated contents.
 Model createModel(String metamodelID)
          Creates an empty model based on the specified metamodel.
 Object evaluateRule(String ruleSetName, List<?> ruleSetArguments, String ruleName, List<?> ruleArguments)
          Evaluates the specified rule and returns the evaluation result.
 void generate(String templateName, List<?> arguments)
          Evaluates the specified text template and writes the generated contents to disk.
 EvaluationConfiguration getConfiguration()
          Returns the configuration that defines the options of this evaluation.
 Logger getLogger()
          Returns the logger used to report errors and warnings, as well as debugging information.
 ProgressMonitor getProgressMonitor()
          Returns the progress monitor used to report progress information.
 Project getProject()
          Returns the project in which the evaluation takes place.
 TransientLinkManager getTransientLinks()
          Returns the transient link manager, which provides facilities to dynamically create virtual links between objects of any type.
 MDWorkbench getWorkbench()
          Returns the workbench in which the evaluation is performed, which may be used to access configuration properties.
 

Method Detail

getWorkbench

MDWorkbench getWorkbench()
Returns the workbench in which the evaluation is performed, which may be used to access configuration properties. The workbench configuration provides, for example, access to the command line that launched the workbench, and can determine whether the workbench runs in headless (no user interface) mode.

Returns:
the workbench.
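
A minimal usage sketch; the accessors beyond getWorkbench() (getConfiguration(), isHeadless()) are assumed names standing in for the configuration properties described above, not guaranteed by this page:

MDWorkbench workbench = context.getWorkbench();
// getConfiguration() and isHeadless() are hypothetical accessor names
boolean headless = workbench.getConfiguration().isHeadless();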

getLogger

Logger getLogger()
Returns the logger used to report errors and warnings, as well as debugging information.

Returns:
the logger.
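
For instance, a script can report a warning through this logger; the exact reporting methods depend on the Logger interface, so the method name below is an assumption:

// warning(...) is an assumed Logger method name, used here for illustration
context.getLogger().warning("No input model selected");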

getProgressMonitor

ProgressMonitor getProgressMonitor()
Returns the progress monitor used to report progress information.

A script designer may report progress information to the end-user by calling the subTask() method this way:

context.getProgressMonitor().subTask("Analysing packages");
context.evaluateRule(...);

context.getProgressMonitor().subTask("Generating some files");
context.generate(...);

Returns:
the progress monitor used to report progress information (never null).

cancelEvaluation

void cancelEvaluation()
                      throws OperationCanceledException
Provides an easy way to interrupt the evaluation. This method throws an OperationCanceledException to abort the evaluation, which is silently handled by the workbench.

This method is designed to respond to an end-user request. For example, if a selection dialog box is opened and the user presses the Cancel button, this method helps to silently terminate the evaluation. However, if the evaluation cannot complete correctly because of invalid input or missing configuration properties, you should instead throw an exception (such as EvaluationException).

Throws:
OperationCanceledException - to interrupt the evaluation.
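
A typical usage sketch, where a user-driven cancellation stops the evaluation silently (openSelectionDialog() is a hypothetical helper):

// selection is null when the end-user pressed Cancel in some dialog (hypothetical helper)
Object selection = openSelectionDialog();
if (selection == null) {
    context.cancelEvaluation(); // throws OperationCanceledException, handled silently by the workbench
}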

createGeneratedCode

GeneratedCode createGeneratedCode(String templateName,
                                  List<?> arguments)
Evaluates the specified text template and returns the generated contents.

The template name can be a fully qualified name ("myPackage.myTemplate" for example), or a simple name if the template is part of the caller package ("myTemplate" if called from the package "myPackage").

The number and type of arguments must match the parameters of the template. If the template does not declare parameters, the arguments can be an empty list or null.

Important note: the text template is evaluated but the generated contents are not written to disk. While this method may be helpful in some specific cases, you will generally prefer to use the generate method.

Parameters:
templateName - the name of the template.
arguments - the template arguments (can be null if no arguments are expected)
Returns:
a generated code, which describes the text template output.
Throws:
EvaluationException - if anything prevents the template from being evaluated or its generated contents from being persisted.
See Also:
generate(String, List)
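
A minimal sketch, assuming a hypothetical template "myPackage.myTemplate" that declares a single parameter; the returned GeneratedCode can later be written explicitly through its write() method (see the See Also link):

GeneratedCode code = context.createGeneratedCode("myPackage.myTemplate",
        Collections.singletonList(myElement)); // myElement is a hypothetical argument
code.write(); // write the contents to disk only when needed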

generate

void generate(String templateName,
              List<?> arguments)
Evaluates the specified text template and writes the generated contents to disk.

The template name can be a fully qualified name ("myPackage.myTemplate" for example), or a simple name if the template is part of the caller package ("myTemplate" if called from the package "myPackage").

The number and type of arguments must match the parameters of the template. If the template does not declare parameters, the arguments can be an empty list or null.

The template is evaluated and the generated contents are written out, as specified by the file property of the text template. If the template does not specify a file property, an EvaluationException is thrown.

Parameters:
templateName - the name of the template.
arguments - the template arguments (can be null if no arguments are expected)
Throws:
EvaluationException - if anything prevents the template from being evaluated or its generated contents from being persisted.
See Also:
GeneratedCode.write()
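
A minimal sketch, using a hypothetical template name and no arguments:

// the template's file property determines where the contents are written
context.generate("myPackage.myTemplate", null);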

evaluateRule

Object evaluateRule(String ruleSetName,
                    List<?> ruleSetArguments,
                    String ruleName,
                    List<?> ruleArguments)
Evaluates the specified rule and returns the evaluation result.

The rule set name can be a fully qualified name ("myPackage.myRuleSet" for example), or a simple name if the rule set is part of the caller package ("myRuleSet" if called from the package "myPackage").

The rule name must match a rule declared by the rule set and that is:

The number and type of arguments must match the parameters of the rule set and of the rules. If the rule set or the rule does not declare parameters, the arguments can be an empty list or null.

Parameters:
ruleSetName - the name of the rule set.
ruleSetArguments - the rule set arguments (can be null if no arguments are expected)
ruleName - the name of the rule.
ruleArguments - the rule arguments (can be null if no arguments are expected)
Returns:
the rule evaluation result (can be null)
Throws:
EvaluationException - if anything prevents the rule from being evaluated.
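
A minimal sketch, with hypothetical rule set and rule names and a single rule argument:

Object result = context.evaluateRule("myPackage.myRuleSet", null,
        "myRule", Collections.singletonList(myElement)); // myElement is illustrative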

getTransientLinks

TransientLinkManager getTransientLinks()
Returns the transient link manager, which provides facilities to dynamically create virtual links between objects of any type. These links do not exist before the evaluation and are automatically discarded at the end of the evaluation process. They never alter the models.

Returns:
the transient link manager.
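
A sketch of typical use; only getTransientLinks() is documented here, and the link-creation call shown is a hypothetical TransientLinkManager method name standing in for its actual facilities:

TransientLinkManager links = context.getTransientLinks();
// addLink(...) is an assumed method name; sourceElement and targetElement are illustrative
links.addLink("traceability", sourceElement, targetElement);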

getProject

Project getProject()
Returns the project in which the evaluation takes place. The project provides a way to dynamically discover available scripts and templates.

Note: template and script designers should generally not have to consider this advanced facility.

Returns:
the project in which the evaluation takes place.

getConfiguration

EvaluationConfiguration getConfiguration()
Returns the configuration that defines the options of this evaluation.

Note: template and script designers should generally not have to consider this configuration, which is directly handled by the workbench.

Returns:
the evaluation configuration.

createModel

Model createModel(String metamodelID)
Creates an empty model based on the specified metamodel.

Parameters:
metamodelID - a metamodel unique identifier.
Returns:
an empty model.
Throws:
EvaluationException - if no metamodel matches this ID.
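
A minimal sketch; the metamodel identifier shown is hypothetical and must match a metamodel actually registered in the workbench:

// "uml21" is an illustrative identifier, not necessarily a valid metamodel ID
Model model = context.createModel("uml21");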