This page describes the Overture test framework and how to use it to write new tests and work with existing ones. The framework is located in core/testing/framework. The code is reasonably documented, so take a look. Sample tests are also available in core/testing/samples.

Framework Overview

The test framework has two guiding design principles:

  • Minimize the amount of test code developers have to write by automating common tasks and providing opinionated defaults.
  • Provide a standard workflow for all tests that can be used by most modules for most of their testing.

The framework works predominantly with parameterised tests: you define the basic behavior of the test once and the framework repeats it multiple times, with a different parameter input each time. Parameters are defined by test writers; in Overture, they are almost always VDM sources.

The workflow for a test is:

  1. grab the source from its path
  2. parse and type-check it to make an AST
  3. process the AST to produce some result (proof obligations, interpreter results, generated code, etc.)
  4. compare the produced result against a previously stored one

The primary test class in the framework (ParamStandardTest) automates steps 1-2, provides a mandatory hook for step 3 and an optional hook for step 4. This class should be adequate for writing most kinds of tests. However, there are a few alternatives for dealing with special cases:

  • ParamFineGrainTest provides a hook for step 2 as well, which allows it to deal with sources that do not parse or type-check correctly. This class exists mainly to help test the parser and type checker. Be warned: invoking the parser and type checker manually is laborious. You should not use this class unless you really need to.
  • ParamExternalsTest runs tests with VDM sources available externally (that is, outside of the overture repository). Since external sources can contain errors, this is a subclass of ParamFineGrainTest. To run tests defined with this class, the path to the external sources must be passed via a Java property (see below).
  • ParamNoResultFileTest runs tests that do not store results. It is a variation of ParamStandardTest where step 4 is replaced by a hook to analyse the result directly. This class is useful for "no crash" tests and other kinds of tests that do not need any stored results.

Working with existing tests

There are 3 things to know when working with existing tests: how to add a new input, how to pass external inputs and how to inspect failures and update results.

In order to add a new input to an existing test, place the VDM source somewhere under the root folder for that test. You can find the root in the test code (look for the @Parameters annotation). When you add a source for a standard test, you must use the correct file extension, as the framework uses extensions to work out which dialect to use. When you first add an input there will be no result file; run the test in update mode (see below) to generate one.
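For illustration only, a test root might look something like this (the root folder and the exact result-file naming are defined by the test class and the framework, so check an existing input before copying the pattern):

path/to/test/srcs/
    existing_input.vdmsl           existing test input
    existing_input.vdmsl.result    stored result for the input above
    new_input.vdmsl                new input; no result file until you run in update mode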

In order to pass the external sources to a ParamExternalsTest, use the JVM property externalTestsPath=/path/to/files. This property can be set in the run configurations of most IDEs. On the command line, you can pass it to Maven with the argument -DexternalTestsPath=/path/to/files

When a test fails, you should see a message like this: [Input: x, Result: y, Update Property: z] along with the details of the test failure (differences between actual and expected results, etc.). From this information, you can decide whether the problem is with the test input source (in which case you simply edit the file) or with the result file (in which case you update it by rerunning the test while passing the listed property). Note that you only get this information for tests that fail due to assertion errors. If there is an unexpected internal tool error during the test, you will only see the standard exception stack trace.
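For example, if the message lists Update Property: tests.update.newmodule.NewAnalysisTest (an illustrative name), you can update the stored result from the command line with something along the lines of mvn test -Dtest=NewAnalysisTest -Dtests.update.newmodule.NewAnalysisTest (whether the property needs an explicit value may depend on the framework version, so check the test class if in doubt).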

Creating new tests

To create a new test, select the appropriate test class and subclass it. The class is parameterised on the type of its results, R. You are free to use whatever result class you want (but see below for tips). The class must also be annotated with @RunWith(Parameterized.class) in order to run the parameterised tests. Once this is done, you must implement a few methods, described in the sections below.
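As a rough sketch, a new test class has the following overall shape. NewAnalysisTest and NewResult are illustrative names used throughout the examples below, and the constructor signature is an assumption; check the samples in core/testing/samples for the exact one your version of the framework expects:

@RunWith(Parameterized.class)
public class NewAnalysisTest extends ParamStandardTest<NewResult> {

    // NOTE: assumed constructor signature -- check core/testing/samples
    public NewAnalysisTest(String name, String input, String result) {
        super(name, input, result);
    }

    // the @Parameters method, processModel, getResultType and
    // getUpdatePropertyString described below go here
}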

@Parameters Method

This method provides the parameterised inputs to the test case. Due to some annoying limitations in JUnit, it has to be written manually for every test class. The method must have the @Parameters annotation, be public static and return a collection of Object arrays. The framework provides a helper method that does all of this for you; all you need to do is call it with the path to your test inputs:

@Parameters(name = "{index} : {0}")
public static Collection<Object[]> sampleParameters() {
    return PathsProvider.computePaths("path/to/test/srcs");
}

For external tests, there is no need to write a @Parameters method. The framework already takes care of calling the PathsProvider on the provided external path.

processModel(List<INode> ast)

This method takes the parsed and type-checked model from the input source and must return an object of the result type. Normally, you will invoke your module's analysis on the AST, convert its output to the result type and return it.
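A hedged sketch of a typical implementation, continuing the illustrative NewAnalysisTest example. MyChecker and Warning are hypothetical stand-ins for your module's analysis and its output; they are not part of the framework:

@Override
public NewResult processModel(List<INode> ast) {
    // run the module's analysis over the AST (MyChecker is hypothetical)
    Set<Warning> warnings = MyChecker.analyse(ast);
    // wrap the raw output in the test result type
    return new NewResult(warnings);
}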

getResultType()

This method must return the type of the test result. It exists to help out with reflection and JSON serialisation. You can copy the snippet below and replace R with the actual type of your test results:

@Override
public Type getResultType() {
    Type resultType = new TypeToken<R>() {}.getType();
    return resultType;
}

getUpdatePropertyString()

This method must return a string naming the property used to update results. The convention for this property is the following: tests.update.[maven-module].[test-class]. Please follow the convention; it makes it much easier to update results.
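For example, following the convention with illustrative module and class names:

@Override
public String getUpdatePropertyString() {
    // tests.update.[maven-module].[test-class]
    return "tests.update.newmodule.NewAnalysisTest";
}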

Other methods

You can optionally define a compareResults(R actual, R expected) method. This overrides the default comparison of results, which is based on equals. If you need more complex processing of results, this is the place to do it. If you do override this method, remember that you must write the assertions yourself. In these assertions, please include a failure message and call testInfo() to get a string reporting the input and result files and the update property.
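A hedged sketch of an override, continuing the NewResult example and assuming a void return type (check the framework source for the exact signature). The assertions come from org.junit.Assert:

@Override
public void compareResults(NewResult actual, NewResult expected) {
    // write the assertions yourself; include testInfo() so failures
    // report the input file, result file and update property
    assertEquals("Wrong number of warnings. " + testInfo(),
            expected.warnings.size(), actual.warnings.size());
    assertTrue("Missing warnings. " + testInfo(),
            actual.warnings.containsAll(expected.warnings));
}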

For fine grain tests, instead of processing a model, you must process a source. This is done in the processSource() method. This method must return a test result but takes no arguments. Instead, you can use the instance variable modelPath, which points to the test input source. From there, you can process the source as needed.
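A hedged sketch for a ParamFineGrainTest subclass, assuming modelPath is a plain string path (adapt the read if it is a File or Path in your version) and using a hypothetical MyParser as the module under test. Files and Paths come from java.nio.file:

@Override
public NewResult processSource() {
    try {
        // modelPath is provided by the framework and points at the input source
        String source = new String(Files.readAllBytes(Paths.get(modelPath)));
        return new NewResult(MyParser.parse(source));
    } catch (IOException e) {
        fail("Could not read test input: " + e.getMessage());
        return null; // unreachable: fail() throws
    }
}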

Advanced

Result Classes

Good result classes need to be human-readable so it's easy to understand what happened when tests go wrong. The key thing is to have something that's diffable. Normally, you will have collections of elements; a sketch of a suitable result class follows the list below.

  • Use unordered collections whenever possible. This avoids test failures caused by a list being re-ordered.
  • Implement a custom toString() method that orders the elements before printing. This makes it easier to spot a missing element.
  • Use a value class for the elements of the collection. Value classes are read-only and only contain primitive types.
  • Include only the minimal amount of data needed to check the result.
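A minimal sketch following these guidelines (all names are illustrative): the element is a read-only value class, and the container sorts its elements before printing so diffs are stable:

// value class for a single element: read-only, primitives and strings only
public class Warning {
    public final String module;
    public final int line;

    public Warning(String module, int line) {
        this.module = module;
        this.line = line;
    }

    @Override
    public String toString() {
        return module + ":" + line;
    }

    // equals() and hashCode() omitted for brevity; they are needed for
    // the default equals-based comparison of results to work
}

// result class: unordered collection with a sorted toString
public class NewResult {
    public final Set<Warning> warnings;

    public NewResult(Set<Warning> warnings) {
        this.warnings = warnings;
    }

    @Override
    public String toString() {
        return warnings.stream().map(Warning::toString)
                .sorted().collect(Collectors.joining("\n"));
    }
}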

TBD

  • using assumptions
  • language releases
  • bulk property passing
  • folder layouts for external inputs and the associated stored results