
LCFG Component Test Framework

Project ID: 105
Current stage: 
Manager: 
Unit: 
What: 

Description: This project will develop a new, intuitive interface for LCFG
component authors to create and run tests on their components. It
will also support continuous integration testing and the production
of summaries of test compliance.

Deliverables: The project has three main deliverables. The first is a web interface
which component authors can use to create new component tests. The
second is a framework which carries out these tests regularly
(e.g. nightly) and also whenever a new version of a component is
submitted to the repository. The final deliverable is a web interface
which summarises the level of test compliance for each component on
each supported platform. Alongside this we expect to provide
component authors with the ability to run tests locally for their
components.

Why: 

Customer: The primary customer is LCFG component authors within the School of
Informatics. The intention is that the software and tests will be
packaged and distributed to allow external sites to install the
framework. They would then be able to use it to test their locally
developed components and also add extra tests to components
distributed by the School.

Case statement:

This project is required because the current component test framework suffers from a number of deficiencies, and the new LCFG build tools make testing with the current system considerably more difficult. The main problems are outlined below:

1. Tests no longer work on all supported platforms

The current testing framework no longer works on the MacOSX platform. This breakage was caused by the addition of the sysinfo component, and it highlights how fragile the current system is with respect to changes in the core of LCFG.

2. Adding new tests is difficult

This is probably the biggest problem. Adding new tests is so awkward that component authors rarely make the effort. A new system is required which vastly simplifies the procedure by providing a clear and intuitive interface.

3. Tests are only run manually by the component author

The tests are not run automatically or on a regular schedule, and there is no requirement that new versions of a component pass them. This leads to situations in which code that previously worked becomes broken or only works on particular platforms.

4. The tests are not run in a "real" environment

Currently the tests are run with a small set of resources which are set up at runtime. These resources do not come from a real LCFG profile and do not always have the full component schema specified. As the tests are run by the user, the component being tested also cannot modify the real configuration files or manage the real processes. Consequently a lot of work must be done within the component code to work around these restrictions. This adds a maintenance burden for component authors which, again, acts as a deterrent to adding tests. It can also mask real problems because the tests do not follow the same code paths as normal operation.

5. The tests are not run in a "clean" environment

As stated above, the tests are currently run with a specific set of resources, but they are also affected by the resources in the profile of the machine on which they are run. This can lead to situations where a component is actually broken but the breakage goes unnoticed. There is also the problem that missing dependencies are not detected because the tests are run within the "full" environment of the real machine.

When: 

Status:

Timescales:

Priority: This has become a reasonably high priority for the Managed Platform Unit as the tests for the core components no longer pass on the MacOSX platform. The design of the new LCFG build tools also makes it difficult to run tests within the source directory for a component. We thus need to add a new test framework before the code begins to rot.

Time:

How: 

Proposal:

Resources:

There is a requirement for a machine on which to host the web
interface part of the framework and a database backend. There is also
a requirement for a machine (or machines) on which to run the
tests. This could all be done on the same physical machine.

In the first instance it is likely that the tests will just cover the
component configure method and will be run within a simple chroot. At
a later stage it is expected that virtual machines will be needed for
full acceptance testing (checking that daemons are started and
stopped, etc.) on each supported platform. The host machine for those
virtual machines would need to be running in 64-bit mode.

In terms of personnel the work will be done by Stephen Quinney.

Plan:

The plan is to develop the new testing framework in stages of increasing complexity. The first stage will be to develop a web interface which can be used to add simple tests for the configure method of a component, based around checking the contents of generated files. The second stage will be to create tools for running these tests regularly and also on demand (e.g. when a new component version is submitted or a component author makes a request). At this point the tests should be capable of testing anything which can be done within a chroot, for example, the output of a command (stdout, stderr, exit code) after the configure method has been run. The next stage will be to provide a web interface which summarises test compliance. Finally, the intention is to extend this framework to test more complex component methods, such as start and stop, to check that daemons are correctly managed. That is likely to require some sort of virtualisation technology; at the moment it is unclear how much effort will be involved in this development phase.
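
As a rough illustration of the kind of check involved in the early
stages, a configure-method test could be written along the lines of
the following sketch (the file path and expected setting are purely
hypothetical, and the real framework may express such tests quite
differently):

    #!/usr/bin/perl
    # Hypothetical sketch: verify the file generated by a component's
    # configure method, run inside the test chroot.
    use strict;
    use warnings;
    use Test::More tests => 2;

    my $conf = '/etc/example.conf';   # assumed output of the configure method

    ok( -f $conf, 'configure method created the expected file' );

    open( my $fh, '<', $conf ) or BAIL_OUT("cannot read $conf: $!");
    my $contents = do { local $/; <$fh> };
    close($fh);

    like( $contents, qr/^port\s*=\s*8080$/m,
          'generated file contains the expected port setting' );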

The web interface will be developed using the Perl Catalyst web framework, with a PostgreSQL database backend for storing the tests and their results. The details of the tests and the compliance summaries should be publicly viewable, while the interface for adding and modifying tests will be authenticated via Cosign. It is hoped that the test framework can produce output that conforms to the Test Anything Protocol (TAP, http://testanything.org/wiki/index.php/Main_Page), which is currently being proposed as an IETF standard. This would allow the use of standard tools (there are several Perl modules) for summarising the test results and producing appropriate output.
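
For reference, TAP is a simple line-oriented text format; a test
script like the sketch above, extended with a third check, might emit
output such as the following (the test descriptions are illustrative
only):

    1..3
    ok 1 - configure method created the expected file
    ok 2 - generated file contains the expected port setting
    not ok 3 - configure method exited with status 0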

As well as the continuous integration testing system, it must be possible for a component author to run tests locally without installing the entire backend architecture. This will allow the author to check the validity of their code changes before submitting a new release. There will have to be a method for distributing sets of tests for components and also a command-line tool which can be used to execute them. It is not yet clear exactly how running the tests locally will be handled, but it might be possible to use the Perl Test::Harness modules along with the command-line prove utility.
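
For instance, assuming a component's tests were distributed as
standard Perl test scripts in a t/ directory (that layout is an
assumption, not a settled design), an author could run them locally
with:

    # Run all tests for the component and show each individual result
    prove -v t/

    # The same tests can also be driven from Perl via Test::Harness
    perl -MTest::Harness -e 'runtests(glob "t/*.t")'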

Other: 

Dependencies: Depends on the LCFG client being refactored - see project 139

Risks: Unforeseen issues with virtualisation technology could prevent the
success of the final stage of this project.

Milestones

Proposed date | Achieved date | Name | Description