Software Module Tests

Software unit tests, also called software module tests, are so-called "dynamic tests": they require the execution of the software units. The software can be executed on the target system, in an emulator, a simulator, or any other suitable test environment. Within dynamic testing, the state of the art distinguishes between structural and functional tests. Structural dynamic tests are performed with knowledge of the module internals in mind. This means that the branches and paths in functions and modules have to be considered when the tests are designed, so that not only the function of the test object is tested, but at the same time it is checked whether all branches in the software have been covered.


Functional Unit Tests

Functional unit tests are also called black-box tests: they treat the system under test as a "black box". This means that the testing is done without special knowledge or consideration of the internal structure of the software; the data flow, control flow, internal timings and internal interactions are deliberately left out of consideration. Inputs are stimulated or supplied by peripheral devices or by simulation environments. The outputs of the software are checked for their expected values and for their behaviour in relation to the supplied inputs. The following features are common to these tests:


  1. The tests are strictly checking the functionality and performance of the software against its requirements.
  2. The timing of the inputs and expected outputs is part of the checks, as far as it is subject to specification.
  3. In addition to checking the required behaviour, the test requirements will usually be extended to also check the behaviour for unexpected inputs. Such tests are also called "dirty tests".
A test environment should be used for the functional testing of units and components. At the defined and specified input interface, the unit or software component has to be stimulated with input data. This is done, for example, with pre-filled arrays containing the test data or by reading the test data from a file. The data are handed over to the tested function via its parameter interface or, in case the data are static, they can be set prior to execution via set-functions.

The output interface is recorded and evaluated to check whether the expected results were achieved. This can also be used for regression testing in case the source code was modified, e.g. in a refactoring process, to make sure that the code improvement or refactoring did not change the functionality.

The focus of the test is:

  • Set-up of automated tests that can also be used as regression tests. This means the test environment, once set up for a software unit, can be re-used to check it, e.g. after maintenance.
  • Implementation of the relevant state of the art test methods like equivalence classes, boundary value analysis and condition coverage.
  • Easy generation of test cases for all possible combinations of inputs to a function, e.g. to check a function for overflows, underflows or divisions by zero.

White-Box Aspects of Unit Tests

Especially for safety-relevant software it is required to prove a certain code coverage when performing unit tests. For safety-critical software, C1 coverage is required. This means that at least branch (decision) coverage has to be achieved by the test cases. Branch coverage makes sure that each branch of the software is executed, even if the branch is empty.

To achieve this it is necessary to instrument the code for measurement. The tests are then run and the coverage is evaluated. If branch coverage was not achieved, the test cases are not sufficient, and test cases have to be added to the dynamic tests until the desired coverage is reached.

It cannot be said that the test cases cover all possibilities just because branch coverage was achieved. However, it can be said that the test cases are definitely not sufficient if it was not reached. Therefore C1 coverage is a minimum requirement for software unit tests in safety-critical systems.

The Test Environment

To perform the unit tests you need a test bench. Quite a number of professional systems are available which do the testing either in a PC environment or on the target hardware. Generally, everything that needs target hardware is a potential problem: the available hardware might not be your exact target system, so an evaluation board has to be used for the tests. In that case there is not much benefit, because the integration into the target system will never be complete and valid. The better approach is to keep the hardware-related parts of the program in a dedicated HW abstraction layer. This part then has to be tested in a special environment (e.g. an emulator). All the rest of the software is standard C and can also be tested in a PC-based environment. We will give an example of setting up tests in a Perl script environment.

What does our Perl-based test environment look like?

The test environment is entirely contained in a Perl script. It uses the Perl "Inline" package, which makes it possible to call C functions from the Perl script. To achieve this, the script has two parts: at the top there is the Perl part, which is followed by a C part. The general procedure is to execute the main Perl script part. Perl will then automatically set up sub-directories, compile the C part and build a DLL. The Perl script then uses the functions in the DLL as required. Global variables within the DLL cannot be accessed by Perl directly, not least because the data models of C and Perl are quite different. To access global variables it is therefore necessary to write so-called "get" functions, which read them and make them available in Perl for the evaluation of results. Likewise, to set input values for the test of a C function, so-called "set" functions have to be established.

The operation flow is then as follows:

  1. Open a log file where all subsequent results will be written.
  2. Write the header of the test log to the log file.
  3. Call the initialization routine of the C-part of the test set up.
  4. Set the input(s) to the C-function using the appropriate "set-function(s)".
  5. Execute the function(s) of the test object.
  6. Read the outputs of the C-function using the appropriate "get-function(s)".
  7. Write the resulting outputs including the original variable name used in the C-function to the log file.
  8. Repeat steps 4 to 7 with new input data as often as this is required.
  9. Write the trailer to the log file which states the function name which was tested.

If the same test step is used multiple times with varying data, it is advisable to use nested "for" loops to obtain all possible combinations of the input data.



Example to test a proprietary division function:

For a microcontroller application a proprietary division function had to be written. One could not simply use c = a / b;, because this would eventually exceed the runtime budget, since the normal division made use of a lengthy library function provided by the compiler vendor. A smart solution for a very fast division function was found, but how to make sure that it gives the same results as the standard division on any other computer?

The following test approach helped to develop the function (it had some rounding problems in the beginning) and finally gave the proof that the new function delivers the same results as the standard division:



  
