Friday, 11 March 2011

Verification: Some Opening Words

VERIFICATION:
Some thoughts and info.

A very short and simple description of the verification process:
Specifically, RTL verification (VHDL, Verilog) using the VHDL Test Bench Package is the task of proving that the RTL meets its functional requirements. The functional requirements exist in one or more requirements specifications. Requirements can also exist in design specifications, though that is not preferred. It is the responsibility of the verification and design groups to collect the functional requirements into a test plan document. The test plan cross references functional requirements with test cases, and it includes a description of the verification environment and its parts. The verification group creates the verification test bench environment, the models and the BFMs. Test cases are created as defined in the test plan. When all the test cases are complete, design coverage is obtained, the results are analyzed and appropriate actions are taken. The final results should be documented somewhere, possibly in the test plan.

The above description outlines the major tasks and deliverables that a minimal verification effort should have. Depending on the tools available to you, many other activities can be added to the overall process.

Other items:
A revision control system is a must. Even if you are working alone, it is always nice to be able to revert to a previous state. In a group, revision control lets you know that everyone is working with the same files. From a verification point of view, it is a must for keeping the verification system consistent when there are multiple test bench users. (WinCVS?)

A bug tracking system is a must. Bug tracking does not only pertain to the RTL. Anything that needs to be addressed, now or in the future, should be tracked. This includes documentation, test bench, model, script and RTL issues. When the issues pile up it is easy to drop some; bug tracking ensures that all issues are remembered and resolved. (Bugzilla?)

Definitions:
A BFM (Bus Functional Model) is an object that interfaces with the RTL, the DUT (Design Under Test). Usually a BFM implements some kind of signalling protocol by asserting and reacting to external DUT ports. A BFM may contain data generation, configuration registers and/or status registers. A BFM may also provide signalling into the test bench environment. It may also be advantageous to have a BFM for an internal block within the RTL, e.g., a processor.
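
As a rough illustration, and not something taken from the VHDL Test Bench Package itself, a very small BFM might look like the sketch below. The entity name, the ports and the strobe/acknowledge write protocol are all made up for the example.

-- Illustrative bus functional model: drives a simple strobe/acknowledge
-- write protocol on hypothetical DUT ports.  All names are made up.
library ieee;
use ieee.std_logic_1164.all;

entity simple_bfm is
  port (
    clk   : in  std_logic;
    ack   : in  std_logic;                        -- acknowledge from the DUT
    stb   : out std_logic := '0';                 -- write strobe to the DUT
    addr  : out std_logic_vector(7 downto 0);
    wdata : out std_logic_vector(31 downto 0));
end entity simple_bfm;

architecture beh of simple_bfm is
begin
  driver : process
    -- one write cycle: drive address and data, assert stb, wait for ack
    procedure bus_write (
      constant a : in std_logic_vector(7 downto 0);
      constant d : in std_logic_vector(31 downto 0)) is
    begin
      wait until rising_edge(clk);
      addr  <= a;
      wdata <= d;
      stb   <= '1';
      wait until rising_edge(clk) and ack = '1';  -- react to the DUT
      stb   <= '0';
    end procedure bus_write;
  begin
    bus_write(x"10", x"DEADBEEF");                -- example stimulus
    bus_write(x"14", x"00000001");
    wait;                                         -- done driving
  end process driver;
end architecture beh;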

A Model is an object that represents the functionality required of the DUT. A model is used as a reference against which the output of the DUT is compared. The data and control applied to the DUT are also applied to the model, and the two outputs are compared to determine correct functionality. For a complex DUT, it may be nearly impossible to do the job without a model to supplement the verification environment.
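
A minimal sketch of the compare side of that idea is shown below. The signal names (dut_dout, model_dout, dout_valid) are illustrative only; in a real bench the same stimulus would be driven into both the DUT and the model.

-- Illustrative checker: compares the DUT output against the reference
-- model output whenever the data is flagged as valid.
library ieee;
use ieee.std_logic_1164.all;

entity compare_checker is
  port (
    clk        : in std_logic;
    dout_valid : in std_logic;
    dut_dout   : in std_logic_vector(31 downto 0);
    model_dout : in std_logic_vector(31 downto 0));
end entity compare_checker;

architecture beh of compare_checker is
begin
  checker : process (clk)
  begin
    if rising_edge(clk) then
      if dout_valid = '1' then
        -- flag any mismatch between the DUT and the reference model
        assert dut_dout = model_dout
          report "DUT output does not match the model output"
          severity error;
      end if;
    end if;
  end process checker;
end architecture beh;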

The Test Bench is the object that enables the user (the test writer) to control the BFMs and the RTL within the environment. The test bench is the container that holds everything that is and is not the DUT. In the case of the VHDL Test Bench, these are the ttb and tb files generated by the tb_gen utility.
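
The sketch below shows the general shape of a test bench as a container: a clock, a DUT instance and a stimulus process. It is only an illustration, with made-up entity and signal names, and is not the ttb/tb structure that tb_gen generates.

-- Illustrative test bench container: clock generation, a DUT instance
-- (a hypothetical entity called duv) and a simple stimulus process.
library ieee;
use ieee.std_logic_1164.all;

entity example_tb is
end entity example_tb;

architecture bench of example_tb is
  signal clk   : std_logic := '0';
  signal rst_n : std_logic := '0';
  signal din   : std_logic_vector(7 downto 0) := (others => '0');
  signal dout  : std_logic_vector(7 downto 0);
begin
  clk <= not clk after 5 ns;                      -- free-running clock

  -- the DUT: everything else in this file is "not the DUT"
  dut : entity work.duv
    port map (clk => clk, rst_n => rst_n, din => din, dout => dout);

  -- stimulus: reset, drive one value, then end the simulation
  stim : process
  begin
    rst_n <= '0';
    wait for 20 ns;
    rst_n <= '1';
    wait until rising_edge(clk);
    din   <= x"A5";
    wait for 100 ns;
    assert false report "End of test" severity failure;
    wait;
  end process stim;
end architecture bench;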

A test case, or script, is a plain text file that contains commands, defined in the test bench, which direct a specific function to be proven working. A test case is self checking: if an error is detected, it will be reported and the simulation may be terminated. Directed test cases are scripts, usually created by hand, that prove some particular functionality works as required. This means that as your design complexity increases, so does the number of directed test cases that need to be created; higher complexity usually means more functionality, which is why the number of test cases grows. Random test cases are not usually created by hand; they are generated by scripting languages such as Tcl or Perl. This adds randomness to your testing and gains all the benefits of random testing (which is a huge topic, not covered here).
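
Underneath the script commands, the self-checking behaviour usually comes down to VHDL checks along the lines of the sketch below. The names and values are made up; the point is the report/severity mechanism that flags an error and can terminate the simulation.

-- Illustrative self check: compare a value "read" from the DUT against
-- its expected value, report a mismatch and optionally stop the run.
library ieee;
use ieee.std_logic_1164.all;

entity self_check_demo is
end entity self_check_demo;

architecture beh of self_check_demo is
  constant stop_on_error : boolean := true;       -- end the run on any error
begin
  process
    variable read_data : std_logic_vector(7 downto 0);
    constant expected  : std_logic_vector(7 downto 0) := x"3C";
  begin
    read_data := x"3C";                           -- stand-in for a BFM read
    if read_data /= expected then
      report "Data check failed" severity error;
      if stop_on_error then
        report "Terminating on error" severity failure;
      end if;
    end if;
    report "Test case passed" severity note;
    wait;                                         -- all checks done
  end process;
end architecture beh;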

Design Coverage is defined as how well the test environment exercised the functionality in the DUT. The test cases written to test the DUT provide the functional coverage. With good planning, a high degree of functional coverage can be obtained via directed tests. Another useful coverage indicator is code coverage. Code coverage tells you which lines of code were executed while simulating. When you run all of your test cases (your regression set) and merge the code coverage results from each test, you get an indication of what was missed. Many things can be found while doing code coverage: dead code, optional code and missed functionality. To get 100% code coverage, the DUT code will have to have coverage on/off pragmas around all known unreachable code. Remember that code coverage is not equal to functional coverage: just because a line of code was executed does not mean it was correct.
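
As an illustration of the pragma idea, the sketch below excludes an "others" branch that only handles metavalues which should never occur in normal operation. The exact pragma spelling is tool dependent (the comment style shown is one common form), so check your simulator's documentation.

-- Illustrative coverage exclusion: the "others" branch only handles
-- metavalues, so it is fenced off from code coverage with pragmas.
library ieee;
use ieee.std_logic_1164.all;

entity cover_demo is
  port (
    clk : in  std_logic;
    sel : in  std_logic_vector(1 downto 0);
    y   : out std_logic_vector(3 downto 0));
end entity cover_demo;

architecture rtl of cover_demo is
begin
  process (clk)
  begin
    if rising_edge(clk) then
      case sel is
        when "00"   => y <= "0001";
        when "01"   => y <= "0010";
        when "10"   => y <= "0100";
        when "11"   => y <= "1000";
        -- coverage off
        when others => y <= "0000";               -- never reached in normal operation
        -- coverage on
      end case;
    end if;
  end process;
end architecture rtl;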

The above text presents some thoughts on the topic of verification, which is the reason the VHDL Test Bench Package exists. It should prepare the reader for the posts that follow.

Sckoarn
