Thursday, 21 April 2011

VHDL Test Bench: TTB Gen Plus Beta


Beta Release of TTB Gen Plus 2.0!!!

In preparation for a new release of the test bench package, I have spent a few hours recoding the TTB Gen Plus tcl/tk generation tool. It has one small GUI addition, but otherwise should look and feel exactly like version 1.0. I am currently working on an update to the VHDL Test Bench Package and want to make it a general update to everything. Part of the package is the generation tool, which helps reduce the overhead of creating a standard test bench implementation. So, as a bonus to you for visiting the blog, I would like to offer the Beta version for your testing.

The main enhancement is the removal of physical restrictions on parsing the entity definition. The tool should now parse any legal VHDL entity definition. Any generics found on the entity are now also generated in the component declaration and port mapping output to the structure file, entity_name_ttb_str. As a minor addition, an optional build_tb.bat file can be generated for Modelsim and Aldec compilation.
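
For illustration only, a generated build_tb.bat for Modelsim might contain something like the lines below; the file names are placeholders, not the tool's actual output:

vlib work
vcom -93 my_dut.vhd
vcom -93 my_dut_ttb_str.vhd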

There is one little thing to note about the generic generation. It is hard for the tool to predict what values should be assigned to generics, so as an initial step they are generated in the component declaration and port mapping, but commented out for later completion by the user.
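
For example, for a hypothetical entity with one generic (not actual tool output), the generated component declaration might look like this, with the generic section left for the user to complete:

component my_dut
  -- generic (
  --   g_data_width : integer := 8  -- value left for the user to set
  -- );
  port (
    clk : in std_logic;
    -- ...
  );
end component;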

If you need to get tcl/tk, you can get it here: http://www.activestate.com/activetcl/downloads

I personally use the 8.5 version.

There is only one condition attached to downloading a copy of TTB Gen Plus Beta. That is, if you use it and find a bug, or a feature you need or dislike, you have to post a comment at the bottom of this post. Think of it as a bug report; post up an example of the offending entity declaration.

TTB Gen Plus Beta can be downloaded from here: TTB Gen Beta
In the past I have found at least two uses for TTB Gen Plus besides generating test benches. If you code your entity first, you can use TTB Gen Plus to generate the component definition for you. This can save lots of typing if the entity is large with many pins. Also, the port map definition can be copied into a different structure file; remove the generated names and you have a nice start on the port map coding effort.

I hope that TTB Gen Plus can save you as much typing time as it has me.

Sckoarn

P.S. I will be releasing a Beta version of the VHDL Test Bench Package soon!!

VHDL Test Bench: What is Self Checking??


What is meant by self checking?

Firstly, a self checking test case is one that gets some status from the DUT and tests that it is correct. The status could be that which is read from a DUT status register: you read the status register and test that it has the correct contents. Or it could come from some BFM implementing a bus protocol, with a protocol checker and an internal status register; reading that register could indicate that everything is good. When a status is checked, there is the possibility that it is wrong. This should cause the simulation to output a message and terminate the run. The message should be as descriptive as possible, as you want to find the source of the problem quickly. A test case is self checking in that it tests for some condition and, if it is not correct, outputs an indication and terminates the run.

I always implement READ commands in the test bench. Every READ command puts the value read into a common variable. This enables various VERIFY commands to test for conditions after every read. For example, if I had a READ_CPU and a READ_BFM command, and they both put the read value in a variable called v_temp_read, then a VERIFY (word) and a VERIFY_BIT command could look to the same place to get the data to be tested. (some tips about command creation) The VERIFY command is the self checking element of a test case.
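
As a rough sketch (the command name, the cpu_read procedure, and the bus details are illustrative assumptions, not the package's actual code), a READ command in the else if chain might look like this, leaving its result in v_temp_read for a later VERIFY:

elsif (instruction(1 to len) = "READ_CPU") then
  v_temp_vec1 := std_logic_vector(conv_unsigned(par1, 32));  -- par1 is the address
  cpu_read(v_temp_vec1, v_temp_read);  -- assumed procedure driving the CPU bus
  -- v_temp_read now holds the value for VERIFY or VERIFY_BIT to test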

When a VERIFY or testing type command checks a value and it is wrong, the output message should have enough detail to enable the problem to be located quickly. I use the assertion facility of VHDL to output useful messages. The file name and the line number in that file are part of the instruction, and the current values are always available, specifically in the file_name string variable and the file_line integer variable. These variables can be used in the assert statement so the user will know where the error originated. (NOTE: While testing this code it was found that the file_name variable contains nothing. I found this bug in the release version and it will be fixed in a new release, coming very soon.) Below is an example of how I create a VERIFY command. We assume a read took place before the verify is done, and the value is in the v_temp_read variable.

-----------------------------------------------------------------------------
elsif (instruction(1 to len) = "VERIFY") then
  v_temp_vec1 := std_logic_vector(conv_unsigned(par1, 32));
  assert (v_temp_vec1 = v_temp_read)
    report LF & "ERROR: Compare Value was not as expected!!" & LF &
           "Got      " & to_hstring(v_temp_read) & LF &
           "Expected " & to_hstring(v_temp_vec1) & LF &
           "Found on line " & integer'image(file_line) & " in file " & file_name
    severity failure;

In the above example, if par1 does not match v_temp_read, the assertion will fire. The nice error message will print out stating there was a miscompare, and will tell you the received value, the expected value, and the file name and line number in the script that caused the error. The "to_hstring" function is available in VHDL 2008. The only other item needed before a user can copy/paste the above, fully functional VERIFY command, is to add the v_temp_read std_logic_vector variable to the read_file process. The above command and a few others have been added to the code snips file here: (All code snips are now part of the Opencores distribution)

There is a benefit to having the VERIFY command separate from the READ command in larger verification environments. The scripting system can easily be made to support a READ_VERIFY command, where you both read and verify in the same command. The disadvantage to this is that if you have more than one read type command, like DUT reads and BFM reads, you will have to create a READ_VERIFY type command for each read type. And if you want to create VERIFY_BIT commands, again you need one of those for each read type you want to test. Everything that is read should be tested. If the read and verify are separate commands, then one VERIFY command can service all READ type commands.

Another form of self checking relates to the Interrupts and Waiting post. An unexpected interrupt causes the system to put out a message and terminate the simulation. The self checking part is the process that watches the interrupt output pin and, if an interrupt is not expected, causes an assertion. This relates to scripts in that a script has to inform the interrupt watcher that an interrupt is expected, so it does not terminate the simulation when it comes.
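
A minimal sketch of such a watcher is shown below; the interrupt_pin and interrupt_expected signal names are assumptions for illustration, not the package's actual names:

-- fail on any interrupt the script has not announced as expected
int_watch: process (interrupt_pin)
begin
  if (rising_edge(interrupt_pin)) then
    assert (interrupt_expected = '1')
      report LF & "ERROR: Unexpected interrupt!!"
      severity failure;
  end if;
end process int_watch;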

The last form of self checking comes from BFM's, specifically checker type BFM's. These objects usually monitor some complicated function: a communications protocol, a bus, a ram access checker and so on. The function of these objects is to indicate to the verification system when something goes wrong. For instance, a bus monitor could check the contents of a bus on each rising edge of a clock, and if there are x's on any line, indicate an error. Or the monitor could be checking actual timing and/or sequences. The topic of monitoring BFM's will be further elaborated on in future posts.
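
As a minimal sketch of the x-checking example (clk and data_bus are assumed signal names), such a monitor process could look like:

-- check the bus for x's on every rising edge of the clock
bus_mon: process (clk)
begin
  if (rising_edge(clk)) then
    assert (not is_x(data_bus))
      report LF & "ERROR: X's detected on data_bus!!"
      severity failure;
  end if;
end process bus_mon;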

I looked over some home projects and found that I never used a verify type instruction; looking at wave forms seems to have fit my needs. My designs at home are very small and I was just playing around. Projects at work are 1000 times bigger and there are >100 test cases. When I run a regression and a test case fails, I need to know what failed and where. Good checking and good messages are a must for good verification results.

Sckoarn

Saturday, 16 April 2011

VHDL Test Bench Package: Functional Coverage


Functional Coverage?? You may think this topic only applies to current methodologies like those based on sophisticated OOP languages using randomization and coverage matrices. This is not true: in a script based system, the scripts created to test the DUT are the functional coverage.

When you are verifying a product, you will want the quality to be as high as possible. This all stems from good planning. There should be a written product specification so that everyone knows what they are building. From this specification more specifications can be created that spec out the low level details, i.e. your FPGA or ASIC. Part of the verification effort is to create a test plan, which includes extracting the functional requirements from various sources. The first source of functionality is the requirements specification. The other source of most of the detailed functionality is the design specification, unless the requirements specification has ALL the details needed to prove the functionality of the device; I have found it very rare that one document is the source of all functionality definition. The quality of the specification is usually reflected in the quality of the design and testing. A poorly defined device is hard to design, let alone verify.

The functionality extracted from the specifications should be collected and documented. I usually make a table in the test plan that contains all the functionality. The table can often be more than ten pages of functional points, one point per row. By doing the plan, you review all the functionality and prepare yourself for the task. For each functional point, list a test case to cover it; this could be placed in a column in the same row as the functional point. If the functionality is covered by many test cases, they should all be listed. By doing this exercise of planning you will have produced a matrix of functional coverage and the test cases that do the covering, as sketched below. Be sure to make the test cases self checking so you know that the testing is being done and verified.
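
As a tiny invented example, a few rows of such a table might look like:

Functional Point                      | Covered By
--------------------------------------+-----------------------------
Reset clears all status registers     | tc_001_reset
Interrupt asserted on RX FIFO full    | tc_014_rx_fifo, tc_022_irq
Walking ones over the data bus        | tc_003_databus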

How do you know you got all the functionality?
One facility that is common is code coverage. Code coverage tells you which lines of code and which branches were taken when you ran a simulation. Code coverage does not tell you if any of the functionality simulated was correct. If you run all of your test cases and combine the code coverage, you will get an insight into how well the design was exercised. Missed lines of code and branches will sometimes uncover missed or unknown functionality. Some lines may be missed because they are unreachable. By analyzing each missed code line, the verification person can speculate and determine why each item was missed.
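
For example, in Modelsim a code coverage flow might look something like the commands below. The switches are typical but quoted from memory, and the file and entity names are placeholders, so check your tool's documentation:

vcom +cover=bcs my_dut.vhd           # compile with statement/branch/condition coverage
vsim -coverage work.my_dut_ttb_str   # load the simulation with coverage enabled
run -all
coverage report -file coverage.rpt   # write out the report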

The VHDL Test Bench Package can be used to create a verification system that enables the full functional requirements of an FPGA or ASIC design to be verified. This is achieved by using test cases to prove functional points.

Sckoarn

Monday, 4 April 2011

VHDL Test Bench Package: An Internal BFM, Like a CPU


This post will present how the VHDL Test Bench Package can be used as the "internal" test bench or BFM. By internal I mean that the test bench bhv architecture file is substituted in place of an internal architecture block of the DUT, i.e. replacing some of the RTL with behavioural test bench code, namely the script parser. This enables the test script to emulate internal blocks with simple custom commands, which can be very useful in several situations.

The first and most popular is to replace the CPU core with the scripting portion of the bhv file. As an internal block, the CPU does not source a clock, so the clock_driver process can be removed from the bhv file. A code snip of this implementation is provided in Section 3 of the Code Snips download file. Presented in the code snip are the structure modifications that would have to be made to the DUT RTL. The code is also included here to facilitate better descriptions.

The first modification is the addition of a generic to the entity whose architecture instantiates the component of interest. Below is a sample of the additional code. The addition of the generic and additional structure will have no effect on synthesis.

-- this is the top entity, or the level where it makes sense
-- to assign the en_bfm generic
entity my_top_dut is
  generic (
    g_en_bfm : integer := 0
  );
  port (
    -- top port definitions
  );
end entity my_top_dut;

A copy of the RTL block is made, renamed, and a generic is added to it that points to the stimulus file. Some methodologies place the declaration of components in a package; in the case of the example, the components are declared in the architecture of the example entity (my_top_dut). The start of the architecture could look like the code below:

architecture struct of my_top_dut is
  -- this is the component of an internal block of my_top_dut
  component rtl_block
    port (
      reset_n : in std_logic;
      clk_in  : in std_logic;
      -- ...
    );
  end component;
  -- bhv_block has the same pin out as rtl_block, except for the generic
  component bhv_block
    generic (
      stimulus_file : in string
    );
    port (
      reset_n : in std_logic;
      clk_in  : in std_logic;
      -- ...
    );
  end component;
  --....
begin
--....
begin

When it comes to port mapping an instance, a VHDL generate statement around the RTL and around the bhv port mappings enables one or the other to be included in the load, hence an optional internal BFM. This is shown in the example code below:

begin
  -- if the generic is at its default value, include the rtl
  -- (note: generate statements require a label)
  gen_rtl: if (g_en_bfm = 0) generate
  begin
    rtl_b1: rtl_block
      port map (
        reset_n => reset_n,
        clk_in  => clk,
        -- ...
      );
  end generate gen_rtl;
  -- if the generic is set for bhv, include it
  gen_bfm: if (g_en_bfm = 1) generate
  begin
    bfm: bhv_block
      generic map (
        stimulus_file => "stm/stimulus_file.stm"
      )
      port map (
        reset_n => reset_n,
        clk_in  => clk,
        -- ...
      );
  end generate gen_bfm;
  -- ...
end struct;

Be it a simulator or a synthesis tool, the user has control over which version to load through the top level enable generic (g_en_bfm). By default the above implementation will load the RTL; to get the BFM in place you have to modify code or pass in a different generic value when loading the simulation. This facility is provided by all major tool vendors.
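
In Modelsim, for example, the override can be passed in when the simulation is loaded (the switch syntax is from memory, so verify it against your tool version):

vsim -gg_en_bfm=1 work.my_top_dut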

Now with the test bench bhv file substituted for the CPU, the CPU emulation commands can be pasted into the else if chain. Make that change and the other modifications detailed in Section 2 of the code snips file, and you have a nice start to the implementation. The one thing that will have to be created is the interface logic for the block being emulated. In the case of a CPU, READ and WRITE commands will have to be created that use the bus signalling. There may be many types of bus cycles that need to be emulated. Using procedures within the read_file process, as sketched below, enables a reduction in file size and stable interface commands. This topic will be detailed in a future post.
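
As a rough sketch of the idea (the procedure name, bus signals, and timing are assumptions for illustration, not the package's actual code), such a procedure, declared in the read_file process declarative region, might look like:

-- drive one write cycle on a simple synchronous CPU bus
procedure cpu_write(
  constant addr : in std_logic_vector(31 downto 0);
  constant data : in std_logic_vector(31 downto 0)) is
begin
  wait until rising_edge(clk);
  cpu_addr  <= addr;   -- assumed test bench driven bus signals
  cpu_wdata <= data;
  cpu_wr    <= '1';
  wait until rising_edge(clk);
  cpu_wr    <= '0';
end procedure cpu_write;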

Another place that is a good candidate for internal BFM substitution is the debug link block. Many processors have a debug facility, with some implementations being controllable over a system bus through a simple RS-232 serial link, for example. The RS-232 debug link in RTL is a bus master that can control some debug slave on the bus system. The idea is to replace that RTL debug link with an internal BFM. This makes the scripting system a bus master, which can now control the system as if it were a host computer over the RS-232 link. This removes the slow RS-232 link from having to be simulated/modelled. The real CPU RTL can now be controlled as it would be in the implementation, through its debug interface. If the CPU had been modelled before as a bus master (as I would assume it was), you can most likely reuse some of that code.

As stated in the test bench package documentation, having multiple script parsers in one environment will increase complexity significantly. If an internal BFM uses the read_file process, all effort should be made to make it the central place for control. If the verification system needs to have scripting control from the top level, another way to gain control of an internal block is to use the signal connection facility found in most simulation tools. In the Modelsim world this is known as signal spy. It enables the connection of one signal to another while ignoring VHDL structure boundaries. The user can drive/read these signals and control the end points. This enables one script parser to control whatever is required by the effort.
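
For illustration, signal spy connections are made through Modelsim's modelsim_lib.util package; the hierarchical paths below are invented for the example:

library modelsim_lib;
use modelsim_lib.util.all;
-- ...
spy_connect: process
begin
  -- mirror an internal DUT signal onto a local test bench signal
  init_signal_spy("/my_top_dut/u_core/int_status", "/my_top_dut_ttb/spy_status", 1);
  wait;
end process spy_connect;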

Using facilities like signal_spy is good if it helps you get the job done. The implementation can use the same generate structure presented above, but with an empty architecture for the bhv instead of the test bench bhv file. Once the signal_spy connections are made, you can operate on the internal block as if the scripting system were there. Commands in the bhv file can act on the signal_spy signals like any other signal or pin they have access to.

That concludes the presentation of how the VHDL Test Bench Package can be used as an internal BFM. I have personally implemented all of the above methods. I hope this helps some of you implement good verification environments.

Sckoarn

Saturday, 2 April 2011

VHDL Test Bench Package: Variables in the Scripting System


One of the more powerful things about the VHDL Test Bench Package is the implementation of variables in the scripting system. This facility is provided by the default command "DEFINE_VAR", which is built into the parsing system and is not part of the else if chain. In one of the latest enhancements to the test bench package, variable definitions were removed from the instruction array. This was done to speed up searching as well as to make declaration independent of usage. As in, you can use a variable before it is defined in the script.

Using variables makes scripts more readable. Instead of 1/0 as values to pass, you can use ON/OFF, and as a person reading that, you already know it is a switch value just from the naming. The user of the variable does not have to worry about the value. If you have some binary words that need to be written to a DUT to enable certain configurations, it is better to read "ENABLE_MODEXX" than it is to read x00234.

Be sure to use meaningful variable names.
Now the team, or you, have bought into using variables in a big way. At the top of each script, everyone pastes in a copy of a nice group of variables they used from the test case before. Some time later in the project, after many test cases have been created, something happens and a change to the DUT forces the value of one or more of the variables to change!! Oh no, that means editing all the scripts. This problem could be solved by writing a script changer program, or you could do it by hand. It would have been better to have avoided the situation.

The INCLUDE command solves the above problem. Create a file that only contains the common variable definitions. In your scripts, use the INCLUDE instruction to read and create the common variables. I put my variable includes at the top, like many other languages do. I use this approach for device address maps. This way, if there are any address changes, there is a single place to update, and all the test cases receive that update automatically. Also, as a side benefit (assuming all test case writers use them), the scripts will be more readable as a whole because of the common variable definitions.

Once a variable is defined in a script, its value can be changed. A variable can be assigned the value of another variable. Below is an example of using variables in a loop.

Let's assume the line below this one is in a file named "std_vars.stm":
DEFINE_VAR TABLE_BASE_ADDR x8000 -- define base address of table

This is a simple test case that uses variables in a good way:
INCLUDE stm/std_vars.stm

DEFINE_VAR TABLE_ADDR 0 -- define temp location for changes
DEFINE_VAR DATA 0 -- define data variable with value 0
DEFINE_VAR DATA_INC 1 -- a value to change the data by
DEFINE_VAR ADDR_STEP 4 -- the address increment value
.....
-- copy the default value to a variable you can change
EQU_VAR TABLE_ADDR $TABLE_BASE_ADDR
LOOP 512 -- loop through the table
WRITE $TABLE_ADDR $DATA -- write the data to the address
READ $TABLE_ADDR -- read it back
VERIFY $DATA -- check it is a match
ADD_VAR DATA $DATA_INC -- change the data
ADD_VAR TABLE_ADDR $ADDR_STEP -- point to next address
END_LOOP

FINISH

The above example does everything with variables. It is isolated from DUT address changes because it includes a standard variable definition from a single place. If your test had several more loops, and you wanted to change the addressing step value, you can do it in one place if you used the scheme above.

During some rather large verification projects, the use of the default variable "include" file became very important. I took the concept further and created a table defined in tcl code. From this table I generated the standard variables include files as well as several other files: header files for C and assembly coding, and tcl/tk code used for test case generation and in generated test cases. I will cover this topic in detail in a future post on test case generation. In one project I found that even the software group was using my table for generating their header files.

How do I produce a walking ones pattern? Simple: add a variable to itself, with a starting value of 1. This is shown in the example below:

DEFINE_VAR ADDR 1 -- 32 bit address range
DEFINE_VAR DATA 0 -- some data

LOOP 32
WRITE $ADDR $DATA -- write the test location
READ $ADDR -- read it back
VERIFY $DATA -- test results
ADD_VAR ADDR $ADDR -- walk the 1
ADD_VAR DATA 1 -- change the test data
END_LOOP

FINISH

Another variable type the VHDL Test Bench Package understands is the in-line variable. These are variables that contain the line number of the next executable script line. Defining one is shown below:

JUMP $JUMP1
....
....
JUMP1:  -- << this is an in-line variable
WRITE .....

The JUMP1: statement defines a variable equal to the line number on which the WRITE statement is found. In all my test case writing I have never had a reason to change an in-line variable value. But it can be done, as it is accessed just like any other variable in the variable link list.

The last kind of variable, which really is not a variable, is the condition operator. They are described in the documentation as Condition Variables, but after thinking about it a bit, they really are not variables. They are predetermined text strings that return a specific value. This enables the command code to case on the condition that has been passed from the script.

User defined variable access and modification commands are all part of the scripting system. The test bench creator can use the ADD_VAR and EQU_VAR default commands as templates to create their own. One command I have created several times is a randomize variable command; a sketch is shown below. It is rarely used by me, as I like to keep complication out of the script and in the BFM's. Again, a future topic: BFM's and the test bench system.
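
Purely as a sketch of the idea (the command name, the update_variable helper, and the seed/result variables are assumptions, not part of the package), a randomize command in the else if chain could look something like this, using the uniform procedure from ieee.math_real:

-- RANDOM_VAR <variable> <max> : set the variable to a random value in 0 to max
elsif (instruction(1 to len) = "RANDOM_VAR") then
  uniform(v_seed1, v_seed2, v_rand);  -- v_seed1, v_seed2, v_rand declared in read_file
  v_temp_int := integer(trunc(v_rand * real(par2 + 1)));
  update_variable(var_list, par1, v_temp_int);  -- hypothetical variable list update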

That is about all I can say about variables and the test bench package. I hope it helps some of you avoid some scripting nightmares.

Sckoarn