Monday, 1 August 2011

VHDL Testing: VHDL 2008 FIFO of type ....

This post is about how VHDL 2008 can be used to implement a FIFO or list of items of any type by implementing a “protected type”.
Many applications store or delay data transfers as part of the implementation. For instance, packet switching, video processing and Digital Signal Processing all usually have some delay element during the processing of the information. A packet switch may store hundreds of packets in order to meet traffic constraints. A video processing unit may store several frames of video data as it processes the video data stream. A DSP application may have delays as it processes blocks of data.

Whatever the case, you have supplied stimulus to the DUT and there is a delay before you get results out. Where do you store the input or the expected results? Wouldn't it be nice to include a time stamp in your input data so you could determine the time it took to process the data?

This can all be realized using VHDL 2008 protected types. A protected type can be used as a shared variable as well, so knowing how to do this can save time and make life easier.

The first step is to decide what kind of record to define as the FIFO or list item. This can include anything that you can define in a VHDL record type. For the example, the arr128x8 type is associated with a FIFO data type called fifo_data_array. The type that is going to be the FIFO or list item must include a pointer to itself. In this example the FIFO item type is called fifo_item, defined as a record, and fifo_item_ptr is an access (pointer) type that points to a fifo_item. The pointer to the fifo_item type enables a linked list of fifo_items to be traversed; this implementation is called a singly linked list. To implement a doubly linked list, a prev_rec field could be added to the fifo_item definition. The fifo_item type must be defined in a separate package. The pgen package, used in the earlier packet generation BFM example, has been extended and put in a separate file, included below:

------------------------------------------------------------------------------
-- First we start off with the definition of the packet array type
-- and the pack_out record for pins on the entity and comp.
-- The size of the whole system can be changed by changing the
-- array and record types.
library IEEE;
use IEEE.STD_LOGIC_1164.all;

package pgen is
type arr128x8 is array(0 to 127) of std_logic_vector(7 downto 0);
type fifo_data_array is array(0 to 127) of std_logic_vector(7 downto 0);
-- define the fifo_item type and access pointer
type fifo_item; -- the item
type fifo_item_ptr is access fifo_item; -- pointer to item
type fifo_item is record -- full definition of item
   data : fifo_data_array;
   next_rec : fifo_item_ptr;
end record;
type pack_out is record
   dout : arr128x8;
   drdy : std_logic;
   end record;
end package pgen;
Once the type has been defined in a package, the protected type is defined in another package. For this example the package is simply called fifo_pkg. The example is presented as a pure implementation of a protected type in fifo_pkg, but of course you could have many other items defined, including other protected types.

The first thing to define is the header portion of the package. It includes the definition of the protected type and its impure functions and procedures.

use std.textio.all ;
library ieee ;
use ieee.std_logic_1164.all ;
use ieee.numeric_std.all ;
use ieee.math_real.all ;
use work.pgen.all; -- include the types, as needed by shared variables
package fifo_pkg is

   type tb_fifo is protected
     procedure push(data : fifo_data_array);
     impure function push(data : fifo_data_array) return integer;
     impure function pop return fifo_data_array;
     impure function fifo_depth return integer;
   end protected tb_fifo;
end fifo_pkg;

Some simple utilities are defined: push in both procedure and function form, a pop function, and a fifo_depth function; the basic FIFO operations. Note the “is protected”, “end protected” and “impure function” syntax.

The body section is where the implementation of the procedures and impure functions is done. The example code for the body is presented below:

package body fifo_pkg is
type tb_fifo is protected body
   variable fifo_ptr : fifo_item_ptr := null;
   variable fifo_cnt : integer := 0;

   -- push function
   impure function push(data : fifo_data_array) return integer is
     variable new_pkt : fifo_item_ptr;
     variable tmp_ptr : fifo_item_ptr;
   begin
     if(fifo_cnt = 0) then
       new_pkt := new fifo_item;
       fifo_ptr := new_pkt;
       -- copy the packet to the new space
       new_pkt.data := data;
       new_pkt.next_rec := null;
       fifo_cnt := 1;
     else
       new_pkt := new fifo_item;
       tmp_ptr := fifo_ptr;
       -- get to the end of the fifo
       while(tmp_ptr.next_rec /= null) loop
         tmp_ptr := tmp_ptr.next_rec;
       end loop;
       -- copy the packet to the new space
       new_pkt.data := data;
       new_pkt.next_rec := null;
       tmp_ptr.next_rec := new_pkt;
       fifo_cnt := fifo_cnt + 1;
     end if;
     return 1;
   end function push;
   -- push procedure
   procedure push(data : fifo_data_array) is
     variable new_pkt : fifo_item_ptr;
     variable tmp_ptr : fifo_item_ptr;
   begin
     if(fifo_cnt = 0) then
       new_pkt := new fifo_item;
       fifo_ptr := new_pkt;
       -- copy the packet to the new space
       new_pkt.data := data;
       new_pkt.next_rec := null;
       fifo_cnt := 1;
     else
       new_pkt := new fifo_item;
       tmp_ptr := fifo_ptr;
       -- get to the end of the fifo
       while(tmp_ptr.next_rec /= null) loop
         tmp_ptr := tmp_ptr.next_rec;
       end loop;
       -- copy the packet to the new space
       new_pkt.data := data;
       new_pkt.next_rec := null;
       tmp_ptr.next_rec := new_pkt;
       fifo_cnt := fifo_cnt + 1;
     end if;
   end procedure push;
   -- pop function
   impure function pop return fifo_data_array is
     variable data : fifo_data_array := (others => (others => 'U'));
     variable tmp_ptr : fifo_item_ptr;
   begin
       case fifo_cnt is
         when 0 =>
           return data;
         when 1 =>
           fifo_cnt := 0;
           data := fifo_ptr.data;
           -- free the popped item; deallocate sets fifo_ptr back to null
           deallocate(fifo_ptr);
           return data;
         when others =>
           tmp_ptr := fifo_ptr;
           fifo_ptr := tmp_ptr.next_rec;
           data := tmp_ptr.data;
           -- free the popped item to avoid leaking memory
           deallocate(tmp_ptr);
           fifo_cnt := fifo_cnt - 1;
           return data;
       end case;
   end function pop;
   -- fifo_depth function
   impure function fifo_depth return integer is
   begin
     return fifo_cnt;
   end function fifo_depth;

 end protected body tb_fifo;

end fifo_pkg;


At the top of the package body are the definitions of fifo_ptr and fifo_cnt. The fifo_ptr item is the pointer to the top of the linked list of items. The fifo_cnt integer is a count of how many items are in the FIFO.

The package is very simple and does not have many of the nice things, such as overloaded write and print functions, searching, indexing, deleting and replacing, that could be implemented.


Of course to test this the VHDL Test Bench was used and the following stimulus commands were created.

Just after the architecture statement, define: shared variable test_fifo : tb_fifo;
Alternatively, it can be defined as a regular variable inside the read_file process.
-------------------------------------------------------------------------------------------
   elsif (instruction(1 to len) = "PUSH") then
     for i in 0 to test_data'high loop
       --test_data(i) := std_logic_vector(to_unsigned(v_randv.RandInt(0, 255), 8));
       test_data(i) := std_logic_vector(to_unsigned(i, 8));
     end loop;
     test_fifo.push(test_data);
     print("fifo depth is: " & integer'image(test_fifo.fifo_depth));
     temp_int := test_fifo.push(test_data);
     print("fifo depth is: " & integer'image(test_fifo.fifo_depth));

------------------------------------------------------------------------------------------
   elsif (instruction(1 to len) = "POP") then
     test_data := (others => (others => 'X'));
     print("POPing fifo depth is: " & integer'image(test_fifo.fifo_depth));
     test_data := test_fifo.pop;


As well, the packages have to be included in the tb_ent file. If randomization is used to generate the data, the SynthWorks Random package must be included.

And the stimulus file used to test the example FIFO implementation:

PUSH
PUSH

POP
POP
POP
POP
POP

PUSH
PUSH

POP
POP
POP
POP
POP

FINISH
The operation was observed by setting break points in the code and looking at the content and status as the simulation progressed.

The example is simple, and could easily be adapted to implement list functions and others. The actual “item” in the FIFO can be changed to include such things as message strings, time stamps, and any other VHDL type. The only drawback is that a FIFO of an unknown type cannot be created. If the FIFO item definition is expected to change, some planning should go into the definition so that compatibility can be maintained in the future.
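As a sketch of that time stamp idea, the fifo_item record in the pgen package could be extended with one extra field (the t_in name is illustrative, not part of the example code above):

```vhdl
-- Hypothetical extension of the fifo_item type: a time stamp records
-- when the data was pushed, so latency through the DUT can be measured.
type fifo_item; -- the item
type fifo_item_ptr is access fifo_item; -- pointer to item
type fifo_item is record -- full definition of item
   data : fifo_data_array;
   t_in : time; -- assumed field: simulation time of the push
   next_rec : fifo_item_ptr;
end record;
```

On a push, the FIFO would assign t_in := now; on a pop, the latency through the DUT is then simply now - t_in.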

I have used linked lists in the past for storing packet data, but it was not easy. The VHDL Test Bench Package uses linked list techniques extensively. So the only new part of this presentation is the protected types and how you can associate functions and procedures with them. It makes the code that uses protected types more readable, I think. As well, access to the shared variables of protected types is controlled. I plan to use this FIFO in a checker BFM and in the model of the DUT in the expanded example currently being worked on. (slowly)

The FIFO example could be converted to a list implementation with some additions. For each instance of the example tb_fifo there will be an instance of fifo_cnt. For a list, an index may be needed so that a particular item can be found quickly or ordering can be achieved. Several other functions or procedures will need to be added to search, re-order and manipulate the list of items. An index field would be added to the record, as it is record specific. Whereas a FIFO ID string would be added in the protected body section of the protected type definition, one ID string per FIFO. A function or procedure will have to be created that enables the ID string to be set. This is like the fifo_cnt variable in the above example code.

Hope that helps some get more acquainted with VHDL 2008 and how it can help your verification efforts.


Sckoarn

Sunday, 24 July 2011

VHDL Test Bench Package: ttb_gen_plus_2B Bug Fix

ttb_gen_plus_2B has been updated with a bug fix.
The Beta version of ttb_gen_plus had a re-write when I first released it, a couple of months back. The parsing was updated to include multi-pin definitions on the same line, as the real syntax allows. I personally do not use that syntax; I only define one pin per line in an entity definition. So there was quite a bit of code change, and I did not test one case. The bug was that if you had an inline comment in the entity definition, such as PIN_NAME : std_logic; -- inline comment, the comment would be treated as pins and create incorrect output.

The bug was fixed by stripping off trailing comments from the code lines as they are read from the file. One little case I just did not consider nor test for. The new version of ttb_gen_plus_2B is v2.02. It can be downloaded from here ttb_gen_plus_B2.

If you are using the Beta version of the VHDL test bench package, an old version of ttb_gen_plus was included with that package. To take advantage of the bug fix you will have to download the newest version as linked above.

If you have any problems with or suggestions for the ttb_gen_plus tool, please add a comment to this post or email me.

Sckoarn


Friday, 1 July 2011

VHDL Test Bench Package: A Partner Scripting Language

Using a partner language.
This blog entry will show how a “partner” language can be of great assistance to the verification effort. I am specifically referring to the many free scripting languages that are available, such as tcl/tk, perl, python, java ... I currently use tcl/tk as my partner language. I chose tcl because it is used by many tool vendors that provide simulators for HDL simulation. Many pooh-pooh the choice of TCL, but whatever gets the job done is good. I found the GUI part of tcl/tk most interesting. The ability to whip together a small GUI that can do huge amounts of work, yet be intuitive to use, has increased my productivity. Especially because the GUI “script” is so easy and quick to put together.

This blog entry will also tie together topics mentioned in the Variables and Randomization posts.
In the following text, the word “script” refers to partner language code. The words “test case” and “stimulus file”, refer to test bench scripts that the VHDL Test Bench Package would parse.

One might ask, “What can a partner language do for me? After all, it is more code and yet another language to learn and more programs to maintain.”

To answer the first question, I have personally created scripts to do the following (to mention a few):
  • Provide GUI interface representing DUT programmable features for test case writers
  • Provide base stimulus file generation based on programmable features selected
  • Provide auto generation of Register INCLUDE files for test cases
  • Provide auto generation of Register include “.h” files for software
  • RS-232 interface for dynamic configuration regressions on FPGA devices
  • push button simulation regression
  • simulation regression log viewing tool
  • Random configuration and test case generation
  • Register set and RAM test case generation
  • Complicated Diff tool for specialized file diffing
  • Sudoku game
  • TTBgen GUI
  • assembler (for custom assembly code)
The GUI in the first point included the next four points as part of its functionality. This script grew to a size of 30,000 lines in a matter of 5 years. In other words, it evolved with the project from a simple test case generation tool to a hardware regression tool, and was even used in manufacturing. So the effort was seen as a good one, even though it took three to six months to become skillful in tcl/tk coding.

Test Case Generation:

The tool/script mentioned above was initially created to simplify the test case writing effort because the DUT was very complicated to program. There were many calculations to perform based on configuration settings. The output from the calculations produced values that had to be written to DUT registers and RAM. This effort proved to be too much for any team member to overcome and prompted the creation of the tool. The tool provided the user the ability to select and set functions and then view the resulting calculated values and register settings for the selected configuration. For each special functional block a “tab page” presented the registers and default values for user editing. Once the user was happy with all the values presented, a button could be hit to generate a stimulus file that could be used in simulations with the VHDL Test Bench Package. To re-assure the user that his configuration was what was wanted, a graphical representation of the configuration was also provided.

As we all know, register set definitions change as a design progresses. As suggested in the Variables blog post, a master file for holding register specifications is a good thing. As an aid to myself, I added some special function buttons on an Options “tab page” that enabled me to generate a variable DEFINE include file for simulations. Also generated from the same master file were register set test cases, RAM test cases and .h include files for software.

Of course, during play time some of us program. I created a Sudoku program for fun, and learned how to use randomization in tcl through that effort. The Sudoku code, though ugly, does the job. I then proceeded to make it so the tool at work could randomize itself. The user could provide a “seed” and hit a button, and a random configuration would be created. With a little more effort an RS-232 interface was created so that connection to the FPGA processor block could be made through a debug link. The randomly generated configurations were then applied to the FPGA and each test was performed in real time. To help put this in perspective, 1024 random configurations could be tested on the FPGA in four hours. The same random configurations run in simulation, on one machine, would take 256 days (assuming six hours per simulation). That is a huge time savings.

The above real life story demonstrates what can be achieved through the use of a scripting language. Not only were stimulus files generated but real time regressions were realized in the end. Each iteration of FPGA design had the 1024 random regression applied as a qualification for delivery to software.

When you are looking at the job of writing test cases, it may seem that the effort could never end. I have found the fastest way to reduce the effort is to create a test case generation program. The use of a GUI is recommended as it enables users to interact with a familiar medium. Presented by the GUI are all the functional buttons and knobs that can be set and adjusted in the DUT. Many, if not all, of the DUT functions can be represented by simple “check buttons” and fields for user interaction. To help get started, if you are interested in tcl/tk, feel free to use the TTBgen GUI tool as a template. Once familiar with language constructs, it is surprising how quickly an application can take shape.

Randomization:

Most modern scripting languages have a random function that can be used to produce randomization for your testing, through random test case generation. This can greatly reduce the time it takes to create test cases. As well, the quality of testing should be better when randomization is used.

For instance, if you are developing a CPU, a large set of random test cases would be nice to run in regressions. To create a large set of random test cases would take a person a long time to write. They would have to roll dice for each instruction to get some randomization included. If we consider a large set of test cases to be 200, each with 10,000 instructions, then there are many die rolls to be done. But if instead the person took the time to learn and use a partner scripting language to generate those test cases, they gain twice: first, the test cases can be generated much faster than they can be written by hand, and second, the person has learned how to use their partner language to assist in their verification efforts. So now the question is not how to write all the test cases, but how to run them all in a decent time frame.

If your testing requirements lead you to have to create a complicated GUI, this in itself can be a blessing. The quickest way to get randomization is to provide a “Randomize” button and a “Seed” field. Make the tool randomize its own fields and then generate test cases from the random configuration. You will be surprised how quickly this can be done and how useful this will be.
If an FPGA is part of the design process, take advantage of it to assist in your over all testing. One thing to keep in mind is that, if you randomly generate configurations for FPGA regressions, be sure you can replicate those configurations in simulation. In a high paced development environment, you will most likely uncover some bugs while regressing on the FPGA. Being able to replicate the failing configurations in simulation will greatly enhance the debug effort.

I hope that it is now obvious how valuable a scripting language can be to your verification efforts.

I personally use TCL from ActiveState.

Sckoarn

Wednesday, 1 June 2011

The VHDL Test Bench Package: Randomization

Randomization ... a hot topic in the ASIC/FPGA verification world.

In VHDL, the IEEE math_real package currently provides a method to generate random numbers through the uniform procedure. Getting a random number from uniform is a multi-step process and is not convenient to use. In the past I have used an LFSR construct coded in a procedure to generate random numbers. Instead of home-cooking your own random number generation system or directly using uniform, I strongly recommend you look here: SynthWorks Downloads, specifically the random package RandomPkg_2_0. This free download, of source code and documentation, is a very capable package for generating random numbers. Besides the package content, the package also demonstrates some advanced VHDL 2008 coding techniques. The rest of this post will assume that the reader has reviewed or plans to review the SynthWorks RandomPkg documentation and is familiar with the package and its definitions of randomization.

The VHDL Test Bench Package does not hinder the implementation of randomization. In fact, randomization may remove the need for scripts altogether. Taking randomization to the fullest, you may find that the tb_bhv file just becomes a container for your randomization objects and BFMs, and no script is needed to make your simulations work. Randomization can be viewed at four different “levels”. Level one is simple randomization scripting instructions that enable a general randomization facility in scripts. Level two is the BFMs randomizing their output under the control of the scripting system, i.e. the stimulus bus. Level three is randomization through randomly generated scripts. Finally, at level four, the system is completely controlled by an object that is able to generate random, transaction-based interaction with the BFMs. This post will cover the level 1 and 2 implementations and leave levels 3 and 4 for future posts.

The most obvious randomization within the scripting system would be to create a RAND_VAR instruction. Like the EQU_VAR (equate variable) instruction provided by the test bench template, a RAND_VAR instruction could be created to randomize the contents of a variable. This variable could then be used as the address or data field in a write instruction, for instance. I am sure an implementer can come up with several other randomization type instructions for use in scripting. I have found that randomization is better done within a BFM.
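To illustrate, a RAND_VAR handler in the read_file command interpreter might look something like the sketch below. It assumes a v_randv variable of SynthWorks' RandomPType is declared in the process, and that par1 holds the variable index while par2 and par3 hold the min/max constraints; var_array is an illustrative name for the script variable storage, not the template's actual definition.

```vhdl
-- RAND_VAR  var_index  min  max    (hypothetical stimulus instruction)
elsif (instruction(1 to len) = "RAND_VAR") then
  -- randomize the indexed script variable within [par2, par3]
  var_array(par1) := v_randv.RandInt(par2, par3);
```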

There is an issue with random number generation that has to be addressed by the implementation. If, for instance, we want to write random data to random addresses in a RAM, and then read back and verify that data, you will have to implement a more complex random number generation system. To re-state the problem: we want to randomize the address and data for both write and read operations. Let's also assume that we want to start reading and testing after four writes have been done. (This is done to offset the reads from the writes, possibly uncovering some functional bug.) This can be done in one of two ways.

First, the write address and data could be stored in a FIFO construct and reused for the read operations. This implementation requires the implementer to create two random numbers that have different random sequences, because the restrictions on address generation may not be the same as those on data generation. If the address is sixteen bits wide and the data is only eight bits wide, the constraints on the two numbers being generated will be different: the address will be constrained to a number that fits in sixteen bits, while the data generation will be constrained to fit within eight bits. The two numbers could be generated from a single random number stream, but that makes them dependent on each other. For instance, if I change the constraint on the address generation, that will affect the data values generated. The solution is to have more than one random number stream, in this case two. The reason for the two different random number streams is that you may want to randomize each independently of the other. So, for one randomly generated address stream, you can generate many different random data streams.

Second, four different random number streams could be implemented and used to create a test that randomly tests the RAM. The write and read address random variables could be initialized with the same seed, with a different seed shared by the write and read data variables.
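A sketch of that second approach, using RandomPType, InitSeed and RandInt from SynthWorks' RandomPkg (the seed values here are arbitrary):

```vhdl
-- Declarations (inside the test process): four independent streams,
-- the address pair sharing one seed, the data pair another.
variable wr_addr_rv, rd_addr_rv : RandomPType; -- address streams
variable wr_data_rv, rd_data_rv : RandomPType; -- data streams

-- Initialization: the read side regenerates exactly the write side's
-- address sequence because the two address streams share a seed;
-- the data streams are seeded independently of the addresses.
wr_addr_rv.InitSeed(5);
rd_addr_rv.InitSeed(5);
wr_data_rv.InitSeed(17);
rd_data_rv.InitSeed(17);

-- Generating one write transaction:
addr := wr_addr_rv.RandInt(0, 2**16 - 1); -- 16-bit address constraint
data := wr_data_rv.RandInt(0, 255);       -- 8-bit data constraint
```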

Confused yet? A review of the SynthWorks Random_Pkg documentation may help. I will attempt to make it clear here as well. For each random number stream that you want to have concurrently generating random numbers, you need a variable of type RandomPType defined. The RAND_VAR instruction will have to be expanded to include an index selecting the random number stream to be used for the random operation. The RAND_VAR instruction should also have, as parameters, the max and min values to constrain generation within. This makes the simple RAND_VAR instruction much more complicated to implement in a script, as it requires the script writer to understand and manage random number generation streams. If possible, the implementer should try to avoid complication in the scripts and move it into a BFM.


In a level 2 implementation, as defined above, the randomization is done within a BFM. This can hide complication from the script system. For items to be generated randomly a variable for each can be defined giving complete sequence independence for all fields.

Let's take an ATM signalling BFM as an example. The packet fields include GFC, VPI, VCI, PTI, CLP, HEC and data. One of the modes of operation for this BFM is random generation for any combination of fields. Through the stimulus access port, registers can be written to set the BFM to operate in different modes as well as to provide random generation limits. The test case writer now only has to set up the register set for the configuration and not worry about the random number sequences. The sequences of random numbers are controlled by the implementation of the BFM randomization system.

A simple example of random generation in a BFM is presented in the packet_gen example. This is part of the current Beta VHDL Test Bench Package download:  Beta Test Bench Package

In the coming weeks, I will be posting a full set of BFM's for an environment to test a simple self threading switch. This will enable example code of a complete randomization system to be presented, with enough complexity to take the example into a master BFM implementation. I am currently working on coding this example DUT and test bench. Until the example is released, there is enough information in this post to keep some busy playing with randomization.

Sckoarn

Saturday, 14 May 2011

VHDL Test Bench Package: Version 2 Beta Release

 The version 2 VHDL Test Bench Package Beta is now available.

This version of the package introduces very few changes. The main addition to the package is the ability to have an undefined number of parameters for a command. The other addition is the predefined record types for the stimulus access bus. A minor change is a clean-up of the package: commented-out lines and unused functions were removed.

There is one bug fix in this issue. It seems the file_name variable had no return value from the access_inst_sequ procedure.

In order to create an instruction that has an undefined number of parameters, the “args” value, in the define_instruction call, must contain a value larger than six. This will cause the parser to skip the checking for the correct number of parameters for that command.

The reason for introducing the ability to have an undefined number of parameters passed in an instruction is to enable more dynamic commands to be created. For instance, the CPU style commands presented in the code_snips download have some commands defined with three parameters. Optionally, those commands could be created so that two or three parameters could be passed. If only two parameters are given, then the target of the operation must be the second parameter. As another example, an INC command could be created that enables one to six parameters to be passed and has all of them incremented in value in one line of a stimulus script. I am sure users will find a use for this feature.
There is one caution about commands with an undefined number of parameters. The test bench package parsing was created such that user scripting errors are found ASAP. One of the items tested for is that the correct number of parameters is passed for every command. By creating an instruction with an undefined number of parameters, you are telling the parser to skip testing for the correct number of parameters. It then becomes the responsibility of the creator to deal with user input, or ignore errors like too many parameters being passed. The code for this change is in the check_valid_inst procedure.

The predefined stimulus bus record types are intended to be used on BFMs. They are presented as an example of a flexible, standardized interface method. Not that this interface is an industry standard, but it could be one for your applications. This definition of the stimulus bus interface makes the interface easy to change. Say you adopt these record types and create many BFMs using them; if you later find that another signal is needed in your system, it can be added easily just by changing the record definitions. As a facility to easily set a stimulus bus into a neutral state, a stm_neut function is also provided. This function returns a value that can be directly assigned to the record type, making interfacing simpler. This also provides an example of record assignment and function overloading.
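Purely as an illustration of the idea (these field names are assumptions, not the Beta's actual record definitions), a stimulus bus record and its neutral-state function might look like:

```vhdl
-- Illustrative stimulus access bus record (not the package's exact fields)
type stm_sctl is record
   addr : std_logic_vector(31 downto 0); -- register address
   wdat : std_logic_vector(31 downto 0); -- write data
   wr   : std_logic;                     -- write strobe
   req  : std_logic;                     -- request to the BFM
end record;

-- return a neutral (idle) value for direct assignment to the record
function stm_neut return stm_sctl is
   variable v : stm_sctl;
begin
   v.addr := (others => '0');
   v.wdat := (others => '0');
   v.wr   := '0';
   v.req  := '0';
   return v;
end function stm_neut;
```

A test bench or BFM can then idle the whole bus with a single assignment, ctl_bus <= stm_neut; and if a signal is later added to the record, only the record definition and stm_neut need to change.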

As an example of the stimulus bus record types a packet generator BFM example is provided with the beta release. This BFM implementation demonstrates the following features:
  • self generating data, incrementing and random
  • data loading from a file and file opening
  • data loading from the stimulus file
  • dynamic file name in stimulus files
  • direct setting of text data from the stimulus file.
  • use of the new stimulus port definition.
NOTE: It is required to download the random package from here http://www.synthworks.com/downloads/ package link in order for the example BFM to compile.

The packet generator itself is a generic data generator example. It could be combined with a signaling BFM to implement a complete interface. This is a way of disconnecting the data generation from the actual interface signaling; a data generator can service many different kinds of signaling interfaces. This concept is restated in the BFMs #1 blog post. The request input to the packet generator would originate from a signaling BFM to get new data. In this case the request is implemented in a stimulus command called REQUEST.

The other interesting item presented in the example, something I have never done before, is the assignment of text from the stimulus file. In the example stimulus_file.stm file, the SET_FN command takes a dynamic string and assigns it to the stm_text type input of the BFM. This feature is handy for setting file names dynamically from the test case. The parsing system ensures that strings are NUL terminated, which makes it easy to do assignment in a loop. The text string input on the example packet generator BFM is part of the stimulus system, but I did not see it as needed for all cases, so left it out of the record definitions.

The Beta version of the test bench package is available here: tb_2011beta2.zip direct download in .zip form. Once I am confident the package has no obvious errors the OpenCores release will be updated.

Included in this release are updated test bench package header and body files, some additions and fixes to the ttb_gen utility, updated documentation (yet to be fully completed), and an example of the new package features and test bench techniques in the form of a packet generator BFM.

If you are a user of the VHDL Test Bench Package and discover a bug or want an enhancement, now is your opportunity to provide input. Feel free to post any responses in the comments section of this post.

If you are already using the VHDL Test Bench Package, you can upgrade to the new release; no changes should be needed to your bhv file.

Enjoy

Sckoarn

Tuesday, 10 May 2011

VHDL Test Bench Package: Using BFMs #1


Bus Functional Models (BFMs) are the backbone of the VHDL Test Bench system. Using BFMs relieves the scripting system from having to implement detailed signal sequence generation and reaction. Other than your basic READ and WRITE commands, the scripting system should not implement interface signaling. In other words, data path elements outside the DUT should be implemented in a BFM.

For instance, let's say we have a SoC design. It has a processor, a PCIe interface and a high speed optical interface. There is a DDR2 RAM device interface for the processor and temporary storage for packet data. This very simple design specification states that we have at least three major interfaces to the design. There is no way that the scripting system can act as the DDR2 RAM at the same time as it interacts with the PCIe interface. This is because a script command usually “waits” for signaling to proceed through its function, i.e. a READ command asserting address and select signals, and waiting for an acknowledge before collecting the read data and terminating the cycle. To solve the problem of concurrency in the scripting system, BFMs are used.

A BFM is a functional model of the object of interest. In the case of a DDR2 RAM, the BFM acts and reacts like the real device would. At the same time it should not implement the timing of the RAM interface, as models of that type are not BFMs but actual models of the device. Models of devices that include timing can significantly slow your simulations; avoid using models with timing if possible. BFMs, on the other hand, may not even model a memory location until it is written to, in other words a sparse memory model. A BFM is purely functional and does not model timing, unless of course it is required to model the timing, in which case it becomes a model as opposed to a BFM.

Notice I use the word object when referring to a BFM. A VHDL component can be considered an object, as it has interface requirements and data access control. In the case of our BFM, it has a specific interface to the DUT, and that interface restricts access to the BFM internals, i.e. through its interface signalling. The test system has a stimulus access port to access various controls and storage, but in a very controlled way. The BFM will have internal signals and variables that are not accessible outside itself; they can be considered private to the BFM. Sort of like an object in an OOP language, but not quite.

Creating BFMs is part of the verification effort. If it is well done, the effort can be reused: a well thought out BFM may be reusable in the next project, or for other projects that have similar interfaces. When I create a BFM I look at it as if it were a device. It has an entity and architecture just like any other design object. Besides the obvious DUT interfaces required to exercise the design interface, I provide an interface for the test bench. I call it the stimulus access port in the VHDL Test Bench Package documentation, Section 7. Behind the access port is a register set or addressable space. I use the registers (several default ones, again and again) to control and configure the BFM.

A refinement I have added to the test bench package is stimulus access port record type definitions. These additions include slave and master versions of the bus definition. The master versions will enable smart objects to use the stimulus bus to control BFMs. Record types for pins enable the pins to be modified while reducing the overall coding effort for the change. Records for pin types are a very good practice, even in the DUT. The new “standard” record types are part of a new release of the test bench package (available for Beta testing soon).
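As an illustration only (the record and field names below are my own guesses, not the package's actual definitions), such stimulus bus record types might look like:

```vhdl
-- hypothetical stimulus access port records; names are illustrative only
type stm_mst_out_type is record   -- driven by the bus master
  addr  : std_logic_vector(31 downto 0);
  wdata : std_logic_vector(31 downto 0);
  wr    : std_logic;
  rd    : std_logic;
end record;

type stm_mst_in_type is record    -- driven back by the slave (BFM)
  rdata : std_logic_vector(31 downto 0);
  ack   : std_logic;
end record;
```

Bundling the pins this way means adding a signal to the bus touches one record definition instead of every port list it passes through.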

To continue on the topic of creating BFMs: if your efforts are part of a large corporation or team, good BFM planning can save a lot of time. If you have many different signalling interfaces and very few data configurations, it may be beneficial to break the BFMs into parts. One part could be the data generation BFM and another part could be the signalling interface BFM. When you create your signalling BFMs, you use an interface compatible with the data generation BFM so you can reuse the data generator. So if you had ten different signalling interfaces and only one data type, you have saved rewriting or copy/pasting the data generation code nine times over. Not only that, but if your data generation system changes there is only one place to make the change. Modularity is the key word. This kind of structure can save time in the creation of the verification environment as well as in test case writing. Once you have used something, it is easier when you use it again.

When I create a BFM I start by writing a simple specification. Since I use a BFM like a device, accessed through its stimulus port, I create a register map of the controls, functionality options and internal memories. This enables me to plan out what I will build, and it provides documentation for users. When implementing, you have a specification to code from, and this makes the coding easier.
As an example, using my default BFM register set and one memory range:

---
Name              Address     Bit(s) Description
Control Register    0           0    Enable
                                1    Open file trigger, Write a '1'
                                     to this bit to trigger file open
                                     This bit is self clearing.

Configuration       1         3-0    Data coding mode
Register                             0000 Incrementing
                                     0001 Random
                                     0010 Load from file
                                     0011 User Data

Error Register     2                 Read only definition of error indications

Seed Register      3         31-0    Seed value for random number generation

User Data Memory 0x1000 – 0x107F     Bits 7 downto 0 of the data are written
                                       to the addressed location.

The above specification is for the example packet generator BFM provided with the new test bench package version, to be released in the next blog posting.

The stimulus access port provides 32 bit addressing and 32 bit data, which has been enough for everything I have ever implemented. If I have a BFM that needs more registers, I have all the rest of the 32 bits of addressing to use. I always put the above registers into a BFM because they are common, and they are just a copy/paste away from a working base implementation.
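Assuming a WRITE command that targets the BFM's stimulus access port, a test case might configure the example packet generator through the register map above like this (the variable names are mine):

```
-- configure the packet generator through its register map
DEFINE_VAR PKT_CTRL_REG 0        -- control register
DEFINE_VAR PKT_CFG_REG  1        -- configuration register
DEFINE_VAR PKT_SEED_REG 3        -- random seed register

WRITE $PKT_SEED_REG x12345678    -- seed the random number generator
WRITE $PKT_CFG_REG  1            -- data coding mode: 0001 Random
WRITE $PKT_CTRL_REG 1            -- bit 0: enable the BFM
```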

When coding the BFM, I create a test bench for it, using the VHDL test bench package of course. The other thing that one may have to build is a mating BFM for your development. If you are building a SPI Master interface, for instance, you need a slave to react to the master commands, so you might have to build both. This has the advantages of getting to know the interface better and of producing both versions of the interface. When the BFM is connected to the DUT for testing, there is a whole test environment to fall back on when there are interface problems.

In the weeks to come, I will post up examples of specific BFM types. The plan is to produce a signaling BFM to partner with the packet generator, a BFM to recover DUT output data, a compare BFM for testing correctness of data and finally a master BFM to demonstrate my vision of the master stimulus bus. The master stimulus bus is new and I will have to do some playing around with it to be able to produce a usage example. The next post will be the release of the Beta version of the Test Bench Package. This includes the BFM and stimulus bus definition stated above.

Sckoarn

Thursday, 21 April 2011

VHDL Test Bench: TTB Gen Plus Beta


Beta Release of TTB Gen Plus 2.0!!!

In preparation for a new release of the test bench package, I have spent a few hours recoding the TTB Gen Plus tcl/tk generation tool. It has one small GUI addition, but other than that it should look and feel exactly like version 1.0. I am currently working on an update to the VHDL Test Bench Package and want to make it a general update to everything. Part of the package is the generation tool, which helps reduce the overhead of creating a standard test bench implementation. So, as a bonus to you for visiting the blog, I would like to offer the Beta version for your testing.

The main enhancement is the removal of physical restrictions on parsing the entity definition. The tool should now parse any legal VHDL entity definition. The tool will now also carry generics found on the entity into the component and port mapping output in the structure file, entity_name_ttb_str. As a minor addition, an optional build_tb.bat file can be generated for Modelsim and Aldec compilation.

There is one little thing about the generic generation. It is hard for the tool to predict what values should be assigned to generics. So, as an initial step, they are generated in the component and port mapping but commented out for later completion by the user.

If you need to get tcl/tk, you can get it here: http://www.activestate.com/activetcl/downloads

I personally use the 8.5 version.

There is only one condition on downloading a copy of TTB Gen Plus Beta: if you use it and find a bug, or a feature needed or disliked, you have to post a comment at the bottom of this post. Think of it as a bug report; post up an example of the offending entity declaration.

TTB Gen Plus Beta can be downloaded from here: TTB Gen Beta

In the past I have found at least two uses for TTB Gen Plus besides generating test benches. If you code your entity first, you can use TTB Gen Plus to generate the component definition for you. This can save lots of typing if the entity is large with many pins. Also, the port map definition can be copied into a different structure file; remove the generated signal names and you have a nice start on the port map coding effort.

I hope that TTB Gen Plus can save you as much typing time as it has me.

Sckoarn

P.S. I will be releasing a Beta version of the VHDL Test Bench Package soon!!

VHDL Test Bench: What is Self Checking??


What is meant by self checking?

Firstly, a self checking test case is one that gets some status from the DUT and tests that it is correct. The status could be that which is read from a DUT status register: you read the status register and test that it has the correct contents. Or it could be that some BFM implements a bus protocol, with a protocol checker and internal status register, and reading that register indicates that everything is good. When a status is checked there is the possibility that it is wrong. This should cause the simulation to output a message and terminate the run. The message should be as descriptive as possible, as you want to find the source of the problem quickly. A test case is self checking in that it tests for some condition and, if it is not correct, outputs an indication and terminates the run.

I always implement READ commands in the test bench. Every READ command puts the value read into a common variable. This enables various VERIFY commands to test for conditions after every read. For example, if I had a READ_CPU and a READ_BFM command, and they both put the read value in a variable called v_temp_read, then a VERIFY (word) command and a VERIFY_BIT command could look to the same place to get the data to be tested (some tips about command creation). The VERIFY command is the self checking element of a test case.
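In script form this might look like the lines below. The addresses are made up, and the VERIFY_BIT parameter convention (bit index, then expected value) is my assumption:

```
READ_CPU   x0040   -- read a CPU register; result lands in v_temp_read
VERIFY     x00FF   -- word compare against v_temp_read
READ_BFM   x0010   -- a BFM read loads the same common variable
VERIFY_BIT 3 1     -- test that bit 3 of v_temp_read is a 1
```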

When a VERIFY or testing type command checks a value and it is wrong, the output message should have enough detail to enable the problem to be located quickly. I use the assertion facility of VHDL to output useful messages. The file name and the line number in that file are part of the instruction, and the current value is always available; specifically, the file_name string variable and the file_line integer variable. These variables can be used in the assert statement so the user will know where the error originated. (NOTE: While testing this code it was found that the file_name variable contains nothing. I found this bug in the release version and it will be fixed in a new release, coming very soon.) Below is an example of how I create a VERIFY command. We assume a read took place before the verify is done, and the value is in the v_temp_read variable.

-----------------------------------------------------------------------------
elsif (instruction(1 to len) = "VERIFY") then
  v_temp_vec1 := std_logic_vector(conv_unsigned(par1, 32));
  assert (v_temp_vec1 = v_temp_read)
    report LF & "ERROR: Compare Value was not as expected!!" &
           LF & "Got " & (to_hstring(v_temp_read)) &
           LF & "Expected " & (to_hstring(v_temp_vec1)) & LF &
           "Found on line " & (integer'image(file_line)) & " in file " & file_name
    severity failure;

In the above example, if par1 does not match v_temp_read, the assertion will fire. The error message will state there was a mis-compare and report the received value, the expected value, and the file name and line number in the script that caused the error. The “to_hstring” function is available in VHDL 2008. The only other items needed before a user can copy/paste the above, fully functional, VERIFY command are the v_temp_read and v_temp_vec1 std_logic_vector variables in the read_file process. The above command and a few others have been added to the code snips file. (All code snips are now part of the OpenCores distribution)
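Following the same pattern, a bit level test command can be added to the else if chain. This is my own sketch, not part of the package; I assume par1 carries the bit index and par2 the expected bit value:

```vhdl
elsif (instruction(1 to len) = "VERIFY_BIT") then
  -- par1: bit index, par2: expected bit value (assumed convention)
  v_temp_vec1 := std_logic_vector(conv_unsigned(par2, 32));
  assert (v_temp_read(par1) = v_temp_vec1(0))
    report LF & "ERROR: Bit " & (integer'image(par1)) & " was not as expected!!" &
           LF & "Found on line " & (integer'image(file_line)) & " in file " & file_name
    severity failure;
```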

There is a benefit to having the VERIFY command separate from the READ command in larger verification environments. The scripting system can easily be made to provide a READ_VERIFY command, where you both read and verify in the same command. The disadvantage to this is that if you have more than one read type command, like DUT reads and BFM reads, you will have to create a READ_VERIFY type command for each read type. And if you want to create VERIFY_BIT commands, again you need one of those for each read type you want to test. Everything that is read should be tested. If the read and verify are separate commands, then one VERIFY command can service all READ type commands.

Another form of self checking relates to the Interrupts and Waiting post. An unexpected interrupt causes the system to put out a message and terminate the simulation. The self checking part is the process that watches the interrupt output pin and, if an interrupt is not expected, causes an assertion. This relates to scripts in that a script has to inform the interrupt watcher that an interrupt is expected, so it does not terminate the simulation when it comes.
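In script form this might look like the lines below. The EXPECT_INT and WAIT_INT command names are made up for illustration; the actual commands are whatever you implement against your interrupt watcher process:

```
EXPECT_INT 1          -- tell the watcher one interrupt is coming
WRITE $INT_EN_REG 1   -- enable the interrupt source in the DUT
WAIT_INT              -- wait for the interrupt to arrive
```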

The last form of self checking comes from BFMs, specifically checker type BFMs. These objects usually monitor some complicated function: a communications protocol, a bus, RAM accesses and so on. The function of these objects is to indicate to the verification system when something goes wrong. For instance, a bus monitor could check the contents of a bus on each rising edge of a clock and, if there are x's on any line, indicate an error. Or the monitor could be checking actual timing and/or sequences. The topic of monitoring BFMs will be further elaborated on in future posts.

I looked over some home projects and found that I never used a verify type instruction; looking at waveforms seems to have fit my needs. My designs at home are very small and I was just playing around. Projects at work are 1000 times bigger and there are >100 test cases. When I run a regression and a test case fails, I need to know what failed and where. Good checking and good messages are a must for good verification results.

Sckoarn

Saturday, 16 April 2011

VHDL Test Bench Package: Functional Coverage


Functional Coverage?? You may think this topic only applies to current methodologies like those based on sophisticated OOP languages using randomization and coverage matrices. This is not true: in a script based system, the scripts created to test the DUT are the functional coverage.

When you are verifying a product, you will want the quality to be as high as possible. This all stems from good planning. There should be a written product specification so that everyone knows what they are building. From this specification, more specifications can be created that spec out the low level details, i.e. your FPGA or ASIC. Part of the verification effort is to create a test plan. This includes extracting the functional requirements from various sources. The first source of functionality is the requirements specification. The other source, of most of the detailed functionality, is the design specification; this is unless the requirements specification has ALL the details needed to prove the functionality of the device. I have found it very rare that one document is the source of all functionality definition. The quality of the specification is usually reflected in the quality of the design and testing. A poorly defined device is hard to design, let alone verify.

The functionality extracted from the specifications should be collected and documented. I usually make a table in the test plan that contains all the functionality. The table can often be more than ten pages of functional points, one point per row. By doing the plan, you review all the functionality and prepare yourself for the task. For each functional point, list a test case to cover it; this could be placed in a column in the same row as the functional point. If the functionality is covered by many test cases, they should all be listed. By doing this exercise of planning, you will have produced a matrix of functional coverage and the test cases that do the covering. Be sure to make the test cases self checking so you know that the testing is being done and verified.

How do you know you got all the functionality?
One common facility is code coverage. Code coverage tells you which lines of code and which branches were exercised when you ran a simulation. Code coverage does not tell you if any of the functionality simulated was correct. If you run all of your test cases and combine the code coverage, you will get an insight into how well the design was exercised. Missed lines of code and branches will sometimes uncover missed or unknown functionality. Some lines may be missed because they are unreachable. By analyzing each missed code line, the verification person can determine why each item was missed.

The VHDL Test Bench Package can be used to create a verification system that enables the full functional requirements of an FPGA or ASIC design to be verified. This is achieved by using test cases to prove functional points.

Sckoarn

Monday, 4 April 2011

VHDL Test Bench Package: An Internal BFM, Like a CPU


This post will present how the VHDL Test Bench Package can be used as an “internal” test bench or BFM. By internal I mean that the test bench bhv architecture file is substituted in place of an internal architecture block of the DUT, i.e. replacing some of the RTL with behavioural test bench code, namely the script parser. This enables the test script to emulate internal blocks with simple custom commands, which can be very useful in several situations.

The first and most popular is to replace the CPU core with the scripting portion of the bhv file. As an internal block, the CPU does not source a clock, so the clock_driver process can be removed from the bhv file. A code snip of this implementation is provided in Section 3 of the Code Snips download file. Presented in the code snip are the structural modifications that would have to be made to the DUT RTL. The code is also included here to facilitate better descriptions.

The first modification is the addition of a generic to the entity whose architecture instantiates the component of interest. Below is a sample of the additional code. The addition of the generic and additional structure will have no effect on synthesis.

-- this is the top entity, or the level where assigning the
-- en_bfm generic makes sense
entity my_top_dut is
  generic (
    g_en_bfm : integer := 0
  );
  port (
    -- top port definitions
  );
end entity my_top_dut;

A copy of the RTL block is made, renamed, and a generic is added to it that points to the stimulus file. Some methods declare components in a package; in the case of the example, the components are declared in the architecture of the example entity (my_top_dut). The start of the architecture could look like the code below:

architecture struct of my_top_dut is

  -- this is the component of an internal block of my_top_dut
  component rtl_block
    port (
      reset_n : in std_logic;
      clk_in  : in std_logic;
      -- ...
    );
  end component;

  -- bhv_block has the same pin out as rtl_block, except for the generic
  component bhv_block
    generic (
      stimulus_file : in string
    );
    port (
      reset_n : in std_logic;
      clk_in  : in std_logic;
      -- ...
    );
  end component;

  --....

begin

When it comes to port mapping an instance, a VHDL generate statement around the RTL and around the bhv port mappings enables one or the other to be included in the load, hence an optional internal BFM. This is shown in the example code below:

begin

  -- if the generic is at its default value, include the RTL
  -- (note: VHDL requires a label on each generate statement)
  rtl_gen : if (g_en_bfm = 0) generate
  begin
    rtl_b1 : rtl_block
      port map (
        reset_n => reset_n,
        clk_in  => clk,
        -- ...
      );
  end generate rtl_gen;

  -- if the generic is set for bhv, include it
  bfm_gen : if (g_en_bfm = 1) generate
  begin
    bfm : bhv_block
      generic map (
        stimulus_file => "stm/stimulus_file.stm"
      )
      port map (
        reset_n => reset_n,
        clk_in  => clk,
        -- ...
      );
  end generate bfm_gen;

  -- ...
end struct;

Be it a simulator or a synthesis tool, the user has control over which version to load through the top level enable generic (g_en_bfm). By default the above implementation will load the RTL; to get the BFM in place you have to modify code or pass in a different generic value when loading the simulation. This facility is provided by all major tool vendors.
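In ModelSim, for example, the generic can be overridden when loading the simulation with the -G switch; other simulators provide equivalent options:

```
vsim -Gg_en_bfm=1 work.my_top_dut
```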

Now with the test bench bhv file substituted for the CPU, the CPU emulation commands can be pasted into the else if chain. Make that change and the other modifications detailed in Section 2 of the code snips file, and you have a nice start to the implementation. The one thing that will have to be created is the interface logic for the block being emulated. In the case of a CPU, READ and WRITE commands will have to be created that use the bus signalling. There may be many types of bus cycles that need to be emulated. Using procedures within the read_file process will enable a reduction in file size and stability of interface commands. This topic will be detailed in a future post.
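As a sketch of the idea, a simple bus write cycle could be wrapped in a procedure declared in the read_file process, so every WRITE type command shares one implementation. The signal names and handshake here are assumptions for illustration; adapt them to your bus:

```vhdl
-- hypothetical single write cycle; a WRITE command body would just call this
procedure cpu_write(addr : in std_logic_vector(31 downto 0);
                    data : in std_logic_vector(31 downto 0)) is
begin
  wait until rising_edge(clk_in);
  cpu_addr  <= addr;
  cpu_wdata <= data;
  cpu_sel   <= '1';
  cpu_wr    <= '1';
  wait until cpu_ack = '1';    -- slave acknowledge ends the cycle
  wait until rising_edge(clk_in);
  cpu_sel   <= '0';
  cpu_wr    <= '0';
end procedure cpu_write;
```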

Another good candidate for internal BFM substitution is the debug link block. Many processors have a debug facility, with some implementations being controllable over a system bus through a simple RS-232 serial link, for example. The RS-232 debug link in RTL is a bus master that can control some debug slave on the bus system. The idea is to replace that RTL debug link with an internal BFM. This makes the scripting system a bus master, which can now control the system as if it were a host computer over the RS-232 link. This removes the slow RS-232 system from having to be simulated/modelled. The real CPU RTL can now be controlled as it would be in the implementation, through its debug interface. If the CPU had been modelled before as a bus master (as I would assume it was), you can most likely reuse some of that code.

As stated in the test bench package documentation, having multiple script parsers in one environment will increase complexity significantly. If an internal BFM uses the read_file process, all effort should be made to make it the central place for control. If the verification system needs to have scripting control from the top level, another way to gain control of an internal block is to use the signal connection facility found in most simulation tools. In the Modelsim world this is known as signal spy. It enables the connection of one signal to another while ignoring VHDL structure boundaries. The user can drive/read these signals and control the end points. This enables one script parser to control whatever is required by the effort.

Using facilities like signal_spy is good if it helps you get the job done. The implementation can use the same generate structure presented above, but with an empty architecture for the bhv instead of the test bench bhv file. Once the signal_spy is connected, you can operate on the internal block as if the scripting system were there. Commands in the bhv file can act on the signal_spy signals like any other signal or pin it has access to.

That concludes the presentation of how the VHDL Test Bench Package can be used as an internal BFM. I have personally implemented all of the above methods. I hope this helps some readers implement good verification environments.

Sckoarn

Saturday, 2 April 2011

VHDL Test Bench Package: Variables in the Scripting System


One of the more powerful things about the VHDL Test Bench Package is the implementation of variables in the scripting system. This facility is provided by the default command “DEFINE_VAR”, which is built into the parsing system and is not part of the else if chain. As one of the latest enhancements to the test bench package, variable definitions were removed from the instruction array. This was done to speed up searching as well as to make declaration independent of usage; that is, you can use a variable before it is defined in the script.

Using variables makes scripts more readable. Instead of 1/0 as values to pass, you can use ON/OFF, and as a person reading that, you already know from the naming that it is a switch value. The user of the variable does not have to worry about the value. If you have some binary words that need to be written to a DUT to enable certain configurations, it is better to read “ENABLE_MODEXX” than x00234.

Be sure to use meaningful variable names.
Now the team, or you, have bought into using variables in a big way. At the top of each script, everyone pastes in a copy of a nice group of variables they used from the test case before. Some time later in the project, after many test cases have been created, something happens and a change to the DUT forces the value of one or more of the variables to change!! Oh no, that means editing all the scripts. This problem could be solved by writing a script changer program, or you could do it by hand. It would have been better to have avoided the situation altogether.

The INCLUDE command solves the above problem. Create a file that only contains the common variable definitions. In your scripts, use the INCLUDE instruction to read and create the common variables. I put my variable includes at the top, like many other languages do. I use this approach for device address maps. This way, if there are any address changes, there is a single place to update, and all the test cases receive that update automatically. Also, as a side benefit (assuming all test case writers use them), the scripts will be more readable as a whole because of the common variable definitions.

Once a variable is defined in a script, its value can be changed. A variable can be assigned the value of another variable. Below is an example of using variables in a loop.

Let's assume the line below this one is in a file named “std_vars.stm”:
DEFINE_VAR TABLE_BASE_ADDR x8000 -- define base address of table

This is a simple test case that uses variables in a good way:
INCLUDE stm/std_vars.stm

DEFINE_VAR TABLE_ADDR 0 -- define temp location for changes
DEFINE_VAR DATA 0 -- define data variable with value 0
DEFINE_VAR DATA_INC 1 -- a value to change the data by
DEFINE_VAR ADDR_STEP 4 -- the address increment value
.....
-- copy the default value to a variable you can change
EQU_VAR TABLE_ADDR $TABLE_BASE_ADDR
LOOP 512 -- loop through the table
WRITE $TABLE_ADDR $DATA -- write to address the data.
READ $TABLE_ADDR -- read it back
VERIFY $DATA -- check it is a match
ADD_VAR DATA $DATA_INC -- change the data
ADD_VAR TABLE_ADDR $ADDR_STEP -- point to next address
END_LOOP

FINISH

The above example does everything with variables. It is isolated from DUT address changes because it includes a standard variable definition from a single place. If your test had several more loops and you wanted to change the addressing step value, you could do it in one place using the scheme above.

During some rather large verification projects, the use of the default variable “include” file became very important. I took the concept further and created a table defined in tcl code. From this table I generated the standard variables include files as well as several other files: header files for C and assembly coding, and tcl/tk code for test case generation and for the generated test cases themselves. I will cover this topic in detail in a future post on test case generation. In one project I found that even the software group was using my table for generating their header files.

How do I produce a walking ones pattern? Simple: add a variable to itself, with a starting value of 1. This is shown in the example below:

DEFINE_VAR ADDR 1 -- 32 bit address range
DEFINE_VAR DATA 0 -- some data

LOOP 32
WRITE $ADDR $DATA -- write the test location
READ $ADDR -- read it back
VERIFY $DATA -- test results
ADD_VAR ADDR $ADDR -- walk the 1
ADD_VAR DATA 1 -- change the test data
END_LOOP

FINISH

Another variable type the VHDL Test Bench Package understands is the in-line variable. These are variables that contain the line number of the next executable script line. Defining one is shown below:

JUMP $JUMP1
....
....
JUMP1: -- <<< this is an in-line variable
WRITE .....

The JUMP1: statement defines a variable equal to the line number the WRITE statement is found on. In all my test case writing I have never had a reason to change an in-line variable value. But it can be done, as it is accessed just like any other variable in the variable linked list.

The last kind of variable, which really is not a variable, is the condition operator. They are described in the documentation as Condition Variables but, after thinking about it a bit, they really are not variables. They are predetermined text strings that return a specific value. This enables the command code to case on the condition that has been passed from the script.

User defined variable access and modification commands are all part of the scripting system. The test bench creator can use the ADD_VAR and EQU_VAR default commands as templates to create their own. One command I have created several times is a randomize variable command, though I rarely use it as I like to keep complication out of the script and in BFMs. Again, a future topic: BFMs and the test bench system.

That is about all I can say about variables and the test bench package. I hope it helps some readers avoid some scripting nightmares.

Sckoarn