Monday, 28 March 2011

VHDL Test Bench: CPU or Logic Script Commands

There may come a time when you want to perform some logic on values read from the DUT. This could be facilitated by some kind of special command or process, but the logic manipulations should be implemented using simple, standard logic commands, so that other tests can take advantage of a generic command structure.

I am speaking of CPU-like assembly instructions implemented as test bench script commands. This includes a register set (an array of std_logic_vectors), a condition code register, basic read/write commands that target the register set, logic commands and branch commands. These commands can be used in both an internal and an external test bench. The code for this addition to the command set is presented in Section 2 of the code snips download .zip file. A link to this file can be found in the “Useful Links” section of the blog page, low on the right, or via the Code Snips link here.

The commands found in Code Snips for CPU emulation are:  MOV, MOVI, AND, ANDI, OR, XOR, NOT, BZ, BE, BB.  In text:  move, move immediate, and, and immediate, or, exclusive or, not, branch if zero, branch if equal, branch if bit set/clear.  These form the basics, and the user can easily create more as needed with the examples provided in Code Snips.
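As an illustration, a short stretch of stimulus script using these commands might look something like the following. The command spellings are those listed above, but the operand and branch-target syntax shown here is hypothetical; the real forms are defined by the examples in Code Snips.

```
-- poll a DUT value until bit 0 sets, using the CPU-like commands
-- (operand and branch-target syntax is illustrative, not from Code Snips)
MOVI  1  0001        -- load the bit mask into register 1
MOV   2  0           -- assume register 0 holds the last value read from the DUT
AND   2  1           -- isolate bit 0; the condition codes update on the result
BZ    -2             -- result was zero: branch back and test again
```

The point is not the exact syntax but the abstraction: the script can compute on read-back values and make control flow decisions without recompiling anything.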

I have used these CPUish commands in the past for some external test benches, and I have always used them when emulating a CPU internal to a design. This was done because it is simpler to create and use the scripting system inside a DUT for testing than it is to have the overhead of the real RTL CPU in place. If you are creating a processor system, and you want to prove the system but not really the CPU and code, you can put your own emulated CPU in its place. You can then test the system, within limits, and be sure that at least the basics are functioning before trying to run compiled software on the real RTL CPU. Of course you do, in the end, want to run compiled code on the RTL CPU, just maybe not at the very start of the design process.

The code in the tb_code_snips.zip download file is optional code, especially the CPU commands. I would only include the code that is needed: if your testing does not need CPUish commands, do not add them.

I have been working on more topics that include code. I know I have been blah, blah, blah since I started this blog ... time to pump out some code. Code will always be published through the Code Snips link, and its content will be updated each time I post about some code. Also, keep an eye out for a new application called Winttb_gen; I am playing with some C# and figured that it would be a nice little application to start on.

Sckoarn

VHDL Test Bench: Default Generated Structure

Looking over the documentation I noticed that the structure of the test bench is not really detailed. If a user did not use the ttb_gen application, they may not know what the structure of the test bench looks like (in the way I use it). The following text gives a code example of a default test bench structure.

In the default test bench structure, an external test bench is wrapped around an entity. The wrapping function is performed by the top level test bench (ttb). I have a preference for putting the entity declaration and architecture declaration in separate files, so the top level entity is just empty, with a generic pointing to the stimulus file, as shown below:

library IEEE;
--library dut_lib;
use IEEE.STD_LOGIC_1164.all;
--use dut_lib.all;

entity spi_master_ttb is
  generic (
    stimulus_file: string := "stm/stimulus_file.stm"
  );
end spi_master_ttb;

The top level structure file connects the DUT architecture to the test bench architecture, as shown below:

architecture struct of spi_master_ttb is

  component spi_master
    port (
      mspi_reset_n  : in    std_logic;
      mspi_clk_in   : in    std_logic;
      mspi_sen      : out   std_logic;
      mspi_sclk     : out   std_logic;
      mspi_sdio     : inout std_logic;
      mspi_sdo      : in    std_logic;
      mspi_acc_done : out   std_logic;
      stm_add       : in    std_logic_vector(31 downto 0);
      stm_dat       : inout std_logic_vector(31 downto 0);
      stm_rwn       : in    std_logic;
      stm_req_n     : in    std_logic;
      stm_ack_n     : out   std_logic
    );
  end component;

  component spi_master_tb
    generic (
      stimulus_file: in string
    );
    port (
      mspi_reset_n  : buffer std_logic;
      mspi_clk_in   : buffer std_logic;
      mspi_sen      : in     std_logic;
      mspi_sclk     : in     std_logic;
      mspi_sdio     : inout  std_logic;
      mspi_sdo      : buffer std_logic;
      mspi_acc_done : in     std_logic;
      stm_add       : buffer std_logic_vector(31 downto 0);
      stm_dat       : inout  std_logic_vector(31 downto 0);
      stm_rwn       : buffer std_logic;
      stm_req_n     : buffer std_logic;
      stm_ack_n     : in     std_logic
    );
  end component;

  --for all: spi_master use entity dut_lib.spi_master(str);
  for all: spi_master_tb use entity work.spi_master_tb(bhv);

  signal temp_mspi_reset_n  : std_logic;
  signal temp_mspi_clk_in   : std_logic;
  signal temp_mspi_sen      : std_logic;
  signal temp_mspi_sclk     : std_logic;
  signal temp_mspi_sdio     : std_logic;
  signal temp_mspi_sdo      : std_logic;
  signal temp_mspi_acc_done : std_logic;
  signal temp_stm_add       : std_logic_vector(31 downto 0);
  signal temp_stm_dat       : std_logic_vector(31 downto 0);
  signal temp_stm_rwn       : std_logic;
  signal temp_stm_req_n     : std_logic;
  signal temp_stm_ack_n     : std_logic;

begin

  dut: spi_master
    port map(
      mspi_reset_n  => temp_mspi_reset_n,
      mspi_clk_in   => temp_mspi_clk_in,
      mspi_sen      => temp_mspi_sen,
      mspi_sclk     => temp_mspi_sclk,
      mspi_sdio     => temp_mspi_sdio,
      mspi_sdo      => temp_mspi_sdo,
      mspi_acc_done => temp_mspi_acc_done,
      stm_add       => temp_stm_add,
      stm_dat       => temp_stm_dat,
      stm_rwn       => temp_stm_rwn,
      stm_req_n     => temp_stm_req_n,
      stm_ack_n     => temp_stm_ack_n
    );

  tb: spi_master_tb
    generic map(
      stimulus_file => stimulus_file
    )
    port map(
      mspi_reset_n  => temp_mspi_reset_n,
      mspi_clk_in   => temp_mspi_clk_in,
      mspi_sen      => temp_mspi_sen,
      mspi_sclk     => temp_mspi_sclk,
      mspi_sdio     => temp_mspi_sdio,
      mspi_sdo      => temp_mspi_sdo,
      mspi_acc_done => temp_mspi_acc_done,
      stm_add       => temp_stm_add,
      stm_dat       => temp_stm_dat,
      stm_rwn       => temp_stm_rwn,
      stm_req_n     => temp_stm_req_n,
      stm_ack_n     => temp_stm_ack_n
    );

end struct;

The above shows a test bench component, spi_master_tb, with the same pin names but opposite pin directions; a DUT component definition as derived from the entity; a temporary signal for each pin; and the port mapping of each component using those signals.

The test bench entity (tb_ent) definition is just a reflection of the DUT, with every DUT output being a test bench input. A little editing and the file looks like this:

library IEEE;
--library ieee_proposed;
--library tb_pkg;
--possible users libs;
use IEEE.STD_LOGIC_1164.all;
use IEEE.STD_LOGIC_ARITH.all;
use work.STD_LOGIC_1164_additions.all;
use std.textio.all;
use work.tb_pkg.all;
--possible users use statement;

entity spi_master_tb is
  generic (
    stimulus_file: in string
  );
  port (
    mspi_reset_n  : buffer std_logic;
    mspi_clk_in   : buffer std_logic;
    mspi_sen      : in     std_logic;
    mspi_sclk     : in     std_logic;
    mspi_sdio     : inout  std_logic;
    mspi_sdo      : buffer std_logic;
    mspi_acc_done : in     std_logic;
    stm_add       : buffer std_logic_vector(31 downto 0);
    stm_dat       : inout  std_logic_vector(31 downto 0);
    stm_rwn       : buffer std_logic;
    stm_req_n     : buffer std_logic;
    stm_ack_n     : in     std_logic
  );
end spi_master_tb;

The edits made after generation were to fix up the library definitions to meet the environment.

All of the above code was generated by the TTB Gen GUI provided with the package download. It saves the user a lot of the mindless typing that structure files contain. Consider a 1000 pin device ... I would rather use a generation program than type that by hand. The above also shows the code that would create the structure described in the documentation, Default Test Bench Structure.

The architecture of the test bench, tb_bhv, is the container of all objects connected to the DUT: the script parsing process, the clock driver process and possibly many others. The tb_bhv file is the only file I add things to; I do not add objects to any other test bench file. This is because the others can be regenerated if there are significant pin changes to the DUT. Generating the tb_bhv file is optional for exactly this reason: it is where all the hand written code resides, and once you have added to it, you do not want to generate a new one over top of it.

Below is the entity that was used to generate the above structure. This entity and its BFM will be redesigned and presented in future posts about BFMs.

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.STD_LOGIC_ARITH.all;
use std.textio.all;

entity spi_master is
  port(
    MSPI_RESET_N  : in    std_logic;
    MSPI_CLK_IN   : in    std_logic;
    -- interface pins
    MSPI_SEN      : out   std_logic;
    MSPI_SCLK     : out   std_logic;
    MSPI_SDIO     : inout std_logic;
    MSPI_SDO      : in    std_logic;
    MSPI_ACC_DONE : out   std_logic;
    -- env access port
    STM_ADD       : in    std_logic_vector(31 downto 0);
    STM_DAT       : inout std_logic_vector(31 downto 0);
    STM_RWN       : in    std_logic;
    STM_REQ_N     : in    std_logic;
    STM_ACK_N     : out   std_logic
  );
end spi_master;

The code was included as text, and not as a picture, because it was not easy to make it pretty and fit. Also, pictures are hard to copy code from.

Sckoarn

VHDL Test Bench Package and VHDL 2008

In an effort to remove the need for the std_logic_1164_additions.vhdl package, I started looking at compiling with the 2008 switch turned on. (I use the Modelsim simulator, PE Student edition 10.1.) One of the reasons I want to get rid of the additions package is that it is about 2000 lines long. The student edition of the simulator is performance limited by the number of lines of code in your design and test bench; the number 10,000 lines comes to mind. So the additions package uses 10% of that, which IMO is a high percentage for something that is unused by the basic test bench structure.

So, why was it included in the first place? At the time, the additions package provided the to_hstring function, which is very handy for converting a std_logic_vector to a text string, in hex. This need arose when an evaluation of different tools' performance was done, which meant I had to change the tb_bhv file so it did not use tool specific VHDL functions (namely the Modelsim std_developerskit). I had used std_developerskit functions to format messages to the user when reporting compared values, and the needed, tool independent, functionality was found in the additions package. The functions in the additions package can be copied, modified and placed in the VHDL test bench package if needed. Removing std_logic_1164_additions from the compile stream will save space for more design and other useful test bench code when using the PE Student edition.

What I did was put the -2008 switch in the compile line that pertains to the test bench package, like so: “vcom -2008 vhdl/tb_pkg_header.vhd vhdl/tb_pkg_body.vhd”, and comment out all lines that pertained to use of the std_logic_1164_additions.vhdl package. It compiled and ran with no errors. NOTE: I had not used to_hstring in any file. When I compiled without the -2008 switch the results were the same, a working simulation.

I had never before evaluated the test bench package for compilation under the latest VHDL standard. It appears that the package itself stands alone, without the need for the std_logic_1164_additions package. So when you use the test bench package, you can comment out the library and use statements that pertain to the std_logic_1164_additions package.

When using Modelsim you can make use of their std_developerskit for many useful facilities, though its inclusion will use ALL of the design space in the Modelsim PE Student edition. Those of you that have real tools can use what is needed to get the job done, without worrying so much about performance.

Sckoarn

Tuesday, 22 March 2011

VHDL Test Bench Package Methodology, where does it Fit?

RTL design verification methodology has changed significantly over the past 15 years. A current, but incomplete, list of methodologies I know of is:

  1. Golden files (Wave forms)
  2. Vector files
  3. Script based
  4. OOP & Randomization (C++/SystemC, “e”)
  5. Formal
  6. OVM, VMM, UVM

The list above is really a short summary of how the verification industry evolved, and is by no means complete. The methods that the VHDL Test Bench Package fits into are 2, 3 and, touching into, 4. The list is ordered by preference level: number 1 is the least preferred method of functional verification, and numbers 5 and 6 are the most preferred.

The “Golden” file verification method is the act of comparing one graphical representation of an output wave form against another. The golden file can be created by recording the output of a simulation or by use of a wave form editor. Unless your logic is VERY simple, this method should be avoided; I, in fact, would never use this method no matter how simple my logic was. The problem is that if your design changes, you will have to generate new golden files. Your inputs are limited to those actions that will cause the output to compare, and this rules out randomization of any kind.

The vector file method is not much different from the golden file approach, except that the compare files usually contain simple text. As well, there are usually two files per test: one is the input data set and one is the expected output data values. Vector files are usually generated by some kind of model that is not part of the test bench. Though this approach is much better than golden files, it can cause a huge number of files to be generated to cover the full functionality of a complex design. When I implement a BFM that generates data, I make it so the BFM can optionally load the data from a file as well. The test bench package can easily be used to implement vector file methods.

Script based methods appeared in the early 1990's. This method enabled more dynamic testing to be performed with less overhead: the test cases were smaller and fewer than in a vector file system. The VHDL Test Bench Package is a script based system. Though a script based system can be made to wiggle wires, it is at its best when it is controlling and monitoring objects that are built to do that function. So in essence a scripting system can take the user up a level in abstraction. This is how I view the test bench package: a wrapper that gives me a container for my objects and a way to communicate with those objects from a simple script.

OOP and randomization came along when things started getting really complicated. A C++ library, SystemC, was created to facilitate all those things the HDL languages lacked, and several new languages were created to tackle the problem of ever more complicated RTL designs. These languages have built in randomization, so that test cases are in essence self generating. VHDL does not come with any built in randomization, but I personally use randomization all the time in my generator BFMs.

Formal methods, OVM, VMM, UVM .... are the current methods popular with bigger companies. Since this Blog is about the VHDL test bench package, I will leave those methods for your own investigation.

I personally strive for a scripting method that includes BFMs and models of the DUT, to prove the functionality in a stand alone manner. All generator BFMs can generate random data, as well as random signalling where appropriate, and all test cases are self checking. To create constrained random test cases I use a scripting language like tcl/tk or perl to generate stimulus files for me. So when I said that the test bench package touches into randomization, this last point is what I am referring to.

That summarizes where I see the VHDL Test Bench Package fitting into the methodologies map.

Sckoarn

Saturday, 19 March 2011

Top Level and Block Level Testing

 


When creating a top level verification environment, I try to consider the top level DUT as a black box: it can only be accessed through its ports and registers. This may not always be possible, but I think it is a good goal. The main mission of top level testing is to prove that all the internal blocks are connected and can work together as required. This assumes that the design is complicated and can benefit from block level testing. When the full DUT is being simulated, simulation speed will be slower.

If the DUT is sufficiently complex, it may be advantageous to do block level verification. The advantage of block level testing is that, the DUT being smaller, simulations will run faster. Also, because the functionality of a block is only part of the top DUT functionality, it should be easier to prove. Once the block is proven to be correct, it can be considered IP, sort of. The disadvantages of block level testing are that special BFMs and/or models may have to be created, and that a separate environment has to be created for each block.

A complicated design will usually be broken down into blocks, with one or more designers assigned to implementing each of them. If this is the case, then block level testing is almost a given. Each designer should use the same system as would be used at the top level; in this case I assume it is the VHDL test bench package. If a group of designers are working on individual blocks, there will be cases where BFMs can be common or reused from previous projects. The point is that all the block level verification is done using the same system; this enables others to take up the work easily, provided they know the system. I have seen several instances where someone leaves the company and someone else has to take up the work. Many times I have seen people redesign the verification environment because it was some random implementation, and they could not use it easily.

Block level testing can provide great benefit if there are complicated algorithmic functions applied to a data stream, for instance. There may be several operations along a modem coding chain that are extremely difficult to prove completely, for instance an FFT or IFFT translation block. In a block level environment it is easy to stimulate the block with full random data, whereas in the full design its inputs may be limited (by design). With a reference model of the algorithmic function to compare against, simulation speed will be optimum and randomness will ensure coverage. Once all blocks have been proven, the top level testing should not have to concern itself with the low level functionality of the algorithm(s), but more with the control and overall input/output.

The above discussion assumes a complicated design. For instance, on one design I worked on, it took 15 minutes of real time simulation to get 300 us of simulation time done at the top level. That time was required just for the setup of the DUT for the test at hand, and some test cases ran 4-6 hours. That, IMO, is a long time: since you are usually creating a directed test, you now have to wait for output. You could put the job on a compute farm, but not everyone has that facility. The top level I refer to here was in fact a block in a SoC design, the modem section. Running a basic traffic test on the whole SoC took 10-15 hours of real time.

I have done several home projects and have done block level testing on them. I like it when simulations are instantaneous; you can get lots done in a short time frame. My designs are small, but there was still benefit in breaking the design into blocks that could be easily tested.

So the message is, try to keep it simple, break things down into smaller chunks to facilitate simplified testing.

Thursday, 17 March 2011

VHDL Test Bench: Usage Tips I Interrupts and Waiting


If the test bench contains commands that wait for an event, it is good to have a watchdog to terminate the simulation if the event never happens. The main reason for this is so that your overnight regression does not get stuck on test #2 and waste the whole night.

A Script command might look something like this:

WAIT_IRQ -- wait for the IRQ

The test bench code could look like this:
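A minimal sketch of what this command's implementation might look like as a branch inside the script parsing process. The names irq, max_wait_time and the instruction-matching style are assumptions for illustration, not definitions from the package:

```vhdl
-- inside the script-parsing process, one elsif branch per command
-- (irq and max_wait_time are illustrative names; max_wait_time is a constant)
elsif (instruction(1 to len) = "WAIT_IRQ") then
  if (irq /= '1') then
    -- bounded wait: resume on the IRQ edge, or give up after max_wait_time
    wait until irq = '1' for max_wait_time;
    assert (irq = '1')
      report "WAIT_IRQ: IRQ did not assert within the maximum wait time."
      severity failure;
  end if;
```

The `wait until ... for ...` form gives the watchdog behaviour in one statement: the process resumes either on the event or at the timeout, and the assertion ends the simulation in the timeout case.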


For some cases the maximum wait time may need to be modifiable. In that case, make the constant in the above example a signal, and create a command that enables you to change its value.
 
 
If the DUT has an interrupt type output, the output should be monitored by a process. This will enable the test to “expect” the signal, or to catch it if it asserts unexpectedly. First, a switch is needed to disable the monitor for expected IRQs; a boolean signal can be used as the switch. Then, a command is needed that enables the state of the switch to be changed. Finally, a process monitors the IRQ, clock by clock, and asserts the appropriate message when the IRQ asserts.
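The monitor process described above might be sketched as follows. The signal names clk, irq and irq_expected are illustrative; irq_expected is the boolean switch that a script command would toggle:

```vhdl
-- clock-by-clock IRQ monitor (all signal names are illustrative)
irq_monitor: process
begin
  wait until clk'event and clk = '1';
  -- flag the IRQ only when the script has not declared it expected
  if (irq = '1' and not irq_expected) then
    assert (false)
      report "IRQ monitor: unexpected IRQ assertion detected."
      severity error;
  end if;
end process irq_monitor;
```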


 

Tuesday, 15 March 2011

The VHDL Test Bench Package vs. the Test Harness


Lately I have seen posted, and have seen people use, what they call a test bench. Some call it a harness; this name could be coming from the Verilog world. To put it in VHDL terms, the test bench is an architecture that instantiates the Design Under Test (DUT) and has several processes that interface to the DUT. One of those processes is the clock process; this is very standard, as every system has to have some kind of clock generation process. The other process is called something like stimulate or simulate. It “is” the test: it wiggles inputs, waits for time, wiggles more inputs. Many of these kinds of test harnesses do not even prove that something is correct; I think the user just looks at wave forms and decides whether it is correct or not. This system would be useful for testing something simple like an AND gate, but if your DUT is complicated, the test harness suffers from several problems.

  1. You have to recompile your test bench every time you make a change to the test.
  2. Unless you have more than one test harness, you only have one test.
  3. If your design changes, and you have more than one test harness, each harness has to be modified to test the changes.
  4. Since there is only one test per harness, it is difficult to implement multiple test scenarios.
  5. Reuse of test bench code is not likely from design to design.

With the VHDL Test Bench Package all of the above issues are removed from concern.
  1. You can run test after test and not have to recompile anything.
  2. You can have many many tests all running from the same, single test bench.
  3. If there are design changes that affect signalling in some way, the signalling only has to be modified in one place to cause all tests to be updated.
  4. You can have many test case writers creating test cases at the same time on the same test bench.
  5. You will find that many commands you create will be reusable in every test effort.

The VHDL Test Bench Package is a programmable script parser that exists in an architecture. By default (and by generation) the VHDL test bench is a multi file set, with the test bench architecture and DUT architecture connected in a VHDL top level test bench structure. The included generation program generates entity and architecture files to connect the test bench to the DUT, and also copies and edits the test bench architecture from a template. A user can also use the template architecture within any entity, making it an internal DUT BFM. Though it is a multi file system, a simple build script makes it easy to compile as needed.
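Such a build script might look like the sketch below. The file names are illustrative (only the vcom line for the package appears elsewhere in these posts); the point is the compile order: package first, then the DUT, then the generated test bench files.

```
vlib work
vcom vhdl/tb_pkg_header.vhd vhdl/tb_pkg_body.vhd
vcom vhdl/spi_master.vhd
vcom vhdl/spi_master_tb_ent.vhd vhdl/spi_master_tb_bhv.vhd
vcom vhdl/spi_master_ttb_ent.vhd vhdl/spi_master_ttb_str.vhd
```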

Though the VHDL Test Bench Package system is more complicated than a simple harness test bench, I believe the small amount of extra effort required to create it is offset by the capabilities it enables. And when the design is changing while you are trying to test it, you may find the effort of updating hard coded test benches overbearing.

I will make posts in the near future that will help get users up to “high speed” in no time. Please post any comments regarding this post or what you would like to see posted next.

Sckoarn

Friday, 11 March 2011

Verification: Some Opening Words


A very short and simple description of the verification process:
Specifically, RTL verification (VHDL, Verilog) using the VHDL Test Bench Package is the task of proving that the RTL meets the functional requirements. The functional requirements exist in a requirements specification(s); also, but not preferably, requirements can exist in design specifications. It is the responsibility of the verification and design groups to collect functional requirements in a test plan document. The test plan cross references functional requirements with test cases, and includes a description of the verification environment and its parts. The verification group creates the verification test bench environment, models and BFMs. Test cases are created as defined in the test plan. When all the test cases are complete, design coverage is obtained, results are analyzed and appropriate actions taken. Final results should be documented somewhere; it could be in the test plan.

The above description outlines the major tasks and deliverables that a minimalistic verification effort should have. Depending on the tools available to you, many other activities can be added to the overall process.

Other items:
A revision control system is a must. Even if you are working alone, it is always nice to be able to recover back to a previous state. In a group, revision control enables you to know everyone is working with the same files. From a verification point of view, it is a must to know that your verification system can be consistent when there are multiple test bench users. (WinCVS?)

A bug tracking system is a must. Bug tracking does not only pertain to the RTL. Anything that needs to be addressed, now or in the future, should be tracked. This includes documentation, test bench, models, scripts and RTL issues. When the issues pile up, it is easy to drop some, bug tracking enables all issues to be remembered and resolved. (Bugzilla?)

Definitions:
A BFM (Bus Functional Model) is an object that interfaces with the RTL, the DUT (Design Under Test). Usually a BFM implements some kind of signalling protocol by asserting and reacting to external DUT ports. A BFM may contain data generation, configuration registers and/or status registers. A BFM may also provide signalling into the test bench environment. It may also be advantageous to have a BFM as an internal block within the RTL, i.e. a processor.

A model is an object that represents the functionality required of the DUT. A model is used as a reference to compare the output of the DUT against: the data and control applied to the DUT are also applied to the model, and the two outputs are compared to determine correct functionality. For a complex DUT, it may be near impossible to do the job without a model to supplement the verification environment.

The test bench is the object that enables the user (test writer) to control the BFMs and RTL within the environment. The test bench is the container that holds all that is, and is not, the DUT. In the case of the VHDL Test Bench, these are the ttb and tb files generated by the tb_gen utility.

A test case, or script, is a plain text file that contains commands, defined in the test bench, to direct a specific function to be proven to be working. A test case is self checking in that, if an error is detected, it will be reported and the simulation may be terminated. Directed test cases are scripts that are created, usually by hand, to prove some particular functionality is working as required. This means that when your design complexity increases, so does the number of directed test cases that need to be created; usually higher complexity means more functionality, which is why the number of test cases increases. Random test cases are not usually created by hand; they are generated by other scripting languages like TCL or perl. This has the advantage of adding randomness to your testing and gaining all the benefits of random testing (which is a huge topic, not covered here).
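A minimal directed test case might read like the following. The command names here are hypothetical; each would first have to be defined in the test bench parser:

```
-- hypothetical directed test: prove a register is writable and readable
RESET_SYS                  -- put the DUT into a known state
WRITE    0004  000000FF    -- write a pattern to address 0x0004
READ     0004              -- read it back
VERIFY   000000FF          -- self-check: report an error if the data differs
FINISHED                   -- end of test case
```

Because the script is plain text interpreted at simulation time, a test writer can edit or add cases like this without recompiling anything.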

Design coverage is defined as how well the test environment exercised the functionality in the DUT. The test cases written to test the DUT are the functional coverage. With good planning, a high degree of functional coverage can be obtained via directed tests. Another useful coverage indicator is code coverage. Code coverage tells you what lines of code were executed while simulating. When you run all of your test cases (your regression set) and merge the code coverage results from each test, you will get an indication of what was missed. There are many things that can be found while doing code coverage: dead code, optional code and missed functionality. To get 100% code coverage, the DUT code will have to have coverage on/off pragmas around all known unreachable code. Remember that code coverage is not equal to functional coverage: just because some line of code was executed does not mean that it was correct.

The above text is some thoughts on the topic of verification, the task for which the VHDL Test Bench Package exists. It should prep the reader for the following posts.

Sckoarn

Thursday, 10 March 2011

Introduction

Hello all you VHDL coders out there.

I initially posted this on the edaboard.com blogs, but their blogger is not as flexible as this system, so this will be the new and continuing place for my postings. Sckoarn

Welcome to the GNU VHDL Test Bench Package Blog!!
I have been coding VHDL for 15 years now, and the coding I did was 95% for verification. From my beginnings as a verification person using VHDL, I have used a test bench system: a simple scripting system that has proven itself to enable quality designs to be produced. The system I have used, and have published, has been significantly upgraded over the past 15 years of use.

The link to the GNU VHDL Test Bench Package is here:

Be sure to download the whole thing including documentation and example.
I published this package because the company I was with was looking to change to a more modern verification methodology. Since I had found the package so useful, I wanted to share it with anyone that could use it. It has been downloaded, on average, about 2-3 times per day over the past 4 years. This indicates to me that many are already using the package, that simple scripting as a methodology is not completely dead, and that VHDL is still used by many.

So, what is the GNU VHDL Test Bench Package?

The test bench package is a framework that enables the objects of a test system to be controlled. It is also a collection of procedures that enable the user to create their own scripting language. The framework is the structure that can be automatically generated around a VHDL entity (automatically, provided the coding is done in a specific way). The framework gives the user a “place” to put all the interface BFM type code. The scripting system enables the user to create textual scripting commands to control the simulation and its objects. A test writer can use the scripting commands to write test cases without having to know any HDL. The script is a simple text file that can be created with any text editor. Scripts can also be generated, which is an advanced topic. The package enables the user (designer/verification person) to create a test bench and bring the design out of reset within hours. Scripts can be changed, using existing commands, with NO recompile needed.

The test bench package has been tested on all major simulators as well as GHDL, the free VHDL simulator.

Why should/would anyone use the GNU VHDL Test Bench Package?

  1. Your tool set or coding knowledge limits you to the use of VHDL.
  2. You have no established way of doing verification; you use an ad-hoc method for each design.
  3. Your designs are not huge SOC devices. You can use the test bench package to assist in the verification effort of SOC devices, but it is not a task for a novice user.
  4. You want to design, not design verification systems. The package enables a VHDL coder to concentrate more on the design and verification effort and less on the verification system/environment.
  5. You are a lone Verification person in a bigger group. With some of the above constraints, you want to organize the verification effort so the whole group can work under a single method.
  6. The test bench package is industry proven.
  7. The test bench package is free.

I have performed the verification function under all of the above constraints. Some of the designs I have verified have been SOC devices and even multi-device designs.

Why write this blog?

As stated above, the test bench package has been used, by myself and many others, for 15 years. Over that time frame I have changed the way I use the package, learning from past efforts. I would like to help new users of the VHDL test bench package by-pass some of the pitfalls I have encountered. The test bench package is very versatile, and it can help make life easier, or not so.

My only goal in writing this blog is to help new users become experienced users ASAP. This will help you produce better quality designs with less time and effort.

By writing this blog, I am not stating that using the VHDL test bench package is the best way to do verification. The VHDL test bench is a 1990's methodology, and today many believe in the OOP languages that have evolved to try and meet the task of verification. It is not the purpose of this blog to compare or comment on the differences of OOP methods and the VHDL test bench package.

What do I plan to write about in this blog?

I will be adding to this blog over the coming weeks; there are lots of little things that can be said about usage: topics like BFMs, modelling, scripting, complex designs, reuse, usage tips and external script generation.


If you are interested in having me write about a specific topic regarding the VHDL test bench package, feel free to post up your request or send me a private message.

Cheers,
Sckoarn