Saturday 19 March 2011

Top Level and Block Level Testing

When creating a top level verification environment, I try to treat the top level DUT as a black box: it can only be accessed through its ports and registers. This may not always be possible, but I think it is a good goal. The main mission of top level testing is to prove that all the internal blocks are connected and can work together as required. This assumes the design is complicated enough to benefit from block level testing. It also means the full DUT is being simulated, so simulation speed will be slower.
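To make "access through ports and registers" concrete, here is a minimal sketch of a register-write procedure such as a top level test bench might call. Everything in it (the bus record, the names, the one-cycle strobe timing) is a made-up illustration, not the interface of the test bench package or any other library:

  library ieee;
  use ieee.std_logic_1164.all;

  package tb_reg_pkg is
    -- Hypothetical record bundling a simple write-only register bus.
    type reg_bus_t is record
      addr  : std_logic_vector(7 downto 0);
      wdata : std_logic_vector(31 downto 0);
      wr    : std_logic;
    end record;

    -- Drive one register write through the DUT ports.
    procedure reg_write(signal clk   : in  std_logic;
                        signal bus_o : out reg_bus_t;
                        addr : in std_logic_vector(7 downto 0);
                        data : in std_logic_vector(31 downto 0));
  end package tb_reg_pkg;

  package body tb_reg_pkg is
    procedure reg_write(signal clk   : in  std_logic;
                        signal bus_o : out reg_bus_t;
                        addr : in std_logic_vector(7 downto 0);
                        data : in std_logic_vector(31 downto 0)) is
    begin
      wait until rising_edge(clk);
      bus_o.addr  <= addr;
      bus_o.wdata <= data;
      bus_o.wr    <= '1';
      wait until rising_edge(clk);
      bus_o.wr    <= '0';    -- one-cycle write strobe
    end procedure reg_write;
  end package body tb_reg_pkg;

A test case then becomes a sequence of such calls, and the inside of the DUT never has to be touched.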

If the DUT is sufficiently complex, it may be advantageous to do block level verification. The advantage of block level testing is that, the DUT being smaller, simulations will run faster. Also, because the functionality of a block is only a part of the top DUT functionality, it should be easier to prove. Once the block is proven to be correct, it can be considered IP, sort of. The disadvantages of block level testing are that special BFMs and/or models may have to be created, and that a separate environment has to be created for each block.

A complicated design will usually be broken down into blocks, with one or more designers assigned to implementing each of them. If this is the case, then block level testing is almost a given. Each designer should use the same system as would be used at the top level; in this case I assume it is the VHDL test bench package. If a group of designers are working on individual blocks, there will be cases where BFMs can be common or reused from previous projects. The point is that all the block level verification is done using the same system, which enables others to take up the work easily, provided they know the system. I have seen several instances where someone leaves the company and someone else has to take up the work. Many times I have seen people redesign the verification environment because it was some random implementation and they could not use it easily.

Block level testing can provide great benefit if there are complicated algorithmic functions applied to a data stream, for instance. There may be several operations along a modem coding chain that are extremely difficult to prove completely, an FFT or IFFT block for example. In a block level environment it is easy to stimulate the block with full random data, whereas in the full design its inputs may be limited (by design). With a reference model of the algorithmic function to compare against, simulation speed will be optimal and randomness will ensure coverage. Once all the blocks have been proven, the top level testing does not have to concern itself with the low level functionality of the algorithm(s), and can concentrate on the control and the overall input/output.
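As a sketch of that idea, here is the shape of a self-checking block level test: random stimulus in, DUT output compared against a reference function. The entity name my_block, the one-cycle latency, and the placeholder reference function are all assumptions for illustration; a real reference model would implement the actual algorithm:

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;
  use ieee.math_real.all;    -- uniform() for pseudo-random stimulus

  entity block_tb is
  end entity block_tb;

  architecture test of block_tb is
    signal clk  : std_logic := '0';
    signal din  : unsigned(15 downto 0) := (others => '0');
    signal dout : unsigned(15 downto 0);

    -- Placeholder reference model; stands in for the real algorithm.
    function ref_model(x : unsigned(15 downto 0)) return unsigned is
    begin
      return x + 1;
    end function ref_model;
  begin
    clk <= not clk after 5 ns;

    -- Hypothetical DUT, assumed to have one cycle of latency din -> dout.
    dut : entity work.my_block
      port map (clk => clk, din => din, dout => dout);

    stim_and_check : process
      variable seed1, seed2 : positive := 42;
      variable r : real;
      variable x : unsigned(15 downto 0);
    begin
      for i in 1 to 1000 loop
        uniform(seed1, seed2, r);                  -- full-range random data
        x := to_unsigned(integer(r * 65535.0), 16);
        din <= x;
        wait until rising_edge(clk);               -- DUT samples din here
        wait until rising_edge(clk);               -- dout valid (one-cycle latency)
        assert dout = ref_model(x)
          report "Mismatch on sample " & integer'image(i)
          severity error;
      end loop;
      report "Random block test done";
      wait;
    end process stim_and_check;
  end architecture test;

The same structure scales up: swap the placeholder function for the real reference model and widen the stimulus as needed.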

The above discussion assumes a complicated design. For instance, on one design I worked on, it took 15 minutes of real time to simulate 300 us at the top level, and that was just the setup of the DUT for the test at hand. Some test cases ran 4-6 hours, which IMO is a long time. Since you are usually running a directed test, you now have to wait for the output. You could put the job on a compute farm, but not everyone has that facility. The top level I refer to here was in fact a block in an SoC design, the modem section; running a basic traffic test on the whole SoC took 10-15 hours of real time.

I have done several home projects and have done block level testing there too. I like it when simulations are instantaneous; you can get lots done in a short time frame. My designs are small, but there was still benefit in breaking the design into blocks that could be easily tested.

So the message is: try to keep it simple, and break things down into smaller chunks to make testing easier.
