✔ IC Design Engineer Training with Semicon

11/20/2013

Typical Verification Flow


../images/main/bullet_green_ball.gif Verification Flow With Specman

Verification flow with Specman is the same as with any other HVL. The figure below shows the verification flow with Specman.

Verification Flow

Verification flow starts with understanding the specification of the chip/block under verification. Once the specification is understood, a test cases document is prepared, which lists all possible test cases. Once this document covers roughly 70-80 percent of the functionality, a testbench architecture document is prepared. In the past, the architecture document was prepared first and the test cases document next. The drawback of that order is that the test cases document may call for verifying functionality that the testbench does not support, because the architecture was fixed before the test cases were known. If we have a test cases document to refer to, writing the architecture document becomes much easier, as we know exactly what is expected from the testbench.

Note:
This section was written in a hurry, so it is very far from what I really want it to be!!!


../images/main/bulllet_4dots_orange.gif Test Cases

 Identify the test cases from the design specification: a simple task for simple designs. Normally, each requirement in the specification becomes a test case; anything the specification states with "can do" or "will have" becomes a test case. Corner-case tests usually take a lot more thought to identify.

../images/main/bulllet_4dots_orange.gif Testbench Architecture

Typical testbench architecture looks as shown below. The main blocks in a testbench are the base object, transaction generator, driver, monitors, and checker/scoreboard.


The block in red is the DUT, and the boxes in orange are the testbench components. Coverage is a separate block which gets events from the input and output monitors; it is hooked up in the same way as the scoreboard, but instead of checking correctness it records which scenarios have actually been exercised.

../images/main/bullet_star_pink.gif Base Object

 Base object is the data structure that will be used across the testbench. Let's assume you are verifying a memory; the base object would then contain:
       
<'
struct mem_object {
  addr  : uint (bits: 8);
  data  : uint (bits: 8);
  rd_wt : uint [0..100];    // weight of read accesses
  wr_wt : uint [0..100];    // weight of write accesses
  rd_wr : bool;

  keep soft rd_wt == 50;
  keep soft wr_wt == 50;

  // Generate the weights before the read/write flag
  keep gen (wr_wt) before (rd_wr);
  keep gen (rd_wt) before (rd_wr);

  // Default operation is Write
  keep soft rd_wr == FALSE;

  // Choose read or write according to the weights
  keep soft rd_wr == select {
    rd_wt : TRUE;
    wr_wt : FALSE;
  };
};
'>

Here mem_object is the name of the base object, in the same way as we have a module name for each module in Verilog or an entity name in VHDL. addr, data, rd_wr and the weight fields are the fields of mem_object. Normally the base object also carries some default constraints and some methods (functions) that manipulate the object, as in the sketch below.
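As an illustration of the kind of utility method a base object typically carries, here is a minimal sketch of a compare() routine added through an extension; the method name and its use by a scoreboard are assumptions for illustration, not part of the original example.

<'
// Hypothetical utility method: field-by-field comparison of two memory
// transactions, the kind of helper a scoreboard might call.
extend mem_object {
  compare(other : mem_object) : bool is {
    result = (addr  == other.addr) and
             (data  == other.data) and
             (rd_wr == other.rd_wr);
  };
};
'>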

../images/main/bullet_star_pink.gif Transaction Generator

 Transaction generator generates transactions based on the test constraints. Normally the transaction generator applies the test case constraints on the base object, generates a base object accordingly, and then passes it to the driver.
         
A typical transaction generator would be like this:
     
<'
struct mem_txgen {
  !mem_base : mem_object;
  //driver  : mem_driver;
  !num_cmds : uint;

  // This method generates the commands and calls the driver
  generate_cmds()@sys.any is {
    for {var i: uint = 0; i < num_cmds; i += 1} do {
      // Generate a write access
      gen mem_base keeping {
        it.addr  == 0x10;
        it.data  == 0x22;
        it.rd_wr == FALSE;
      };
      // Call the driver
      //driver.drive_mem(mem_base);
    };
  };
};
'>

../images/main/bullet_star_pink.gif Driver

 Driver drives the base object generated by the transaction generator into the DUT. To do this, it implements the DUT's input protocol, something like this:

<'
unit mem_driver {
  event clk is rise('top.mem_clk') @sim;

  // This method drives the DUT
  drive_mem(mem_base : mem_object)@clk is {
    wait cycle;
    // Drive ce, addr and the rd_wr command
    'top.mem_ce'    = 1;
    'top.mem_addr'  = mem_base.addr;
    'top.mem_rd_wr' = mem_base.rd_wr;
    if (mem_base.rd_wr == FALSE) {
      'top.mem_wr_data' = mem_base.data;
    };
    // Deassert all the driven signals
    wait cycle;
    'top.mem_ce'      = 0;
    'top.mem_addr'    = 0;
    'top.mem_rd_wr'   = 0;
    'top.mem_wr_data' = 0;
  };
};
'>

../images/main/bullet_star_pink.gif Input Monitor

Input monitor monitors the input signals to the DUT. For example, in an Ethernet switch, each incoming packet is picked up by the input monitor and passed to the checker. A minimal input-monitor sketch is shown below.
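This sketch assumes the signal names from the driver example above; the unit name, the collected list and the mem_access event are illustrative assumptions, not part of the original article.

<'
unit mem_input_monitor {
  event clk is rise('top.mem_clk') @sim;
  event mem_access;                 // emitted for every access seen on the inputs
  !collected : list of mem_object;

  // Sample the DUT inputs on every clock where chip-enable is asserted
  on clk {
    if 'top.mem_ce' == 1 {
      var obj : mem_object = new;
      obj.addr  = 'top.mem_addr';
      obj.rd_wr = ('top.mem_rd_wr' == 1);
      if not obj.rd_wr {
        obj.data = 'top.mem_wr_data';
      };
      collected.add(obj);
      emit mem_access;              // checker and coverage can subscribe to this event
    };
  };
};
'>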

../images/main/bullet_star_pink.gif Output Monitor

Output monitor monitors the output signals from the DUT. For example, in an Ethernet switch, each outgoing packet from the switch is picked up by the output monitor and passed to the checker.

../images/main/bullet_star_pink.gif Checker/Scoreboard

The checker or scoreboard checks whether the output coming out of the DUT is correct. In e, scoreboards are typically implemented using keyed lists, as in the sketch below.
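Here is a minimal keyed-list scoreboard sketch built around the mem_object struct from the Base Object example; the unit name and the add_expected/check_read methods are assumptions for illustration only.

<'
unit mem_scoreboard {
  // Expected memory contents, keyed by address
  !expected : list (key: addr) of mem_object;

  // Called by the input monitor for every write driven into the DUT
  add_expected(obj : mem_object) is {
    if expected.key_exists(obj.addr) {
      expected.delete(expected.key_index(obj.addr));  // keep only the latest write
    };
    expected.add(obj);
  };

  // Called by the output monitor for every read data observed at the DUT output
  check_read(addr : uint (bits: 8), data : uint (bits: 8)) is {
    if not expected.key_exists(addr) {
      dut_error("Read from address ", addr, " which was never written");
    } else {
      check that expected.key(addr).data == data else
        dut_error("Data mismatch at address ", addr);
    };
  };
};
'>

The keyed list gives fast lookup by address, which is why keyed lists are the usual choice for scoreboards in e.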

../images/main/bulllet_4dots_orange.gif TestBench Coding
 
  Testbench coding starts after the testbench architecture document is complete. Typically we start with:
  •     base object
  •     transaction generator
  •     driver
  •     input monitor
  •     output monitor
  •     scoreboard
If the project is big, all of these tasks can start at the same time, as many engineers will be working on them. A minimal sketch of how the pieces might eventually be tied together under sys is shown below.
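The following sketch assumes the mem_driver and mem_txgen examples above; the instance names and the fixed num_cmds value are assumptions for illustration, not the article's method.

<'
extend sys {
  driver : mem_driver is instance;
  txgen  : mem_txgen;

  run() is also {
    // In a real environment the generator would hold a pointer to the driver;
    // here we only set the command count and start the generation thread.
    txgen.num_cmds = 10;
    start txgen.generate_cmds();
  };
};
'>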

../images/main/bulllet_4dots_orange.gif Test Case Execution

In this phase, the test execution team executes the test cases in priority order. Typically, once the focused test cases pass and some level of random test cases pass, we move to regression. In regression, all the test cases are run with different seeds every time there is a change in the RTL.

../images/main/bulllet_4dots_orange.gif Post Processing    
 
In post processing, code coverage and functional coverage are checked to see whether all of the DUT functionality has been covered. A minimal functional-coverage sketch in e is shown below.
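This sketch shows a small functional-coverage group in e; the unit name, the mem_access event and the sample() method are assumptions for illustration, not part of the original article.

<'
unit mem_cov_collector {
  event mem_access;          // emitted for every DUT access being covered
  !cur : mem_object;         // the transaction currently being sampled

  cover mem_access is {
    item addr  : uint (bits: 8) = cur.addr;
    item rd_wr : bool           = cur.rd_wr;
    cross addr, rd_wr;       // read and write coverage per address
  };

  // A monitor would call this for every observed access
  sample(obj : mem_object) is {
    cur = obj;
    emit mem_access;
  };
};
'>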

../images/main/bullet_green_ball.gif Code Coverage

Code coverage shows which parts of the RTL have been exercised, and is therefore used as a measure of how well the DUT has been verified. It also gives an indication of how complete the functional coverage model is.
          
There are many types of code coverage as listed below:
  •     Line Coverage
  •     Branch Coverage
  •     Expression Coverage
  •     Toggle Coverage
  •     FSM Coverage


../images/main/bulllet_4dots_orange.gif Line Coverage

    Line coverage (also called block or segment coverage) shows how many times each line of the RTL is executed.

../images/main/bulllet_4dots_orange.gif Branch Coverage

    Branch coverage shows whether all possible branches of if..else and case statements have been reached.

../images/main/bulllet_4dots_orange.gif Expression Coverage
 
    The gold standard among code coverage types: expression coverage shows whether all possible legal boolean values of each expression have been reached. Generally, expression coverage of 95% or above is considered good for a large design.

../images/main/bulllet_4dots_orange.gif Toggle Coverage

    Toggle coverage shows which bits in the RTL have toggled. Toggle coverage is mainly used for power analysis.

../images/main/bulllet_4dots_orange.gif FSM Coverage

    FSM coverage shows whether all states have been reached and whether all possible state transitions have occurred.


Source: asic-world
