Variable Ordering Heuristics for Kronecker Lattice Diagrams
by
Junaid Omar
Project:
Develop a Lattice Diagram package and investigate variable ordering
heuristics.
References:
[1] Lattice Diagrams using Reed-Muller Logic -- Perkowski, Jeske and Xu (RM '97)
[2] On Variable Ordering and Decomposition Type Choice in OKFDDs -- Drechsler, Becker and Jahnke
[3] Multi-level Programmable Arrays for Sub-Micron Technology Based on Symmetries -- Perkowski, Jeske et al.
[4] Polarized Pseudo-Kronecker Symmetry and Synthesis of 2x2 Lattice Diagrams -- Drucker, Files, Perkowski and Jeske
Motivation:
Lattice Diagrams are a useful structure for implementing a function directly in regular layout hardware.
They resemble Ordered Kronecker diagrams in the sense that the expansion applied at a node can be Shannon, positive-Davio or negative-Davio. They are different from Ordered Kronecker diagrams because internal nodes have two parents instead of one thereby resulting in a lattice-like structure.
They grow linearly at each level and thus are more desirable for layout-driven synthesis than other decision diagram structures that grow exponentially.
However, the merging of nodes reintroduces variables that other decision diagrams eliminate after one level. This increases the depth of the diagram, so a good variable ordering heuristic is needed to keep the depth from becoming unnecessarily large.
The order in which variables are chosen and the type of transformation
applied at a level can greatly influence the eventual size of the lattice
diagram. Therefore a good heuristic is needed to determine variable ordering.
This project will try to apply the heuristics as described in [2] to the
Lattice Diagrams as described in [1].
[1] presents a very useful introduction and methodologies for creating Lattice Diagrams, and shows how they can be translated into Akers-Array-type regular layout hardware. Ordered Shannon Lattice Diagrams (OSLDD) and Ordered Kronecker Lattice Diagrams (OKLDD) are described, and multiple-output OSLDDs are discussed. There are several rules for merging adjacent nodes in a tree, depending on whether the merger is Shannon-Shannon, Davio-Davio or Davio-Shannon. Davio can be positive-Davio or negative-Davio; note, however, that there is no pD-nD merge transformation. Also, the Shannon-Shannon transformation affects only the middle node, whereas the others also affect the right branch. As a result, lattice diagrams tend to trail off quickly on the left side and skew towards the center and right. A good mapping to regular layout must therefore try to minimize the skew so that we get a more evenly shaped lattice. To this end I will attempt to introduce some new transformations: reverse Shannon, reverse positive-Davio and reverse negative-Davio; [1] only discusses merging rules for the normal transformation types. Our heuristic will use the following merging rules from [1]:
- Shannon - Shannon
- positive Davio - positive Davio
- negative Davio - negative Davio
We will not use the positive Davio-Shannon and negative Davio-Shannon merging rules described in [1] because our heuristic assumes a common decomposition type at a given level. Later refinements of the heuristic may include these merges as well, but would have to ensure that we never encounter an unmergeable situation such as pD-nD. Also note that the nD-nD rule in Fig. 1 is not correct. The correct nD-nD merge rule is as follows:
In addition to the three merge rules defined above, we add the following "reverse" merge rules:
- Reverse Shannon - Reverse Shannon
The proof for this is as follows:
Before merge:
R = bW ^ b'X
S = bY ^ b'Z
where ^ denotes the XOR operation.
After merge:
R = bW ^ b'(b'X ^ bY) = bW ^ b'X
Similarly
S = b(b'X ^ bY) ^ b'Z = bY ^ b'Z
- Reverse positive Davio - Reverse positive Davio
The proof for this is as follows:
Before merge:
R = bW ^ X
S = bY ^ Z
where ^ denotes the XOR operation.
After merge:
R = b(W ^ X ^ Y) ^ (b'X ^ bY) = bW ^ bX ^ bY ^ b'X ^ bY
The bY terms cancel, so R = bW ^ bX ^ b'X. Substituting b' = 1 ^ b:
R = bW ^ bX ^ (1 ^ b)X = bW ^ bX ^ X ^ bX = bW ^ X
Similarly
S = b(b'X ^ bY) ^ Z = bY ^ Z
- Reverse negative Davio - Reverse negative Davio
Proof can be inferred in a similar manner to the above proofs.
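Because these algebraic identities are easy to get wrong, here is a small stand-alone C++ check (my own illustration, not part of any package) that verifies the reverse Shannon merge exhaustively over all Boolean assignments of b, W, X, Y and Z; the reverse Davio merges can be checked the same way by editing the before/after expressions.

#include <iostream>

int main() {
    // Reverse Shannon merge, with ^ denoting XOR and b' the complement of b:
    //   before merge: R = bW ^ b'X            S = bY ^ b'Z
    //   after merge:  R = bW ^ b'(b'X ^ bY)   S = b(b'X ^ bY) ^ b'Z
    for (int m = 0; m < 32; ++m) {
        int b = (m >> 0) & 1, W = (m >> 1) & 1, X = (m >> 2) & 1,
            Y = (m >> 3) & 1, Z = (m >> 4) & 1;
        int nb  = 1 - b;                    // b'
        int mid = (nb & X) ^ (b & Y);       // shared middle node created by the merge
        int R_before = (b & W) ^ (nb & X);
        int S_before = (b & Y) ^ (nb & Z);
        int R_after  = (b & W) ^ (nb & mid);
        int S_after  = (b & mid) ^ (nb & Z);
        if (R_before != R_after || S_before != S_after) {
            std::cout << "mismatch for assignment " << m << "\n";
            return 1;
        }
    }
    std::cout << "reverse Shannon merge holds for all 32 assignments\n";
    return 0;
}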
Reverse or "flipped" transformations are similar to their normal counterparts with the difference being that the left and right pointers are transposed (flipped around). The flipped expansions have no particular significance for ordinary BDD or OKFDDs since the logic being represented is unchanged. However, in context of shaping the Lattice DD these flipped expansions can play a very useful role. This can obviously help in balancing the shape of the lattice. Ideally a lattice diagram outline should look like an right-angled isosceles triangle but in practice this is not always achievable. Symmetric functions would yield lattices close to this ideal form. Two of these ideal symmetric lattices could be implemented on a rectangular grid hardware layout with minimal wastage of hardware resources.
Also note that merging reintroduces the decision variable, which is eliminated in a normal decision diagram. Variables will therefore appear more than once in a lattice diagram, and it is very important to order the variables so as to avoid unnecessary repetition, which would result in a larger lattice diagram.
Another important difference between our lattice diagram and conventional DDs is that its terminal nodes are single variables instead of ones and zeros. This is logical, since our objective is layout-driven synthesis rather than equivalence checking and similar tasks, which are the most common uses of conventional DDs.
An important thing to note about Kronecker lattices is that the only level where no merging occurs is the level immediately below the root. Consequently it makes sense to use the variable that occurs in the most terms of our function as the decision variable here. Using the most common variable for the first split from the root guarantees that this variable is eliminated, with no possibility of its re-emergence due to later merges.
Another important aspect of Kronecker lattices is the reuse of the select line. Each level of the diagram can be visualized as a set of 2-input multiplexers with the select line connected to the decision variable. It is assumed that both the normal and the complemented forms of the decision variable are available for connection to the select lines of the different multiplexers at a given level. At a given level, however, it is possible and in fact quite likely that the decision variable is not present in all the functions on that level. Rather than wasting the select lines of these functions, we can use them for other variables. This is conceptually equivalent to using different decision variables at that level. The result is the introduction of "curved" levels.
Examining the merging rules shows that negative Davio and reverse negative Davio are the most complicated of all the merging rules, since they cause all the child nodes to be affected. Thus a negative Davio merge will cause a lot of agitation and recomputation of merged nodes. Shannon and reverse Shannon do not change the balance of the lattice because they affect only the middle node, so they cause the least perturbation after the merge. Positive Davio and reverse positive Davio, however, are quite useful for balancing the shape of the lattice diagram: positive Davio tends to skew the diagram towards the right and reverse positive Davio tends to skew it towards the left. It would appear that an FDD lattice with levels alternating between positive and reverse positive Davio expansions may have the most balancing effect on the shape of the lattice diagram.
[2] describes heuristics for SOP and multi-level circuits for determining variable ordering for OKFDDs, and compares them to previous heuristics for OBDDs (OKFDDs with only Shannon expansions) and OFDDs (OKFDDs with only Davio expansions).
We need to know two things before we build the OKFDD:
- The order in which the variables occur (Variable Ordering).
- The decomposition type (Shannon, pDavio or nDavio) applied at each level. This information is stored in a Decomposition Type List (DTL); a possible representation is sketched below.
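As a concrete illustration (the names are my own, not taken from [2]), the variable ordering and the DTL can simply be kept as two parallel lists with one entry per level of the diagram:

#include <vector>

enum class Decomp { Shannon, PosDavio, NegDavio };   // the three decomposition types used by [2]

struct OrderingInfo {
    std::vector<int>    var_order;   // variable chosen at each level, root first
    std::vector<Decomp> dtl;         // Decomposition Type List: expansion used at each level
};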
The PLA Heuristic algorithm described in [2] works as follows:
- The algorithm uses a constant, k, to determine the extent to which the tree will favor Shannon or Davio expansions. The user chooses a value for k where 0 <= k <= 0.5. This constant influences all the other calculations for variable order and DTL. For k = 0 the tree degenerates to an OBDD, with all decompositions being Shannon. For k = 0.5 the tree degenerates to an OFDD, with all decompositions being Davio. Obviously the value of k greatly influences the DTL and variable ordering, so it should be chosen by an educated guess about the nature of the function being implemented. If the nature of the function does not have a direct correlation to k, then we may need to try several values of k before finding a suitable one.
- Calculate the nlit and plit values for each variable. nlit and plit measure how often a variable occurs in its complemented and uncomplemented form, respectively. For example, assume a PLA which implements two functions f1 and f2 with 3 variables x1, x2, x3:
x1 | x2 | x3 | f1 | f2 |
1 | 1 | 0 | 0 | 1 |
1 | 0 | 0 | 0 | 1 |
1 | 1 | 1 | 1 | 1 |
nlit for var[i] = sum of the 1s in the f1 and f2 columns of the rows in which var[i] = 0
plit for var[i] = sum of the 1s in the f1 and f2 columns of the rows in which var[i] = 1
For example, plit for x2 = 1 + 2 = 3 and nlit for x2 = 1. (A small code sketch of this computation follows the list below.)
variable | nlit | plit |
x1 | 0 | 4 |
x2 | 1 | 3 |
x3 | 2 | 2 |
- Calculate a weight for each variable, then sort the variables by weight to obtain the variable ordering.
- Calculate the DTL using the PLA heuristic.
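To make the nlit/plit step concrete, here is a hedged C++ sketch that computes both counts from a cube-list representation of the PLA; the Cube structure and function name are my own, and the weight formula and DTL selection of [2] are deliberately not reproduced here.

#include <string>
#include <vector>

struct Cube {            // one PLA row
    std::string in;      // input part over {'0', '1', '-'}, e.g. "110"
    std::string out;     // output part over {'0', '1'},     e.g. "01"
};

// plit[i]: number of output 1s in rows where variable i appears uncomplemented ('1')
// nlit[i]: number of output 1s in rows where variable i appears complemented  ('0')
void count_literals(const std::vector<Cube>& pla, int num_vars,
                    std::vector<int>& nlit, std::vector<int>& plit) {
    nlit.assign(num_vars, 0);
    plit.assign(num_vars, 0);
    for (const Cube& c : pla) {
        int ones = 0;
        for (char o : c.out)
            if (o == '1') ++ones;            // output 1s contributed by this row
        for (int i = 0; i < num_vars; ++i) {
            if (c.in[i] == '1')      plit[i] += ones;
            else if (c.in[i] == '0') nlit[i] += ones;
            // '-' (don't care) contributes to neither count
        }
    }
}

For the three-row example above this gives exactly the table shown (x1: nlit 0 / plit 4, x2: 1/3, x3: 2/2), and the variable with the largest nlit + plit (here x1) is a natural candidate for the first split below the root, consistent with the earlier observation about the root level.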
Objectives/Milestones:
- Familiarize myself with a Decision Diagram package like CUDD, PUMA or TUD DD.
- Add lattice diagram transformations to this package.
- Use this lattice package to explore variable ordering and its effects on the size of lattice diagrams.
- Compare results for various standard benchmarks to existing results from [1] and [2].
- Expand this investigation to lattice diagrams for multi-output functions.
* Because of time limitations many of these milestones may not be reachable in this quarter.
Results (In Progress)
- Familiarize myself with a Decision Diagram package like CUDD, BuDDy, PUMA or TUD DD. These packages were chosen for evaluation because they met many of my selection criteria listed below.
So far I have evaluated three DD packages for my project. The important considerations in my evaluation were:
- Ease of use, including easy downloading, installation, compiling and linking on a Sun or HP Unix workstation
- Good documentation with details of functions and usage
- Preferably C or C++ source code since I planned to develop the lattice extension in C++
- Some form of graphical visualization of the DDs
- A sample application or example that could read a specified function in BLIF or similar format.
- CUDD: CU Decision Diagram Package
This package has been developed by Prof. Somenzi at the University of Colorado. Somenzi is co-author of the book "Logic Synthesis and Verification Algorithms", which is the main textbook for this course (EE572). I looked at this package first but found that it only supports three types of DDs, namely BDDs (Binary Decision Diagrams), ADDs (Algebraic Decision Diagrams) and ZDDs (Zero-suppressed Decision Diagrams). The two basic structures in this package are DdNode and DdManager. The DdNode struct is the basic node, which can have T and E pointers to its children (Then and Else); the same node can instead hold a value, thereby acting as a terminal node. There are two functions to output the resulting diagram. Cudd_DumpBlif() writes a file in BLIF format; it is restricted to BDDs, and the diagrams are written as a network of multiplexers. Cudd_DumpDot() produces input suitable for the graph-drawing program dot written by Eleftherios Koutsofios and Stephen C. North. The CUDD package comes with a sample application that can read BLIF files, form BDDs and dump the diagrams in DOT format. After downloading and compiling this package and the sample application I reached the conclusion that this package is unsuitable for my project because it does not support OKFDDs: it has no Davio expansions and would require considerable modification to support lattice diagrams.
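For orientation, here is a minimal sketch of how the example function used in the BLIF section below could be built and dumped directly through the CUDD BDD API (CUDD 2.x style as I understand it; this is my own illustration, not the package's sample application):

#include <cstdio>
#include "cudd.h"

int main() {
    DdManager *dd = Cudd_Init(4, 0, CUDD_UNIQUE_SLOTS, CUDD_CACHE_SLOTS, 0);
    DdNode *n1 = Cudd_bddIthVar(dd, 0), *n2 = Cudd_bddIthVar(dd, 1);
    DdNode *n3 = Cudd_bddIthVar(dd, 2), *n4 = Cudd_bddIthVar(dd, 3);

    // f = n1.n2'.n4' + n1.n2.n4 + n2.n3
    DdNode *t  = Cudd_bddAnd(dd, n1, Cudd_Not(n2));   Cudd_Ref(t);
    DdNode *c1 = Cudd_bddAnd(dd, t, Cudd_Not(n4));    Cudd_Ref(c1);
    Cudd_RecursiveDeref(dd, t);
    t = Cudd_bddAnd(dd, n1, n2);                       Cudd_Ref(t);
    DdNode *c2 = Cudd_bddAnd(dd, t, n4);               Cudd_Ref(c2);
    Cudd_RecursiveDeref(dd, t);
    DdNode *c3 = Cudd_bddAnd(dd, n2, n3);              Cudd_Ref(c3);
    DdNode *s  = Cudd_bddOr(dd, c1, c2);               Cudd_Ref(s);
    DdNode *f  = Cudd_bddOr(dd, s, c3);                Cudd_Ref(f);
    Cudd_RecursiveDeref(dd, s);

    // dump the BDD for f in DOT format for the dot/Graphviz tools
    const char *inames[] = { "n1", "n2", "n3", "n4" };
    const char *onames[] = { "f" };
    std::FILE *fp = std::fopen("myout.dot", "w");
    if (fp != NULL) {
        Cudd_DumpDot(dd, 1, &f, const_cast<char **>(inames), const_cast<char **>(onames), fp);
        std::fclose(fp);
    }

    Cudd_RecursiveDeref(dd, c1); Cudd_RecursiveDeref(dd, c2);
    Cudd_RecursiveDeref(dd, c3); Cudd_RecursiveDeref(dd, f);
    Cudd_Quit(dd);
    return 0;
}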
BLIF
BLIF is an acronym for Berkeley Logic Interchange Format. The goal of BLIF is to describe a logic-level hierarchical circuit in textual form. A circuit is an arbitrary combinational or sequential network of logic functions. More information on BLIF is available at http://schof.colorado.edu/~ecen5139/5139/blif.html and http://www.ics.uci.edu/~rgupta/ics252/blif. A Blif2VHDL converter is also available in the public domain at http://rassp.scra.org/vhdl/tools/tools.html
Here is an example BLIF file that implements
f = n1.n2'.n4' + n1.n2.n4 + n2.n3
.model rcn25
.inputs n1 n2 n3 n4
.outputs f
.names n1 n2 n3 n4 f
10-0 1
11-1 1
-11- 1
DOT
This is a format that can be read by the "Graphviz" tools developed at AT&T Research. The dot tool can read a dot file and output a graphical view of the DD in PostScript format, which can then be viewed using ghostview or other PostScript viewers. Web address: http://www.research.att.com/sw/tools/graphviz
Example of converting the dot file produced for the BLIF example given above into PostScript:
% dot -Tps /usr2/cudd-2.1.2/nanotrav/myout -o my.ps
- TUD Decision Diagram Package:
This package has been developed at the Department of Electrical Engineering / Computer Systems at Darmstadt University of Technology (Germany). It is designed to handle OKFDDs and is interesting in that it also has a web interface for specifying and viewing DDs. I tried to contact the developer to get more information about downloading the package but did not receive any timely responses, and so reluctantly had to abandon the plan of using this package. The web address of this package is http://www.rs.e-technik.tu-darmstadt.de/~sth/dd/dd.html
- PUMA Decision Diagram Package
This package has been developed at Freiburg University (Germany). It is designed to handle OKFDDs and is written in C++. A sample application is included to demonstrate some of the package's capabilities. It uses some of the basic C++ features, but most of the design is closer to C. This is in many ways a limitation, since it restricts the extensibility that a proper object-oriented C++ design would have allowed. Input can be read in BLIF format using the OKFDD_Read_BLIF() function. It also has a useful X Windows interface to draw the DD, with different coloring for Shannon, negative-Davio and positive-Davio nodes. Access to this display is via three functions:
- ulint XShowDD ( utnode* ROOT )
Displays a tree with root specified in argument.
- ulint XShowDD_all ( )
Displays all trees registered with DD Manager.
- ulint XShowDD_mult ( utnode** ROOTS )
Displays all trees specified by the array of pointers ROOTS.
Here is the resulting output for the example described above
f = n1.n2'.n4' + n1.n2.n4 + n2.n3
.i 4
.o 1
10-0 1
11-1 1
-11- 1
The utnode struct is the basic structural element of the DDs in PUMA. A DD node consists of the following items:
Item | Type and name
Unique node number (ID-Number) | idnum of type ulint
Variable label | label_i of type usint
Low son of node | lo_p of type utnode*
High son of node | hi_p of type utnode*
Accessory field | link of type void*
Reference counter | ref_c of type uchar
Several protected items | ...
As can be seen, there are no parent pointers in this node. This may make it inappropriate for use as-is and may require deriving a LatticeNode class from utnode as the base class using C++ inheritance. Two additional utnode pointers, left_parent and right_parent, would need to be introduced; a possible shape for this class is sketched below.
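Purely as an illustration, and assuming the utnode layout listed above (the PUMA header name is a guess on my part), such a class might look like:

#include "puma.h"            // assumed header that declares utnode

// hypothetical sketch: LatticeNode extends PUMA's utnode with the two
// parent pointers needed for the lattice structure
struct LatticeNode : public utnode {
    utnode *left_parent  = nullptr;   // upper-left parent in the lattice
    utnode *right_parent = nullptr;   // upper-right parent in the lattice
};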
Conclusion
Based on my analysis I chose CUDD because of its very good documentation, its wealth of DD manipulation functions and a nice sample application for reading and writing BLIF files. I found CUDD easier to debug and compile than PUMA, and this was the deciding factor in my choice.
Compare results for various standard benchmarks to existing results
The results of this project are somewhat hard to compare since this is a relatively new area of research. There are no results available for Kronecker Lattice Diagrams in current publications, so any comparison with existing results for OKFDDs or Shannon Lattices would be like comparing apples to oranges. The various benchmarks, such as MCNC, can be used to form the Kronecker lattices, but the results of the various heuristics developed in this project can only be compared amongst themselves. The results of this project can serve as a starting point and act as an initial benchmark for comparing results from future research.
Problems with applying the PLA Heuristic described in [2]
Despite similarities between OKFDDs and Lattice Diagrams, there are several differences which make the adaptation of the PLA heuristic described in [2] difficult.
- The Lattice Diagram has variable repetition, which makes its variable ordering problem quite different from that of OKFDDs.
- It can be proven that a lattice diagram will eventually be completed, but a heuristic cannot predict exactly when a lattice diagram is complete. A lattice diagram is considered complete when all nodes on the exterior are terminal nodes.
- The PLA heuristic does not take into account the possibility of "flipped" decompositions.
- The PLA heuristic assumes that terminal nodes will be constants.
Outline of our heuristic
Because of time limitations I could not implement this heuristic completely in an actual program but here is a broad outline. An appendix lists the actual source code developed so far.
- Calculate the variable ordering and DTL according to the PLA heuristic described in [2] and outlined above.
- Starting at the top, build the lattice diagram downwards in a breadth-first manner.
- At each level only one kind of decomposition is applied. Perform the decomposition as in a normal OKFDD.
- Merge adjacent non-isomorphic nodes using the six merging rules described earlier.
- The DTL is limited to Shannon, positive Davio and negative Davio, but the heuristic determines whether to use the normal or reverse version of that decomposition at a given level. This determination is made at each level by a look-ahead algorithm which chooses the reverse type of the decomposition if the lattice appears to be becoming skewed in one direction (an illustrative sketch of this test follows the pseudo-code below).
- If a decomposition yields a single variable then no further decomposition to terminal nodes is necessary. In a layout-synthesis sense we have reached the point where a signal is connected directly to the input of our computing element (namely a Davio gate or Shannon multiplexer).
- Pseudo-code for the heuristic:
enum Decomp {            // >> Decomposition types <<
    N_Shan = 0,          // Shannon
    N_posD = 1,          // positive Davio
    N_negD = 2,          // negative Davio
    R_Shan = 3,          // Reverse Shannon
    R_posD = 4,          // Reverse positive Davio
    R_negD = 5           // Reverse negative Davio
};

// These two functions would implement the PLA heuristic described in
// the Drechsler and Becker OKFDD ordering heuristic paper [2]
void read_literals(Decomp *DTL, int *VariableOrder, float k);
void pla_heuristic(Decomp *DTL, int *VariableOrder, float k);

// Assuming no variable will be repeated more than 3 times,
// we allocate space accordingly. In real code this could be
// allocated dynamically.
int VariableOrder[3 * NO_INPUTS];   // stores the order in which variables occur
Decomp DTL[3 * NO_INPUTS];          // Decomposition Type List; six possible values
float k = 0;                        // arbitrary constant k used in the PLA heuristic

BEGIN
    print "Enter value of k";
    read k;
    read_literals(DTL, VariableOrder, k);
    pla_heuristic(DTL, VariableOrder, k);
    latticeNode *root = Build_OKFDD();
    // At this stage we have a correct variable order for the first pass
    // without any variable repetition, and a DTL containing only normal
    // decomposition types. We have constructed an OKFDD based on this
    // information. The root level is now ignored, since no merging
    // occurs at that level.
    List<latticeNode*> level;        // stores all the nodes of a level, left to right
    // start merging from the second level down
    for (int i = 1; i < NO_INPUTS; i++) {
        fill_level(i, level);        // populate the list with the nodes of level i
        for (int j = 0; j < level.size() - 1; j++) {
            merge_nodes(level[j], level[j+1], DTL[i]);
        }
    }
END
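The look-ahead mentioned in the outline above is left open in the pseudo-code. One simple realization, offered only as an illustration (the function and its parameters are my own), is to count how many unfinished nodes sit to the left and to the right of the level's center and switch to the reverse ("flipped") decomposition when the right half is already heavier:

#include <vector>

// Illustrative skew test. 'is_terminal' lists the nodes of the next level from
// left to right, true where the node is already a single-variable terminal.
// Returns true if the reverse decomposition should be used at this level.
bool use_reverse_decomposition(const std::vector<bool>& is_terminal) {
    int left = 0, right = 0;
    const int n = static_cast<int>(is_terminal.size());
    for (int i = 0; i < n; ++i) {
        if (is_terminal[i]) continue;          // finished nodes create no further skew
        if (i < n / 2) ++left; else ++right;
    }
    // normal pD/nD merges push new work to the right; flip them when the
    // right half already holds more unfinished nodes than the left half
    return right > left;
}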
Questions/comments to: junaid_omar@mentorg.com
Last modified: Tue Dec 8 13:57:59 PST 1998