This document outlines the suggested procedures for running the tests
included in this validation suite.  Note that all command files have been
written with the assumption that the underlying toolset implementation is
the Intermetrics VHDL Language System; however, these files should be
easy to modify to meet the needs of other VHDL implementations.


RUNNING THE BENCHMARK TESTS

The "bench" test directory contains the benchmark "shells" along with the
command files to run them.  These benchmarks are organized into subdirectories
by the VHDL features they test as documented in the files "bench_matrix" and 
"bench_objs" under the "doc" directory of this tape.

Also included in the "bench" subdirectory is the utility program that takes
the benchmark shells, along with input capacity parameters, and generates
proper VHDL source code representing the test.  This generator program is
itself a VHDL file called "gen.vhd".  A sample command file called
"generate", which analyzes, model generates, and builds the generator
program, has been provided.
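
On a VMS system a command file of this kind typically has the shape
sketched below.  The tool invocation verbs here are placeholders, not the
Intermetrics toolset's actual commands; see the "generate" file itself for
the real sequence:

     $ ! Skeleton only -- the verbs below are illustrative placeholders.
     $ analyze gen.vhd      ! analyze the generator source
     $ modgen gen           ! model generate the design unit
     $ build gen            ! build the simulation kernel for "gen"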

Any of the benchmarks may be executed once the simulation kernel for "gen"
exists.  First choose a test, change to the corresponding subdirectory,
and print the shell either to the screen or a printer; you will need to
read its comments to see what generator parameters are expected.  Look at
the EXAMPLE section, decide on parameter values, and then run the
generator program as instructed, being sure to use the same file names
given in the shell.  When "gen" has successfully generated a description
(flagged by an assertion violation carrying the message "done"), a
syntactically correct VHDL model named "test.vhd", or something very
similar, will have been deposited in the current directory.
The "test" command file in that subdirectory should then be used to run the
test.  Note that you can run the job as a VMS batch job by doing the
following:

     $ submit command_file_name.com

The "/que=que_name" and "/log=log_file_name" qualifiers may be used to
select the batch queue the job is submitted to and the pathname of the
log file.
This will cause all messages related to this job to be written to the file
"log_file_name" (you can name this whatever you want).  When the job is
reported "done", check "log_file_name" to see the timing data or any errors
that might have occurred.  You can re-run the test with different parameters
by regenerating the test program and following the procedures outlined above.
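
Putting the pieces together, a complete batch submission might look like
the following (the queue name, log file name, and command file name here
are placeholders for whatever you use):

     $ submit/que=sys$batch/log=test_run.log test.com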

The timing collection commands are already in the command files; however,
these commands do not provide timing data for any subprocesses spawned (as
in model generation, build, and simulation).  To get this data, you need to 
check with your system manager to see what kind of system accounting, if any, 
is being done.  If "process" accounting is done, ask the system manager to
execute a command in the following general form AFTER you have run some
benchmarks:

    $ account/binary/output=DATA_FILE.DAT/since=DD-MMM-YYYY/user=YOUR_USER_ID

where DATA_FILE.DAT is any name you choose for the output file, DD-MMM-YYYY
is the date you started running the benchmarks in VMS date format, e.g.
15-JAN-1991 (check with the manager to see how far back the accounting
files go), and YOUR_USER_ID is your login name on the
system.  When you receive the DATA_FILE.DAT file, use the following command to 
look at it

    $ acc/full DATA_FILE.DAT

This displays one subprocess's data per screen.  You can use the date/time data
in the log_file to help identify which subprocess corresponds to which command.
Only the model generate, build, and sim commands spawn subprocesses, so
nothing in the accounting file will correspond to the analysis commands.
When you have matched a command with a subprocess, record the "CPU time"
figure for that command.
When you have done this for each command in a log_file, go back through the
log_file and, for each command, subtract the "Elapsed CPU Time" figure above
the command from the one below it; this difference is the CPU time used by
the parent process.  Add it to the subprocess CPU time from the accounting
file to get the total CPU time for the command.
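
The arithmetic above can be sketched as follows; the numbers, and the idea
of reducing the log and accounting entries to plain seconds, are purely
illustrative rather than the actual file formats:

```python
# Illustrative bookkeeping only: the elapsed-CPU readings and the
# subprocess CPU time below are made-up numbers, not real log or
# accounting-file output.

def total_cpu_time(elapsed_before, elapsed_after, subprocess_cpu):
    """Total CPU time for one command, in seconds.

    elapsed_before / elapsed_after: the "Elapsed CPU Time" figures
    printed in the log file above and below the command.
    subprocess_cpu: the "CPU time" recorded for the matching
    subprocess in the accounting file.
    """
    # CPU charged to the parent process while the command ran ...
    parent_cpu = elapsed_after - elapsed_before
    # ... plus the CPU charged to the spawned subprocess.
    return parent_cpu + subprocess_cpu

# e.g. a "build" command: the parent used 12.0 - 10.5 = 1.5 s and the
# spawned subprocess used 42.3 s, for about 43.8 s in total.
print(total_cpu_time(10.5, 12.0, 42.3))
```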


RUNNING THE CORRECT/STATIC-ERROR TESTS

Under the "tools" directory of this tape is the command file "val" which
may be used to run all or a subset of either the correct or static error
tests.  Do a "type" on "val" to get a description of the expected inputs
and options.  The file "sample_val" provides an example of how to use "val".

Also under the "tools" directory is the command file "review".  This file
can be used to review a validation run on the correct tests by producing
two lists: the tests that passed and the tests that failed.  "review"
works by searching the output logfiles for strings that indicate a
failure (for example, an error message issued by the compiler, or an
assertion message containing "FAIL").
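
The same kind of check can be reproduced by hand with the DCL SEARCH
command; the failure strings shown here are examples only, and the exact
list used by "review" may differ:

     $ search *.log "ERROR","FAIL"

By default, SEARCH reports any line containing at least one of the listed
strings.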

To determine whether a static error test passes, you must examine the output
for error messages and verify that every error documented in the
corresponding VHDL source for the test is detected.
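
This manual comparison can be partly mechanized.  The sketch below assumes
the documented errors and the analyzer's messages have each been reduced
to a list of source line numbers; that reduction step is an assumption,
since this suite does not prescribe a machine-readable format for either:

```python
# Hypothetical helper: neither the test sources nor the analyzer output
# are guaranteed to reduce to plain line numbers like this.

def check_static_error_test(documented_lines, reported_lines):
    """Compare documented error lines against reported error lines.

    Returns (missed, spurious): documented errors the analyzer failed
    to flag, and reported messages with no matching documented error.
    """
    documented = set(documented_lines)
    reported = set(reported_lines)
    missed = sorted(documented - reported)    # test fails if non-empty
    spurious = sorted(reported - documented)  # worth a manual look
    return missed, spurious

missed, spurious = check_static_error_test([12, 30, 47], [12, 47, 60])
print(missed, spurious)  # [30] [60]
```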

