OpenLane provides `run_designs.py`, a script that can perform multiple runs in parallel using different configurations. A run consists of a set of designs and a configuration file that contains the configuration values. This is useful for exploring a design's implementation under different configurations to figure out the best one(s). For examples, check the Usage section.
It can also be used to test the flow by running it against several designs using their best configurations. For example, the following launches runs on four designs (spm, xtea, des, and aes256) using their default configuration files (`config.tcl`):

```bash
python3 run_designs.py --designs spm xtea des aes256 --tag test --threads 3
```
You can view the results of running the script against some designs against any of the 5 sky130 standard cell libraries through these sheets.

Note: `flow_failed` under `flow_status` means that the run failed.
To replicate these sheets, run the following command inside the Docker container after setting the proper standard cell library in `../configuration/general.tcl`:

```bash
python3 run_designs.py --defaultTestSet --htmlExtract
```
You can control the run by adding more of the flags described in the Command line arguments section below. Check `columns_defintions.md` for more details on the reported configuration parameters.
The script can be used in two ways:

- Running one or more designs:

  ```bash
  python3 run_designs.py --designs spm xtea PPU APU
  ```

  You can run the default test set, consisting of all designs under `./designs`, by running the following command along with any of the flags:

  ```bash
  python3 run_designs.py --defaultTestSet
  ```
- An exploration run that generates configuration files for all possible combinations of the parameters in the passed regression file and runs them on the provided designs:

  ```bash
  python3 run_designs.py --designs spm xtea --regression ./scripts/config/regression.config
  ```
The parameters must be provided in the file passed to `--regression`. Any file can be used; the one above is just an example.

Basic Regression Script:

Parameters with multiple values inside the brackets form the combinations. In the example below, all combinations of GLB_RT_ADJUSTMENT, FP_CORE_UTIL, and SYNTH_STRATEGY will be tried:

```
GLB_RT_ADJUSTMENT=(0.1,0.15)
FP_CORE_UTIL=(40,50)
PL_TARGET_DENSITY=(0.4)
SYNTH_STRATEGY=(1,3)
FP_PDN_VPITCH=(153.6)
FP_PDN_HPITCH=(153.18)
FP_ASPECT_RATIO=(1)
SYNTH_MAX_FANOUT=(5)
```
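The combination expansion can be pictured as a Cartesian product over the bracketed value lists. The following is a minimal sketch (not `run_designs.py`'s actual implementation) of how the example file above expands into individual configurations:

```python
# Sketch: expand a regression file's multi-valued parameters into all
# configuration combinations via a Cartesian product.
from itertools import product

# Values taken from the example regression file above.
params = {
    "GLB_RT_ADJUSTMENT": [0.1, 0.15],
    "FP_CORE_UTIL": [40, 50],
    "PL_TARGET_DENSITY": [0.4],
    "SYNTH_STRATEGY": [1, 3],
    "FP_PDN_VPITCH": [153.6],
    "FP_PDN_HPITCH": [153.18],
    "FP_ASPECT_RATIO": [1],
    "SYNTH_MAX_FANOUT": [5],
}

names = list(params)
# One dict per generated configuration file.
combos = [dict(zip(names, values)) for values in product(*params.values())]

print(len(combos))  # 2 * 2 * 1 * 2 * 1 * 1 * 1 * 1 = 8
```

With two values each for GLB_RT_ADJUSTMENT, FP_CORE_UTIL, and SYNTH_STRATEGY, the example would yield 8 generated configuration files per design.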
- Complex Expressions:

In addition, `extra` is appended to every generated configuration file, so it can be used to add configurations specific to the regression run. The file may also contain non-white-space-separated expressions of one or more configuration variables; alternatively, these can be specified in the `extra` section:

```
FP_CORE_UTIL=(40,50)
PL_TARGET_DENSITY=(FP_CORE_UTIL*0.01-0.1,0.4)
extra="
set ::env(SYNTH_MAX_FANOUT) { $::env(FP_ASPECT_RATIO) * 5 }
"
```
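To illustrate how such an expression depends on another parameter, here is a minimal sketch of evaluating the first value of `PL_TARGET_DENSITY` for one combination. The variable handling shown is an assumption for illustration, not the script's actual parser:

```python
# A combination in which FP_CORE_UTIL has been assigned the value 40.
config = {"FP_CORE_UTIL": 40}

# PL_TARGET_DENSITY=(FP_CORE_UTIL*0.01-0.1,0.4): the first value is an
# expression of FP_CORE_UTIL; the second is a plain constant.
expr = "FP_CORE_UTIL*0.01-0.1"
value = eval(expr, {}, config)  # illustrative only; the real parser may differ

print(round(value, 3))  # 0.3
```

With FP_CORE_UTIL set to 50 instead, the same expression would evaluate to 0.4.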
- SCL-specific section

You can use the `std_cell_library` section to specify information that you would like sourced before the SCL-specific information:

```
FP_CORE_UTIL=(40,50)
PL_TARGET_DENSITY=(FP_CORE_UTIL*0.01-0.1,0.4)
extra="
set ::env(SYNTH_MAX_FANOUT) { $::env(FP_ASPECT_RATIO) * 5 }
"
std_cell_library="
set ::env(STD_CELL_LIBRARY) sky130_fd_sc_hd
set ::env(SYNTH_STRATEGY) 1
"
```

In the example above, SYNTH_STRATEGY and STD_CELL_LIBRARY are set before sourcing the SCL-specific information; thus, if SYNTH_STRATEGY is already specified under the configurations, that existing value will override the value specified here.

This section can also be used to control the PDK and SCL used: since it is set before sourcing the SCL-specific information, it will override the SCL set in `general.tcl`, allowing more control over different standard cell libraries for the same design.

Note that any configuration variable used in an expression must be assigned a value or a range of values earlier in the file than its use.
- Important Note: If you are going to launch two or more separate regression runs that include the same design(s), make sure to set different tags for them using the `--tag` option. Also, keep memory consumption in mind when running multiple threads; running out of memory can lead to invalid pointer accesses.
- In addition to the files produced inside `designs/<design>/runs/config_<tag>_<timestamp>` for each run on a design, three files are produced:

  - `regression_results/<tag>_<timestamp>/<tag>_<timestamp>.log`: A log file that records the start and stop times of each run.
  - `regression_results/<tag>_<timestamp>/<tag>_<timestamp>.csv`: A report file that provides a summary of each run. The summary contains some metrics and the configuration of that run.
  - `regression_results/<tag>_<timestamp>/<tag>_<timestamp>_best.csv`: A report file that selects the best configuration per design based on the number of violations.
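As an illustration of best-configuration selection, the sketch below picks, for each design, the run with the fewest violations. The column names (`design`, `config`, `violations`) and data are illustrative assumptions, not the report's actual headers:

```python
# Sketch: select the best run per design (fewest violations) from a
# CSV summary, mimicking what a "<tag>_<timestamp>_best.csv"-style
# report might contain. Column names are illustrative assumptions.
import csv
import io

report = """design,config,violations
spm,config_0,3
spm,config_1,0
xtea,config_0,2
xtea,config_1,5
"""

best = {}
for row in csv.DictReader(io.StringIO(report)):
    design = row["design"]
    # Keep the row with the lowest violation count for each design.
    if design not in best or int(row["violations"]) < int(best[design]["violations"]):
        best[design] = row

print(sorted((d, r["config"]) for d, r in best.items()))
# [('spm', 'config_1'), ('xtea', 'config_0')]
```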
- If the `--htmlExtract` flag is enabled, the following files will also be generated:

  - `regression_results/<tag>_<timestamp>/<tag>_<timestamp>.html`: An HTML summary of the report file, containing the most important metrics and the configuration of each run.
  - `regression_results/<tag>_<timestamp>/<tag>_<timestamp>_best.html`: An HTML summary of the best-configuration report, containing the most important metrics and the configuration of each selected run.
- If a file is provided to the `--benchmark` flag, the following files will also be generated:

  - `regression_results/<tag>_<timestamp>/<tag>_<timestamp>_design_test_report.csv`: An incrementally generated list of all designs in this run compared to the benchmark results, indicating whether each PASSED or FAILED the regression test.
  - `regression_results/<tag>_<timestamp>/<tag>_<timestamp>_benchmark_written_report.rpt`: A detailed report pointing out the differences between this run of the test set and the benchmark results, divided into three categories: Critical, Note-worthy, and Configurations.
  - `regression_results/<tag>_<timestamp>/<tag>_<timestamp>_benchmark_final_report.xlsx`: A design-to-design comparison between the benchmark results and this run of the test set. It indicates whether each design passed or failed the test and highlights the differences.
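As a rough illustration of a benchmark PASS/FAIL verdict, the sketch below marks a design as failed if its flow did not complete or if its violation count regressed relative to the benchmark. The metric names and pass criteria here are assumptions for illustration only, not the script's actual comparison logic:

```python
# Sketch: a benchmark-style PASS/FAIL check per design.
# Metric names ("flow_status", "violations") and the pass criteria
# are illustrative assumptions.
benchmark = {"spm": {"flow_status": "flow_completed", "violations": 0}}
current = {"spm": {"flow_status": "flow_completed", "violations": 1}}

def verdict(design: str) -> str:
    cur, ref = current[design], benchmark[design]
    # A run whose flow did not complete fails outright.
    if cur["flow_status"] != "flow_completed":
        return "FAILED"
    # Otherwise, fail only if violations regressed versus the benchmark.
    return "PASSED" if cur["violations"] <= ref["violations"] else "FAILED"

print(verdict("spm"))  # violations regressed from 0 to 1 -> FAILED
```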
| Argument | Description |
| --- | --- |
| `--designs` \| `-d` `design1 design2 design3 ...` (Required) | Specifies the designs to run. Similar to the `-design` argument of `./flow.tcl`. |
| `--defaultTestSet` \| `-dts` (Boolean) | Ignores the designs flag and runs the default test set, consisting of all designs under the `../designs/` directory. Default: `False`. |
| `--excluded_designs` \| `-e` `design1 design2 design3 ...` (Optional) | Specifies the designs to exclude from the run. Useful with `--defaultTestSet`. |
| `--regression` \| `-r` `<file>` (Optional) | Creates configuration files using the parameters in `<file>` and runs them on each design. The generated configuration files are based on each design's default config file (`designs/<design>/config.tcl`) and the parameters passed in `<file>`. This is the regression/exploration file described above. If not specified, no regression file is used and the designs run against their default/specified configs. |
| `--tag` \| `-t` `<name>` (Optional) | Appends a tag to the log files in `regression_results/` and to the configuration files generated when passing `--regression`. Default: `regression`. |
| `--threads` \| `-th` `<number>` (Optional) | Number of threads. Default: `5`. |
| `--config` \| `-c` `<config>` (Optional) | The configuration file to use in non-regression mode. Default: `config`. |
| `--configuration_parameters` \| `-cp` `<file>` (Optional) | `<file>` contains the configuration parameters to print in the CSV report, given as comma-separated names. If not specified, the default configuration list is used. If followed by `"all"`, all configuration parameters are reported. |
| `--append_configurations` \| `-app` (Boolean) | Prints the added configuration parameters in addition to the defaults. Default: `False`. |
| `--clean` \| `-cl` (Boolean) | Deletes the `tmp` directory of all designs and moves `merged_unpadded` to the results directory. Default: `False`. |
| `--tar` \| `-tar` `<list>` (Optional) | Lists subdirectories or files under the run directory to be compressed into a `{design}_{tag}.tar.gz` under the runs directory. If the flag is followed by `"all"`, the whole directory is compressed. |
| `--delete` \| `-dl` (Boolean) | Deletes the run directory after completion and after reporting the results in the CSV. If used with `--tar`, the compressed files are not deleted because they are placed outside the run directory. Default: `False`. |
| `--htmlExtract` \| `-html` (Boolean) | Prints an HTML summary of the CSV report with the most important configurations and metrics. Default: `False`. |
| `--benchmark` \| `-b` `<file>` (Optional) | If provided, this run is tested against (compared to) the given benchmark `<file>`. Check the output section above for details of the reported results. |
| `--print_rem` \| `-p` `<number>` (Optional) | If a `<number>` greater than 0 is provided, a list of the remaining designs is printed to the terminal every `<number>` seconds. |
| `--disable_timestamp` \| `-dt` (Boolean) | If enabled, the output files and tags will not contain the appended timestamp. Default: `False`. |
| `--show_output` \| `-so` (Boolean) | If enabled, the full output log from running `./flow.tcl` is displayed in real time in the terminal. If more than one design or more than one configuration is running at the same time, this flag is ignored and no live output is displayed. Default: `False`. |