DeepPlace is an end-to-end learning approach to the placement problem that works in two stages. A deep reinforcement learning (DRL) agent places the macros sequentially, after which a gradient-based optimization placer arranges millions of standard cells. All experiments use PPO implemented in PyTorch, and the GPU version of DREAMPlace is adopted as the gradient-based optimization placer for the standard cells.
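The sketch below illustrates how the two stages hand off to each other: the agent fixes macro locations one step at a time, then the gradient-based placer optimizes the standard cells around them. It is a minimal sketch only; the names env, agent.act, and run_dreamplace are hypothetical placeholders and do not correspond to this repository's actual modules.

# Minimal sketch of the two-stage flow (hypothetical names, not this repo's API).
def place_chip(netlist, agent, env, dreamplace_cfg):
    # Stage 1: the DRL agent places macros sequentially on the chip canvas.
    obs = env.reset(netlist)                 # canvas + netlist features
    done = False
    while not done:
        action = agent.act(obs)              # grid position for the next macro
        obs, reward, done, info = env.step(action)
    macro_positions = env.macro_positions()  # macros are now fixed

    # Stage 2: a gradient-based placer (e.g. DREAMPlace) arranges the standard
    # cells around the fixed macros, optimizing wirelength and density.
    result = run_dreamplace(dreamplace_cfg, fixed_macros=macro_positions)
    return macro_positions, result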
To install the requirements, follow the steps below:
# PyTorch
conda install pytorch torchvision -c pytorch
# Baselines for Atari preprocessing
git clone https://github.com/openai/baselines.git
cd baselines
pip install -e .
# Other requirements
pip install -r requirements.txt
# DREAMPlace installation
git clone --recursive https://github.com/limbo018/DREAMPlace.git
cd DREAMPlace
mkdir build
cd build
cmake .. -DCMAKE_INSTALL_PREFIX=your_install_path -DPYTHON_EXECUTABLE=$(which python)
make
make install
# Get benchmarks (run from the DREAMPlace root directory)
cd ..
python benchmarks/ispd2005_2015.py
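# Optional sanity check of the DREAMPlace build (illustrative, not required by
# DeepPlace): run the standalone placer on one of the downloaded ISPD2005
# benchmarks. your_install_path is the prefix passed to cmake above.
cd your_install_path
python dreamplace/Placer.py test/ispd2005/adaptec1.json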
# DGL installation
conda install -c dglteam dgl-cuda10.2
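After installation, a quick import check (a minimal sketch; run it inside the conda environment set up above) confirms that PyTorch and DGL are visible and reports CUDA availability:

# Minimal environment check: verify that PyTorch and DGL import correctly.
import torch
import dgl

print("torch", torch.__version__, "CUDA available:", torch.cuda.is_available())
print("dgl", dgl.__version__)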
To train DeepPlace on the macro placement task:
python main.py --task "place" --algo ppo --use-gae --lr 2.5e-4 --clip-param 0.1 --value-loss-coef 0.5 --num-processes 1 --num-steps 2840 --num-mini-batch 4 --log-interval 1 --use-linear-lr-decay --entropy-coef 0.01
To train on the joint macro and standard-cell placement task:
python EDA-AI/main.py --task "fullplace" --algo ppo --use-gae --lr 2.5e-4 --clip-param 0.1 --value-loss-coef 0.5 --num-processes 1 --num-steps 2840 --num-mini-batch 4 --log-interval 1 --use-linear-lr-decay --entropy-coef 0.01
To evaluate a trained model on the macro placement task:
python validation.py --task "place" --num-processes 1 --num-mini-batch 1 --num-steps 710 --lr 2.5e-4 --clip-param 0.1 --value-loss-coef 0.5 --entropy-coef 0.01