

Quick startup guide

To start benchmarking, create two new directories on the FTP server: the first one for the benchmark configuration files and the second one for the benchmarking results. Then follow the instructions below.
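
For example, assuming the FTP server from the example below (admin@2.2.2.2) also accepts SSH connections (any other way of creating the directories works equally well), the two directories could be created as follows:

    # Create a directory for the configuration files and a directory for the results on the FTP server.
    ssh admin@2.2.2.2 'mkdir -p configs results'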

  1. Prepare the configuration files in accordance with src/configs/benchmark_configuration_file_template.xml, src/configs/accuracy_checker_configuration_file_template.xml and src/configs/remote_configuration_file_template.xml. Please use the GUI application (src/config_maker).

  2. Copy the benchmark and the accuracy checker configuration files to the corresponding directory on the FTP server (see the example after this list).

  3. Execute the src/remote_control/remote_start.py script. Please follow src/remote_control/README.md.

  4. Wait for the benchmark to complete.

  5. Wait for the accuracy checker to complete.

  6. Copy the benchmarking and accuracy checker results from the FTP server to the local machine for further analysis.
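
A minimal sketch for step 2, assuming the FTP server allows SCP and reusing the server address and configuration directory from the startup example below:

    # Upload the prepared benchmark and accuracy checker configuration files
    # to the configuration directory on the FTP server.
    scp bench_config.xml ac_config.xml admin@2.2.2.2:configs/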

Startup example

  1. Fill out the configuration file for the benchmarking script. It describes the tests to be performed; you can find the template in the src/configs/benchmark_configuration_file_template.xml file. Fill out the configuration file and save it as configs/bench_config.xml on the FTP server. Please use the developed GUI application (src/config_maker).

    <Tests>
      <Test>
        <Model>
            <Task>Classification</Task>
            <Name>densenet-121</Name>
            <Precision>FP32</Precision>
            <SourceFramework>Caffe</SourceFramework>
            <ModelPath>/mnt/models/public/densenet-121/FP32/densenet-121.xml</ModelPath>
            <WeightPath>/mnt/models/public/densenet-121/FP32/densenet-121.bin</WeightPath>
        </Model>
        <Dataset>
            <Name>ImageNet</Name>
            <Path>/tmp/data/</Path>
        </Dataset>
        <FrameworkIndependent>
            <InferenceFramework>OpenVINO_DLDT</InferenceFramework>
            <BatchSize>2</BatchSize>
            <Device>CPU</Device>
            <IterationCount>10</IterationCount>
            <TestTimeLimit>1000</TestTimeLimit>
        </FrameworkIndependent>
        <FrameworkDependent>
             <Mode></Mode>
             <Extension></Extension>
             <AsyncRequestCount></AsyncRequestCount>
             <ThreadCount></ThreadCount>
             <StreamCount></StreamCount>
             <!-- The following options may be missing -->
             <Frontend></Frontend>
             <InputShape></InputShape>
             <Layout></Layout>
             <Mean></Mean>
             <InputScale></InputScale>
             <ChangePreprocessOptions></ChangePreprocessOptions>
        </FrameworkDependent>
      </Test>
    </Tests>
  2. Fill out the configuration file for the accuracy checker script. It describes the tests to be performed; you can find the template in the src/configs/accuracy_checker_configuration_file_template.xml file. Fill out the configuration file and save it as configs/ac_config.xml on the FTP server. Please use the developed GUI application (src/config_maker).

    <Tests>
      <Test>
         <Model>
             <Task>classification</Task>
             <Name>densenet-121</Name>
             <Precision>FP32</Precision>
             <SourceFramework>Caffe</SourceFramework>
             <Directory>/mnt/models/public/densenet-121/FP32</Directory>
         </Model>
         <Parameters>
             <InferenceFramework>OpenVINO DLDT</InferenceFramework>
             <Device>CPU</Device>
             <Config>/opt/intel/open_model_zoo/tools/accuracy_checker/configs/densenet-121.yml</Config>
         </Parameters>
     </Test>
    </Tests>
  3. Fill out the configuration file for the remote start script; you can find the template in the src/configs/remote_configuration_file_template.xml file. Fill it out and save it as /tmp/dl-benchmark/src/remote_start/remote_config.xml. Please use the developed GUI application (src/config_maker).

    <Computers>
     <Computer>
       <IP>4.4.4.4</IP>
       <Login>user</Login>
       <Password>user</Password>
       <OS>Linux</OS>
       <FTPClientPath>/tmp/dl-benchmark/src/remote_start/ftp_client.py</FTPClientPath>
       <Benchmark>
         <Config>configs/bench_config.xml</Config>
         <Executor>docker_container</Executor>
         <LogFile>/tmp/dl-benchmark/src/remote_start/bench_log.txt</LogFile>
         <ResultFile>/tmp/dl-benchmark/src/remote_start/bench_result.csv</ResultFile>
       </Benchmark>
       <AccuracyChecker>
         <Config>configs/ac_config.xml</Config>
         <Executor>docker_container</Executor>
         <DatasetPath></DatasetPath>
         <DefinitionPath>/opt/intel/open_model_zoo/tools/accuracy_checker/definitions.yml</DefinitionPath>
         <LogFile>/tmp/dl-benchmark/src/remote_start/ac_log.txt</LogFile>
         <ResultFile>/tmp/dl-benchmark/src/remote_start/ac_result.csv</ResultFile>
       </AccuracyChecker>
     </Computer>
    </Computers>
  4. Execute the remote start script using the following command:

    python3 remote_start.py \
    -c /tmp/dl-benchmark/src/remote_start/remote_config.xml \
    -s 2.2.2.2 -l admin -p admin -br bench_all_results.csv -acr ac_all_results.csv \
    --ftp_dir results
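    # A hedged reading of the arguments, inferred from this example
    # (see the remote start README for the authoritative description):
    #   -c         path to the remote configuration file prepared in the previous step
    #   -s, -l, -p FTP server address, login and password
    #   -br, -acr  file names for the combined benchmark and accuracy checker result tables
    #   --ftp_dir  directory on the FTP server where the combined result tables are placed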

    For details, please see the README file.

  5. Wait for the benchmark and the accuracy checker to complete. After completion, the results directory will contain the combined benchmarking results table bench_all_results.csv and the combined accuracy checker results table ac_all_results.csv.

  6. Copy the results from the FTP server to the local machine.

    scp admin@2.2.2.2:/results/bench_all_results.csv /tmp/
    scp admin@2.2.2.2:/results/ac_all_results.csv /tmp/
  7. Convert the CSV results to HTML or XLSX using the following commands:

    cd /tmp/dl-benchmark/src/csv2html
    python3 converter.py -k benchmark -t /tmp/bench_all_results.csv -r /tmp/bench_formatted_results.html
    python3 converter.py -k accuracy_checker -t /tmp/ac_all_results.csv -r /tmp/ac_formatted_results.html
    cd /tmp/dl-benchmark/src/csv2xlsx
    python3 converter.py -k benchmark -t /tmp/bench_all_results.csv -r /tmp/bench_formatted_results.xlsx
    python3 converter.py -k accuracy_checker -t /tmp/ac_all_results.csv -r /tmp/ac_formatted_results.xlsx