Getting started

1. Installation

  1. Clone the Repository

git clone https://github.com/feelpp/benchmarking.git
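
The remaining installation commands are assumed to be run from the root of the cloned repository:

cd benchmarking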
  2. Use a Python virtual environment [Optional]

python3 -m venv .venv
source .venv/bin/activate
  3. Build the project

pip3 wheel --no-deps --wheel-dir dist .
  4. Install requirements

This will install necessary dependencies as well as the built project from the previous step.

python3 -m pip install -r requirements.txt
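
Once the requirements are installed, the framework's command line tools should be available in the active environment. A quick way to verify the installation is to print the help of the main command:

execute-benchmark --help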

2. Quickstart

The framework includes a sample C++/MPI application that can be used to get familiar with the framework’s core concepts. It can be found under tests/data/parallelSum.cpp.

This Feel++ Benchmarking "Hello World" application computes the sum of an array distributed across multiple MPI processes. Each process computes a partial sum, and the partial sums are then combined to obtain the total sum.

Additionally, the app measures the time taken to compute the partial sum and saves it in a scalability.json file.
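
For orientation, the sketch below shows what such a program could look like. It is not the shipped source (see tests/data/parallelSum.cpp for that); the array size, timing field name and JSON layout are illustrative assumptions.

// Illustrative parallel-sum sketch (the shipped source is tests/data/parallelSum.cpp).
#include <mpi.h>
#include <chrono>
#include <fstream>
#include <iostream>
#include <vector>

int main( int argc, char** argv )
{
    MPI_Init( &argc, &argv );

    int rank = 0, nProcs = 1;
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &nProcs );

    // Each process owns one chunk of the (conceptually global) array.
    std::vector<double> localChunk( 1000000, 1.0 );

    // Time the local partial sum.
    auto start = std::chrono::steady_clock::now();
    double partialSum = 0.0;
    for ( double value : localChunk )
        partialSum += value;
    double elapsed = std::chrono::duration<double>( std::chrono::steady_clock::now() - start ).count();

    // Combine the partial sums on rank 0 to obtain the total sum.
    double totalSum = 0.0;
    MPI_Reduce( &partialSum, &totalSum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD );

    if ( rank == 0 )
    {
        std::cout << "total sum = " << totalSum << " on " << nProcs << " processes\n";
        // Save the timing; the real application's field names and layout may differ.
        std::ofstream out( "scalability.json" );
        out << "{ \"partialSum_elapsed\": " << elapsed << " }\n";
    }

    MPI_Finalize();
    return 0;
}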

The executable is already provided as tests/data/parallelSum. You can modify it and recompile it for a specific configuration as needed.

mpic++ -std=c++17 -o tests/data/parallelSum tests/data/parallelSum.cpp

Depending on the system you are running the framework on, the configuration files might require some changes.

Finally, to benchmark the test application, generate the reports and plot the figures, run

execute-benchmark --machine-config config/machines/local.json \
                    --benchmark-config config/test_parallelSum/parallelSum.json \
                    --plots-config config/test_parallelSum/plots.json \
                    --website

The --website option will start an http-server on localhost so that the website can be viewed; check the console for more information.

3. Executing a benchmark

To execute a benchmark, use the execute-benchmark command once all configuration files have been set up (see the Configuration Reference).

The script accepts the following options (an example invocation is shown after the list):

  • --machine-config (-mc) : The path to the machine configuration JSON file

  • --benchmark-config (-bc) : The path to the benchmark configuration JSON file

  • --plots-config (-pc) : The path to the plots configuration JSON file. If not provided, the plots section is assumed to be included in the benchmark configuration file; if it is not found there either, no plots will be generated.

  • --dir (-d) : [Optional] Directory path where benchmark configuration files can be found. If provided, the application will consider all benchmark configuration files inside the provided directory.

  • --exclude (-e) : [Optional] To be used in combination with --dir; the listed files will not be launched. Provide only the basenames of the files to exclude.

  • --move-results (-mv) : [Optional] Directory to move the resulting files to. If not provided, result files will be located under the directory specified by the machine configuration.

  • --list-files (-lf) : [Optional] List all benchmark configuration files found. If this option is provided, the application will not run. Use it for validation.

  • --verbose (-v) : [Optional] Select ReFrame's verbosity level by specifying multiple v's (e.g. -vvv).

  • --website (-w) : [Optional] Render reports, compile them, create the website and start an http server.

  • --help (-h) : Display the help message and quit the program.
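
For example, assuming a directory config/benchmarks containing several benchmark configuration files and a file old_case.json that should be skipped (both paths are illustrative), all remaining benchmarks can be launched with:

execute-benchmark --machine-config config/machines/local.json \
                  --dir config/benchmarks \
                  --exclude old_case.json

Adding --list-files to the same command only lists the configuration files that would be picked up, without running anything, which is a convenient way to validate the --dir and --exclude combination.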

When a benchmark is done, a website_config.json file will be created (or updated) with the current file paths of the reports and plots generated by the framework. If the --website flag is active, the render-benchmarks command will be launched with this file as its argument.

4. Rendering reports

To render reports, a website configuration file is needed. An example is provided under src/benchmarking/reports/config/config.json. This file describes how the website views should be structured and specifies the hierarchy of the benchmarks.

A file of the same type, called website_config.json, is generated after a benchmark is launched; it can be found at the root of the reports directory specified by the reports_base_dir field of the machine configuration file (see xref:tutorial:configfiles/machine.adoc).

Once this file is located, users can run the render-benchmarks command to render existing reports.

The script takes the following arguments (an example invocation is shown after the list):

  • config-file : The path of the website configuration file.

  • remote-download-dir : [Optional] Path of the directory to download the reports to. Only relevant if the configuration file contains remote locations (only Girder is supported at the moment).

  • modules-path : [Optional] Path to the Antora module where the reports are rendered. Defaults to docs/modules/ROOT/pages. Any missing directories will be created recursively under the provided path.
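
For example, a call to render previously generated reports into the default Antora module could look as follows; the location of website_config.json is illustrative, and the arguments are written here in the same double-dash style as execute-benchmark (the exact spelling may differ):

render-benchmarks --config-file reports/website_config.json \
                  --modules-path docs/modules/ROOT/pages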