Configuration guide

The core of the Feel++ benchmarking framework is its configuration files. Users must provide the following configuration files:

  • A complete system description, based on ReFrame’s configuration files.

  • A machine specific configuration, defining HOW to execute benchmarks.

  • A benchmark (or application) specific configuration, defining WHAT should be executed.

  • A figure description, describing what to display on the final reports.

These machine and benchmark configuration files are equipped with a special placeholder syntax, allowing the files to be updated dynamically during test execution. Additionally, multiple environments can be specified, including Apptainer containers.

Single-line comments are supported in these JSON files. Comments must be on their own line.
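For example, annotating a hypothetical custom field:

// This comment is on its own line, as required
"my_custom_field": "my_value"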

1. System configuration

The system configuration files are strictly ReFrame dependent, and must be passed to the application via the --custom-rfm-config option. A single Python file should be provided per machine. Please follow ReFrame’s configuration file reference for precise settings.

One of the main objectives of these files is to describe the modules and commands necessary for your application to run as expected.

1.1. Built-in system configurations

Configuration files for the following EuroHPC machines are available in the feelpp.benchmarking framework:

  • DISCOVERER (cn)

  • Vega (cpu)

  • Karolina (cpu)

  • MeluXina (cpu)

  • Leonardo (cpu) [Coming soon…​]

  • LUMI (cpu) [Coming soon…​]

To use them, the machine field on the machine configuration JSON must correspond to the EuroHPC system name (in lowercase).

{
    "machine":"discoverer",
    "targets":["cn::"],
    ...
}

1.2. Custom system configurations

Users can provide their own ReFrame configuration file using the --custom-rfm-config option. In addition to the mandatory fields specified in ReFrame’s configuration file reference, users must provide the systems.partitions.processor.num_cpus field to indicate the maximum number of logical CPUs per node for each partition. This information will be used by feelpp.benchmarking to schedule test jobs accordingly. This can be skipped only if no strong scaling is planned for the given system.

There is no need to hardcode account’s access information in these files, as it can be specified on the feelpp.benchmarking machine configuration JSON.

If users plan on using a container platform on their machine, they must provide the systems.partitions.container_platforms object.

At the moment, only Apptainer and built-in platforms are supported.
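As an illustration, here is a minimal sketch of such a configuration file for a hypothetical machine my_system with a single cpu partition (all names, modules and values are placeholders; refer to ReFrame’s configuration file reference for the full schema):

site_configuration = {
    'systems': [
        {
            'name': 'my_system',
            'hostnames': ['my-system-login'],
            'modules_system': 'lmod',
            'partitions': [
                {
                    'name': 'cpu',
                    'scheduler': 'slurm',
                    'launcher': 'mpiexec',
                    'environs': ['default'],
                    # Maximum number of logical CPUs per node, required by
                    # feelpp.benchmarking unless no strong scaling is planned
                    'processor': {'num_cpus': 128},
                    # Only needed if containers will be used on this partition
                    'container_platforms': [{'type': 'Apptainer'}],
                    # Memory per compute node in GB (see the memory notes below)
                    'extras': {'memory_per_node': 256}
                }
            ]
        }
    ],
    'environments': [
        {'name': 'default', 'modules': ['OpenMPI']}
    ]
}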

Using SLURM as the scheduler is highly recommended when possible, as many of the tool’s features are not available with other schedulers.

1.2.1. Additional resources

Processor bindings and other launcher options should be specified as a resource under the desired partition, with the name field set to launcher_options. For example,

"resources": [
    {
        "name":"launcher_options",
        "options":["-bind-to","core"]
    }
]

If memory is a constraint, the systems.partitions.extras.memory_per_node field can be specified, indicating the memory per compute node of your system in GB. For example,

"extras":{
    "memory_per_node":500
}

Using a GPU partition

If your system has a GPU partition, the following resource must be added to the partition’s resources field:

"resources": [
    {
        "name": "_rfm_gpu",
        "options": ["--gres=gpu:{num_gpus_per_node}"],
    }
]

2. Magic strings

Benchmarking configuration files support a special placeholder syntax, using double curly braces {{placeholder}}. This syntax is especially useful for:

  • Refactoring configuration fields.

  • Replacing with values from other configuration files, such as the machine config.

  • Making use of code variables modified at runtime, via reserved keywords.

  • Fetching defined parameter values that change during runtime.

To get the value of a field in the same file, the field path must be separated by dots. For example,

"field_a":{
    "field_b":{
        "field_c": "my value"
    }
}

"example_placeholder": "{{field_a.field_b.field_c}}"

To replace a value coming from the machine configuration, simply prefix the placeholder path with machine.
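For example, a benchmark configuration can reuse the machine’s output_app_dir field (here --output-directory is a hypothetical application option):

"options": ["--output-directory={{machine.output_app_dir}}"]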

2.1. Reserved Keywords

The framework is equipped with the following reserved keywords for placeholders:

  • {{instance}} : Returns the hashcode of the current ReFrame test.

  • {{.value}}: The value keyword must be appended to a parameter name (e.g. {{parameters.my_param.value}}). It fetches the current value of a given runtime variable (such as a parameter). More information in the Parameters section.
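For example, assuming a parameter named mesh_size is defined, both keywords can be combined to give each test its own output location:

"options": [
    "--mesh-size={{parameters.mesh_size.value}}",
    "--output-dir={{machine.output_app_dir}}/{{instance}}"
]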

2.2. Nested placeholders

Nested placeholders are supported.

For example, let’s say you have a machine configuration containing

"platform":"builtin"

And a benchmark configuration:

"platforms":{
    "builtin":"my_builtin_value",
    "other":"my_other_value"
},
"nested_placeholder":"My platform dependent value is {{ platforms.{{machine.platform}} }}"

The nested_placeholder field will then take the value "My platform dependent value is my_builtin_value", because the machine config specifies that "platform" is "builtin". This will change if "platform" is set to "other".

2.3. Using environment variables

Environment variables can be specified in any configuration file by prepending a $. For example,

"my_home": "$HOME"

Shorthand representations such as ~ and relative paths starting with . are not supported. For relative file or folder paths, use $PWD instead.

3. Machine configuration

The machine configuration JSON contains all information related uniquely to the system where benchmarks will run. It is used to tell the application HOW and WHERE benchmarks will run, as well as to provide access to certain systems. The framework supports multiple containers and environments, such as Apptainer and Spack; this information should be specified here.

The configuration file schema is described below.

machine [str]

The name of the machine. If using built-in system ReFrame configuration for EuroHPC machines, this should correspond to the names described in the systems configuration reference page.

execution_policy [str] (Optional)

Describes how ReFrame will launch the tests. Should be either "serial" or "async". Defaults to "serial".

access [List[str]] (Optional)

List of scheduler directives to be passed in order to grant access to a given partition of the system. For example, passing ["--account=<YOUR-ACCOUNT>"] will add #SBATCH --account=<YOUR-ACCOUNT> to the submit script if using SLURM.

targets [str | List[str]] (Optional)

Specifies which partition, platform and prog_environment to run benchmarks on. The syntax is partition:platform:prog_environment. Default values can be used by leaving a component empty, e.g. partition::prog_environment. Defaults are "default" for partition and prog_environment, and "builtin" for platform. You can choose between providing this field OR the partitions, platform and prog_environments fields.
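For example, assuming hypothetical partition and environment names:

// Partition "cpu", builtin platform, "gcc" environment
"targets":["cpu:builtin:gcc"]

// Partition "gpu", default platform and programming environment
"targets":["gpu::"]

// Default partition and platform, "hpcx" environment
"targets":["::hpcx"]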

partitions [List[str]] (Optional)

Partitions where the tests can run. Tests will run on the cartesian product of partitions and prog_environments, where environments are specified for the current partition in the ReFrame configuration. Should not be provided if using the targets field.

prog_environments [List[str]] (Optional)

Environments where the tests can run. A test will run with a given programming environment if it is specified for the current partition in the ReFrame configuration. Should not be provided if using the targets field.

platform [str] (Optional)

Name of the platform to run the benchmark on. Accepted values are "apptainer" and "builtin". Defaults to "builtin". Should not be provided if using the targets field.

env_variables [Dict[str,str]] (Optional)

key:value pairs for machine related environment variables. These variables will be set after the init phase of ReFrame tests.

reframe_base_dir [str]

Directory where ReFrame will save its stage and output directories. If it does not exist, it will be created.

reports_base_dir [str]

Directory where the output reports should be exported to.

input_dataset_base_dir [str] (Optional)

Base directory where inputs can be found by ReFrame at the moment of job submission. Advanced configuration allows file transfers between data located under input_user_dir and this directory. This directory should ideally be located on the same disk where the jobs will run, to avoid unexpected execution times. Refer to Advanced Configuration for more information.

input_user_dir [str] (Optional)

Base directory where input data can be found before running tests. If provided, input_dataset_base_dir should be present too. It is used to copy input_file_dependencies from this directory to the input_dataset_base_dir. Refer to Advanced Configuration for more information.

output_app_dir [str]

The base directory where the benchmarked application should write its outputs to.

containers [Dict[str,Container]] (Optional)

Specifies container type and platform related information. Keys correspond to the platform name (e.g. "apptainer" or "docker").

-image_base_dir [str]

Base directory where container images can be found on the system.

-options [List[str]] (Optional)

Options to add to the container execution command.

-cachedir [str] (Optional)

Directory where pulled images will be cached.

-tmpdir [str] (Optional)

Directory where temporary image files will be written.

-executable [str] (Optional)

Base command to be used for pulling the image. Defaults to the name of the container platform. (e.g. when using Apptainer, one can provide singularity, and images will then be pulled with singularity pull …​)

Support for the Docker platform will soon be available.

Below is an example of a complete machine configuration file for a machine called "my_system".

{
    "machine": "my_system",
    "execution_policy": "async",
    "access":["--account=1234"],
    "targets":"production:builtin:hpcx",
    "env_variables":{ "MY_ENV_VAR":"ABCD" },
    "reframe_base_dir":"$PWD/build/reframe",
    "reports_base_dir":"$PWD/reports/",
    "input_dataset_base_dir":"$PWD/input/",
    "output_app_dir":"$PWD/output/",
    "containers":{
        "apptainer":{
            "image_base_dir":"/data/images/",
            "options":[ "--sharens", "--bind /opt/:/opt/" ],
            "cachedir":"/data/images/chache/",
            "tmpdir":"/data/images/tmp/",
            "executable":"singularity"
        }
    }
}

Let’s review step by step what the file defines.

"machine":"my_system"

indicates that the system where tests will run can be identified as "my_system". If no custom ReFrame configuration is provided, the framework will look for a configuration file named my_system.py.

"execution_policy":"async"

Tells ReFrame to run tests asynchronously on available resources.

"access":["--account=1234"]

Indicates that the scheduler should use "--account=1234" to access compute nodes for a given partition. For example, if using SLURM, #SBATCH --account=1234 will be added to the submission script.

"targets":"production:builtin:hpcx"

Tells ReFrame to run tests uniquely on the production partition with the builtin platform and the hpcx programming environment. These values should correspond to what’s contained in the ReFrame configuration file.

"env_variables":{ "MY_ENV_VAR":"ABCD" }

ReFrame will set environment variable MY_ENV_VAR to have the ABCD value before tests are launched.

"reframe_base_dir":"$PWD/build/reframe"

ReFrame will use the build/reframe/stage/ and build/reframe/output/ folders of the current working directory for staging tests and storing the benchmarked application’s standard output and errors.

"reports_base_dir":"$PWD/reports/"

Means that the ReFrame reports will be found under the reports/ folder of the current working directory.

"input_dataset_base_dir":"$PWD/input/"

Means that the framework should look for input somewhere under the input/ folder of the current working directory. The rest of the path is specified on the benchmark configuration.

"output_app_dir":"$PWD/output/"

Means that the benchmarked application should write its output files under the output/ folder of the current working directory. The rest of the path is specified on the benchmark configuration.

Concerning containers:

"apptainer"

The key name indicates that the application CAN be benchmarked using Apptainer, not necessarily that it will be. If the targets field specifies the apptainer platform, then this field is mandatory. Otherwise, if the targets field specifies the built-in platform, there is no need for this object.

"image_base_dir":"/data/images"

Indicates that the built Apptainer images can be found somewhere under the /data/images/ directory. The rest of the image’s filepath is specified on the benchmark configuration.

"options":"[ "--sharens", "--bind /opt/:/opt/" ]"

Tells ReFrame to add these options to the Apptainer execution command. For example, mpiexec -n 4 apptainer exec --sharens --bind /opt/:/opt/ …​. Only machine related options should be specified here, more options can be defined in the benchmark configuration.

"cachedir":"/data/images/chache/"

Indicates that the container should cache images under the /data/images/cache directory. For example, when using apptainer, this will overwrite the APPTAINER_CACHEDIR environment variable. Apptainer Build Environment

"tmpdir":"/data/images/tmp/"

Indicates that the container should use the /data/images/tmp/ directory for temporary files. For example, when using Apptainer, this will override the APPTAINER_TMPDIR environment variable (see Apptainer Build Environment).

"executable":"singularity"

Tells the framework to use the singularity pull …​ command for pulling images instead of apptainer pull …​ .

4. Benchmark configuration

Configuring a benchmark can be quite extensive, as this framework focuses on flexibility. The documentation is therefore divided into the following main sections.

The benchmark configuration file describes precisely how the benchmarking should be done, for example, specifying where the executable is located, the options to pass to the application, and how the tests will be parametrized.

The base of the configuration file is shown below.

{
    "executable": "",
    "use_case_name": "",
    "timeout":"",
    "env_variables":{},
    "options": [],
    "resources":{},
    "platforms":{},
    "additional_files":{},
    "scalability":{},
    "sanity":{},
    "parameters":{},
}

Users can add any field used for refactoring. For example, one can do the following.

"output_directory":"/data/outputs" // This is a custom field
"options":["--output {{output_directory}}"]

4.1. Fields on JSON root

executable [str]

Path to the application executable or command to execute.

use_case_name [str]

Custom name given to the use case. Serves as an ID of the use case; it must be unique across use cases.

timeout [str]

Job execution timeout, starting when the job is launched (and not pending). Format: days-hours:minutes:seconds. This field is notably important so that HPC resources are not wasted unnecessarily. For example, for a 10-minute timeout: 0-00:10:00

env_variables [Dict[str,str]]

key:value pairs for benchmark related environment variables. These variables will be set after the init phase of ReFrame tests.

options [List[str]]

List of the application’s options. Input arguments can be parameterized here. For example, [ "--number-of-elements={{parameters.elements.value}}", "--number-of-points={{parameters.points.value}}", "--verbose" ]

4.2. Resources

The resources field is used for specifying the computing resources that each test will use. Users can specify a combination of tasks, tasks_per_node, gpus_per_node, nodes, memory and exclusive_access. However, only certain combinations are supported, and at least one must be provided. The resource fields are meant to be parameterized, so that the application scaling can be analyzed, but this is completely optional.

At the moment, multithreading is not supported: the number of CPUs per task is set to 1.

tasks [int]

Total number of tasks to launch the test with.

tasks_per_node [int]

Number of tasks per node to use. Must be specified along with nodes OR tasks. This number cannot be greater than ReFrame’s systems.partitions.processor.num_cpus configuration value.

nodes [int]

Number of nodes to launch the tests on.

gpus_per_node [int] (Optional)

Number of GPUs per node to use. Defaults to None, in which case the test will not be launched on any GPU.

memory [int] (Optional)

Total memory used by the test, in GB. If this field is provided, the number of tasks per node and the number of nodes will be recalculated so that the application has enough memory to run.

If using custom ReFrame configuration files, users must ensure that the extras.memory_per_node field is present on the ReFrame configuration file.

exclusive_access [bool] (Optional)

If true, the scheduler will reserve entire nodes for the tests. Defaults to true.

Valid combinations are the following:

  • tasks and tasks_per_node

  • nodes and tasks_per_node

  • Only tasks

Other fields can be specified along these combinations as needed.

4.2.1. Examples

Non-parameterized resources field
  • Tasks and tasks per node

"resources":{
    "tasks": 256,
    "tasks_per_node":128,
    "exclussive_access":true
}

This configuration will run ALL tests on 2 nodes (reserved exclusively), using 128 tasks per node. If the systems.partitions.processor.num_cpus configuration field is less than 128, an error will be raised before submitting the test job.

  • Memory

Concerning memory, let’s suppose that our system has 256 GB of RAM per node, and that our application requires a total of 1000 GB of memory.

"resources":{
    "tasks": 256,
    "tasks_per_node":128,
    "exclussive_access":true,
    "memory":1000
}

This means that in order for the application to run, it needs at least ceil(1000/256) = 4 nodes. As we specified that we want to run on 256 tasks, and we need at least 4 nodes, the number of tasks per node cannot be greater than 256/4 = 64. The final number of tasks per node will be recomputed as the minimum between the requested number of tasks per node (128) and 64. In this case, all tests will run on 4 nodes, using 64 tasks per node.

Parameterized resources field

Suppose that the following parameters are defined:

"parameters":[
    {
        "name":"resources",
        "sequence":[
            { "tasks":128, "tasks_per_node":32 },
            { "tasks":128, "tasks_per_node":64 },
            { "tasks":128, "tasks_per_node":128 },
            { "tasks":256, "tasks_per_node":128 }
        ]
    }
]

We would need to define the resources field like this:

"resources":{
    "tasks":"{{parameters.resources.tasks.value}}",
    "tasks_per_node":"{{parameters.resources.tasks_per_node.value}}",
}

This configuration will execute one test for each of the combinations below:

  • 4 nodes, 32 tasks per node (total of 128 tasks)

  • 2 nodes, 64 tasks per node (total of 128 tasks)

  • 1 node, 128 tasks per node (total of 128 tasks)

  • 2 nodes, 128 tasks per node (total of 256 tasks)

4.3. Platforms

The platforms object lists all options and directories related to the benchmark execution for each supported platform. A platform present on this object does not imply that it will be benchmarked; rather, it lists all possible options. The actual platform where tests will run is defined by either the targets field or the platform field of the machine configuration.

input_dir [str]

Indicates the directory path where input files can be found, INSIDE the given platform. For the built-in platform, it corresponds to where input files can be found on the system.

append_app_options [List[str]] (Optional)

Describes the options to pass to the application. It is equivalent to the options field on the configuration root. However, it is used for having different application options depending on the platform.

options [List[str]] (Optional)

Describes the options to pass to the platform launcher. For example, "options":["--bind a:b"] will be interpreted, for the apptainer platform, as apptainer exec --bind a:b your_image.sif your_application.exe …​

image [Dict[str,str]] (Conditional)

Contains information related to the container image. For any platform other than built-in, the image field must be specified.

-filepath [str]

Filepath of the container image. If provided along with the url field, the image will be pulled and placed here before tests are executed; otherwise the framework will assume that the image already exists at the given filepath.

-url [str] (Optional)

URL to pull the image from. If this field is specified, feelpp.benchmarking will pull the image and place it under the filepath field. If this field is not provided, the framework will assume that the image exists under filepath.

The platforms field is optional; if not provided, the builtin platform will be considered. The syntax for the builtin platform is the following:

"platforms": {
    "builtin":{
        "input_dir":"",
        "append_app_options":[]
    }
}

The following shows an example of how to configure the Apptainer platform:

"platforms":{
    "apptainer":{
        "input_dir":"/input_data/",
        "options":["--bind /data/custom_data/:{{platforms.apptainer.input_dir}}"],
        "append_app_options":["--my_custom_option_for_apptainer"],
        "image":{
            "filepath":"/data/images/my_image.sif",
            "url":"oras://ghcr.io/your-image.sif"
        }
    }
}

In this case, input_dir represents the directory where input files will be found INSIDE the container. If there is no input data present in the container, you might need to bind a local input data directory to it, using options.

The options field contains a list of all the options to include on the container execution. It is equivalent to the machine’s containers.apptainer.options field. However, users should only include benchmark dependent options in this list.

Note that the placeholder syntax is used to tell the container to bind a directory to the one specified in input_dir.

The append_app_options field lists all the options to add to the application execution. It does the same as the options field in the root of the file, but can be used for platform-dependent options.

The image field indicates that the image should be pulled from oras://ghcr.io/your-image.sif and placed in /data/images/my_image.sif.

To summarize, feelpp.benchmarking will first execute (even before ReFrame is launched):

apptainer pull -F /data/images/my_image.sif oras://ghcr.io/your-image.sif

And then, ReFrame will submit a job executing (for local scheduler):

apptainer exec --bind /data/custom_data/:/input_data/ /data/images/my_image.sif your_application.exe --my_custom_option_for_apptainer

For the filepath field, it is very useful to make use of the {{machine.containers.apptainer.image_base_dir}} field from the machine configuration.

4.4. Scalability

Lists all the files where performance variables can be found.

directory [str]

Common directory where files containing performance variables can be found.

clean_directory [bool] (Optional)

If true, the contents of directory will be deleted. Defaults to false.

stages [List[Stage]]

Describes the files containing performance variables, and how to extract them.

-name [str]

Name to describe the stage. It is used as a prefix added to the performance variables found in the file. If no prefix is needed, the name can be "".

-filepath [str]

Filepath of the file containing performance variables, relative to the directory field.

-format [str]

Format of the stage file. Supported values are "csv" and "json".

-units [Dict[str,str]] (Optional)

Custom units for certain performance variables. key:value pairs correspond to performance-variable:unit. For example, "my-variable":"m/s" means that the variable "my-variable" has "m/s" as its unit. By default, all columns have "s" as unit. To change this default, pass the "*":"custom-unit" key:value pair; this associates "custom-unit" to ALL performance variables inside the file, except those with units explicitly specified inside this object.

-variables_path [str, List[str]]

Only valid if format is "json". Defines where, in the JSON hierarchy, performance variables will be found. Supports the use of one or multiple wildcards (*).

custom_variables [List[Dict[str,str]]] (Optional)

Contains a list of objects describing custom performance variables to create, based on extracted ones (from stages). An aggregation will be performed using the provided columns and a valid operation. For more information, see the Advanced Configuration.

-name [str]

The name to give to the custom performance variable.

-columns [List[str]]

List of columns to aggregate, accepts both variables existing in the performance files, as well as other custom variables.

-op [str]

The aggregation operation to apply to the performance columns to create the custom one. Valid operations are "sum","min","max","mean".

-unit [str]

The unit to assign to the created performance variable.
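As a sketch, assuming stages exposing timer variables named myTimers_execution.step1 and myTimers_execution.step2 (hypothetical names), a custom total could be defined as:

"custom_variables":[
    {
        "name":"total_execution_time",
        "columns":["myTimers_execution.step1","myTimers_execution.step2"],
        "op":"sum",
        "unit":"s"
    }
]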

Recursive creation of custom_variables is supported!

Deeply nested and complex JSON scalability files are supported, using multiple wildcard syntax!

4.4.1. Examples

Let’s assume our application exports the following files:

The {{instance}} keyword in the export path implies that each test exports these files in its own directory, using the test’s hashcode.

  • /data/outputs/{{instance}}/exports.csv

a,b,c
1,2,3
  • /data/outputs/{{instance}}/logs/timers.json

{
    "function1":{
        "constructor":1.0,
        "init":0.1
    },
    "function2":{
        "constructor":1.0,
        "init":0.1
    },
    "execution":{
        "step1":0.5,
        "step2":0.7,
        "step3":1.0
    }
}

An example of the scalability field extracting values from these files is shown and explained below.

"scalability": {
    "directory": "/data/outputs/{{instance}}/",
    "stages": [
        {
            "name":"myExports",
            "filepath": "export.csv",
            "format": "csv",
            "units":{ "*":"meters", "a":"kg" }
        },
        {
            "name":"myTimers",
            "filepath": "logs/timers.json",
            "format": "json",
            "variables_path":"*"
        }
    ]
}

The common directory path of these exports is /data/outputs/{{instance}}.

Let’s analyse the first stage:

{
    "name":"myExports",
    "filepath": "export.csv",
    "format": "csv",
    "units":{ "*":"meters", "a":"kg" }
}

The name myExports means that performance variables from this file will appear in the exported report (and be available for plotting) as myExports_a:1, myExports_b:2, myExports_c:3.

Concerning the units, "*":"meters" means that all of the variables in this CSV should have "meters" as unit. However, by specifying "a":"kg" we indicate that all columns should be "meters", except a, which has "kg" as its unit.

Let’s now consider the second stage:

{
    "name":"myTimers",
    "filepath": "logs/timers.json",
    "format": "json",
    "variables_path":"*"
}

Performance variables on this file will be prefixed by "myTimers_".

As the units field is not specified, all variables will have the default ('s') unit.

Having only * as variables_path means that all variables should be exported into the performance report. Variables will be exported as follows:

  • myTimers_function1.constructor : 1.0

  • myTimers_function1.init : 0.1

  • myTimers_function2.constructor : 1.0

  • myTimers_function2.init : 0.1

  • myTimers_execution.step1 : 0.5

  • myTimers_execution.step2 : 0.7

  • myTimers_execution.step3 : 1.0

Filtering with variables_path
  • "variables_path":"function1.*":

Exported performance variables:

  • myTimers_constructor : 1.0

  • myTimers_init : 0.1

When using wildcards, only the wildcard-matched part of the path is kept in the variable name.

  • "variables_path":"exectution.step1"

Exported performance variables:

  • myTimers_step1 : 0.5

If a full path is passed, the variable name corresponds to the key of the leaf element of the JSON.

variables_path can be a list.
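For example, using the timers.json file above:

"variables_path": ["function1.*", "execution.*"]

This extracts the variables matched by either path.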

4.5. Additional Files

This field allows the export of custom files, as well as use case descriptions and logs. It is optional.

description_filepath [str] (Optional)

Filepath of a file to be used as benchmark description. This file will be copied to the same directory as the performance report, named description.adoc, after all ReFrame tests are completed (successfully or not). This file will automatically appear on the website for the current report, treated as an Antora partial.

parameterized_descriptions_filepath [str] (Optional)

Parameterized filepath (using either {{instance}} or {{parameters.MY_PARAM.value}}) of a file generated by a test. This file will be copied right after a test finishes (only if successful). The copied file will be located in a partials directory, under the current exported report directory. It will be named after the test’s hashcode. These files will automatically appear on the website, inside the parameter table of the current report, and are treated as Antora partials.

custom_logs [List[str]] (Optional)

List of parameter-dependent filepaths of logs to be included in the website.

Currently, description and parameterized_description files must be in AsciiDoc format.

4.5.1. Examples

Let’s suppose that our benchmarked application produces an information.adoc file each time it runs, and that we have a use case description named description.adoc. Also, let’s assume that our app logs some information under logs/log.INFO and logs/log.WARNING each time it runs.

If we have set our application to write its outputs under /data/outputs/{{instance}}, then our additional_files field will look like this.

"additional_files":{
    "description_filepath":"/data/outputs/description.adoc",
    "parameterized_descriptions_filepath":"/data/outputs/{{instance}}/information.adoc",
    "custom_logs":[
        "/data/outputs/{{instance}}/logs/log.INFO",
        "/data/outputs/{{instance}}/logs/log.WARNING"
    ]
}

All these files will automatically appear on the website’s report page.

4.6. Sanity

The sanity field is used to validate the application execution.

The syntax is the following:

"sanity":{
    "success":[],
    "error":[]
}
  • The success field contains a list of patterns to look for in the standard output. If any of the patterns are not found, the test will fail.

  • The error field contains a list of patterns that must not appear in the standard output. If any of these patterns are found, the test will fail.

At the moment, only validating standard output is supported. It will soon be possible to specify custom log files.

4.6.1. Examples

"sanity": {
    "success": ["[SUCCESS]"],
    "error": ["[OOPSIE]","Error"]
}

This will check if "[SUCCESS]" is found in the application’s standard output. If not, the test will fail.

It will also check that neither "[OOPSIE]" nor "Error" appear in the standard output.

Regex patterns are supported.
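For example, a hypothetical pattern requiring a convergence message with any iteration count (note that backslashes must be escaped in JSON):

"sanity": {
    "success": ["Converged after \\d+ iterations"],
    "error": ["\\berror\\b"]
}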

4.7. Parameters

The parameters field lists all parameters to be used in the test. The cartesian product of the elements in this list determines the benchmarks to be executed.

Parameters are accessible across the whole configuration file by using the syntax {{parameters.my_parameter.value}}.

Each parameter is described by a name and a generator.

Valid generators are:

  • linspace:

{
    "name": "my_linspace_generator",
    "linspace":{
        "min":2,
        "max":10,
        "n_steps":5
    }
}

The example will yield [2,4,6,8,10]. Min and max are inclusive.

  • geomspace:

{
    "name": "my_geomspace_generator",
    "geomspace":{
        "min":1,
        "max":10,
        "n_steps":4
    }
}

The example will yield [2,16,128,1024]. Min and max are inclusive.

  • range:

{
    "name": "my_range_generator",
    "range":{
        "min":1,
        "max":5,
        "step":1
    }
}

The example will yield [1,2,3,4,5]. Min and max are inclusive.

  • geometric:

{
    "name": "my_geometric_generator",
    "geometric":{
        "start":1,
        "ratio":2,
        "n_steps":5
    }
}

The example will yield [1,2,4,8,16].

  • repeat:

{
    "name": "my_repeat_generator",
    "repeat":{
        "value":"a repeated value",
        "count":3
    }
}

The example will yield ["a repeated value", "a repeated value", "a repeated value"].

  • sequence:

{
    "name": "my_sequence_generator",
    "sequence":[ 1, 2, 3, 4]
}

Sequence is the simplest generator. It will yield exactly the given list. It accepts dictionaries as items, whose fields can then be accessed via the . separator.
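For example, a sequence of dictionaries (with hypothetical fields filename and size):

{
    "name": "mesh",
    "sequence":[
        { "filename":"coarse.msh", "size":100 },
        { "filename":"fine.msh", "size":1000 }
    ]
}

The current values can then be accessed as {{parameters.mesh.filename.value}} and {{parameters.mesh.size.value}}.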

  • zip and subparameters:

Parameters can contain subparameters, which can be accessed recursively via the . separator. Their objective is to have parameters that depend on each other, without producing a cartesian product. Additionally, parameters can be zipped together via the zip generator. The zip generator takes a list of parameters and produces a list of dictionaries. Each parameter inside the list can have any of the generators described above.

{
    "name": "my_zip_generator",
    "zip":[
        {
            "name":"param1",
            "sequence":[
                {"val1":1,"val2":2},
                {"val1":3,"val2":4},
                {"val1":5,"val2":6}
            ]
        },
        {
            "name":"param2",
            "repeat":{
                "value":"a repeated value",
                "count":3
            }
        }
    ]
}

This example will yield [{'param1': {'val1': 1, 'val2': 2}, 'param2': 'a repeated value'}, {'param1': {'val1': 3, 'val2': 4}, 'param2': 'a repeated value'}, {'param1': {'val1': 5, 'val2': 6}, 'param2': 'a repeated value'}]

Zipped parameters need to have the same length.

Parameter filtering is supported, visit the Advanced Configuration for more information.

More advanced features are available (see Advanced Configuration), such as:

  • Downloading remote data

  • Copying input file between disks

  • Pruning the parameter space

  • Specifying custom performance variables

5. Figures

In order to generate reports, the Feel++ benchmarking framework requires a figure description to specify what the website report page should contain.

These descriptions should be provided either in a dedicated JSON file containing only

{
    "plots":[]
}

Or by specifying the plots field on the benchmark configuration JSON file.

Each figure description should contain the following fields

{
    "title": "The figure title",
    "plot_types": [], //List of figure types
    "transformation": "", //Transformation type
    "variables":[], // List of variables to consider
    "names":[], //Respective labels for variables
    "yaxis":{},
    "xaxis":{},
    "color_axis":{}, //Default: performance variables
    "secondary_axis":{}
}

Figures will appear in the same order as they appear on the list.

Users can provide multiple plot_types in the same description field.

Only performance variables specified under the variables list will be considered. If the list is empty, ALL variables inside the ReFrame report will be taken into account.
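For example, here is a minimal figure description based on the parallel sum report used in the Transformations section below (the "performance" transformation and "scatter" plot type names are assumptions; valid values are listed in the corresponding sections):

{
    "title": "Computation time vs number of tasks",
    "plot_types": ["scatter"],
    "transformation": "performance",
    "variables": ["computation_time"],
    "names": ["computation"],
    "xaxis": { "parameter": "nb_tasks.tasks", "label": "Number of tasks" },
    "yaxis": { "label": "Time (s)" },
    "secondary_axis": { "parameter": "elements", "label": "Number of elements" }
}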

5.1. Axis

Each axis (with the exception of the yaxis) takes a parameter and a label field. The yaxis will always contain the performance values, so only its label key should be specified.

The parameter field of each axis should correspond to a single-dimension parameter specified on the benchmark configuration. In the case of subparameters, the syntax is parameter.subparameter.

By default, the color axis will contain the performance variables, but this can be customized.

5.2. Transformations

The ReFrame report will be used to create a Master DataFrame, which will contain all performance variables and their respective values, as well as all parameters and environments.

To explain how transformation and plot types work, we can consider the following example.

import json
report = json.loads("""
    { "session_info": { "cmdline": "/Users/cladellash/Documents/Repos/benchmarking/.venv/bin/reframe -C /Users/cladellash/Documents/Repos/benchmarking/.venv/lib/python3.8/site-packages/feelpp/benchmarking/reframe/config/machineConfigs/local.py -c /Users/cladellash/Documents/Repos/benchmarking/.venv/lib/python3.8/site-packages/feelpp/benchmarking/reframe/regression.py -S report_dir_path=/Users/cladellash/Documents/Repos/benchmarking/reports/parallelSum/parallel_sum/local/2024_12_02T14_46_28 --system=local --exec-policy=async --prefix=/Users/cladellash/Documents/Repos/benchmarking/build/reframe --report-file=/Users/cladellash/Documents/Repos/benchmarking/reports/parallelSum/parallel_sum/local/2024_12_02T14_46_28/reframe_report.json -J '#SBATCH --time=0-0:5:0' --perflogdir=/Users/cladellash/Documents/Repos/benchmarking/build/reframe/logs -v -r", "config_files": [ "<builtin>", "/Users/cladellash/Documents/Repos/benchmarking/.venv/lib/python3.8/site-packages/feelpp/benchmarking/reframe/config/machineConfigs/local.py" ], "data_version": "3.1", "hostname": "irma-dhcp-2.math.unistra.fr", "log_files": [ "/var/folders/pd/r8v9chs90wb1bj6pm0x147jr0000gp/T/rfm-qml5j4xz.log" ], "prefix_output": "/Users/cladellash/Documents/Repos/benchmarking/build/reframe/output", "prefix_stage": "/Users/cladellash/Documents/Repos/benchmarking/build/reframe/stage", "user": "cladellash", "version": "4.6.3", "workdir": "/Users/cladellash/Documents/Repos/benchmarking", "time_start": "2024-12-02T14:46:29+0100", "time_end": "2024-12-02T14:46:40+0100", "time_elapsed": 11.50057601928711, "num_cases": 12, "num_failures": 0 }, "runs": [ { "num_cases": 12, "num_failures": 0, "num_aborted": 0, "num_skipped": 0, "runid": 0, "testcases": [ { "build_stderr": null, "build_stdout": null, "dependencies_actual": [], "dependencies_conceptual": [], "description": "", "display_name": "RegressionTest %nb_tasks={'tasks': 4, 'exclusive_access': True} %elements=1000000000.0", "environment": "default", "fail_phase": null, "fail_reason": null, "filename": "/Users/cladellash/Documents/Repos/benchmarking/.venv/lib/python3.8/site-packages/feelpp/benchmarking/reframe/regression.py", "fixture": false, "hash": "aa223ed0", "jobid": "99178", "job_stderr": "rfm_job.err", "job_stdout": "rfm_job.out", "maintainers": [], "name": "RegressionTest %nb_tasks={'tasks': 4, 'exclusive_access': True} %elements=1000000000.0", "nodelist": [ "irma-dhcp-2.math.unistra.fr" ], "outputdir": "/Users/cladellash/Documents/Repos/benchmarking/build/reframe/output/local/default/default/RegressionTest_aa223ed0", "perfvars": [ { "name": "computation_time", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "s", "value": 5.32933 }, { "name": "communication_time", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "s", "value": 0.052584 }, { "name": "N", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 1000000000.0 }, { "name": "sum", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 1000000000.0 }, { "name": "num_process", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 4.0 } ], "prefix": "/Users/cladellash/Documents/Repos/benchmarking/.venv/lib/python3.8/site-packages/feelpp/benchmarking/reframe", "result": "success", "stagedir": "/Users/cladellash/Documents/Repos/benchmarking/build/reframe/stage/local/default/default/RegressionTest_aa223ed0", "scheduler": "local", "system": "local:default", "tags": [ "async" ], "time_compile": 0.006192207336425781, 
"time_performance": 0.008784055709838867, "time_run": 9.518624067306519, "time_sanity": 0.007528066635131836, "time_setup": 0.00566411018371582, "time_total": 9.696507215499878, "unique_name": "RegressionTest_11", "check_vars": { "valid_prog_environs": [ "default" ], "valid_systems": [ "local:default" ], "descr": "", "sourcepath": "", "sourcesdir": null, "prebuild_cmds": [], "postbuild_cmds": [], "executable": "/Users/cladellash/Documents/Repos/benchmarking//tests/data/parallelSum", "executable_opts": [ "1000000000.0", "/Users/cladellash/Documents/Repos/benchmarking/tests/data/outputs/parallelSum/aa223ed0" ], "prerun_cmds": [], "postrun_cmds": [], "keep_files": [], "readonly_files": [], "tags": [ "async" ], "maintainers": [], "strict_check": true, "num_tasks": 4, "num_tasks_per_node": 4, "num_gpus_per_node": null, "num_cpus_per_task": 1, "num_tasks_per_core": null, "num_tasks_per_socket": null, "use_multithreading": null, "max_pending_time": null, "exclusive_access": true, "local": false, "modules": [], "env_vars": {}, "variables": {}, "time_limit": null, "build_time_limit": null, "extra_resources": {}, "build_locally": true, "report_dir_path": "/Users/cladellash/Documents/Repos/benchmarking/reports/parallelSum/parallel_sum/local/2024_12_02T14_46_28", "use_case": "parallel_sum", "platform": "builtin" }, "check_params": { "nb_tasks": { "tasks": 4, "exclusive_access": true }, "elements": 1000000000.0 } }, { "build_stderr": null, "build_stdout": null, "dependencies_actual": [], "dependencies_conceptual": [], "description": "", "display_name": "RegressionTest %nb_tasks={'tasks': 4, 'exclusive_access': True} %elements=700000000.0", "environment": "default", "fail_phase": null, "fail_reason": null, "filename": "/Users/cladellash/Documents/Repos/benchmarking/.venv/lib/python3.8/site-packages/feelpp/benchmarking/reframe/regression.py", "fixture": false, "hash": "6a58f265", "jobid": "99179", "job_stderr": "rfm_job.err", "job_stdout": "rfm_job.out", "maintainers": [], "name": "RegressionTest %nb_tasks={'tasks': 4, 'exclusive_access': True} %elements=700000000.0", "nodelist": [ "irma-dhcp-2.math.unistra.fr" ], "outputdir": "/Users/cladellash/Documents/Repos/benchmarking/build/reframe/output/local/default/default/RegressionTest_6a58f265", "perfvars": [ { "name": "computation_time", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "s", "value": 4.56971 }, { "name": "communication_time", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "s", "value": 0.005627 }, { "name": "N", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 700000000.0 }, { "name": "sum", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 700000000.0 }, { "name": "num_process", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 4.0 } ], "prefix": "/Users/cladellash/Documents/Repos/benchmarking/.venv/lib/python3.8/site-packages/feelpp/benchmarking/reframe", "result": "success", "stagedir": "/Users/cladellash/Documents/Repos/benchmarking/build/reframe/stage/local/default/default/RegressionTest_6a58f265", "scheduler": "local", "system": "local:default", "tags": [ "async" ], "time_compile": 0.007662057876586914, "time_performance": 0.038092851638793945, "time_run": 8.129333019256592, "time_sanity": 0.03859901428222656, "time_setup": 0.009199142456054688, "time_total": 8.30247712135315, "unique_name": "RegressionTest_10", "check_vars": { "valid_prog_environs": [ "default" ], "valid_systems": [ "local:default" ], "descr": 
"", "sourcepath": "", "sourcesdir": null, "prebuild_cmds": [], "postbuild_cmds": [], "executable": "/Users/cladellash/Documents/Repos/benchmarking//tests/data/parallelSum", "executable_opts": [ "700000000.0", "/Users/cladellash/Documents/Repos/benchmarking/tests/data/outputs/parallelSum/6a58f265" ], "prerun_cmds": [], "postrun_cmds": [], "keep_files": [], "readonly_files": [], "tags": [ "async" ], "maintainers": [], "strict_check": true, "num_tasks": 4, "num_tasks_per_node": 4, "num_gpus_per_node": null, "num_cpus_per_task": 1, "num_tasks_per_core": null, "num_tasks_per_socket": null, "use_multithreading": null, "max_pending_time": null, "exclusive_access": true, "local": false, "modules": [], "env_vars": {}, "variables": {}, "time_limit": null, "build_time_limit": null, "extra_resources": {}, "build_locally": true, "report_dir_path": "/Users/cladellash/Documents/Repos/benchmarking/reports/parallelSum/parallel_sum/local/2024_12_02T14_46_28", "use_case": "parallel_sum", "platform": "builtin" }, "check_params": { "nb_tasks": { "tasks": 4, "exclusive_access": true }, "elements": 700000000.0 } }, { "build_stderr": null, "build_stdout": null, "dependencies_actual": [], "dependencies_conceptual": [], "description": "", "display_name": "RegressionTest %nb_tasks={'tasks': 4, 'exclusive_access': True} %elements=400000000.0", "environment": "default", "fail_phase": null, "fail_reason": null, "filename": "/Users/cladellash/Documents/Repos/benchmarking/.venv/lib/python3.8/site-packages/feelpp/benchmarking/reframe/regression.py", "fixture": false, "hash": "e8955ef5", "jobid": "99180", "job_stderr": "rfm_job.err", "job_stdout": "rfm_job.out", "maintainers": [], "name": "RegressionTest %nb_tasks={'tasks': 4, 'exclusive_access': True} %elements=400000000.0", "nodelist": [ "irma-dhcp-2.math.unistra.fr" ], "outputdir": "/Users/cladellash/Documents/Repos/benchmarking/build/reframe/output/local/default/default/RegressionTest_e8955ef5", "perfvars": [ { "name": "computation_time", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "s", "value": 3.85582 }, { "name": "communication_time", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "s", "value": 0.180105 }, { "name": "N", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 400000000.0 }, { "name": "sum", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 400000000.0 }, { "name": "num_process", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 4.0 } ], "prefix": "/Users/cladellash/Documents/Repos/benchmarking/.venv/lib/python3.8/site-packages/feelpp/benchmarking/reframe", "result": "success", "stagedir": "/Users/cladellash/Documents/Repos/benchmarking/build/reframe/stage/local/default/default/RegressionTest_e8955ef5", "scheduler": "local", "system": "local:default", "tags": [ "async" ], "time_compile": 0.008015155792236328, "time_performance": 0.03525996208190918, "time_run": 7.037631988525391, "time_sanity": 0.03978276252746582, "time_setup": 0.01161503791809082, "time_total": 7.199726104736328, "unique_name": "RegressionTest_09", "check_vars": { "valid_prog_environs": [ "default" ], "valid_systems": [ "local:default" ], "descr": "", "sourcepath": "", "sourcesdir": null, "prebuild_cmds": [], "postbuild_cmds": [], "executable": "/Users/cladellash/Documents/Repos/benchmarking//tests/data/parallelSum", "executable_opts": [ "400000000.0", "/Users/cladellash/Documents/Repos/benchmarking/tests/data/outputs/parallelSum/e8955ef5" ], "prerun_cmds": [], 
"postrun_cmds": [], "keep_files": [], "readonly_files": [], "tags": [ "async" ], "maintainers": [], "strict_check": true, "num_tasks": 4, "num_tasks_per_node": 4, "num_gpus_per_node": null, "num_cpus_per_task": 1, "num_tasks_per_core": null, "num_tasks_per_socket": null, "use_multithreading": null, "max_pending_time": null, "exclusive_access": true, "local": false, "modules": [], "env_vars": {}, "variables": {}, "time_limit": null, "build_time_limit": null, "extra_resources": {}, "build_locally": true, "report_dir_path": "/Users/cladellash/Documents/Repos/benchmarking/reports/parallelSum/parallel_sum/local/2024_12_02T14_46_28", "use_case": "parallel_sum", "platform": "builtin" }, "check_params": { "nb_tasks": { "tasks": 4, "exclusive_access": true }, "elements": 400000000.0 } }, { "build_stderr": null, "build_stdout": null, "dependencies_actual": [], "dependencies_conceptual": [], "description": "", "display_name": "RegressionTest %nb_tasks={'tasks': 4, 'exclusive_access': True} %elements=100000000.0", "environment": "default", "fail_phase": null, "fail_reason": null, "filename": "/Users/cladellash/Documents/Repos/benchmarking/.venv/lib/python3.8/site-packages/feelpp/benchmarking/reframe/regression.py", "fixture": false, "hash": "9886e190", "jobid": "99181", "job_stderr": "rfm_job.err", "job_stdout": "rfm_job.out", "maintainers": [], "name": "RegressionTest %nb_tasks={'tasks': 4, 'exclusive_access': True} %elements=100000000.0", "nodelist": [ "irma-dhcp-2.math.unistra.fr" ], "outputdir": "/Users/cladellash/Documents/Repos/benchmarking/build/reframe/output/local/default/default/RegressionTest_9886e190", "perfvars": [ { "name": "computation_time", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "s", "value": 0.062597 }, { "name": "communication_time", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "s", "value": 0.0161 }, { "name": "N", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 100000000.0 }, { "name": "sum", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 100000000.0 }, { "name": "num_process", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 4.0 } ], "prefix": "/Users/cladellash/Documents/Repos/benchmarking/.venv/lib/python3.8/site-packages/feelpp/benchmarking/reframe", "result": "success", "stagedir": "/Users/cladellash/Documents/Repos/benchmarking/build/reframe/stage/local/default/default/RegressionTest_9886e190", "scheduler": "local", "system": "local:default", "tags": [ "async" ], "time_compile": 0.006299018859863281, "time_performance": 0.017982006072998047, "time_run": 1.7802720069885254, "time_sanity": 0.014677762985229492, "time_setup": 0.005388975143432617, "time_total": 1.927483081817627, "unique_name": "RegressionTest_08", "check_vars": { "valid_prog_environs": [ "default" ], "valid_systems": [ "local:default" ], "descr": "", "sourcepath": "", "sourcesdir": null, "prebuild_cmds": [], "postbuild_cmds": [], "executable": "/Users/cladellash/Documents/Repos/benchmarking//tests/data/parallelSum", "executable_opts": [ "100000000.0", "/Users/cladellash/Documents/Repos/benchmarking/tests/data/outputs/parallelSum/9886e190" ], "prerun_cmds": [], "postrun_cmds": [], "keep_files": [], "readonly_files": [], "tags": [ "async" ], "maintainers": [], "strict_check": true, "num_tasks": 4, "num_tasks_per_node": 4, "num_gpus_per_node": null, "num_cpus_per_task": 1, "num_tasks_per_core": null, "num_tasks_per_socket": null, "use_multithreading": null, 
"max_pending_time": null, "exclusive_access": true, "local": false, "modules": [], "env_vars": {}, "variables": {}, "time_limit": null, "build_time_limit": null, "extra_resources": {}, "build_locally": true, "report_dir_path": "/Users/cladellash/Documents/Repos/benchmarking/reports/parallelSum/parallel_sum/local/2024_12_02T14_46_28", "use_case": "parallel_sum", "platform": "builtin" }, "check_params": { "nb_tasks": { "tasks": 4, "exclusive_access": true }, "elements": 100000000.0 } }, { "build_stderr": null, "build_stdout": null, "dependencies_actual": [], "dependencies_conceptual": [], "description": "", "display_name": "RegressionTest %nb_tasks={'tasks': 2, 'exclusive_access': True} %elements=1000000000.0", "environment": "default", "fail_phase": null, "fail_reason": null, "filename": "/Users/cladellash/Documents/Repos/benchmarking/.venv/lib/python3.8/site-packages/feelpp/benchmarking/reframe/regression.py", "fixture": false, "hash": "86092ceb", "jobid": "99182", "job_stderr": "rfm_job.err", "job_stdout": "rfm_job.out", "maintainers": [], "name": "RegressionTest %nb_tasks={'tasks': 2, 'exclusive_access': True} %elements=1000000000.0", "nodelist": [ "irma-dhcp-2.math.unistra.fr" ], "outputdir": "/Users/cladellash/Documents/Repos/benchmarking/build/reframe/output/local/default/default/RegressionTest_86092ceb", "perfvars": [ { "name": "computation_time", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "s", "value": 5.13016 }, { "name": "communication_time", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "s", "value": 0.000502 }, { "name": "N", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 1000000000.0 }, { "name": "sum", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 1000000000.0 }, { "name": "num_process", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 2.0 } ], "prefix": "/Users/cladellash/Documents/Repos/benchmarking/.venv/lib/python3.8/site-packages/feelpp/benchmarking/reframe", "result": "success", "stagedir": "/Users/cladellash/Documents/Repos/benchmarking/build/reframe/stage/local/default/default/RegressionTest_86092ceb", "scheduler": "local", "system": "local:default", "tags": [ "async" ], "time_compile": 0.00612187385559082, "time_performance": 0.002651214599609375, "time_run": 10.523810148239136, "time_sanity": 0.0021507740020751953, "time_setup": 0.005548954010009766, "time_total": 10.664539098739624, "unique_name": "RegressionTest_07", "check_vars": { "valid_prog_environs": [ "default" ], "valid_systems": [ "local:default" ], "descr": "", "sourcepath": "", "sourcesdir": null, "prebuild_cmds": [], "postbuild_cmds": [], "executable": "/Users/cladellash/Documents/Repos/benchmarking//tests/data/parallelSum", "executable_opts": [ "1000000000.0", "/Users/cladellash/Documents/Repos/benchmarking/tests/data/outputs/parallelSum/86092ceb" ], "prerun_cmds": [], "postrun_cmds": [], "keep_files": [], "readonly_files": [], "tags": [ "async" ], "maintainers": [], "strict_check": true, "num_tasks": 2, "num_tasks_per_node": 2, "num_gpus_per_node": null, "num_cpus_per_task": 1, "num_tasks_per_core": null, "num_tasks_per_socket": null, "use_multithreading": null, "max_pending_time": null, "exclusive_access": true, "local": false, "modules": [], "env_vars": {}, "variables": {}, "time_limit": null, "build_time_limit": null, "extra_resources": {}, "build_locally": true, "report_dir_path": 
"/Users/cladellash/Documents/Repos/benchmarking/reports/parallelSum/parallel_sum/local/2024_12_02T14_46_28", "use_case": "parallel_sum", "platform": "builtin" }, "check_params": { "nb_tasks": { "tasks": 2, "exclusive_access": true }, "elements": 1000000000.0 } }, { "build_stderr": null, "build_stdout": null, "dependencies_actual": [], "dependencies_conceptual": [], "description": "", "display_name": "RegressionTest %nb_tasks={'tasks': 2, 'exclusive_access': True} %elements=700000000.0", "environment": "default", "fail_phase": null, "fail_reason": null, "filename": "/Users/cladellash/Documents/Repos/benchmarking/.venv/lib/python3.8/site-packages/feelpp/benchmarking/reframe/regression.py", "fixture": false, "hash": "b22a7385", "jobid": "99183", "job_stderr": "rfm_job.err", "job_stdout": "rfm_job.out", "maintainers": [], "name": "RegressionTest %nb_tasks={'tasks': 2, 'exclusive_access': True} %elements=700000000.0", "nodelist": [ "irma-dhcp-2.math.unistra.fr" ], "outputdir": "/Users/cladellash/Documents/Repos/benchmarking/build/reframe/output/local/default/default/RegressionTest_b22a7385", "perfvars": [ { "name": "computation_time", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "s", "value": 5.06941 }, { "name": "communication_time", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "s", "value": 0.03559 }, { "name": "N", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 700000000.0 }, { "name": "sum", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 700000000.0 }, { "name": "num_process", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 2.0 } ], "prefix": "/Users/cladellash/Documents/Repos/benchmarking/.venv/lib/python3.8/site-packages/feelpp/benchmarking/reframe", "result": "success", "stagedir": "/Users/cladellash/Documents/Repos/benchmarking/build/reframe/stage/local/default/default/RegressionTest_b22a7385", "scheduler": "local", "system": "local:default", "tags": [ "async" ], "time_compile": 0.00615692138671875, "time_performance": 0.015397310256958008, "time_run": 9.810746192932129, "time_sanity": 0.025763988494873047, "time_setup": 0.006819963455200195, "time_total": 9.945525169372559, "unique_name": "RegressionTest_06", "check_vars": { "valid_prog_environs": [ "default" ], "valid_systems": [ "local:default" ], "descr": "", "sourcepath": "", "sourcesdir": null, "prebuild_cmds": [], "postbuild_cmds": [], "executable": "/Users/cladellash/Documents/Repos/benchmarking//tests/data/parallelSum", "executable_opts": [ "700000000.0", "/Users/cladellash/Documents/Repos/benchmarking/tests/data/outputs/parallelSum/b22a7385" ], "prerun_cmds": [], "postrun_cmds": [], "keep_files": [], "readonly_files": [], "tags": [ "async" ], "maintainers": [], "strict_check": true, "num_tasks": 2, "num_tasks_per_node": 2, "num_gpus_per_node": null, "num_cpus_per_task": 1, "num_tasks_per_core": null, "num_tasks_per_socket": null, "use_multithreading": null, "max_pending_time": null, "exclusive_access": true, "local": false, "modules": [], "env_vars": {}, "variables": {}, "time_limit": null, "build_time_limit": null, "extra_resources": {}, "build_locally": true, "report_dir_path": "/Users/cladellash/Documents/Repos/benchmarking/reports/parallelSum/parallel_sum/local/2024_12_02T14_46_28", "use_case": "parallel_sum", "platform": "builtin" }, "check_params": { "nb_tasks": { "tasks": 2, "exclusive_access": true }, "elements": 700000000.0 } }, { "build_stderr": null, "build_stdout": null, 
"dependencies_actual": [], "dependencies_conceptual": [], "description": "", "display_name": "RegressionTest %nb_tasks={'tasks': 2, 'exclusive_access': True} %elements=400000000.0", "environment": "default", "fail_phase": null, "fail_reason": null, "filename": "/Users/cladellash/Documents/Repos/benchmarking/.venv/lib/python3.8/site-packages/feelpp/benchmarking/reframe/regression.py", "fixture": false, "hash": "7dd0d8fb", "jobid": "99184", "job_stderr": "rfm_job.err", "job_stdout": "rfm_job.out", "maintainers": [], "name": "RegressionTest %nb_tasks={'tasks': 2, 'exclusive_access': True} %elements=400000000.0", "nodelist": [ "irma-dhcp-2.math.unistra.fr" ], "outputdir": "/Users/cladellash/Documents/Repos/benchmarking/build/reframe/output/local/default/default/RegressionTest_7dd0d8fb", "perfvars": [ { "name": "computation_time", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "s", "value": 4.70458 }, { "name": "communication_time", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "s", "value": 0.129757 }, { "name": "N", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 400000000.0 }, { "name": "sum", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 400000000.0 }, { "name": "num_process", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 2.0 } ], "prefix": "/Users/cladellash/Documents/Repos/benchmarking/.venv/lib/python3.8/site-packages/feelpp/benchmarking/reframe", "result": "success", "stagedir": "/Users/cladellash/Documents/Repos/benchmarking/build/reframe/stage/local/default/default/RegressionTest_7dd0d8fb", "scheduler": "local", "system": "local:default", "tags": [ "async" ], "time_compile": 0.006106138229370117, "time_performance": 0.00861215591430664, "time_run": 8.266666889190674, "time_sanity": 0.006582021713256836, "time_setup": 0.005933046340942383, "time_total": 8.394183158874512, "unique_name": "RegressionTest_05", "check_vars": { "valid_prog_environs": [ "default" ], "valid_systems": [ "local:default" ], "descr": "", "sourcepath": "", "sourcesdir": null, "prebuild_cmds": [], "postbuild_cmds": [], "executable": "/Users/cladellash/Documents/Repos/benchmarking//tests/data/parallelSum", "executable_opts": [ "400000000.0", "/Users/cladellash/Documents/Repos/benchmarking/tests/data/outputs/parallelSum/7dd0d8fb" ], "prerun_cmds": [], "postrun_cmds": [], "keep_files": [], "readonly_files": [], "tags": [ "async" ], "maintainers": [], "strict_check": true, "num_tasks": 2, "num_tasks_per_node": 2, "num_gpus_per_node": null, "num_cpus_per_task": 1, "num_tasks_per_core": null, "num_tasks_per_socket": null, "use_multithreading": null, "max_pending_time": null, "exclusive_access": true, "local": false, "modules": [], "env_vars": {}, "variables": {}, "time_limit": null, "build_time_limit": null, "extra_resources": {}, "build_locally": true, "report_dir_path": "/Users/cladellash/Documents/Repos/benchmarking/reports/parallelSum/parallel_sum/local/2024_12_02T14_46_28", "use_case": "parallel_sum", "platform": "builtin" }, "check_params": { "nb_tasks": { "tasks": 2, "exclusive_access": true }, "elements": 400000000.0 } }, { "build_stderr": null, "build_stdout": null, "dependencies_actual": [], "dependencies_conceptual": [], "description": "", "display_name": "RegressionTest %nb_tasks={'tasks': 2, 'exclusive_access': True} %elements=100000000.0", "environment": "default", "fail_phase": null, "fail_reason": null, "filename": 
"/Users/cladellash/Documents/Repos/benchmarking/.venv/lib/python3.8/site-packages/feelpp/benchmarking/reframe/regression.py", "fixture": false, "hash": "e75954a0", "jobid": "99185", "job_stderr": "rfm_job.err", "job_stdout": "rfm_job.out", "maintainers": [], "name": "RegressionTest %nb_tasks={'tasks': 2, 'exclusive_access': True} %elements=100000000.0", "nodelist": [ "irma-dhcp-2.math.unistra.fr" ], "outputdir": "/Users/cladellash/Documents/Repos/benchmarking/build/reframe/output/local/default/default/RegressionTest_e75954a0", "perfvars": [ { "name": "computation_time", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "s", "value": 0.183174 }, { "name": "communication_time", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "s", "value": 0.003264 }, { "name": "N", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 100000000.0 }, { "name": "sum", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 100000000.0 }, { "name": "num_process", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 2.0 } ], "prefix": "/Users/cladellash/Documents/Repos/benchmarking/.venv/lib/python3.8/site-packages/feelpp/benchmarking/reframe", "result": "success", "stagedir": "/Users/cladellash/Documents/Repos/benchmarking/build/reframe/stage/local/default/default/RegressionTest_e75954a0", "scheduler": "local", "system": "local:default", "tags": [ "async" ], "time_compile": 0.00605320930480957, "time_performance": 0.019010066986083984, "time_run": 1.770622968673706, "time_sanity": 0.0030341148376464844, "time_setup": 0.00586390495300293, "time_total": 1.8920440673828125, "unique_name": "RegressionTest_04", "check_vars": { "valid_prog_environs": [ "default" ], "valid_systems": [ "local:default" ], "descr": "", "sourcepath": "", "sourcesdir": null, "prebuild_cmds": [], "postbuild_cmds": [], "executable": "/Users/cladellash/Documents/Repos/benchmarking//tests/data/parallelSum", "executable_opts": [ "100000000.0", "/Users/cladellash/Documents/Repos/benchmarking/tests/data/outputs/parallelSum/e75954a0" ], "prerun_cmds": [], "postrun_cmds": [], "keep_files": [], "readonly_files": [], "tags": [ "async" ], "maintainers": [], "strict_check": true, "num_tasks": 2, "num_tasks_per_node": 2, "num_gpus_per_node": null, "num_cpus_per_task": 1, "num_tasks_per_core": null, "num_tasks_per_socket": null, "use_multithreading": null, "max_pending_time": null, "exclusive_access": true, "local": false, "modules": [], "env_vars": {}, "variables": {}, "time_limit": null, "build_time_limit": null, "extra_resources": {}, "build_locally": true, "report_dir_path": "/Users/cladellash/Documents/Repos/benchmarking/reports/parallelSum/parallel_sum/local/2024_12_02T14_46_28", "use_case": "parallel_sum", "platform": "builtin" }, "check_params": { "nb_tasks": { "tasks": 2, "exclusive_access": true }, "elements": 100000000.0 } }, { "build_stderr": null, "build_stdout": null, "dependencies_actual": [], "dependencies_conceptual": [], "description": "", "display_name": "RegressionTest %nb_tasks={'tasks': 1, 'exclusive_access': True} %elements=1000000000.0", "environment": "default", "fail_phase": null, "fail_reason": null, "filename": "/Users/cladellash/Documents/Repos/benchmarking/.venv/lib/python3.8/site-packages/feelpp/benchmarking/reframe/regression.py", "fixture": false, "hash": "cbfe221b", "jobid": "99225", "job_stderr": "rfm_job.err", "job_stdout": "rfm_job.out", "maintainers": [], "name": "RegressionTest %nb_tasks={'tasks': 1, 
'exclusive_access': True} %elements=1000000000.0", "nodelist": [ "irma-dhcp-2.math.unistra.fr" ], "outputdir": "/Users/cladellash/Documents/Repos/benchmarking/build/reframe/output/local/default/default/RegressionTest_cbfe221b", "perfvars": [ { "name": "computation_time", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "s", "value": 4.8334 }, { "name": "communication_time", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "s", "value": 2.1e-05 }, { "name": "N", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 1000000000.0 }, { "name": "sum", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 1000000000.0 }, { "name": "num_process", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 1.0 } ], "prefix": "/Users/cladellash/Documents/Repos/benchmarking/.venv/lib/python3.8/site-packages/feelpp/benchmarking/reframe", "result": "success", "stagedir": "/Users/cladellash/Documents/Repos/benchmarking/build/reframe/stage/local/default/default/RegressionTest_cbfe221b", "scheduler": "local", "system": "local:default", "tags": [ "async" ], "time_compile": 0.005920886993408203, "time_performance": 0.0037949085235595703, "time_run": 9.820374727249146, "time_sanity": 0.003256082534790039, "time_setup": 0.005385160446166992, "time_total": 11.37114691734314, "unique_name": "RegressionTest_03", "check_vars": { "valid_prog_environs": [ "default" ], "valid_systems": [ "local:default" ], "descr": "", "sourcepath": "", "sourcesdir": null, "prebuild_cmds": [], "postbuild_cmds": [], "executable": "/Users/cladellash/Documents/Repos/benchmarking//tests/data/parallelSum", "executable_opts": [ "1000000000.0", "/Users/cladellash/Documents/Repos/benchmarking/tests/data/outputs/parallelSum/cbfe221b" ], "prerun_cmds": [], "postrun_cmds": [], "keep_files": [], "readonly_files": [], "tags": [ "async" ], "maintainers": [], "strict_check": true, "num_tasks": 1, "num_tasks_per_node": 1, "num_gpus_per_node": null, "num_cpus_per_task": 1, "num_tasks_per_core": null, "num_tasks_per_socket": null, "use_multithreading": null, "max_pending_time": null, "exclusive_access": true, "local": false, "modules": [], "env_vars": {}, "variables": {}, "time_limit": null, "build_time_limit": null, "extra_resources": {}, "build_locally": true, "report_dir_path": "/Users/cladellash/Documents/Repos/benchmarking/reports/parallelSum/parallel_sum/local/2024_12_02T14_46_28", "use_case": "parallel_sum", "platform": "builtin" }, "check_params": { "nb_tasks": { "tasks": 1, "exclusive_access": true }, "elements": 1000000000.0 } }, { "build_stderr": null, "build_stdout": null, "dependencies_actual": [], "dependencies_conceptual": [], "description": "", "display_name": "RegressionTest %nb_tasks={'tasks': 1, 'exclusive_access': True} %elements=700000000.0", "environment": "default", "fail_phase": null, "fail_reason": null, "filename": "/Users/cladellash/Documents/Repos/benchmarking/.venv/lib/python3.8/site-packages/feelpp/benchmarking/reframe/regression.py", "fixture": false, "hash": "be4af6da", "jobid": "99226", "job_stderr": "rfm_job.err", "job_stdout": "rfm_job.out", "maintainers": [], "name": "RegressionTest %nb_tasks={'tasks': 1, 'exclusive_access': True} %elements=700000000.0", "nodelist": [ "irma-dhcp-2.math.unistra.fr" ], "outputdir": "/Users/cladellash/Documents/Repos/benchmarking/build/reframe/output/local/default/default/RegressionTest_be4af6da", "perfvars": [ { "name": "computation_time", "reference": 0, "thres_lower": null, 
"thres_upper": null, "unit": "s", "value": 4.90371 }, { "name": "communication_time", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "s", "value": 2.7e-05 }, { "name": "N", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 700000000.0 }, { "name": "sum", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 700000000.0 }, { "name": "num_process", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 1.0 } ], "prefix": "/Users/cladellash/Documents/Repos/benchmarking/.venv/lib/python3.8/site-packages/feelpp/benchmarking/reframe", "result": "success", "stagedir": "/Users/cladellash/Documents/Repos/benchmarking/build/reframe/stage/local/default/default/RegressionTest_be4af6da", "scheduler": "local", "system": "local:default", "tags": [ "async" ], "time_compile": 0.005860805511474609, "time_performance": 0.0030002593994140625, "time_run": 9.275190114974976, "time_sanity": 0.002341032028198242, "time_setup": 0.005341053009033203, "time_total": 10.844825267791748, "unique_name": "RegressionTest_02", "check_vars": { "valid_prog_environs": [ "default" ], "valid_systems": [ "local:default" ], "descr": "", "sourcepath": "", "sourcesdir": null, "prebuild_cmds": [], "postbuild_cmds": [], "executable": "/Users/cladellash/Documents/Repos/benchmarking//tests/data/parallelSum", "executable_opts": [ "700000000.0", "/Users/cladellash/Documents/Repos/benchmarking/tests/data/outputs/parallelSum/be4af6da" ], "prerun_cmds": [], "postrun_cmds": [], "keep_files": [], "readonly_files": [], "tags": [ "async" ], "maintainers": [], "strict_check": true, "num_tasks": 1, "num_tasks_per_node": 1, "num_gpus_per_node": null, "num_cpus_per_task": 1, "num_tasks_per_core": null, "num_tasks_per_socket": null, "use_multithreading": null, "max_pending_time": null, "exclusive_access": true, "local": false, "modules": [], "env_vars": {}, "variables": {}, "time_limit": null, "build_time_limit": null, "extra_resources": {}, "build_locally": true, "report_dir_path": "/Users/cladellash/Documents/Repos/benchmarking/reports/parallelSum/parallel_sum/local/2024_12_02T14_46_28", "use_case": "parallel_sum", "platform": "builtin" }, "check_params": { "nb_tasks": { "tasks": 1, "exclusive_access": true }, "elements": 700000000.0 } }, { "build_stderr": null, "build_stdout": null, "dependencies_actual": [], "dependencies_conceptual": [], "description": "", "display_name": "RegressionTest %nb_tasks={'tasks': 1, 'exclusive_access': True} %elements=400000000.0", "environment": "default", "fail_phase": null, "fail_reason": null, "filename": "/Users/cladellash/Documents/Repos/benchmarking/.venv/lib/python3.8/site-packages/feelpp/benchmarking/reframe/regression.py", "fixture": false, "hash": "e8e66601", "jobid": "99278", "job_stderr": "rfm_job.err", "job_stdout": "rfm_job.out", "maintainers": [], "name": "RegressionTest %nb_tasks={'tasks': 1, 'exclusive_access': True} %elements=400000000.0", "nodelist": [ "irma-dhcp-2.math.unistra.fr" ], "outputdir": "/Users/cladellash/Documents/Repos/benchmarking/build/reframe/output/local/default/default/RegressionTest_e8e66601", "perfvars": [ { "name": "computation_time", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "s", "value": 2.24316 }, { "name": "communication_time", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "s", "value": 3.2e-05 }, { "name": "N", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 400000000.0 }, { "name": "sum", "reference": 
0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 400000000.0 }, { "name": "num_process", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 1.0 } ], "prefix": "/Users/cladellash/Documents/Repos/benchmarking/.venv/lib/python3.8/site-packages/feelpp/benchmarking/reframe", "result": "success", "stagedir": "/Users/cladellash/Documents/Repos/benchmarking/build/reframe/stage/local/default/default/RegressionTest_e8e66601", "scheduler": "local", "system": "local:default", "tags": [ "async" ], "time_compile": 0.0059511661529541016, "time_performance": 0.002692699432373047, "time_run": 4.065032005310059, "time_sanity": 0.002173185348510742, "time_setup": 0.005150794982910156, "time_total": 10.587581872940063, "unique_name": "RegressionTest_01", "check_vars": { "valid_prog_environs": [ "default" ], "valid_systems": [ "local:default" ], "descr": "", "sourcepath": "", "sourcesdir": null, "prebuild_cmds": [], "postbuild_cmds": [], "executable": "/Users/cladellash/Documents/Repos/benchmarking//tests/data/parallelSum", "executable_opts": [ "400000000.0", "/Users/cladellash/Documents/Repos/benchmarking/tests/data/outputs/parallelSum/e8e66601" ], "prerun_cmds": [], "postrun_cmds": [], "keep_files": [], "readonly_files": [], "tags": [ "async" ], "maintainers": [], "strict_check": true, "num_tasks": 1, "num_tasks_per_node": 1, "num_gpus_per_node": null, "num_cpus_per_task": 1, "num_tasks_per_core": null, "num_tasks_per_socket": null, "use_multithreading": null, "max_pending_time": null, "exclusive_access": true, "local": false, "modules": [], "env_vars": {}, "variables": {}, "time_limit": null, "build_time_limit": null, "extra_resources": {}, "build_locally": true, "report_dir_path": "/Users/cladellash/Documents/Repos/benchmarking/reports/parallelSum/parallel_sum/local/2024_12_02T14_46_28", "use_case": "parallel_sum", "platform": "builtin" }, "check_params": { "nb_tasks": { "tasks": 1, "exclusive_access": true }, "elements": 400000000.0 } }, { "build_stderr": null, "build_stdout": null, "dependencies_actual": [], "dependencies_conceptual": [], "description": "", "display_name": "RegressionTest %nb_tasks={'tasks': 1, 'exclusive_access': True} %elements=100000000.0", "environment": "default", "fail_phase": null, "fail_reason": null, "filename": "/Users/cladellash/Documents/Repos/benchmarking/.venv/lib/python3.8/site-packages/feelpp/benchmarking/reframe/regression.py", "fixture": false, "hash": "da2298ca", "jobid": "99283", "job_stderr": "rfm_job.err", "job_stdout": "rfm_job.out", "maintainers": [], "name": "RegressionTest %nb_tasks={'tasks': 1, 'exclusive_access': True} %elements=100000000.0", "nodelist": [ "irma-dhcp-2.math.unistra.fr" ], "outputdir": "/Users/cladellash/Documents/Repos/benchmarking/build/reframe/output/local/default/default/RegressionTest_da2298ca", "perfvars": [ { "name": "computation_time", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "s", "value": 0.622329 }, { "name": "communication_time", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "s", "value": 3.2e-05 }, { "name": "N", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 100000000.0 }, { "name": "sum", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 100000000.0 }, { "name": "num_process", "reference": 0, "thres_lower": null, "thres_upper": null, "unit": "", "value": 1.0 } ], "prefix": "/Users/cladellash/Documents/Repos/benchmarking/.venv/lib/python3.8/site-packages/feelpp/benchmarking/reframe", 
"result": "success", "stagedir": "/Users/cladellash/Documents/Repos/benchmarking/build/reframe/stage/local/default/default/RegressionTest_da2298ca", "scheduler": "local", "system": "local:default", "tags": [ "async" ], "time_compile": 0.005639791488647461, "time_performance": 0.008454084396362305, "time_run": 1.732982873916626, "time_sanity": 0.0396878719329834, "time_setup": 0.005467891693115234, "time_total": 9.557442903518677, "unique_name": "RegressionTest_00", "check_vars": { "valid_prog_environs": [ "default" ], "valid_systems": [ "local:default" ], "descr": "", "sourcepath": "", "sourcesdir": null, "prebuild_cmds": [], "postbuild_cmds": [], "executable": "/Users/cladellash/Documents/Repos/benchmarking//tests/data/parallelSum", "executable_opts": [ "100000000.0", "/Users/cladellash/Documents/Repos/benchmarking/tests/data/outputs/parallelSum/da2298ca" ], "prerun_cmds": [], "postrun_cmds": [], "keep_files": [], "readonly_files": [], "tags": [ "async" ], "maintainers": [], "strict_check": true, "num_tasks": 1, "num_tasks_per_node": 1, "num_gpus_per_node": null, "num_cpus_per_task": 1, "num_tasks_per_core": null, "num_tasks_per_socket": null, "use_multithreading": null, "max_pending_time": null, "exclusive_access": true, "local": false, "modules": [], "env_vars": {}, "variables": {}, "time_limit": null, "build_time_limit": null, "extra_resources": {}, "build_locally": true, "report_dir_path": "/Users/cladellash/Documents/Repos/benchmarking/reports/parallelSum/parallel_sum/local/2024_12_02T14_46_28", "use_case": "parallel_sum", "platform": "builtin" }, "check_params": { "nb_tasks": { "tasks": 1, "exclusive_access": true }, "elements": 100000000.0 } } ] } ], "restored_cases": [] }
""")

from feelpp.benchmarking.report.atomicReports.model import AtomicReportModel

model = AtomicReportModel(report["runs"])
model.master_df = model.master_df[
    model.master_df["performance_variable"].isin(["computation_time", "communication_time"])
].loc[:, [
    "performance_variable", "value", "unit", "testcase_time_run",
    "environment", "platform", "nb_tasks.tasks", "nb_tasks.exclusive_access", "elements"
]]

model.master_df
Out[1]:
   performance_variable     value  ... nb_tasks.exclusive_access      elements
0      computation_time  5.329330  ...                      True  1.000000e+09
1    communication_time  0.052584  ...                      True  1.000000e+09
5      computation_time  4.569710  ...                      True  7.000000e+08
6    communication_time  0.005627  ...                      True  7.000000e+08
10     computation_time  3.855820  ...                      True  4.000000e+08
11   communication_time  0.180105  ...                      True  4.000000e+08
15     computation_time  0.062597  ...                      True  1.000000e+08
16   communication_time  0.016100  ...                      True  1.000000e+08
20     computation_time  5.130160  ...                      True  1.000000e+09
21   communication_time  0.000502  ...                      True  1.000000e+09
25     computation_time  5.069410  ...                      True  7.000000e+08
26   communication_time  0.035590  ...                      True  7.000000e+08
30     computation_time  4.704580  ...                      True  4.000000e+08
31   communication_time  0.129757  ...                      True  4.000000e+08
35     computation_time  0.183174  ...                      True  1.000000e+08
36   communication_time  0.003264  ...                      True  1.000000e+08
40     computation_time  4.833400  ...                      True  1.000000e+09
41   communication_time  0.000021  ...                      True  1.000000e+09
45     computation_time  4.903710  ...                      True  7.000000e+08
46   communication_time  0.000027  ...                      True  7.000000e+08
50     computation_time  2.243160  ...                      True  4.000000e+08
51   communication_time  0.000032  ...                      True  4.000000e+08
55     computation_time  0.622329  ...                      True  1.000000e+08
56   communication_time  0.000032  ...                      True  1.000000e+08

[24 rows x 9 columns]

We can see that this dataframe contains the following parameters:

  • environment

  • platform

  • nb_tasks.tasks

  • nb_tasks.exclusive_access

  • elements

  • performance_variable

Thanks to this common structure, we can apply transformation strategies to manipulate the values depending on the desired output.

Strategies depend on the figure axes. Every strategy creates a pivot dataframe whose columns correspond to the parameter specified as color_axis, with the secondary_axis parameter as the first-level index and the xaxis parameter as the second-level index. The values of the pivot dataframe are always taken from the values of the master dataframe.

As an example, we will consider the following axis:

"xaxis":{
    "parameter":"nb_tasks.tasks",
    "label":"Number of tasks"
},
"yaxis":{
    "label":"Execution time (s)"
},
"secondary_axis":{
    "parameter":"elements",
    "label":"N"
},
"color_axis":{
    "parameter":"performance_variable",
    "label":"Performance variable"
}
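
Given these axes, each strategy builds a pivot along the lines of the following pandas sketch (illustrative only; the actual pivoting is encapsulated in the framework's strategy classes):

pivot = model.master_df.pivot_table(
    values="value",                         # values always come from the master dataframe
    index=["elements", "nb_tasks.tasks"],   # secondary_axis, then xaxis
    columns="performance_variable"          # color_axis
)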

Available strategies are:

  • performance

This strategy should be seen as the "base" strategy: no transformation other than the pivot is applied. For the given example, it produces the following dataframe:

from feelpp.benchmarking.report.transformationFactory import TransformationStrategyFactory
from feelpp.benchmarking.reframe.config.configPlots import Plot
plot_config = Plot(**{
    "title": "Absolute performance",
    "plot_types": [ "stacked_bar", "grouped_bar" ],
    "transformation": "performance",
    "variables": [ "computation_time","communication_time" ],
    "names": ["Computation","Comunication"],
    "xaxis":{
        "parameter":"nb_tasks.tasks",
        "label":"Number of tasks"
    },
    "yaxis":{"label":"Execution time (s)"},
    "secondary_axis":{
        "parameter":"elements",
        "label":"N"
    }
})
strategy = TransformationStrategyFactory.create(plot_config)
df = strategy.calculate(model.master_df)
print(df)
performance_variable         communication_time  computation_time
elements     nb_tasks.tasks
1.000000e+08 1                         0.000032          0.622329
             2                         0.003264          0.183174
             4                         0.016100          0.062597
4.000000e+08 1                         0.000032          2.243160
             2                         0.129757          4.704580
             4                         0.180105          3.855820
7.000000e+08 1                         0.000027          4.903710
             2                         0.035590          5.069410
             4                         0.005627          4.569710
1.000000e+09 1                         0.000021          4.833400
             2                         0.000502          5.130160
             4                         0.052584          5.329330
  • relative_performance

The relative_performance strategy computes the proportion of the total time taken by each color_axis variable.

from feelpp.benchmarking.report.transformationFactory import TransformationStrategyFactory
from feelpp.benchmarking.reframe.config.configPlots import Plot
plot_config = Plot(**{
    "title": "Absolute performance",
    "plot_types": [ "stacked_bar", "grouped_bar" ],
    "transformation": "relative_performance",
    "variables": [ "computation_time","communication_time" ],
    "names": ["Computation","Comunication"],
    "xaxis":{
        "parameter":"nb_tasks.tasks",
        "label":"Number of tasks"
    },
    "yaxis":{"label":"Execution time (s)"},
    "secondary_axis":{
        "parameter":"elements",
        "label":"N"
    }
})
strategy = TransformationStrategyFactory.create(plot_config)
df = strategy.calculate(model.master_df)
print(df)
performance_variable         communication_time  computation_time
elements     nb_tasks.tasks
1.000000e+08 1                         0.005142         99.994858
             2                         1.750716         98.249284
             4                        20.458213         79.541787
4.000000e+08 1                         0.001427         99.998573
             2                         2.684070         97.315930
             4                         4.462546         95.537454
7.000000e+08 1                         0.000551         99.999449
             2                         0.697160         99.302840
             4                         0.122985         99.877015
1.000000e+09 1                         0.000434         99.999566
             2                         0.009784         99.990216
             4                         0.977050         99.022950

Values are expressed as percentages, so the sum along the column axis is always equal to 100.
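
This can be verified directly on the df computed above:

# Each row of the relative_performance pivot sums to 100 (values are percentages)
print(df.sum(axis=1))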

  • speedup

The speedup strategy computes the speedup of the color_axis variables, taking the minimum of the xaxis values as the baseline. For the example, this strategy produces the following:

from feelpp.benchmarking.report.transformationFactory import TransformationStrategyFactory
from feelpp.benchmarking.reframe.config.configPlots import Plot
plot_config = Plot(**{
    "title": "Absolute performance",
    "plot_types": [ "stacked_bar", "grouped_bar" ],
    "transformation": "speedup",
    "variables": [ "computation_time","communication_time" ],
    "names": ["Computation","Comunication"],
    "xaxis":{
        "parameter":"nb_tasks.tasks",
        "label":"Number of tasks"
    },
    "yaxis":{"label":"Execution time (s)"},
    "secondary_axis":{
        "parameter":"elements",
        "label":"N"
    }
})
strategy = TransformationStrategyFactory.create(plot_config)
df = strategy.calculate(model.master_df)
print(df)
performance_variable         communication_time  ...  half-optimal
elements     nb_tasks.tasks                      ...
1.000000e+08 1                         1.000000  ...           1.0
             2                         0.009804  ...           1.5
             4                         0.001988  ...           2.5
4.000000e+08 1                         1.000000  ...           1.0
             2                         0.000247  ...           1.5
             4                         0.000178  ...           2.5
7.000000e+08 1                         1.000000  ...           1.0
             2                         0.000759  ...           1.5
             4                         0.004798  ...           2.5
1.000000e+09 1                         1.000000  ...           1.0
             2                         0.041833  ...           1.5
             4                         0.000399  ...           2.5

[12 rows x 4 columns]
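
As a sanity check, the definition can be reproduced by hand. Taking the communication_time values for N=1e8 from the master dataframe shown earlier:

# Baseline: the value at the smallest number of tasks (here, 1 task)
t = {1: 0.000032, 2: 0.003264, 4: 0.016100}  # communication_time, N=1e8
base = t[min(t)]
print({n: base / v for n, v in t.items()})
# {1: 1.0, 2: 0.009804, 4: 0.001988} -- matching the column above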

5.3. Plot types

Considering the same example axes as above, the software can generate the following figures:

  • scatter

from feelpp.benchmarking.report.figures.figureFactory import FigureFactory
figures = FigureFactory.create(Plot(**{
    "title": "Absolute performance",
    "plot_types": [ "scatter" ],
    "transformation": "performance",
    "variables": [ "computation_time","communication_time" ],
    "names": ["Computation","Comunication"],
    "color_axis":{
    "parameter":"performance_variable",
    "label":"Performance variable"
    },
    "yaxis":{"label":"Execution time (s)"},
    "secondary_axis":{
        "parameter":"elements",
        "label":"N"
    },
    "xaxis":{
        "parameter":"nb_tasks.tasks",
        "label":"Number of tasks"
    }
}))
fig = figures[0].createFigure(model.master_df)
fig.show()



  • marked_scatter

The marked scatter plot type supports from 2 to 4 dimensions. The symbol/marks axis will correspond to the secondary_axis parameter. This plot type will behave as follows:

  • If 1 or 2 dimensions are specified (x-axis and optionally color-axis), then this plot type will be equivalent to scatter.

  • If 3 dimensions are specified (x-axis, color-axis and secondary-axis), then the secondary_axis will correspond to the symbol/marks axis.

  • If 4 dimensions are specified (x-axis, color-axis, secondary-axis and one extra-axis), then the first element of the extra_axes list will correspond to the symbol/marks axis, and the secondary_axis will correspond to the slider of the returned animation.

figures = FigureFactory.create(Plot(**{
    "title": "Absolute performance",
    "plot_types": [ "marked_scatter" ],
    "transformation": "performance",
    "variables": [ "computation_time","communication_time" ],
    "names": ["Computation", "Communication"],
    "color_axis":{
        "parameter":"performance_variable",
        "label":"Performance variable"
    },
    "yaxis":{"label":"Execution time (s)"},
    "secondary_axis":{
        "parameter":"elements",
        "label":"N"
    },
    "xaxis":{
        "parameter":"nb_tasks.tasks",
        "label":"Number of tasks"
    }
}))
for f in figures:
    fig = f.createFigure(model.master_df)
    fig.show()



  • stacked_bar

figures = FigureFactory.create(Plot(**{
    "title": "Absolute performance",
    "plot_types": [ "stacked_bar" ],
    "transformation": "performance",
    "variables": [ "computation_time","communication_time" ],
    "names": ["Computation","Comunication"],
    "color_axis":{
    "parameter":"performance_variable",
    "label":"Performance variable"
    },
    "yaxis":{"label":"Execution time (s)"},
    "secondary_axis":{
        "parameter":"elements",
        "label":"N"
    },
    "xaxis":{
        "parameter":"nb_tasks.tasks",
        "label":"Number of tasks"
    }
}))
fig = figures[0].createFigure(model.master_df)
fig.show()



  • grouped_bar

figures = FigureFactory.create(Plot(**{
    "title": "Absolute performance",
    "plot_types": [ "grouped_bar" ],
    "transformation": "performance",
    "variables": [ "computation_time","communication_time" ],
    "names": ["Computation","Comunication"],
    "color_axis":{
    "parameter":"performance_variable",
    "label":"Performance variable"
    },
    "yaxis":{"label":"Execution time (s)"},
    "secondary_axis":{
        "parameter":"elements",
        "label":"N"
    },
    "xaxis":{
        "parameter":"nb_tasks.tasks",
        "label":"Number of tasks"
    }
}))
fig = figures[0].createFigure(model.master_df)
fig.show()



  • heatmap

For this case, we consider elements (N) as the color_axis and performance_variable as the secondary_axis (rendered as a slider).

figures = FigureFactory.create(Plot(**{
    "title": "Absolute performance",
    "plot_types": [ "heatmap" ],
    "transformation": "performance",
    "variables": [ "computation_time","communication_time" ],
    "names": ["Computation", "Communication"],
    "color_axis":{
        "parameter":"elements",
        "label":"N"
    },
    "yaxis":{"label":"Execution time (s)"},
    "secondary_axis":{
        "parameter":"performance_variable",
        "label":"Performance variable"
    },
    "xaxis":{
        "parameter":"nb_tasks.tasks",
        "label":"Number of tasks"
    }
}))
fig = figures[0].createFigure(model.master_df)
fig.show()



  • table

figures = FigureFactory.create(Plot(**{
    "title": "Absolute performance",
    "plot_types": [ "table" ],
    "transformation": "performance",
    "variables": [ "computation_time","communication_time" ],
    "names": ["Computation","Comunication"],
    "color_axis":{
    "parameter":"performance_variable",
    "label":"Performance variable"
    },
    "yaxis":{"label":"Execution time (s)"},
    "secondary_axis":{
        "parameter":"elements",
        "label":"N"
    },
    "xaxis":{
        "parameter":"nb_tasks.tasks",
        "label":"Number of tasks"
    }
}))
fig = figures[0].createFigure(model.master_df)
fig.show()



  • sunburst

This figure places the color_axis parameter on the outer-most ring. Users can supply an extra_axes field containing a list of additional parameters; values for these parameters will be shown on the rings that follow the color_axis ring, in the order they are provided. The secondary_axis and xaxis parameters appear on the inner-most and second inner-most rings, respectively.

figures = FigureFactory.create(Plot(**{
    "title": "Absolute performance",
    "plot_types": [ "sunburst" ],
    "transformation": "performance",
    "variables": [ "computation_time","communication_time" ],
    "names": ["Computation", "Communication"],
    "color_axis":{
        "parameter":"performance_variable",
        "label":"Performance variable"
    },
    "yaxis":{"label":"Execution time (s)"},
    "secondary_axis":{
        "parameter":"elements",
        "label":"N"
    },
    "xaxis":{
        "parameter":"nb_tasks.tasks",
        "label":"Number of tasks"
    }
}))
fig = figures[0].createFigure(model.master_df)
fig.show()



  • parallelcoordinates

Axes are shown in the following order: secondary_axis, xaxis, all additional extra_axes, color_axis. The yaxis values are encoded as the line color.

figures = FigureFactory.create(Plot(**{
    "title": "Absolute performance",
    "plot_types": [ "parallelcoordinates" ],
    "transformation": "performance",
    "variables": [ "computation_time","communication_time" ],
    "names": ["Computation", "Communication"],
    "color_axis":{
        "parameter":"performance_variable",
        "label":"Performance variable"
    },
    "yaxis":{"label":"Execution time (s)"},
    "secondary_axis":{
        "parameter":"elements",
        "label":"N"
    },
    "xaxis":{
        "parameter":"nb_tasks.tasks",
        "label":"Number of tasks"
    },
    "extra_axes":[
        {
            "parameter":"dim3",
            "label":"Dim3"
        }
    ]
}))
for f in figures:
    fig = f.createFigure(model.master_df)
    fig.show()




5.3.1. 3D Plots

3D plots are also supported, and they can show up to 4 dimensions. At least 3 parameters must be provided (xaxis, color_axis and secondary_axis). The axes correspond as follows:

  • x-axis of the 3D plot: xaxis

  • y-axis of the 3D plot: secondary_axis if no extra axes are provided, else, the first element of the extra_axes list.

  • z-axis of the 3D plot: yaxis (contains the measured values)

  • color of the 3D traces: color_axis

  • Slider: secondary_axis if extra axes are provided.

  • scatter3d

figures = FigureFactory.create(Plot(**{
    "title": "Absolute performance",
    "plot_types": [ "scatter3d" ],
    "transformation": "performance",
    "variables": [ "computation_time","communication_time" ],
    "names": ["Computation", "Communication"],
    "color_axis":{
        "parameter":"performance_variable",
        "label":"Performance variable"
    },
    "yaxis":{"label":"Execution time (s)"},
    "secondary_axis":{
        "parameter":"elements",
        "label":"N"
    },
    "xaxis":{
        "parameter":"nb_tasks.tasks",
        "label":"Number of tasks"
    }
}))
for f in figures:
    fig = f.createFigure(model.master_df)
    fig.show()



  • surface3d

figures = FigureFactory.create(Plot(**{
    "title": "Absolute performance",
    "plot_types": [ "surface3d" ],
    "transformation": "performance",
    "variables": [ "computation_time","communication_time" ],
    "names": ["Computation", "Communication"],
    "color_axis":{
        "parameter":"performance_variable",
        "label":"Performance variable"
    },
    "yaxis":{"label":"Execution time (s)"},
    "secondary_axis":{
        "parameter":"elements",
        "label":"N"
    },
    "xaxis":{
        "parameter":"nb_tasks.tasks",
        "label":"Number of tasks"
    }
}))
for f in figures:
    fig = f.createFigure(model.master_df)
    fig.show()




5.4. Aggregations

Depending on the dashboard level we are located at, it might be necessary to aggregate the data in the master dataframe. For example, if the dataframe contains all use cases, applications and machines, and we want to see how a certain use case performs on different machines, we can use the aggregations field to group the data accordingly.

"aggregations":[
    {"column":"date","agg":"max"},
    {"column":"applications","agg":"filter:my_app"},
    {"column":"use_cases","agg":"filter:my_use_case"},
    {"column":"performance_variable","agg":"sum"}
]

The example above first keeps only the latest benchmarks (by taking the maximum date), then filters the rows so that only the application "my_app" and the use case "my_use_case" remain, and finally computes the sum of all performance variables over the remaining rows.
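
For intuition, this pipeline corresponds roughly to the following pandas operations (a sketch of the semantics only, not the framework's actual implementation; df stands for the global master dataframe):

# {"column": "date", "agg": "max"}: keep only the latest benchmarks
df = df[df["date"] == df["date"].max()]
# {"column": "applications", "agg": "filter:my_app"}
df = df[df["applications"] == "my_app"]
# {"column": "use_cases", "agg": "filter:my_use_case"}
df = df[df["use_cases"] == "my_use_case"]
# {"column": "performance_variable", "agg": "sum"}: sum values over this column
df = df.groupby(
    [c for c in df.columns if c not in ("performance_variable", "value")]
)["value"].sum().reset_index()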

Users must provide a column and an aggregation function as a string.

Available aggregations are:

  • mean : Computes the mean of the column

  • sum : Computes the sum of the column

  • max : Computes the maximum of the column

  • min : Computes the minimum of the column

  • filter:value : Filters the column, keeping only the rows equal to value.

The order of the aggregations list is important: aggregations are applied sequentially, in the order they appear.

5.5. Custom layouts

By providing the layout_modifiers field, users can pass custom layout options for rendering the figures. These options correspond to the layout attributes accepted by Plotly (see the Plotly layout reference) and are given as a nested dictionary, just as in Plotly.

For example, we could customize a figure to display its x-axis on a logarithmic scale.

"layout_modifiers":{
    "xaxis":{
        "type":"log"
    }
}
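
For a generated Plotly figure object fig (such as the ones returned by createFigure above), this amounts to calling Plotly's update_layout (illustrative equivalence):

# Same effect as the layout_modifiers example above
fig.update_layout(xaxis={"type": "log"})
fig.show()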