Basic examples

What will you learn?

This guide covers the following topics:

  • handling a build error in a scenario (Build error example),
  • evaluating an application with unit tests (Unit tests example),
  • evaluating a console application against a reference input/output test case (Input output example).

Build error example

In this example, we show how to handle a project that fails during the build stage. We consider a hypothetical multi-file C++ application built with a Makefile. We assume the following:

  • there is a Makefile file in the workspace ($SE_PATH_WORKSPACE) directory,
  • the build process is performed by calling the ./configure and make commands,
  • the successful build produces a binary file app in the workspace ($SE_PATH_WORKSPACE) directory,
  • there is a run scenario launching the application.

config.json

{
(...)
    "scenarios": {
        // run scenario for run button
        "run": {
            "stages": {
                "build": {
                    "command": ".sphere-engine/build.sh"
                },
                "run": {
                    "command": ".sphere-engine/run.sh"
                }
            },
            "output": {
                "type": "console"
            }
        }
    },
(...)
}

Script .sphere-engine/build.sh

#!/bin/bash

# run the build and remember its exit code
./configure && make
BUILD_EXIT_CODE="$?"

if [ "$BUILD_EXIT_CODE" -ne 0 ]; then
    echo "Build failed"
    # report the build error ("BE") status to the platform
    se_utils_cli scenario_result status "BE"
elif [[ ! -f app ]]; then
    echo "Build failed - application binary is missing"
    se_utils_cli scenario_result status "BE"
elif [[ ! -x app ]]; then
    se_utils_cli debug_log "Warning: application binary is not executable"
    chmod +x app
    echo "Build successful"
else
    echo "Build successful"
fi

Script .sphere-engine/run.sh

#!/bin/bash

./app

Unit tests example

This is an example of a hypothetical Java application that uses the Maven (mvn) build automation tool. We assume the following:

  • there is a pom.xml file in the workspace ($SE_PATH_WORKSPACE) directory,
  • the application has JUnit unit tests that generate a report.xml report in the $SE_PATH_WORKSPACE/target directory,
  • there is a run scenario launching the unit tests,
  • there is a test scenario evaluating the application:
    • the test of the application is successful if at least one unit test passes,
    • the score is equal to the percentage of passing unit tests (e.g., 3 out of 4 passing unit tests give a score of 75).

config.json

{
(...)
    "scenarios": {
        // run scenario for run button
        "run": {
            "stages": {
                "build": {
                    "command": ".sphere-engine/build.sh"
                },
                "run": {
                    "command": ".sphere-engine/run.sh"
                }
            },
            "output": {
                "type": "console"
            }
        },
        // test scenario for test button
        "test": {
            "stages": {
                "build": {
                    "command": ".sphere-engine/build.sh"
                },
                "run": {
                    "command": ".sphere-engine/run.sh"
                },
                "test": {
                    "command": ".sphere-engine/test.sh"
                    // or alternatively for Python stage script:
                    // "command": "python3 .sphere-engine/test.py"
                },
                "post": {
                    "command": ".sphere-engine/post.sh"
                }
            },
            "output": {
                "type": "tests"
            }
        }
    },
(...)
}

Script .sphere-engine/build.sh

#!/bin/bash

mvn install -DskipTests
MAVEN_BUILD_EXIT_CODE="$?"

if [ "$MAVEN_BUILD_EXIT_CODE" -ne 0 ]; then
    echo "Build failed"
    se_utils_cli scenario_result status "BE"
else
    echo "Build successful"
fi

Script .sphere-engine/run.sh

#!/bin/bash

mvn test

Script .sphere-engine/test.sh

#!/bin/bash

# copy the JUnit report to the default location expected by the evaluator
if [ -f "$SE_PATH_WORKSPACE/target/report.xml" ]; then
    cp "$SE_PATH_WORKSPACE/target/report.xml" "$SE_PATH_UNIT_TESTS_REPORT"
else
    # without a report the unit tests cannot be evaluated
    se_utils_cli debug_log "Missing unit test report file"
    se_utils_cli scenario_result status "FAIL"
    exit 0
fi

se_utils_cli evaluate UnitTestsEvaluator
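
A slightly more defensive variant of the same script (a sketch only, under the same assumptions) also treats an empty report file as a failed evaluation:

#!/bin/bash

REPORT_FILE="$SE_PATH_WORKSPACE/target/report.xml"

# a missing or empty report means the unit tests could not be evaluated
if [ ! -s "$REPORT_FILE" ]; then
    se_utils_cli debug_log "Missing or empty unit test report file"
    se_utils_cli scenario_result status "FAIL"
    exit 0
fi

cp "$REPORT_FILE" "$SE_PATH_UNIT_TESTS_REPORT"
se_utils_cli evaluate UnitTestsEvaluator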

Script .sphere-engine/test.py

Note: this is an alternative Python variant of the bash script test.sh presented above. Instead of copying the report to the default $SE_PATH_UNIT_TESTS_REPORT location, it points the validator and the converter directly at the custom report path.

from se_utils.evaluator import UnitTestsEvaluator
from se_utils.validator import XUnitValidator
from se_utils.converter import XUnitConverter
from se_utils import environment

custom_ut_report_path = f'{environment.path.workspace}/target/report.xml'

UnitTestsEvaluator(
    validator=XUnitValidator(ut_report_path=custom_ut_report_path),
    converter=XUnitConverter(ut_report_path=custom_ut_report_path),
).run()

Script .sphere-engine/post.sh

#!/bin/bash

FINAL_STATUS="$(se_utils_cli scenario_result status)"

# when the scenario ended with a build error ("BE"), forward the captured
# build stage streams to the debug log
if [[ "$FINAL_STATUS" == "BE" ]]; then
    se_utils_cli debug_log "BUILD STAGE OUTPUT"
    se_utils_cli debug_log < "$SE_PATH_STAGE_BUILD_STDOUT"
    se_utils_cli debug_log "BUILD STAGE ERROR"
    se_utils_cli debug_log < "$SE_PATH_STAGE_BUILD_STDERR"
fi
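
The same pattern can be extended to other stages. Below is a minimal sketch (assuming the run stage streams are redirected to files with the "redirect_streams_to": "files" option, shown in the next example) that also forwards the run stage output when the evaluation failed:

#!/bin/bash

FINAL_STATUS="$(se_utils_cli scenario_result status)"

if [[ "$FINAL_STATUS" == "BE" ]]; then
    se_utils_cli debug_log "BUILD STAGE OUTPUT"
    se_utils_cli debug_log < "$SE_PATH_STAGE_BUILD_STDOUT"
    se_utils_cli debug_log "BUILD STAGE ERROR"
    se_utils_cli debug_log < "$SE_PATH_STAGE_BUILD_STDERR"
elif [[ "$FINAL_STATUS" == "FAIL" && -f "$SE_PATH_STAGE_RUN_STDOUT" ]]; then
    # $SE_PATH_STAGE_RUN_STDOUT is only populated when the run stage
    # redirects its streams to files
    se_utils_cli debug_log "RUN STAGE OUTPUT"
    se_utils_cli debug_log < "$SE_PATH_STAGE_RUN_STDOUT"
fi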

Input output example

This is an example of a hypothetical C++ console application. We assume the following:

  • the application reads data from the stdin stream and produces output to the stdout stream,
  • there is a reference pair of an input file and an output file called a TEST CASE (illustrated below),
    • the output file of the TEST CASE is the correct answer for the input file of the TEST CASE,
    • the input file is located in the $SE_PATH_TEST_CASES directory as a file named 0.in,
    • the output file is located in the $SE_PATH_TEST_CASES directory as a file named 0.out,
  • there is a run scenario launching the application with the input file provided to the stdin stream,
  • there is a test scenario evaluating the application:
    • the application is also executed with the input file provided to the stdin stream,
    • the output data (generated by the console application) is stored in the default location (i.e., $SE_PATH_STAGE_RUN_STDOUT),
    • the output data is expected to match the output data of the TEST CASE.
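
To make the TEST CASE assumptions concrete, consider a hypothetical application that adds two numbers read from stdin (both the application and the file contents below are invented purely for illustration):

# illustrative TEST CASE for a hypothetical "add two numbers" application
#
# $SE_PATH_TEST_CASES/0.in contains the input data:
#     2 3
#
# $SE_PATH_TEST_CASES/0.out contains the correct answer:
#     5
#
# a correct application prints "5" when fed 0.in on its stdin:
./app < "$SE_PATH_TEST_CASES/0.in"    # expected output: 5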

config.json

{
(...)
    "scenarios": {
        // run scenario for run button
        "run": {
            "stages": {
                "build": {
                    "command": ".sphere-engine/build.sh"
                },
                "run": {
                    "command": ".sphere-engine/run.sh"
                }
            },
            "output": {
                "type": "console"
            }
        },
        // test scenario for test button
        "test": {
            "stages": {
                "build": {
                    "command": ".sphere-engine/build.sh"
                },
                "run": {
                    "command": ".sphere-engine/run.sh",
                    // this is to redirect output data to the $SE_PATH_STAGE_RUN_STDOUT file
                    "redirect_streams_to": "files"
                },
                "test": {
                    "command": ".sphere-engine/test.sh"
                    // or alternatively for Python stage script:
                    // "command": "python3 .sphere-engine/test.py"
                }
            },
            "output": {
                "type": "tests"
            }
        }
    },
(...)
}

Script .sphere-engine/build.sh

#!/bin/bash

g++ -o app app.cpp
COMPILATION_EXIT_CODE="$?"

if [ "$COMPILATION_EXIT_CODE" -ne 0 ]; then
    echo "Compilation failed"
    se_utils_cli scenario_result status "BE"
else
    echo "Compilation successful"
fi

Script .sphere-engine/run.sh

#!/bin/bash

./app < "$SE_PATH_TEST_CASES/0.in"

Script .sphere-engine/test.sh

#!/bin/bash

# both user output and model output files are in their default location
# so there is no need for additional configuration
se_utils_cli evaluate IOEvaluator
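
For a quick local sanity check outside the platform, the same comparison can be approximated with standard tools (a sketch only; the exact comparison rules, e.g. how trailing whitespace is handled, are defined by IOEvaluator itself):

#!/bin/bash

# run the binary on the test case input and compare the result
# with the reference answer
./app < "$SE_PATH_TEST_CASES/0.in" > user_output.txt

if diff -q user_output.txt "$SE_PATH_TEST_CASES/0.out" > /dev/null; then
    echo "Output matches the reference answer"
else
    echo "Output differs from the reference answer"
fi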

Script .sphere-engine/test.py

Note: this is an alternative Python variant of the bash script test.sh presented above.

from se_utils.evaluator import IOEvaluator

# both user output and model output files are in their default location
# so there is no need for additional configuration
IOEvaluator().run()