Effective project configuration
The flexibility of the Sphere Engine Containers module allows for many different approaches to configuring projects. Each approach can yield a similar effect, but they are not equal in terms of efficiency.
As a rule of thumb, it's worth keeping in mind the following:
- it is better to have all dependencies pre-downloaded than to download them for every submission
- it is better to keep files as a part of the project if they don't change rather than submit them separately every time
- submissions should contain only new or modified files prepared by the end-users
If you stick to the above rules, your end-users' submissions will be executed quickly, improving their user experience and keeping your budget under control.
Available project templates
The Sphere Engine Containers module supports various base projects for different purposes. They can be grouped into generic categories. Each category reflects the specificity of a given type of project and has a distinctive user interface layout in the Workspace.
The following main types are currently available for Sphere Engine Containers projects:
Web applications
Web applications are one of the most popular software products. Although they use a wide variety of technologies (e.g. Python, PHP, NodeJS), back-end frameworks (e.g. Django, Symfony), and front-end frameworks (e.g. React, Angular, Vue.js), they have a lot in common, especially when it comes to running or testing.
When working in the Workspace, web application projects can display a live view of the web application being developed.

Desktop applications
Desktop applications may not be as popular as they were a few years ago; however, in some cases they are still the preferable solution. Launching such applications usually leads to opening a graphical user interface window.
The following use cases are still very popular as desktop apps:
- scientific packages with charts and plots (e.g. PyPlot, Octave, R)
- game frameworks (e.g. PyGame)

Mobile applications
Nowadays, mobile applications are very popular, and the interest in related technologies continues to grow. To be able to monitor the end-user experience, working with a mobile app requires emulating a mobile device environment.

Console applications
This is the most generic project type. Anything that communicates with the external world using data streams (like `stdout` or `stderr`) or files falls into this category.
The following projects often appear as console applications:
- C/C++ multi-file projects built with a `Makefile`
- Python or PHP scripts
- Java projects defined by Maven's `pom.xml` file with `JUnit` unit tests
- Machine learning projects powered by the TensorFlow framework
- .NET framework C# projects
- projects using MySQL relational database operations

Tool applications
This is a broad category of single- or multi-purpose tools such as Jupyter, Git, or Ansible.

Note: At this point, the specificity of the project type (especially the user interface layout) has no effect on the end-user because the Workspace functionality is available only to Content Managers. However, in upcoming releases, the Workspace will become interchangeable with the currently available RESTful API. This is why selecting the proper project type is already recommended, so you can get the most out of it in the future.
Archive with API submission files
A typical submission to the Sphere Engine Containers API is a part of a larger project. Such submissions are packages containing a number of files arranged in a directory tree. However, the submission doesn't need to contain all the project files; that would be wasteful. Ideally, the submission should deliver only new or modified files.
Consider an example project structure:
src
├── models
│ ├── Book.ts
│ ├── Bookstore.ts
│ └── User.ts
└── views
├── AddBook.tsx
├── EditBook.tsx
├── EditUser.tsx
├── Library.tsx
└── User.tsx
test
└── models
├── Bookstore.ts
└── User.ts
package.json
tsconfig.json
In the above project, there are many files in different directories, and this is only a sample to aid our discussion. Actual projects are much more complex. Usually, the submission affects only a small part of the project.
Let's assume that we would like to:
- add a new `test/models/Book.ts` file
- edit the `src/models/User.ts` file
- edit the `test/models/User.ts` file
Our goal is to have the following project:
src
├── models
│ ├── Book.ts
│ ├── Bookstore.ts
│ └── User.ts <-- modified by submission
└── views
├── AddBook.tsx
├── EditBook.tsx
├── EditUser.tsx
├── Library.tsx
└── User.tsx
test
└── models
├── Book.ts <-- added by submission
├── Bookstore.ts
└── User.ts <-- modified by submission
package.json
tsconfig.json
We intend to create a `tar.gz` archive containing all the affected files (i.e. `test/models/Book.ts`, `src/models/User.ts`, `test/models/User.ts`) and keep the directory structure. In other words, we want to create an archive of the following structure:
src
└── models
    └── User.ts
test
└── models
    ├── Book.ts
    └── User.ts
In addition, the Sphere Engine Containers API follows a convention in which the archive is required to be in canonical form. The canonical form requires putting all submission files into a single directory named `workspace`, which should be placed in the root of the `tar.gz` archive.
Assuming the files to be submitted are arranged in the directory structure shown above and we are in the directory directly above the `src` and `test` directories, we can do it as follows:
tar -czf source.tar.gz --transform 's,^,workspace/,' ./src ./test
We should end up with the `source.tar.gz` archive that is ready to be submitted via the API method. The archive yields the following structure (note the added `workspace` directory in the root of the directory structure):
workspace
├── src
│   └── models
│       └── User.ts
└── test
    └── models
        ├── Book.ts
        └── User.ts
Note: The presented method shows how to create an archive manually. While integrating Sphere Engine Containers, you can use any programming method to automate this process. For example, you can use `PharData` in PHP or `tarfile` in Python.
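As an illustration, here is a minimal sketch using Python's standard `tarfile` module to build the canonical archive. The list of files mirrors the example above and is assumed to exist locally relative to the current directory:

```python
import tarfile

# Files changed by the submission (taken from the example project above).
changed_files = [
    "src/models/User.ts",
    "test/models/Book.ts",
    "test/models/User.ts",
]

with tarfile.open("source.tar.gz", "w:gz") as archive:
    for path in changed_files:
        # Store each file under the top-level "workspace" directory
        # required by the canonical form.
        archive.add(path, arcname=f"workspace/{path}")
```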
Submission results
Basic feedback
After the submission execution, a few fundamental parameters related to measurement and evaluation are returned.
Basic feedback parameters:
Name | Type | Description |
---|---|---|
status | integer | status code of the execution process (see status) |
execution time | float | time spent executing the submission |
score | float | for projects with an evaluation stage, it holds the submission score |
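For illustration only, here is a hypothetical sketch of inspecting these values after fetching the submission details; the field names and response shape are assumptions, so consult the Containers API reference for the authoritative format:

```python
# Hypothetical response fragment; field names are assumptions for illustration.
submission_result = {
    "status": 15,            # assumed status code (see the status reference)
    "execution_time": 1.27,  # time spent executing the submission, in seconds
    "score": 87.5,           # present only for projects with an evaluation stage
}

if submission_result.get("score") is not None:
    print(f"Score: {submission_result['score']}")
print(f"Executed in {submission_result['execution_time']} s "
      f"with status {submission_result['status']}")
```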
Resulting data streams
During the submission execution, the feedback data is produced. This data usually includes output written by the application, build and run-time errors, unit test reports, and even auxiliary files created by the application during operation.
A typical submission execution is performed in steps called stages, which are discussed more widely in a dedicated article. Each stage produces its own output and error data streams, which are saved and available after the execution completes. Additionally, one specialized optional stream is available for unit test reports. There is also a second optional stream, intended for a package with miscellaneous files that should be stored alongside the other execution results.
Here is a complete list of available streams produced during execution:
Name | Description |
---|---|
stage init output | Output data generated during initialization stage |
stage init error | Error data generated during initialization stage |
stage build output | Output data generated during build stage |
stage build error | Error data generated during build stage |
stage run output | Output data generated during execution stage |
stage run error | Error data generated during execution stage |
stage test output | Output data generated during evaluation stage |
stage test error | Error data generated during evaluation stage |
stage post output | Output data generated during finalization stage |
stage post error | Error data generated during finalization stage |
workspace init output | Output data generated during execution of workspace initialization script |
workspace init error | Error data generated during execution of workspace initialization script |
auxiliary data | A tar.gz package with miscellaneous files |
debug log | Additional information for debugging purposes for a Content Manager |
Keeping custom files after execution
As discussed in the submission results section, during the submission execution, some feedback is produced and stored. Most of it is strictly defined, yet there is one dedicated stream designed to keep auxiliary files selected by the Content Manager.
For each scenario, the project configuration allows defining which directories should be kept after the submission execution of that scenario. This is defined in the `root.scenarios.scenarioName.auxdata` key of the configuration JSON file.
The `auxdata` configuration field is a `key: value` collection that complies with the following rules:
- both `key` and `value` are of the type `string`,
- the `key` is a string that is valid as a filename in Linux operating systems,
- the `value` is an absolute path, or a relative path to some directory in the filesystem,
  - if a relative path is given, it is assumed to be relative to the `$SE_PATH_WORKSPACE` directory (usually `/home/user/workspace`).
After the submission execution of the scenario, the following steps are performed:
- a single compressed archive `auxdata.tar.gz` with all specified resources is created,
  - the archive is of the `gzip` type,
- for each `key`, a directory of the same name is created in the `auxdata.tar.gz` archive,
- for the `value` corresponding with the `key`, the content of the directory pointed to by `value` is copied into the directory pointed to by `key` in the `auxdata.tar.gz` archive,
- if the directory pointed to by `value` is missing, the whole `key: value` pair is omitted,
  - this is not considered an error,
  - an empty directory of the name `key` will not be created in the `auxdata.tar.gz` archive,
- the copying process does not follow symbolic links in the copied directories,
  - this means that files and directories pointed to by symbolic links will not be present in the `auxdata.tar.gz` archive.
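For illustration, the following sketch (not the actual Sphere Engine implementation) reproduces these rules locally with Python's standard library, assuming the `auxdata` mapping has already been read from the configuration file:

```python
import os
import tarfile

# Example auxdata mapping, as read from the configuration JSON file.
auxdata = {
    "custom_dir1": "d1",
    "custom_dir2": "d2",
    "custom_dir3": "/tmp/xyz",
}
workspace = os.environ.get("SE_PATH_WORKSPACE", "/home/user/workspace")

with tarfile.open("auxdata.tar.gz", "w:gz") as archive:
    for key, value in auxdata.items():
        # Relative paths are resolved against the workspace directory.
        source = value if os.path.isabs(value) else os.path.join(workspace, value)
        # Missing directories are silently skipped; no empty directory is created.
        if not os.path.isdir(source):
            continue
        # The directory content is stored under the key name; tarfile stores
        # symbolic links as links and does not follow them by default.
        archive.add(source, arcname=key)
```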
Example
Now, let's summarize this by analyzing a simple example.
config.json file:
{
(...)
"auxdata": {
"custom_dir1": "d1",
"custom_dir2": "d2",
"custom_dir3": "/tmp/xyz"
}
(...)
}
With this part of the configuration file, it is expected that after the submission execution of the scenario, three directories should be preserved:
- directory `d1` in `$SE_PATH_WORKSPACE`, so usually `/home/user/workspace/d1`,
- directory `d2` in `$SE_PATH_WORKSPACE`, so usually `/home/user/workspace/d2`,
- directory `/tmp/xyz`.
Based on this, a compressed archive `auxdata.tar.gz` with the content specified by the `auxdata` configuration field will be created. Assuming all specified directories exist, the structure of the archive is as follows:
.
├── custom_dir1
│   ├── ...
│   └── ...
├── custom_dir2
│   ├── ...
│   └── ...
└── custom_dir3
    ├── ...
    └── ...
For example, the content of the `auxdata.tar.gz` archive can look like this:
.
├── custom_dir1
│   ├── file.txt
│   └── .hidden_file
├── custom_dir2
│   ├── file.txt
│   └── subdirectory
│       ├── file.txt
│       └── .hidden_file
└── custom_dir3
    └── file.txt
Finally, the `auxdata.tar.gz` archive is stored along with the other results of the submission execution of the scenario. It can later be downloaded with the `GET /submissions/:id/:stream` Containers API method call with `stream=auxdata`.
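As a rough illustration, the archive could be fetched with an HTTP client similar to the sketch below; the endpoint URL format and the `access_token` parameter are assumptions here, so take the exact base URL and authentication details from your Containers API documentation and client configuration:

```python
import requests

# Assumed values for illustration; replace with your real endpoint and token.
API_ENDPOINT = "https://<your-endpoint>/api/v1"  # assumed base URL format
ACCESS_TOKEN = "<your-api-token>"
SUBMISSION_ID = 42  # hypothetical submission ID

# GET /submissions/:id/:stream with stream=auxdata
response = requests.get(
    f"{API_ENDPOINT}/submissions/{SUBMISSION_ID}/auxdata",
    params={"access_token": ACCESS_TOKEN},  # assumed authentication scheme
)
response.raise_for_status()

# Save the returned tar.gz package locally.
with open("auxdata.tar.gz", "wb") as f:
    f.write(response.content)
```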