Effective project configuration
The vast flexibility of the Sphere Engine Containers module allows for numerous approaches to configuring projects. Each approach can yield a similar effect, but they are not equally efficient.
As a rule of thumb, it's worth keeping in mind the following:
- it is better to have all dependencies pre-downloaded than to download them for every submission
- it is better to keep files as a part of the project if they don't change rather than submit them separately every time
- submissions should contain only new or modified files prepared by the end-users
If you stick to the above rules, your end-users' submissions will be executed fast, improving their user experience and keeping your budget in line.
Available project templates
The Sphere Engine Containers module supports various base projects for different purposes. They can be grouped into generic categories. Each category reflects the specificity of a given type of project and has a distinctive user layout in the Workspace.
The following main types are currently available for Sphere Engine Containers projects:
Web applications
Web applications are one of the most popular software products. Although they use a wide variety of technologies (e.g. Python, PHP, NodeJS), back-end frameworks (e.g. Django, Symfony), and front-end frameworks (e.g. React, Angular, Vue.js), they have a lot in common, especially when it comes to running or testing.
When working in the Workspace, web application projects allow displaying a live view of the web application being created.

Desktop applications
Desktop applications may not be as popular as they were a few years ago; however, in some cases they are still the preferable solution. Launching such an application usually opens a graphical user interface window.
The following use cases are still very popular as desktop apps:
- scientific packages with charts and plots (e.g. PyPlot, Octave, R)
- game frameworks (e.g. PyGame)

Mobile applications
Nowadays, mobile applications are very popular, and the interest in related technologies continues to grow. To be able to monitor the end-user experience, working with a mobile app requires emulating a mobile device environment.

Console applications
This is the most generic project type. Anything that communicates with the external world using data streams (like stdout or stderr) or files falls into this category.
The following projects often appear as console applications:
- C/C++ multi-file projects built with Makefile
- Python or PHP scripts
- Java projects defined by Maven's pom.xml file with jUnit unit tests
- machine learning projects powered by the TensorFlow framework
- .NET framework C# projects
- projects using MySQL relational database operations

Tool applications
This is a broad category of single- or multi-purpose tools, such as Jupyter, Git, or Ansible.

Note: At this point, the specificity of the project type (especially the user interface layout) has no effect on the end-user because the Workspace functionalities are limited to Content Managers. However, in upcoming releases the Workspace will become interchangeable with the currently available RESTful API. This is why selecting the proper project type is recommended already, to get the most out of it in the future.
Archive with API submission files
A typical submission to the Sphere Engine Containers API is a part of a larger project. Such submissions are packages containing a number of files arranged in a directory tree. However, the submission doesn't need to contain all the project files, which would be wasteful. In the ideal situation, the submission delivers only new or modified files.
Consider an example project structure:
src
  models
    Bookstore.ts
    User.ts
    Book.ts
  views
    AddBook.tsx
    EditBook.tsx
    EditUser.tsx
    Library.tsx
    User.tsx
test
  models
    Bookstore.ts
    User.ts
package.json
tsconfig.json
The above project contains many files in different directories, and it is only a small sample to aid our discussion; actual projects are much more complex. A submission usually affects only a small part of the project.
Let's assume that we would like to:
- add a new test/models/Book.ts file
- edit the src/models/User.ts file
- edit the test/models/User.ts file
Our goal is to have the following project:
src
  models
    Book.ts
    Bookstore.ts
    User.ts       <-- modified by submission
  views
    AddBook.tsx
    EditBook.tsx
    EditUser.tsx
    Library.tsx
    User.tsx
test
  models
    Book.ts       <-- added by submission
    Bookstore.ts
    User.ts       <-- modified by submission
package.json
tsconfig.json
We intend to create a tar.gz archive containing all the affected files (i.e. src/models/User.ts, test/models/Book.ts, and test/models/User.ts) while keeping the directory structure. In other words, we want to create an archive of the following structure:
src
  models
    User.ts
test
  models
    Book.ts
    User.ts
In addition, the Sphere Engine Containers API follows a convention that requires the archive to be in canonical form. The canonical form requires putting all submission files into a single directory named workspace, placed in the root of the tar.gz archive. Assuming we are in the directory directly above the src and test directories, we can create such an archive as follows:
tar -czf source.tar.gz --transform 's,^,workspace/,' ./src ./test
We should end up with a source.tar.gz archive that is ready to be submitted via the API. The archive yields the following structure (note the workspace directory added at the root):
workspace
  src
    models
      User.ts
  test
    models
      Book.ts
      User.ts
Note: The presented method shows how to create an archive manually. While integrating Sphere Engine Containers, you can use any programming method to automate this process, for example, PharData in PHP or tarfile in Python.
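As an illustration, a minimal sketch of this automation in Python using the standard-library tarfile module; the file paths are taken from the example project above, and the placeholder file contents exist only to make the snippet self-contained:

```python
import os
import tarfile

# Files the submission adds or modifies (paths from the example project).
changed_files = [
    "src/models/User.ts",
    "test/models/Book.ts",
    "test/models/User.ts",
]

# For this self-contained demo, create placeholder files on disk; in a
# real integration these files already exist in the project tree.
for path in changed_files:
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        f.write("// placeholder\n")

# Build the canonical archive: every file is placed under a single
# top-level "workspace/" directory inside the tar.gz.
with tarfile.open("source.tar.gz", "w:gz") as archive:
    for path in changed_files:
        archive.add(path, arcname="workspace/" + path)

# List the archive members to verify the canonical structure.
with tarfile.open("source.tar.gz", "r:gz") as archive:
    print(sorted(archive.getnames()))
```

The arcname argument plays the same role as tar's --transform option in the command above: it prefixes each member path with workspace/ without moving any files on disk.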
Submission results
Basic feedback
After a submission is executed, a couple of fundamental parameters are returned, related to measurement and evaluation.
Basic feedback parameters:
Name | Type | Description
---|---|---
status | integer | status code of the execution process (see: status)
execution time | float | time spent executing the submission
memory consumption | integer | random-access memory (RAM) consumed during the submission execution
signal | integer | exit code returned by the executed application
score | float | for projects with an evaluation stage, it holds the submission score
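To show how these parameters might be consumed on the client side, here is an illustrative Python sketch; the field names and the sample values below are hypothetical, not the literal Sphere Engine Containers API response format:

```python
# Hypothetical basic-feedback payload; field names are assumptions made
# for this sketch, not the documented API response shape.
sample_result = {
    "status": 15,      # status code of the execution process
    "time": 0.42,      # execution time in seconds
    "memory": 65536,   # RAM consumed (unit is an assumption)
    "signal": 0,       # exit code returned by the executed application
    "score": 87.5,     # present only for projects with an evaluation stage
}

def summarize(result):
    """Render the basic feedback parameters as a one-line summary."""
    return (f"status={result['status']} "
            f"time={result['time']}s "
            f"memory={result['memory']} "
            f"signal={result['signal']} "
            f"score={result.get('score')}")

print(summarize(sample_result))
```

Note that score is read with .get() because, per the table above, it is only present for projects with an evaluation stage.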
Resulting data streams
During the submission execution, feedback data is produced. This data usually includes output written by the application, compilation and run-time errors, unit test reports, and even auxiliary files created by the application during operation.
A typical submission execution is performed in steps called stages, which are more widely discussed in [LINK]. Each stage produces its own output and error data streams, which are saved and available after the execution completes. Additionally, one specialized optional stream is available for unit test reports. There is also a second optional stream, intended for a package with all other miscellaneous files that should be stored among the execution results.
Here is a complete list of available streams produced during execution:
Name | Description
---|---
stage init output | Output data generated during the initialization stage
stage init error | Error data generated during the initialization stage
stage compilation output | Output data generated during the compilation stage
stage compilation error | Error data generated during the compilation stage
stage execution output | Output data generated during the execution stage
stage execution error | Error data generated during the execution stage
stage judging output | Output data generated during the judging stage
stage judging error | Error data generated during the judging stage
stage finalization output | Output data generated during the finalization stage
stage finalization error | Error data generated during the finalization stage
auxiliary data | A tar.gz package with miscellaneous files
debug log | Additional information for debugging purposes for a Content Manager