# Introduction to Online Judges

In this chapter, we introduce the general idea behind Online Judge systems (such as Sphere Engine Problems and SPOJ). We compare the older method, in which a tutor verifies solutions manually, with the modern automatic approach, and point out the advantages of automatic judging.

## Manual verification

The problem setter needs to try the code manually, i.e., run it by hand on some simple input data. In such a scenario, even simple programs require at least a minimal human-readable interface. Consider the following toy example along with a solution in the C language:

**The integer power problem**

Write a program which takes two numbers `a` and `b` and returns the value of `a^b` (that is, `a` raised to the power of `b`).
```c
#include <stdio.h>

int atob(int a, int b)
{
    int i, result = 1;
    for (i = 0; i < b; i++)
    {
        result *= a;
    }
    return result;
}

int main()
{
    int a, b;
    printf("Give a: ");
    scanf("%d", &a);
    printf("Give b: ");
    scanf("%d", &b);
    printf("a to the power of b = %d", atob(a, b));
    return 0;
}
```

After compiling and running this piece of code, the problem setter is ready to use the program.

Nevertheless, it is clear that manual verification requires the author's continuous attention. For each input, the author has to remember the correct output. Moreover, they have to do this for all the problems that they manage: the author is forced to remember how to verify the correctness of every one of them.

Another issue is accuracy: to verify solutions reliably, the author needs to cover all boundary and otherwise problematic input data. Even this simple problem raises some interesting questions. For example, should `a` to the power of `0` equal `1`?

Yet another question is how to deal with time and memory complexity issues. Our team members' academic experience shows that automatic judge solutions save considerable time and significantly improve the quality of verification. Both tutors and students benefit from it.

## Automatic verification

Moving on to online judging, we will illustrate how the approach needs to change. The compiling, running, and testing process is conducted by the machine. Therefore, the problem setter needs to specify precisely how the program should communicate; we will call this part the input/output specification.

Let's suppose that we want to prepare the `Integer power` problem for automatic judging and that the presented source code fulfills the desired input/output specification (usually the direction is the opposite: first we specify the input and output structure, then we implement the solution). Along with the problem definition, we need to state that the program should expect two integer numbers on the input and must print its result in the following format:

``Give a: Give b: a to the power of b = c``

Clearly, this output specification is very unnatural. As you can see, we don't need a human-readable interface at all. In fact, it is rather a hindrance, which can even lead to rejecting correct solutions because of complicated or non-intuitive input or output requirements.
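To see how awkward this is, consider what a judge doing raw text comparison would have to expect from the interactive version for the inputs 2 and 10 (the values are our own example). Since the program prints the prompts and the result with no newlines between them, the entire expected output is the single line:

```
Give a: Give b: a to the power of b = 1024
```

A solution that adds so much as a newline after a prompt would be rejected, even though it computes the power correctly.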

How simple can the previous `Integer power` problem become if we drop all the input/output formatting? We don't want any "Give me the value" prompts on the input side or "Result is" labels in the output. It is therefore sufficient to keep the source code minimal:

```c
#include <stdio.h>

int atob(int a, int b)
{
    int i, result = 1;
    for (i = 0; i < b; i++)
    {
        result *= a;
    }
    return result;
}

int main()
{
    int a, b;
    scanf("%d %d", &a, &b);
    printf("%d", atob(a, b));
    return 0;
}
```

Programming problems for Online Judge systems (like Sphere Engine Problems) are always defined in this simple way. A precise input specification is delivered to the user so that they can prepare a solution which reads this data format. The user's solution should then produce an answer; the format of the answer is also specified, so automatic verification is possible.
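With such a specification, a single test case reduces to two tiny files (the concrete values are our own example, not part of the original problem statement):

```
input file:         2 10
model output file:  1024
```

The problem setter typically prepares several such pairs, including the boundary cases discussed earlier.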

The automated verification of the user's solution is executed against input data (potentially many data sets) prepared by the problem setter. After execution, the output data produced by the user's solution is compared with the model output data prepared by the problem setter. The most important point is that the problem setter prepares the data used for verification only once, when they create the programming problem.