Coding Exercise
A Coding Exercise is a timed technical challenge designed to assess a candidate’s programming skills in a real-world scenario.
Candidates have up to 25 minutes to complete the task using a simple, browser-based IDE that allows them to write, edit, and run their code directly — no setup required.
This format is ideal for evaluating problem-solving ability, code quality, and familiarity with core language features.
Supported languages include:
- C
- C++ (Clang or GCC)
- C#
- Go
- Java
- JavaScript
- PHP
- Python (v2 or v3)
- Ruby
- Rust
- Swift
- TypeScript
If you need a language that isn't listed here, check with our team.
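For a sense of scope, here is a hypothetical task, shown in Python, that fits comfortably within the 25-minute window. The task itself is illustrative and not drawn from our question bank:

```python
from collections import Counter
import re

def top_words(text: str, n: int = 3) -> list[tuple[str, int]]:
    """Return the n most frequent words in text, case-insensitively."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(n)

# Candidates can run code like this directly in the browser-based IDE:
sample = "the quick brown fox jumps over the lazy dog the fox"
print(top_words(sample, 2))  # [('the', 3), ('fox', 2)]
```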
Custom Exercise
A Custom Exercise allows you to create tailored evaluation tasks by defining your own instructions.
Candidates respond either by typing into a provided text area or by recording audio, depending on the format you choose.
You can also set a time limit for completion to simulate real-world constraints and maintain consistency across interviews.
Examples of what you can assess with Custom Exercises:
- Script reading (audio): Evaluate pronunciation, tone, and fluency
- Writing an email (typed): Assess clarity, professionalism, and grammar
- Answering a chat message (typed): Test responsiveness, empathy, and communication style
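For illustration, the three choices behind any custom exercise are the instructions, the response format, and the time limit. A minimal sketch of that structure, using hypothetical field names rather than our actual configuration format:

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class CustomExercise:
    """Illustrative model of a custom exercise; field names are hypothetical."""
    instructions: str                           # the task shown to the candidate
    response_format: Literal["text", "audio"]   # typed answer or audio recording
    time_limit_minutes: int                     # completion window

# Hypothetical example: a typed email-writing task with a 10-minute limit.
email_task = CustomExercise(
    instructions="Write a follow-up email to a client after a missed deadline.",
    response_format="text",
    time_limit_minutes=10,
)
print(email_task)
```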
Data Annotation Exercise
A Data Annotation Exercise is designed to evaluate a candidate’s ability to assess and refine AI-generated outputs.
You start by defining a subject and a context, and our AI will automatically generate a relevant annotation task.
The candidate’s role is to evaluate the AI’s responses based on your instructions—identifying errors, suggesting improvements, or validating correctness.
This format is especially valuable for vetting domain experts who will contribute to training and improving AI models through high-quality feedback.
How it works:
- Define a subject and context
- AI generates a tailored annotation task
- Candidate evaluates AI-generated answers
- Ideal for building expert teams for AI training and validation
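To make the candidate's side of this concrete, here is a hypothetical annotation item; the structure below is illustrative only, not our internal data format. The AI produces an answer, and the candidate records a verdict, the errors found, and a suggested fix:

```python
# Hypothetical annotation item generated from a subject and context.
annotation_task = {
    "subject": "Organic chemistry",
    "context": "Reviewing AI answers to undergraduate exam questions",
    "ai_answer": "Benzene is an alkane because it contains only carbon and hydrogen.",
}

# What the candidate might submit after evaluating the AI's answer.
candidate_review = {
    "verdict": "incorrect",
    "errors": ["Benzene is an aromatic hydrocarbon, not an alkane; alkanes are saturated."],
    "suggested_fix": "Benzene is an aromatic hydrocarbon, not an alkane.",
}
print(candidate_review["verdict"])  # incorrect
```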