IDE & Assessments

When you start a task, DynaLab.ai launches a sandboxed coding environment with a full-featured web IDE. Here's everything available to you during an assessment.

Code Editor

The editor is built on Monaco (the same engine as VS Code) and supports:

  • Syntax highlighting for all major languages
  • Multi-file editing with tabs
  • Find and replace (Ctrl/Cmd + F to find; Ctrl + H, or Cmd + Option + F on macOS, to replace)
  • Go to line (Ctrl/Cmd + G)
  • Keyboard shortcuts matching VS Code defaults
  • Auto-save — files are saved automatically as you type

File Explorer

The left panel shows the complete file tree of the task's codebase. You can:

  • Click files to open them in the editor
  • Expand and collapse directories
  • See file type icons for quick recognition

Scoring tip

Exploring relevant files before your first AI interaction is one of the strongest signals for the Context Engineering and Problem Decomposition dimensions. Take time to understand the codebase first.

Terminal

A full interactive terminal connected to the sandbox environment via WebSocket. Use it to:

  • Run the project's test suite
  • Execute shell commands
  • Install dependencies
  • View logs and debug output

You can resize the terminal by dragging the panel border. The terminal runs inside an isolated container: you have full shell access, but the environment is destroyed after the session ends.
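
The run-tests-then-decide loop can be sketched in shell. `run_tests` below is a placeholder for whatever test command the task's codebase actually provides (e.g. `npm test` or `pytest`); the exit-status branching is the point, not the specific runner.

```shell
# Placeholder for the project's real test command (e.g. `npm test`, `pytest`).
# It always succeeds here so the sketch is self-contained.
run_tests() {
    true
}

# Branch on the exit status before deciding whether to submit.
if run_tests; then
    echo "tests passed: safe to submit"
else
    echo "tests failed: keep iterating"
fi
```

Running the suite this way before every submission is cheap insurance, since the confirmation modal reports your pass rate but submission itself is final.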

AI Chat

The AI assistant panel lets you interact with an AI model during the assessment. Key features:

  • Streaming responses — see the AI's response as it's generated in real time
  • Tool calls — The AI can read files, write files, and run shell commands in your sandbox
  • Context attachment — Reference specific files or code snippets in your messages
  • Conversation history — Full chat history is preserved throughout the session

Scoring tip

How you interact with the AI is the core of what gets scored. Specific, well-contextualized prompts score higher than vague ones. Always verify AI output before accepting it.

AI Call Limits

  • Free plan: 5 AI calls per session (basic model)
  • Pro / Pro+: Unlimited AI calls (full model)
  • Team assessments: Unlimited AI calls

Diff Viewer

For code review tasks (tasks 201 through 205), the IDE includes a diff viewer showing the PR's changes. You can:

  • View file-by-file diffs in a split or unified view
  • Add inline review comments on specific lines
  • Submit a review verdict (approve, request changes, or comment)

Observable Fixtures

Some tasks include observable fixtures — simulated production data that provides context for the problem. Available fixture types:

  • Logs — Structured application logs showing errors, warnings, and request traces
  • Metrics — Performance dashboards with graphs (response times, error rates, throughput)
  • Traces — Distributed tracing data showing request flow through services
  • Network — HTTP request/response monitoring
  • Alerts — Monitoring alerts and incident notifications
  • CI Pipeline — Build and test pipeline status
  • Runbook — Incident response playbooks

Not all tasks include fixtures. Production triage tasks (101-105) rely heavily on logs, metrics, and traces. The task panel indicates which fixtures are available.
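
When a task exposes a log fixture, a useful first pass is to surface just the error-level entries before digging into metrics or traces. The JSON-lines schema below is an assumption for illustration, not DynaLab's actual log format:

```shell
# Write a tiny stand-in log fixture (the schema is hypothetical).
cat > /tmp/app.log <<'EOF'
{"level":"info","msg":"request ok","status":200}
{"level":"error","msg":"db timeout","status":500}
{"level":"warn","msg":"slow query","status":200}
EOF

# Surface only the error-level entries.
grep '"level":"error"' /tmp/app.log
```

If `jq` is available in the sandbox, `jq 'select(.level == "error")' /tmp/app.log` does the same filtering with real JSON parsing instead of string matching.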

Task Panel

The right-side task panel displays:

  • Task description and requirements
  • Expected behavior and acceptance criteria
  • Hints (unlocked progressively as time passes)
  • Expected test results (how many tests should pass)

Timer & Session Management

Each task has a time limit (typically 30 minutes) displayed at the top of the IDE. Key behaviors:

  • The timer counts down from the task's time limit
  • When time runs out, the session is automatically submitted
  • You can submit early at any time via the Submit button
  • Sessions can be paused and resumed (within the overall time limit)
  • Idle sessions time out after 90 minutes of inactivity

Submitting Your Work

When you're ready to submit:

  1. Click the Submit button in the top-right corner
  2. A confirmation modal shows your current test pass rate
  3. Confirm submission to end the session
  4. After submission, you'll be directed to the debrief — 3-5 comprehension questions about your work
  5. Your scorecard is generated (typically 30-60 seconds)

Submission is final

Once submitted, you cannot go back and make changes. Make sure you've run tests and are satisfied with your solution before submitting.