IDE & Assessments
When you start a task, DynaLab.ai launches a sandboxed coding environment with a full-featured web IDE. Here's everything available to you during an assessment.
Code Editor
The editor is built on Monaco (the same engine as VS Code) and supports:
- Syntax highlighting for all major languages
- Multi-file editing with tabs
- Find and replace (Ctrl/Cmd + F)
- Go to line (Ctrl/Cmd + G)
- Keyboard shortcuts matching VS Code defaults
- Auto-save — files are saved automatically as you type
File Explorer
The left panel shows the complete file tree of the task's codebase. You can:
- Click files to open them in the editor
- Expand and collapse directories
- See file type icons for quick recognition
Terminal
A full interactive terminal connected to the sandbox environment via WebSocket. Use it to:
- Run the project's test suite
- Execute shell commands
- Install dependencies
- View logs and debug output
You can resize the terminal by dragging the panel border.
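The exact commands depend on the task's stack, so check the task panel first. As a sketch, the opening minutes of a hypothetical Python-based task might look like this (the file names and test runner are illustrative assumptions, not guaranteed to match your task):

```shell
# Hypothetical terminal workflow for a Python task (adjust to your task's stack).
pip install -r requirements.txt           # install the project's dependencies
python -m pytest -q                       # run the test suite to see the starting state
python -m pytest -q 2>&1 | tee test.log   # re-run after changes, keeping a copy of the output
grep -i "fail" test.log                   # scan the captured output for failing tests
```

Running the suite before touching any code gives you a baseline, so you can tell which failures you introduced versus which ones the task started with.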
AI Chat
The AI assistant panel lets you interact with an AI model during the assessment. Key features:
- Streaming responses — See the AI's response as it's generated in real time
- Tool calls — The AI can read files, write files, and run shell commands in your sandbox
- Context attachment — Reference specific files or code snippets in your messages
- Conversation history — Full chat history is preserved throughout the session
AI Call Limits
- Free plan: 5 AI calls per session (basic model)
- Pro / Pro+: Unlimited AI calls (full model)
- Team assessments: Unlimited AI calls
Diff Viewer
For code review tasks (task-201 through task-205), the IDE includes a diff viewer showing the PR's changes. You can:
- View file-by-file diffs in a split or unified view
- Add inline review comments on specific lines
- Submit a review verdict (approve, request changes, or comment)
Observable Fixtures
Some tasks include observable fixtures — simulated production data that provides context for the problem. Available fixture types:
- Logs — Structured application logs showing errors, warnings, and request traces
- Metrics — Performance dashboards with graphs (response times, error rates, throughput)
- Traces — Distributed tracing data showing request flow through services
- Network — HTTP request/response monitoring
- Alerts — Monitoring alerts and incident notifications
- CI Pipeline — Build and test pipeline status
- Runbook — Incident response playbooks
Task Panel
The right-side task panel displays:
- Task description and requirements
- Expected behavior and acceptance criteria
- Hints (unlocked progressively as time passes)
- Expected test results (how many tests should pass)
Timer & Session Management
Each task has a time limit (typically 30 minutes) displayed at the top of the IDE. Key behaviors:
- The timer counts down from the task's time limit
- When time runs out, the session is automatically submitted
- You can submit early at any time via the Submit button
- Sessions can be paused and resumed (within the overall time limit)
- Idle sessions time out after 90 minutes of inactivity
Submitting Your Work
When you're ready to submit:
- Click the Submit button in the top-right corner
- A confirmation modal shows your current test pass rate
- Confirm submission to end the session
- After submission, you'll be directed to the debrief — 3-5 comprehension questions about your work
- Your scorecard is generated (typically 30-60 seconds)