Quick Start Guide
Get up and running with DynaLab.ai in under 5 minutes. This guide walks you through account creation, onboarding, and taking your first assessment.
Create Your Account
DynaLab.ai supports three authentication methods. Choose whichever is most convenient for you:
- GitHub OAuth — Sign in with your GitHub account. Fastest option if you already have one.
- Google OAuth — Sign in with your Google account.
- Email OTP — Enter your email address and verify with a one-time code. No third-party account needed.
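The email OTP option follows the standard one-time-code pattern: the service issues a short-lived numeric code and checks the submitted value in constant time. A minimal generic sketch of that pattern (this is not DynaLab.ai's implementation; the function names are illustrative):

```python
import secrets

def generate_otp(digits: int = 6) -> str:
    """Generate a zero-padded numeric one-time code."""
    return f"{secrets.randbelow(10**digits):0{digits}d}"

def verify_otp(submitted: str, issued: str) -> bool:
    """Compare codes in constant time to avoid timing side channels."""
    return secrets.compare_digest(submitted, issued)
```

In practice the issued code would also carry an expiry and a retry limit; this sketch shows only the generation and comparison steps.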
Onboarding Flow
After creating your account, you'll go through a brief onboarding to personalize your experience:
- Welcome — A quick introduction to what DynaLab.ai measures and how it works.
- Role Questions — Optional questions about your experience level and focus areas. Helps us recommend relevant tasks.
- Scoring Intro — Brief explanation of the 8 scoring dimensions so you know what's being measured.
- Diagnostic Assessment — A short 4-minute, 6-8 question assessment to establish your baseline skill level. This personalizes your learning path recommendations.
Note: the diagnostic assessment is optional. You can skip it, but completing it gives DynaLab.ai a baseline for personalizing your learning path recommendations.
Your First Assessment
Choosing a Task
Browse the task library from the dashboard or the Tasks page. Each task shows:
- Difficulty level (beginner, intermediate, advanced)
- Category (bug fix, code review, debugging, performance, etc.)
- Language and framework
- Time limit (typically 30 minutes)
- Whether it's free or requires a paid plan
18 tasks are free for all users. Start with a beginner-level task like Task 001: Fix Intermittent 500 Errors to get comfortable with the IDE.
Taking the Assessment
- Start the task — Click "Start" on a task to launch the IDE. A sandboxed coding environment loads with the task's codebase.
- Read the brief — The task panel on the right describes the problem, expected behavior, and any hints.
- Explore before acting — Browse files, read logs, understand the codebase before jumping to a fix. This is one of the most heavily weighted scoring signals.
- Use the AI chat — Ask the AI assistant questions, request code changes, and iterate. How you interact with the AI is what gets scored.
- Run tests — Use the terminal to run the project's test suite and verify your changes work.
- Submit — Click the Submit button when you're satisfied with your work (or when time runs out).
Understanding Your Scorecard
After submission, your session is scored across 8 calibrated dimensions. The scorecard typically takes 30-60 seconds to generate and includes:
- Overall score (0-100) and letter grade (S/A/B/C/D/F)
- Dimension scores — How you performed on each of the 8 scoring dimensions
- Strengths and weaknesses — Key areas where you excelled or need improvement
- Key moments — Specific timestamped events from your session (e.g., catching a hallucination, running tests after AI changes)
- Session replay — A full timeline of your session that you can replay
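To make the relationship between the overall score and the letter grade concrete, here is a sketch of a threshold-based mapping. The cutoffs below are illustrative assumptions only; DynaLab.ai's actual grade boundaries are not documented here:

```python
def letter_grade(score: float) -> str:
    """Map an overall score (0-100) to a letter grade.

    These cutoffs are illustrative assumptions, not DynaLab.ai's
    published thresholds.
    """
    for cutoff, grade in [(95, "S"), (85, "A"), (75, "B"), (65, "C"), (55, "D")]:
        if score >= cutoff:
            return grade
    return "F"
```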
See the Scoring System docs for a detailed breakdown of how each dimension is calculated.
Next Steps
- IDE & Assessments — Deep dive into all the tools available during an assessment
- Scoring System — Understand exactly what's being measured and how to improve
- Learning Features — Explore Academy modules, Labs drills, and Arena challenges
- Task Categories — Browse all 23 tasks by category and difficulty