Open-Sourcing DrikLabel: AI-Assisted Video Annotation
Why we're giving away our annotation tool — and how active learning propagation makes video labeling 10x faster.
Why Open Source an Annotation Tool?
Every computer vision team needs an annotation tool. Most of the existing options are expensive, clunky, or both. We built DrikLabel (Chitrā) for ourselves and realized it could help the entire community.
What Makes DrikLabel Different
Active Learning Propagation
The core innovation is simple: when you correct an annotation on frame 100, that correction automatically propagates to frames 101–500. Not just copying — the system re-tracks the object using the corrected bounding box as a seed.
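To make the idea concrete, here is a minimal sketch of forward propagation. The names and signatures are illustrative only, not DrikLabel's actual API; the real tool re-seeds ByteTrack, whereas this sketch accepts any tracking callable.

```python
# Hypothetical sketch of forward correction propagation.
# Names and signatures are illustrative, not DrikLabel's actual API.
from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class Box:
    x: float
    y: float
    w: float
    h: float

# A "tracker" here is any callable that, given the previous box and the next
# frame index, predicts the box on that frame (or None if the object is lost).
TrackFn = Callable[[Box, int], Optional[Box]]

def propagate_forward(
    annotations: Dict[int, Box],   # frame index -> current box for one track
    corrected_frame: int,
    corrected_box: Box,
    track: TrackFn,
    max_frames: int = 400,
) -> Dict[int, Box]:
    """Re-track forward from a corrected box, overwriting stale annotations."""
    annotations[corrected_frame] = corrected_box
    seed = corrected_box
    for frame in range(corrected_frame + 1, corrected_frame + 1 + max_frames):
        predicted = track(seed, frame)
        if predicted is None:          # object lost: stop propagating
            break
        annotations[frame] = predicted
        seed = predicted               # chain predictions frame to frame
    return annotations
```

With `corrected_frame=100` and the default window, this walks frames 101–500, exactly the example above.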
Multi-Direction Re-tracking
Most trackers only go forward. DrikLabel can re-track objects backwards too. Correct a label on frame 200, and it will update frames 1–199 as well.
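The backward pass mirrors the forward one, stepping through earlier frames with the corrected box as the seed. This sketch reuses the `Box` and `TrackFn` types from the previous example and is, again, illustrative rather than the shipped implementation.

```python
# Illustrative only: backward propagation is the forward pass with the
# frame iteration reversed.
def propagate_backward(
    annotations: Dict[int, Box],
    corrected_frame: int,
    corrected_box: Box,
    track: TrackFn,
    max_frames: int = 400,
) -> Dict[int, Box]:
    """Re-track backward from a corrected box toward frame 0."""
    annotations[corrected_frame] = corrected_box
    seed = corrected_box
    stop = max(corrected_frame - 1 - max_frames, -1)
    for frame in range(corrected_frame - 1, stop, -1):
        predicted = track(seed, frame)
        if predicted is None:          # object lost: stop propagating
            break
        annotations[frame] = predicted
        seed = predicted
    return annotations
```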
Classification Updates
Change a vehicle’s class from “car” to “SUV” on any frame, and the classification updates across the entire track — past and future.
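Because every annotation carries a track ID, a relabel is just a pass over the track. A rough sketch, with hypothetical data shapes:

```python
# Illustrative sketch: a class change on one frame is applied to every frame
# that shares the same track ID, both before and after the edit point.
from typing import Dict

def relabel_track(
    labels: Dict[int, Dict[int, str]],  # frame index -> {track_id: class name}
    track_id: int,
    new_class: str,
) -> None:
    for frame_labels in labels.values():
        if track_id in frame_labels:
            frame_labels[track_id] = new_class

# e.g. relabel_track(labels, track_id=7, new_class="SUV")
```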
The Tech Stack
- Frontend: React + Zustand for state management
- Backend: Python annotation server with YOLO/SAM pre-annotation
- Tracking: ByteTrack with custom re-identification module
- Export: COCO, YOLO, Pascal VOC formats
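For a sense of what the export step produces, here is a minimal COCO-style dump of a single track. The field coverage and file naming are assumptions for illustration; the real exporter may emit more metadata.

```python
# Minimal COCO-style export for one track; a sketch, not DrikLabel's exporter.
import json
from typing import Dict, Tuple

def track_to_coco(
    boxes: Dict[int, Tuple[float, float, float, float]],  # frame -> (x, y, w, h)
    category_id: int,
    category_name: str,
    image_size: Tuple[int, int] = (1920, 1080),
) -> dict:
    images, annotations = [], []
    for ann_id, (frame, (x, y, w, h)) in enumerate(sorted(boxes.items()), start=1):
        images.append({
            "id": frame,
            "file_name": f"frame_{frame:06d}.jpg",  # hypothetical naming scheme
            "width": image_size[0],
            "height": image_size[1],
        })
        annotations.append({
            "id": ann_id,
            "image_id": frame,
            "category_id": category_id,
            "bbox": [x, y, w, h],        # COCO uses [x, y, width, height]
            "area": w * h,
            "iscrowd": 0,
        })
    return {
        "images": images,
        "annotations": annotations,
        "categories": [{"id": category_id, "name": category_name}],
    }

print(json.dumps(track_to_coco({100: (10.0, 20.0, 50.0, 80.0)}, 1, "SUV"), indent=2))
```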
Get Started
Check out DrikLabel on our Open Source page or head directly to the GitHub repo.