Depth-Anything Video Analysis
Overview
I fine-tuned monocular depth estimation networks for autonomous flight applications at KEF Robotics. While I cannot share the fine-tuned code, I created a Hugging Face Space that lets users test the base model (released by TikTok) on their own videos.
Links
- HuggingFace Spaces Demo: huggingface.co/spaces/JohanDL/Depth-Anything-Video
Technical Details
- Framework: PyTorch
- Model: Depth-Anything monocular depth estimation
- Application: Autonomous UAV flight
- Company: KEF Robotics
Key Features
- Monocular depth prediction from single camera images
- Fine-tuned for aerial/UAV perspectives
- Depth estimation for flight applications
- Interactive demo on HuggingFace Spaces
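To illustrate how the base model can be queried, the sketch below runs a public Depth-Anything checkpoint on a single video frame via the Hugging Face `transformers` depth-estimation pipeline. This is a hedged example, not the fine-tuned KEF Robotics code; the model id, file names, and helper function are my own assumptions for illustration.

```python
import numpy as np

def depth_to_uint8(depth: np.ndarray) -> np.ndarray:
    """Normalize a raw depth map to 0-255 for visualization."""
    d = depth.astype(np.float32)
    d = d - d.min()
    rng = d.max()
    if rng > 0:
        d = d / rng
    return (d * 255).astype(np.uint8)

def estimate_frame_depth(frame_path: str) -> np.ndarray:
    """Run the base Depth-Anything model on one frame.

    Illustrative only: the checkpoint id is an assumed public
    Depth-Anything release, not the KEF fine-tuned weights.
    """
    from PIL import Image
    from transformers import pipeline

    pipe = pipeline("depth-estimation", model="LiheYoung/depth-anything-small-hf")
    frame = Image.open(frame_path)            # hypothetical input frame
    depth = np.array(pipe(frame)["depth"])    # PIL depth image -> array
    return depth_to_uint8(depth)
```

Looping this over frames extracted from a video (e.g. with OpenCV) reproduces the per-frame behavior the demo Space exposes.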
Awards
- 2024: Awarded Community Grant from Hugging Face to demonstrate Depth Anything results on videos
Impact
This work supports safer autonomous flight by providing depth perception from standard monocular camera feeds, which is crucial for obstacle avoidance and navigation.
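For intuition on how a depth map feeds obstacle avoidance, here is a minimal sketch of a clearance check: given a metric depth map, it tests whether the nearest point in a central region of interest is beyond a safety threshold. The function, thresholds, and ROI logic are my own illustrative assumptions, not KEF Robotics' actual flight stack.

```python
import numpy as np

def clear_to_fly(depth_m: np.ndarray, min_clearance_m: float = 5.0,
                 roi_frac: float = 0.5) -> bool:
    """Return True if the nearest point in the central ROI of a metric
    depth map is farther than the clearance threshold.

    Illustrative assumption: a simple min-over-ROI rule, not the
    actual obstacle-avoidance pipeline.
    """
    h, w = depth_m.shape
    dh, dw = int(h * roi_frac / 2), int(w * roi_frac / 2)
    roi = depth_m[h // 2 - dh : h // 2 + dh, w // 2 - dw : w // 2 + dw]
    return float(roi.min()) > min_clearance_m

# Example: a scene 20 m away everywhere is safe; a 3 m obstacle
# placed dead ahead should trigger a stop.
scene = np.full((240, 320), 20.0)
assert clear_to_fly(scene)            # open scene: safe to proceed
scene[100:140, 140:180] = 3.0         # synthetic obstacle in the ROI
assert not clear_to_fly(scene)        # too close: not safe
```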
Organization: KEF Robotics (2023-2024)
