YOLOv5
Technology

Open-source real-time object detection model; confirmed as target-recognition layer in Russia's V2U combat drone.

Last refreshed: 18 April 2026

Key Question

How does a freely downloadable computer vision model end up inside a Russian autonomous combat drone?

Common Questions
What is YOLOv5 and how does it work?
YOLOv5 is an open-source real-time object detection model that processes entire images in a single neural-network pass. It achieves millisecond-scale inference on modern GPUs and runs on edge devices such as the Nvidia Jetson Orin. (Source: Ultralytics)
Is YOLOv5 being used in military drones?
Yes. CSIS confirmed in April 2026 that Russia's V2U autonomous loitering munition uses YOLOv5 as its target-recognition layer on a Jetson Orin module, identifying targets without operator input. (Source: CSIS)
Can open-source AI be used to build autonomous weapons?
Yes. Models like YOLOv5 can be fine-tuned on battlefield imagery to recognise target classes such as armoured vehicles. The V2U drone's use of YOLOv5 demonstrates that off-the-shelf open-source AI now underpins operational autonomous-strike systems.
How is YOLOv5 different from newer YOLO versions?
YOLOv5 uses an anchor-based detection head, which requires non-maximum suppression (NMS) as a post-processing step. Newer versions (v8, v9, v10) offer improved accuracy and anchor-free heads, but YOLOv5 remains widely deployed thanks to its maturity, ecosystem support, and Jetson optimisation. (Source: Ultralytics)
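The NMS step that an anchor-based head requires can be illustrated in a few lines of plain Python. This is a simplified, single-class sketch for intuition only; the production implementation in YOLOv5 is vectorised, class-aware, and runs on the GPU.

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) form."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, iou_thresh=0.45):
    """Greedy NMS: visit boxes in descending score order and keep a box
    only if it does not overlap an already-kept box above the threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep
```

Because anchor-based heads emit many overlapping candidate boxes per object, this de-duplication pass is mandatory before results can be acted on, which is one reason anchor-free successors simplify deployment.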

Background

YOLOv5 ("You Only Look Once" version 5) is an open-source object detection framework developed by Ultralytics and released in 2020. Built on a CSPDarknet53 backbone with a PANet neck and anchor-based detection head, it processes entire images in a single neural network pass, achieving real-time inference speeds while maintaining high accuracy. It exports to TensorRT, ONNX, TFLite, and CoreML, making it trivially portable onto embedded compute modules including the Nvidia Jetson family. CSIS analysis published in April 2026 confirmed that Russia's V2U autonomous loitering munition uses YOLOv5 as its target-recognition layer, running on a Jetson Orin module.
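Part of what makes the model easy to port to embedded targets is its simple preprocessing contract: input images are letterboxed, i.e. scaled to fit a fixed input size with aspect ratio preserved, then padded so each dimension is a multiple of the network stride. A minimal sketch of that padding arithmetic, assuming the default 640-pixel input and stride 32:

```python
def letterbox_dims(h, w, new_size=640, stride=32):
    """Compute the resized dimensions and padding for YOLOv5-style
    letterboxing: scale so the longer side fits new_size, then pad each
    dimension up to the next multiple of stride."""
    r = min(new_size / h, new_size / w)        # uniform scale ratio
    new_h, new_w = round(h * r), round(w * r)  # resized, unpadded dims
    pad_h = (-new_h) % stride                  # padding to a stride multiple
    pad_w = (-new_w) % stride
    return (new_h, new_w), (pad_h, pad_w)
```

For a 1080×1920 video frame this yields a 360×640 image padded to 384×640, so every frame from an arbitrary camera maps onto a predictable tensor shape, which is exactly what fixed-function accelerators like TensorRT engines on Jetson hardware expect.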

YOLOv5's architecture is designed for speed: on an Nvidia A100 via TensorRT, the baseline model achieves 1.06 ms inference latency, while Jetson-optimised deployments run at usable speeds for real-time scene processing. The GitHub repository has accumulated over 50,000 stars and remains one of the most widely forked computer vision projects in existence. Its open licence and straightforward fine-tuning pipeline have made it the default starting point for custom detection tasks, including military targeting research.
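To put the latency figure in perspective, single-image latency converts directly into an upper bound on frame throughput:

```python
def max_fps(latency_ms):
    """Upper-bound throughput implied by a per-image inference latency,
    ignoring pre/post-processing and batching effects."""
    return 1000.0 / latency_ms

# The cited A100/TensorRT figure of 1.06 ms implies roughly 943 frames/s.
a100_fps = round(max_fps(1.06))
```

Even allowing for an order-of-magnitude slowdown on Jetson-class edge hardware, that leaves ample headroom above standard 30 fps video rates, which is why real-time onboard scene processing is feasible at all.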

The V2U deployment illustrates a broader pattern in the Ukraine conflict: publicly available open-source models, fine-tuned on battlefield imagery for target classes such as armoured vehicles and personnel, provide a rapid path to autonomous engagement without the cost or lead time of bespoke military AI. YOLOv5's combination of edge-AI compatibility, documented fine-tuning ease, and publicly available pre-trained weights makes it a recurring component in autonomous drone programmes on both sides of the conflict.