The rise of embedded artificial intelligence is radically transforming the architectures of critical systems. In its latest white paper, SOC-E presents a practical, operational approach combining real-time video analysis, hardware acceleration and deterministic communication via TSN. The objective is clear: to meet the extreme constraints on latency, reliability and interoperability imposed, in particular, by defence and embedded environments.
- A break from traditional cloud architectures
- On-board video analysis: AI models optimised for real-time processing
- A hardware architecture designed for performance
- TSN (Time-Sensitive Networking): making Ethernet deterministic
- Native integration into NGVA architectures
- A solid foundation for next-generation embedded systems
- Contact us
- FAQ: Edge AI, TSN and Embedded Architectures
- What is Edge Intelligence and why has it become essential?
- Why is YOLO particularly well suited to real-time applications?
- What is the role of a DPU in embedded architecture?
- How does TSN change the way Ethernet is used?
- What is the NGVA standard?
- Why is an FPGA used in this type of system?
- What is the quantisation of an AI model?
- What role does GStreamer play in this architecture?
- What performance levels are achieved with this system?
- What types of applications can benefit from this architecture (excluding defence)?
A break from traditional cloud architectures
For a long time, the processing of data from sensors, particularly video, relied heavily on cloud infrastructure. This approach is now reaching its limits as real-time constraints become critical. The volume of data generated by high-resolution video streams, combined with the need for responsiveness, creates bottlenecks that are difficult to overcome.
It is precisely in this context that the Edge Intelligence paradigm comes to the fore. By bringing data processing closer to the source, it becomes possible to drastically reduce latency whilst ensuring better control over data flows and security.
In critical applications – whether autonomous vehicles, ISR systems or advanced surveillance – this ability to analyse and make decisions locally is no longer an advantage, but a necessity.
On-board video analysis: AI models optimised for real-time processing
The use case presented in the white paper by our partner SOC-E focuses on real-time object detection, specifically traffic cones. Behind this deliberately simple example lies a much broader challenge: automated environmental perception.
To address these challenges, the architecture relies on convolutional neural networks (CNNs), which are now indispensable in the field of computer vision. The choice fell on the YOLO model, recognised for its ability to process an image in a single pass, making it particularly well-suited to real-time constraints.
Each analysed frame yields directly usable information:
- object positions
- dimensions
- class and confidence level
After training on a varied and representative dataset, the model achieves an accuracy of around 96%, making it suitable for use in demanding operational contexts.
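The per-detection fields listed above can be modelled as a simple record. This is a minimal sketch: the field names and normalised-coordinate convention are illustrative assumptions, as the white paper only states that position, size, class and confidence are extracted per object.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One detected object, as produced by a YOLO-style detector.

    Field names are illustrative; the white paper only specifies that
    position, size, class and confidence are extracted per object.
    """
    x: float           # box centre x, normalised to [0, 1]
    y: float           # box centre y, normalised to [0, 1]
    width: float       # box width, normalised
    height: float      # box height, normalised
    class_id: int      # index into the model's label set
    confidence: float  # detection score in [0, 1]

# Example: a traffic cone detected near the image centre
cone = Detection(x=0.48, y=0.55, width=0.10, height=0.18,
                 class_id=0, confidence=0.96)
print(cone.confidence >= 0.5)  # filter against a typical acceptance threshold
```

A record like this is what gets serialised and sent over the network as the "critical data" stream described later.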
A hardware architecture designed for performance
Beyond the algorithm, it is the hardware architecture that enables these levels of performance to be achieved. The system is based on a platform incorporating an AMD-Xilinx Zynq UltraScale+ MPSoC, capable of combining software processing and hardware acceleration within a single component.
The entire pipeline is optimised for real-time processing. The video stream is first captured and then decoded in hardware. It is subsequently pre-processed into the format required by the neural network. Inference is performed by a DPU (Deep Learning Processor Unit), designed specifically to accelerate AI workloads.
Once objects have been detected, the information is enriched (bounding boxes) and then prepared for transmission to other systems. The entire process, from capture to transmission, takes place with a latency of approximately 80 milliseconds, which is particularly efficient for this type of application.
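The capture-to-transmission chain described above can be sketched as a sequence of stages. The stage functions below are stubs standing in for the hardware decoder, pre-processor and DPU; only the overall ordering reflects the white paper.

```python
def decode(frame_bytes):
    """Stand-in for the hardware video decoder on the MPSoC."""
    return {"pixels": frame_bytes}

def preprocess(frame):
    """Resize/normalise the frame into the tensor format the DPU expects."""
    frame["tensor"] = frame.pop("pixels")
    return frame

def infer(frame):
    """Stand-in for DPU inference: returns detected objects."""
    frame["detections"] = [{"class": "cone", "confidence": 0.96}]
    return frame

def annotate(frame):
    """Draw bounding boxes and prepare metadata for transmission."""
    frame["annotated"] = True
    return frame

def run_pipeline(frame_bytes):
    # capture -> hardware decode -> pre-process -> DPU inference -> overlay
    return annotate(infer(preprocess(decode(frame_bytes))))

result = run_pipeline(b"raw-frame")
print(result["detections"][0]["class"])  # -> cone
```

In the real system each stage runs concurrently on dedicated hardware, which is how the end-to-end latency stays around 80 ms.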
TSN (Time-Sensitive Networking): making Ethernet deterministic
One of the key contributions of the work presented lies in the integration of Time-Sensitive Networking (TSN). Historically, Ethernet was not designed for strict real-time applications. TSN specifically addresses this shortcoming by introducing mechanisms for scheduling and prioritising data flows.
In practical terms, the network is organised to ensure that critical data is transmitted within strictly defined time windows. This approach ensures controlled latency, reduced jitter and consistent quality of service, even under heavy network load. In the architecture described, data streams are structured across three levels:
- Critical data (object position and size) is transmitted in strict real time
- The enhanced video stream is processed in flexible real time
- Other communications are handled on a best-effort basis
This prioritisation ensures that essential information is never disrupted by secondary streams.
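The three tiers above map naturally onto IEEE 802.1Q priority code points (PCP, 0 = lowest, 7 = highest). The specific PCP values and shaping policies below are an assumption for illustration; the white paper defines the tiers but not their exact tagging.

```python
# Illustrative mapping of the three traffic tiers to IEEE 802.1Q
# priority code points. The specific values are assumptions; the
# white paper defines the tiers, not their PCP assignments.
TRAFFIC_CLASSES = {
    "detections":   {"pcp": 6, "policy": "time-aware shaping (802.1Qbv)"},
    "video_stream": {"pcp": 4, "policy": "bounded latency"},
    "best_effort":  {"pcp": 0, "policy": "remaining bandwidth"},
}

def pcp_for(flow):
    """Return the priority code point used to tag frames of a given flow."""
    return TRAFFIC_CLASSES[flow]["pcp"]

# Critical detection metadata always outranks the video and best-effort flows
print(pcp_for("detections") > pcp_for("video_stream") > pcp_for("best_effort"))
```

Tagging frames this way is what lets TSN switches reserve time windows for the critical flow regardless of network load.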
Native integration into NGVA architectures
The system is fully compliant with the NGVA (NATO Generic Vehicle Architecture) standard, which aims to standardise military vehicle architectures. One of the key challenges of this standard is to enable interoperability between heterogeneous subsystems, whilst maintaining strict real-time performance.
The integration of TSN plays a key role here. It ensures deterministic communication between the vehicle’s various modules, whether these are sensors, control systems or decision-making modules. This approach also facilitates the modularity and scalability of platforms, two important criteria in environments where life cycles are long and requirements are constantly evolving.
A solid foundation for next-generation embedded systems
This white paper provides a practical illustration of the convergence between artificial intelligence, embedded computing and deterministic networking. The proposed architecture demonstrates that advanced real-time perception capabilities can now be deployed directly in the field.
The prospects are numerous: multi-sensor fusion, increased platform autonomy, and improved decision-making in degraded environments. As these technologies mature, they are establishing themselves as essential building blocks of today’s critical systems.
For system integrators and manufacturers, this means developing expertise in these hybrid architectures, at the intersection of hardware, software and networking.
Contact us
Architectures combining embedded intelligence and deterministic communication are becoming the norm in mission-critical systems. Integrating them requires specialist expertise in hardware, software and networking.
The Ecrin teams support you in defining and deploying these solutions, drawing on proven technological building blocks tailored to your business requirements.
👉 Discover our solutions and speak to our experts: Contact us.
👉 To download the SOC-E White Paper, click here: White Paper: Time-Sensitive Networking to meet Hard-real Time Boundaries on Edge Intelligence Applications for NGVA Land Vehicles.
FAQ: Edge AI, TSN and Embedded Architectures
What is Edge Intelligence and why has it become essential?
Edge Intelligence involves running AI processes directly on devices at the network’s edge. It helps to reduce latency, minimise network traffic and improve the responsiveness of critical systems.
Why is YOLO particularly well suited to real-time applications?
YOLO processes the image in a single pass, which significantly reduces inference time. This speed makes it the preferred choice for systems requiring instant detection.
What is the role of a DPU in embedded architecture?
The DPU is a hardware accelerator designed specifically for artificial intelligence computations. It enables complex models to be run with reduced latency and improved energy efficiency.
How does TSN change the way Ethernet is used?
TSN introduces traffic-scheduling mechanisms that ensure transmission deadlines are met. This makes Ethernet suitable for strict real-time applications.
What is the NGVA standard?
NGVA (NATO Generic Vehicle Architecture) is a standardised architecture for military vehicles, designed to ensure interoperability, modularity and scalability.
Why is an FPGA used in this type of system?
The FPGA allows the hardware architecture to be tailored to the specific requirements of the application. It offers a high degree of flexibility whilst ensuring high performance.
What is the quantisation of an AI model?
Quantisation involves reducing the precision of the model data (for example, from 32 bits to 8 bits) in order to speed up calculations and reduce power consumption, without significantly compromising performance.
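The idea can be illustrated with a minimal symmetric quantisation in pure Python. This is a sketch of the principle only; real toolchains for the DPU calibrate scales per layer or per tensor.

```python
def quantise(values, num_bits=8):
    """Symmetric quantisation of floats to signed integers.

    The scale maps the largest absolute value onto the integer range;
    production toolchains calibrate scales per layer/tensor instead.
    """
    qmax = 2 ** (num_bits - 1) - 1          # 127 for int8
    scale = max(abs(v) for v in values) / qmax
    q = [round(v / scale) for v in values]  # integers in [-127, 127]
    return q, scale

def dequantise(q, scale):
    return [qi * scale for qi in q]

weights = [0.51, -1.27, 0.02, 0.89]
q, scale = quantise(weights)
approx = dequantise(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, approx))
print(max_err < scale)  # rounding error stays below one quantisation step
```

The integer arithmetic is what the DPU accelerates; the small, bounded rounding error is why accuracy is largely preserved.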
What role does GStreamer play in this architecture?
GStreamer handles multimedia streams:
- video capture
- encoding/decoding
- processing pipeline
It is widely used in embedded video systems.
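A representative gst-launch-style pipeline for this kind of chain might look as follows. The element names are assumptions typical of a Zynq UltraScale+ platform (v4l2src capture, omxh264dec for the hardware decoder, appsink handing frames to the inference code); the white paper does not list the actual elements used.

```python
# Illustrative gst-launch-style pipeline for a Zynq UltraScale+ board.
# Element names are assumptions typical of such platforms; the white
# paper does not list the actual elements used.
elements = [
    "v4l2src device=/dev/video0",          # camera capture
    "h264parse",                           # packetise the H.264 stream
    "omxh264dec",                          # hardware video decode
    "videoconvert",                        # pixel-format conversion
    "videoscale ! video/x-raw,width=1280,height=720",  # scale to 720p
    "appsink name=inference_sink",         # hand frames to the DPU code
]
pipeline = " ! ".join(elements)
print(pipeline)
```

In practice such a pipeline string would be passed to `gst-launch-1.0` or `Gst.parse_launch()` on the target board.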
What performance levels are achieved with this system?
- Accuracy: 96%
- Overall latency: ~80 ms
- Video processing: 720p at 25 FPS
What types of applications can benefit from this architecture (excluding defence)?
This type of solution is particularly suitable for:
- autonomous vehicles
- robotics
- smart surveillance
- visual inspection
- smart cities
- critical industrial systems, etc.