The Digitalist Team
January 19, 2026

Is Your Radio Device a "High-Risk" AI System? Navigating the RED and AI Act Overlap

9 min reading time

Recent regulatory updates have shifted the ground under the feet of hardware manufacturers.

While the industry has been focused on the immediate hurdles of the Radio Equipment Directive (RED) and the incoming Cyber Resilience Act (CRA), a third player has entered the arena.

Regulation (EU) 2024/1689, better known as the AI Act, is now in force.

For many, the assumption is that this regulation applies only to "software companies" building chatbots or generative models.

But for manufacturers of connected devices, the reality is starkly different.

If you are building radio equipment today, the AI Act is not just a distant concern; it is likely a direct obligation.

Just as we discussed in The Hidden Risks of AI Toys: Navigating the Regulatory Gap, where smart features can inadvertently trigger safety laws, the AI Act explicitly reaches into hardware regulation.

Annex I, Section A of the AI Act lists the Radio Equipment Directive (2014/53/EU) as critical product safety legislation.

This means the "brain" of your connected device might soon be classified as "High-Risk," triggering a conformity assessment process far more rigorous than standard RED compliance. Ignoring this overlap is not just a compliance risk; it is a strategic error that could force a total product redesign in 2027.

Legally, if your device uses AI for safety, it may be a High-Risk AI System. Source: Freepik


The AI Act introduces a classification system based on risk.

While most consumer IoT devices might fall into lower risk categories, the inclusion of RED in Annex I changes the equation for many manufacturers.

Under the new rules, a product regulated under RED is classified as a High-Risk AI System if two conditions are met:

  1. The AI system is used as a safety component of the product or is the safety component itself.
  2. The product is required to undergo a third-party conformity assessment under RED.

This sounds straightforward until you ask: What is a "safety component" in a radio device?

Traditionally, safety meant preventing physical harm (Article 3.1a).

However, as we detailed in Radio Equipment Directive in 2025: The 3 Key Pillars for a Successful Market Entry, the definition of "essential requirements" under RED has expanded significantly.

With the activation of Articles 3.3(d), 3.3(e), and 3.3(f), safety now encompasses:

  • Network Protection (3.3d): Ensuring the device does not harm the network.
  • Personal Data Privacy (3.3e): Protecting sensitive user data.
  • Fraud Prevention (3.3f): Mitigating monetary risks.

If your device uses an AI model, such as a machine learning algorithm for intrusion detection or biometric authentication, to fulfill these mandatory cybersecurity requirements, that AI becomes a safety component.

Therefore, it triggers the High-Risk classification under the AI Act.
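
To make this decision logic concrete, here is a minimal Python sketch of the two Annex I conditions as described above. It is an illustrative screening aid only, not a legal determination; the feature names, the set of article identifiers, and the function names are assumptions made for the example.

    from dataclasses import dataclass

    # RED essential requirements that can make an AI feature a "safety
    # component" in the sense discussed in this article (assumed list).
    RED_SAFETY_ARTICLES = {"3.1a", "3.3d", "3.3e", "3.3f"}

    @dataclass
    class AIFeature:
        name: str
        fulfills_articles: set  # RED requirements the AI implements

    def is_safety_component(feature):
        # Condition 1: the AI fulfills a RED safety or cybersecurity requirement.
        return bool(feature.fulfills_articles & RED_SAFETY_ARTICLES)

    def is_high_risk(feature, third_party_assessment_required):
        # Both Annex I conditions must hold for the High-Risk classification.
        return is_safety_component(feature) and third_party_assessment_required

    # Example: ML-based intrusion detection used to meet Article 3.3(d).
    ids = AIFeature("ML intrusion detection", {"3.3d"})
    print(is_high_risk(ids, third_party_assessment_required=True))  # True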

This effectively closes the loop we warned about in RED Certification in the Age of 5G: Adapting to New Risks: complex networks demand complex controls, and those controls are now heavily regulated.

The timeline for compliance is converging: RED in 2025, AI Act High-Risk in 2027. Source: Freepik

The "Double Lock" on Conformity Assessment

The implication of this "High-Risk" status is a potential upheaval in your conformity assessment procedure.

Manufacturers are accustomed to the Radio Equipment Directive, where using harmonized standards (like EN 18031) often allows for a streamlined assessment (Module A).

However, the AI Act adds a second layer.

Article 43 of the AI Act mandates that the assessment for the AI component must be integrated into the RED conformity assessment.

But here is the trap: Even if you fully comply with RED using EN 18031, Article 43(3) of the AI Act suggests that if you have not applied harmonized AI standards (which are currently still in development), you may be forced to use a Notified Body for the AI portion of your certification.

This creates a "Double Lock" scenario:

  1. Lock 1 (RED): You need to prove cybersecurity (EN 18031).
  2. Lock 2 (AI Act): You need to prove the AI model is robust, accurate, and unbiased.
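
A rough way to picture the routing logic of Article 43(3), as we read it above, is the sketch below. It is illustrative only; the route descriptions are simplifications, not the regulation's wording.

    def conformity_route(high_risk, harmonized_ai_standards_applied):
        # Illustrative reading of AI Act Article 43(3) as described above;
        # not legal advice, and the route names are simplifications.
        if not high_risk:
            return "standard RED route (e.g., Module A against EN 18031)"
        if harmonized_ai_standards_applied:
            return "integrated assessment using harmonized AI standards"
        return "Notified Body involvement required for the AI portion"

    # While harmonized AI standards remain in development:
    print(conformity_route(high_risk=True, harmonized_ai_standards_applied=False))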

This aligns with the challenges we highlighted in Why RED is the Blueprint for CRA Success: regulations are no longer silos; they are a matrix.

A failure to plan for this integration means you might pass your RED audit in 2025, only to have your technical file rejected in 2027 because the AI documentation is insufficient.

The AI Act creates a 'Double Lock': even if you comply with RED standards, the lack of harmonized AI standards may still force a mandatory third-party assessment. Source: Freepik

Educational Deep Dive: What Counts as a High-Risk AI Feature?

How do you know if your specific feature triggers this classification?

It often comes down to the function the AI performs relative to the RED Article 3.3 requirements.

  • AI for Battery Management: If a fault could cause overheating (Article 3.1a), it is a safety component.
  • AI for Network Optimization: If it autonomously blocks traffic to prevent DDoS participation (Article 3.3d), it is a safety component.
  • AI for Voice Recognition: If it processes biometric data to authenticate a user (Article 3.3e), it is a safety component.
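
As a quick self-check, the examples above can be expressed as a simple screening table. The mapping below is a hypothetical illustration drawn from this article's examples, not an exhaustive legal analysis; the feature keys are invented for the sketch.

    # Hypothetical screening table based on the examples above.
    FEATURE_TRIGGERS = {
        "ai_battery_management": ("3.1a", "fault could cause overheating"),
        "ai_network_optimization": ("3.3d", "autonomously blocks DDoS traffic"),
        "ai_voice_recognition": ("3.3e", "biometric user authentication"),
    }

    def screen_features(features):
        # Flag features that may count as safety components under RED.
        findings = []
        for f in features:
            if f in FEATURE_TRIGGERS:
                article, why = FEATURE_TRIGGERS[f]
                findings.append(f"{f}: potential safety component under RED {article} ({why})")
        return findings

    for line in screen_features(["ai_voice_recognition", "cloud_telemetry"]):
        print(line)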

As noted in The Hidden Risks of AI Toys, relying on third-party APIs for these features does not absolve the manufacturer of responsibility.

You must assess how these "black box" components interact with the safety and fundamental rights of the user.

Integrated compliance strategies save engineering time and prevent redundant testing. Source: Freepik

How CCLab Helps Manufacturers Navigate the Overlap

The convergence of RED, the CRA, and now the AI Act requires a compliance partner who sees the bigger picture.

At CCLab, we specialize in the intersection of these regulations.

We help manufacturers of Consumer IoT and Industrial IoT ensure that the work done for RED today serves as the foundation for the AI Act tomorrow.

CCLab supports your transition through:

  • Integrated Gap Analysis: We map your device's features against RED Article 3.3 and the AI Act Annex I criteria to identify "High-Risk" triggers early.
  • Evidence Reuse Strategy: As detailed in RED Compliance Beyond Europe, we structure your technical documentation so that evidence gathered for EN 18031 can be reused for future AI assessments, reducing duplication.
  • Cybersecurity Evaluation: We perform the rigorous vulnerability scanning and penetration testing required by RED, which forms the baseline for AI robustness requirements.
  • Notified Body Coordination: We facilitate the complex interaction between radio assessment and AI conformity, ensuring a smooth path through the "Double Lock."

By addressing these requirements in parallel, you avoid the cost of retrofitting your product for the AI Act just two years after your RED certification.

Summary

The regulatory landscape for radio equipment has evolved beyond simple connectivity.

With Regulation (EU) 2024/1689 (the AI Act) explicitly listing RED in Annex I, manufacturers face a new reality.

The cybersecurity features you implement today to meet RED Articles 3.3(d), (e), and (f) may well be the "safety components" that classify your device as High-Risk in 2027.

Ignoring this connection risks locking your product out of the EU market just as the Cyber Resilience Act also comes into force.

By viewing RED, the AI Act, and the CRA as a single, phased compliance journey, you can turn regulatory pressure into a competitive advantage.

The takeaway: The best time to prepare for the AI Act's impact on your radio equipment was yesterday.

The second best time is now. Start mapping your AI safety components today and ensure your RED technical file is future-proof.

Ready to start your compliance journey? Explore our full range of Radio Equipment Directive (RED) Compliance Services, or contact our experts today for a personalized consultation.

Related downloadables

RED Cybersecurity - Steps of Compliance Infographics
Infographics

Download this comprehensive infographic guide, which dives deep into the key stages of the Radio Equipment Directive (RED). Gain clarity on technical requirements, risk assessment, and strategic decisions to ensure your products meet EU regulations.

Download now

EU Cyber Resilience Act (CRA) Infographics
Infographics

The EU Cyber Resilience Act (CRA) introduces a unified cybersecurity framework for products with digital elements that have direct or indirect, logical or physical data connection to a device or network, including everything from software or hardware products to free and open-source software that is monetized or integrated into commercial products.

Download now

Guide for Radio Equipment Directive (RED)
E-book

Read and learn more about the Radio Equipment Directive (RED), download our free material now.

Download now
