
While the industry has been focused on the immediate hurdles of the Radio Equipment Directive (RED) and the incoming Cyber Resilience Act (CRA), a third player has entered the arena.
Regulation (EU) 2024/1689, better known as the AI Act, is now in force.
For many, the assumption is that this regulation applies only to "software companies" building chatbots or generative models.
But for manufacturers of connected devices, the reality is starkly different.
If you are building radio equipment today, the AI Act is not just a distant concern; it is likely a direct obligation.
Just as we discussed in The Hidden Risks of AI Toys: Navigating the Regulatory Gap, where smart features can inadvertently trigger safety laws, the AI Act reaches explicitly into hardware regulation.
Annex I, Section A of the AI Act lists the Radio Equipment Directive (2014/53/EU) among the critical product safety legislation it covers.
This means the "brain" of your connected device might soon be classified as "High-Risk," triggering a conformity assessment process far more rigorous than standard RED compliance. Ignoring this overlap is not just a compliance risk; it is a strategic error that could force a total product redesign in 2027.

The AI Act introduces a classification system based on risk.
While most consumer IoT devices might fall into lower risk categories, the inclusion of RED in Annex I changes the equation for many manufacturers.
Under the new rules, a product regulated under RED is classified as a High-Risk AI System if two conditions are met:
- the AI system is intended to be used as a safety component of the product, or is itself the product, covered by the legislation listed in Annex I (here, RED); and
- that product is required to undergo a third-party conformity assessment under that same legislation.
This sounds straightforward until you ask: What is a "safety component" in a radio device?
Traditionally, safety meant preventing physical harm (Article 3.1a).
However, as we detailed in Radio Equipment Directive in 2025: The 3 Key Pillars for a Successful Market Entry, the definition of "essential requirements" under RED has expanded significantly.
With the activation of Articles 3.3(d), 3.3(e), and 3.3(f), safety now encompasses:
- network protection: the device must not harm the network or misuse its resources (Article 3.3(d));
- privacy: the device must include safeguards to protect personal data and privacy (Article 3.3(e)); and
- fraud protection: the device must include features that protect users from fraud (Article 3.3(f)).
If your device uses an AI model, such as a machine learning algorithm for intrusion detection or biometric authentication, to fulfill these mandatory cybersecurity requirements, that AI becomes a safety component.
Therefore, it triggers the High-Risk classification under the AI Act.
This effectively closes the loop we warned about in RED Certification in the Age of 5G: Adapting to New Risks: complex networks demand complex controls, and those controls are now heavily regulated.
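To make the two-condition test concrete, here is a minimal decision sketch in Python. The RadioDevice model, its field names, and the smart-lock example are hypothetical illustrations of the Article 6(1) logic described above, not an official assessment tool.

```python
from dataclasses import dataclass

@dataclass
class RadioDevice:
    """Hypothetical model of a connected device, for illustration only."""
    covered_by_red: bool          # product falls under Directive 2014/53/EU
    ai_is_safety_component: bool  # AI fulfils a mandatory safety function,
                                  # e.g. ML intrusion detection (Art. 3.3(d))
    third_party_assessment_required: bool  # RED route needs a Notified Body

def is_high_risk_ai_system(device: RadioDevice) -> bool:
    """Sketch of the Article 6(1) AI Act test for Annex I products.

    Both conditions must hold: the AI is a safety component of a product
    covered by Annex I legislation (here, RED), and that product must
    undergo third-party conformity assessment under that legislation.
    """
    condition_a = device.covered_by_red and device.ai_is_safety_component
    condition_b = device.third_party_assessment_required
    return condition_a and condition_b

# Example: a smart lock using ML intrusion detection to meet Art. 3.3(d)
smart_lock = RadioDevice(
    covered_by_red=True,
    ai_is_safety_component=True,
    third_party_assessment_required=True,
)
print(is_high_risk_ai_system(smart_lock))  # True -> High-Risk AI System
```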

The implication of this "High-Risk" status is a potential upheaval in your conformity assessment procedure.
Manufacturers are accustomed to the Radio Equipment Directive, where using harmonized standards (like EN 18031) often allows for a streamlined assessment (Module A).
However, the AI Act adds a second layer.
Article 43 of the AI Act mandates that the assessment for the AI component must be integrated into the RED conformity assessment.
But here is the trap: Even if you fully comply with RED using EN 18031, Article 43(3) of the AI Act suggests that if you have not applied harmonized AI standards (which are currently still in development), you may be forced to use a Notified Body for the AI portion of your certification.
This creates a "Double Lock" scenario:
- Lock 1 (RED): applying harmonized standards such as EN 18031 can keep the radio assessment on a self-assessment path.
- Lock 2 (AI Act): with no harmonized AI standards yet available to apply, the AI component may still require Notified Body involvement.
This aligns with the challenges we highlighted in Why RED is the Blueprint for CRA Success: regulations are no longer silos; they are a matrix.
A failure to plan for this integration means you might pass your RED audit in 2025, only to have your technical file rejected in 2027 because the AI documentation is insufficient.
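As a rough illustration of the route selection, the sketch below encodes the "Double Lock" as a single function. The route labels and the simplified decision rule are our reading of the Article 43 interplay under the assumptions stated in the comments, not legal advice.

```python
def conformity_route(red_standards_applied: bool,
                     ai_standards_applied: bool) -> str:
    """Simplified sketch of the combined RED + AI Act assessment route.

    Lock 1: the RED side, where applying harmonized standards such as
            EN 18031 can keep you on a self-assessment path (Module A).
    Lock 2: the AI Act side, where not applying harmonized AI standards
            can force Notified Body involvement for the AI component.
    """
    if not red_standards_applied:
        return "Notified Body required for the RED assessment"
    if not ai_standards_applied:
        # EN 18031 compliance alone does not unlock the AI Act layer
        return ("RED self-assessment possible, but Notified Body "
                "required for the AI component")
    return "Integrated self-assessment possible for both layers"

# Today's typical situation: EN 18031 applied, harmonized AI standards
# still in development and therefore not yet applicable.
print(conformity_route(red_standards_applied=True,
                       ai_standards_applied=False))
```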

How do you know if your specific feature triggers this classification?
It often comes down to the function the AI performs relative to the RED Article 3.3 requirements.
As noted in The Hidden Risks of AI Toys, relying on third-party APIs for these features does not absolve the manufacturer of responsibility.
You must assess how these "black box" components interact with the safety and fundamental rights of the user.
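A practical first step is an inventory that maps each AI-driven feature to the Article 3.3 requirement it helps fulfil; anything that lands on a mandatory requirement is a candidate "safety component". The sketch below uses hypothetical feature names purely to show the idea.

```python
# Hypothetical inventory mapping AI-driven features to the RED
# Article 3.3 essential requirement each one helps fulfil.
AI_FEATURE_MAP = {
    "ML-based intrusion detection":       "Art. 3.3(d) - network protection",
    "Biometric user authentication":      "Art. 3.3(f) - protection from fraud",
    "On-device anomaly detection on PII": "Art. 3.3(e) - privacy and data protection",
    "Voice-controlled mood lighting":     None,  # convenience only, no 3.3 link
}

def candidate_safety_components(feature_map: dict[str, str | None]) -> list[str]:
    """Return the features tied to a mandatory Article 3.3 requirement."""
    return [feature for feature, req in feature_map.items() if req is not None]

for feature in candidate_safety_components(AI_FEATURE_MAP):
    print(f"Assess as potential safety component: {feature}")
```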

The convergence of RED, the CRA, and now the AI Act requires a compliance partner who sees the bigger picture.
At CCLab, we specialize in the intersection of these regulations.
We help manufacturers of Consumer IoT and Industrial IoT ensure that the work done for RED today serves as the foundation for the AI Act tomorrow.
CCLab supports your transition by addressing the RED and AI Act requirements in parallel, so you avoid the cost of retrofitting your product for the AI Act just two years after your RED certification.
The regulatory landscape for radio equipment has evolved beyond simple connectivity.
With Regulation (EU) 2024/1689 (the AI Act) explicitly listing RED in Annex I, manufacturers must face a new reality.
The cybersecurity features you implement today to meet RED Articles 3.3(d), (e), and (f) may well be the "safety components" that classify your device as High-Risk in 2027.
Ignoring this connection risks locking your product out of the EU market just as the Cyber Resilience Act also comes into force.
By viewing RED, the AI Act, and the CRA as a single, phased compliance journey, you can turn regulatory pressure into a competitive advantage.
The takeaway: The best time to prepare for the AI Act's impact on your radio equipment was yesterday.
The second best time is now. Start mapping your AI safety components today and ensure your RED technical file is future-proof.
Ready to start your compliance journey? Explore our full range of Radio Equipment Directive (RED) Compliance Services or contact our experts today for a personalized consultation.


