On 11 September 2026, every manufacturer that sells software, hardware, or connected devices into the European Union becomes legally obligated to report actively exploited vulnerabilities within 24 hours of becoming aware of them. The penalty for getting this wrong is up to €15 million or 2.5 percent of global annual turnover, whichever is higher. The clock has started, and the law applies retroactively to products that shipped years before the regulation existed.
The Cyber Resilience Act, formally Regulation (EU) 2024/2847, entered into force on 10 December 2024 and unfolds in three phases. Most coverage focuses on the December 2027 full-application deadline for CE marking and conformity assessments. That focus is misleading. The first deadline that carries teeth arrives 21 months after entry into force, on 11 September 2026, when Article 14 reporting obligations become enforceable for every product with digital elements placed on the EU market — including products that shipped a decade ago.
The regulation itself states the obligation with blunt economy. Article 14(1) requires the manufacturer to “notify any actively exploited vulnerability contained in the product with digital elements” simultaneously to the designated CSIRT and to ENISA. That single sentence, enforceable from 11 September 2026, rewires the responsibility model for software and connected hardware across the internal market.
The political framing matches the legal text. When the European Parliament and Council reached agreement on the file in December 2023, then-Commissioner for Internal Market Thierry Breton said the regulation embeds “cybersecurity by design” across the product lifecycle, describing it as essential for the security of consumers and society at large. (Breton resigned in September 2024; CRA oversight now falls under Executive Vice-President for Tech Sovereignty, Security and Democracy Henna Virkkunen in the 2024–2029 Commission.) The Commission has since reinforced that position repeatedly. On its official reporting obligations page, the Commission states that as of 11 September 2026, manufacturers must submit an early warning within 24 hours of awareness, a full notification within 72 hours, and a final report no later than 14 days after a corrective measure is available. Reports flow through a centralized infrastructure operated by the European Union Agency for Cybersecurity (ENISA), and they reach the relevant national Computer Security Incident Response Team simultaneously. Missing the deadlines is not a procedural slip; it is grounds for the highest tier of administrative fines defined in the regulation.
What the Cyber Resilience Act Is
The CRA is the first horizontal cybersecurity regulation in EU history, meaning it applies across product categories rather than being confined to a sector like finance, health, or critical infrastructure. It targets what the regulation calls "products with digital elements": any software or hardware product, together with its remote data processing solutions, whose intended or reasonably foreseeable use includes a direct or indirect data connection to a device or network. That definition deliberately captures almost everything connected: smart speakers, routers, baby monitors, industrial control systems, password managers, antivirus suites, smartphones, IoT sensors, embedded firmware, and the cloud backends that support them.
The regulation introduces three categories of obligation: essential cybersecurity requirements that must be designed into products from inception (Annex I), vulnerability handling requirements that span the product lifecycle, and reporting obligations under Article 14. CE marking — the same conformity stamp Europeans associate with electrical safety — extends into cybersecurity territory. Once full application takes effect on 11 December 2027, no in-scope product can be placed on the EU market without it.
What makes 2026 the inflection year is the staged commencement schedule. The framework on conformity assessment bodies activates on 11 June 2026, allowing notified bodies to begin certifying products. The reporting obligations follow on 11 September 2026. The first standardisation deliverables are expected in Q3 2026, including vertical product-specific standards for browsers, password managers, antivirus software, VPNs, network management systems, SIEMs, and boot managers. Member States are then required to ensure sufficient notified body coverage by 11 December 2026. The architecture is being assembled in real time.
A widely held belief among engineering teams is that CRA compliance can wait until December 2027. That assumption is wrong. Article 14 reporting becomes enforceable 15 months earlier, on 11 September 2026, and applies retroactively to products already on the EU market. A device shipped in 2018 with an OpenSSL vulnerability that hits the CISA Known Exploited Vulnerabilities catalog in October 2026 triggers the reporting obligation if the product is still being made available.
Article 14 and the 24/72/14 Reporting Cascade
Article 14 is the operational heart of the early-phase CRA. It establishes a three-stage reporting cascade with cumulative deadlines that begin running from the moment a manufacturer becomes “aware” of a qualifying event. The triggering event is narrow: not every CVE qualifies. Article 14 only applies when there is reliable evidence that a malicious actor has exploited the vulnerability without permission, or when a severe incident affects the security of the product itself.
The cascade has three discrete obligations layered on top of one another. The early warning notification is due within 24 hours of awareness and must indicate, where applicable, the Member States in which the affected product has been made available. The full vulnerability notification is due within 72 hours and must include general information about the product, the nature of the exploit, the corrective or mitigating measures the manufacturer has taken, and any actions users can take to protect themselves. The final report is due no later than 14 days after a corrective or mitigating measure becomes available, and must include a description of the vulnerability with severity and impact, information about the malicious actor where known, and the corrective measures applied.
For severe incidents — defined separately from actively exploited vulnerabilities — the same 24-hour and 72-hour deadlines apply, but the final report deadline extends to one month after the initial notification. A severe incident under Article 14(5) is one that negatively affects or is capable of negatively affecting the ability of the product to protect availability, authenticity, integrity or confidentiality of sensitive data or functions, or that has led or could lead to the introduction of malicious code in the product or in user systems.
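The cascade arithmetic is simple enough to encode directly. Here is a rough sketch in Python, assuming UTC timestamps and treating the regulation's "one month" for severe incidents as 30 days; the function name and the example dates are illustrative, not taken from the regulation.

from datetime import datetime, timedelta, timezone

def article14_deadlines(awareness_utc: datetime,
                        fix_available_utc: datetime | None = None,
                        severe_incident: bool = False) -> dict:
    """Deadlines for the Article 14 cascade as described above: 24 and 72
    hours from awareness; the final report 14 days from fix availability,
    or one month after the 72-hour notification for severe incidents."""
    deadlines = {
        "early_warning_24h": awareness_utc + timedelta(hours=24),
        "full_notification_72h": awareness_utc + timedelta(hours=72),
    }
    if severe_incident:
        deadlines["final_report"] = deadlines["full_notification_72h"] + timedelta(days=30)
    elif fix_available_utc is not None:
        deadlines["final_report"] = fix_available_utc + timedelta(days=14)
    return deadlines

# Awareness reached late on a Friday evening; weekends and holidays do not pause the clock.
t1 = datetime(2026, 9, 18, 23, 47, tzinfo=timezone.utc)
for name, due in article14_deadlines(t1).items():
    print(f"{name}: {due.isoformat()}")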
Three provisions that most summaries skip over sit alongside the cascade. They change how manufacturers should design their incident runbook, not just their reporting workflow.
First, Article 14(8) creates a parallel obligation to inform affected users of the product — not only authorities. After becoming aware of an actively exploited vulnerability or a severe incident, the manufacturer must inform impacted users (and, where appropriate, all users) about the issue and any mitigations they can deploy, ideally in a structured, machine-readable format. If the manufacturer fails to inform users in a timely manner, the designated CSIRT is empowered to do so directly. This provision has real teeth: the manufacturer’s control over public messaging about a vulnerability in its own product disappears the moment the CSIRT judges the delay disproportionate. Timing is not prescribed numerically, but in practice it runs alongside the regulatory cascade. User notification should be part of the 72-hour workflow at the latest, not a separate downstream task.
Second, under Article 14(4) the CSIRT designated as coordinator has the power to request an intermediate report on status updates between the 72-hour notification and the 14-day final report. This is important for incident-response planning: even if a fix is weeks away, the manufacturer should expect to be compelled to provide structured status updates on demand. Runbooks need a template for the intermediate update, not only for the three named deadlines.
Third, the 24-hour early warning is a confidential notification to ENISA and the relevant CSIRTs — it is not a public disclosure. ENISA is required to protect the confidentiality and security of notifications, and the Commission’s December 2025 delegated act narrows the circumstances under which dissemination can be withheld even from other CSIRTs. The regulation contemplates public user-facing communication under Article 14(8) (where risk to users requires it), and the EUVD will eventually list fixed vulnerabilities, but the 24-hour notification is not a trigger for public disclosure in its own right. Manufacturers worried that filing will publicize an unpatched flaw should read this closely: the regime is designed to keep unpatched technical detail confined to CSIRTs and ENISA until a fix is ready, with narrow exceptions only where user safety compels earlier communication.
When the 24-Hour Clock Starts
The definition of “awareness” is the single most consequential ambiguity in the entire Article 14 regime. A strict reading would start the clock the moment any employee sees a credible exploit report. A lenient reading would wait for formal confirmation by the security team. Neither extreme matches what the Commission appears to intend.
On 3 March 2026, the Commission published draft Communication Ares(2026)2319816, a roughly 70-page interpretive document covering the questions that have kept compliance teams awake. The consultation window closed on 31 March 2026; the Commission is now reviewing stakeholder feedback and the guidance has not yet been formally adopted — final language versions are pending. While not legally binding in its current form, it represents the Commission's stated interpretation and is expected to become the primary reference for market surveillance authorities once finalised. On the awareness question, the guidance confirms a middle position: the clock starts at reasonable certainty after an initial assessment, not at full forensic confirmation. In practice this means that the moment a credible indicator of active exploitation survives a first-level triage — a reproducible proof-of-concept, a ransom demand referencing the vulnerability, a CSIRT report that the CVE is being weaponized — the 24-hour clock begins. Waiting for a lab-grade root-cause analysis before starting the clock is not a defensible position under the guidance.
This reading carries immediate operational consequences. Manufacturers cannot design their triage process to delay the awareness timestamp as a legal strategy. Internal emails, support-ticket timestamps, and monitoring-alert receipt times are all potential evidence of awareness, and market surveillance authorities can request them. Regulators will read “without undue delay” as an independent obligation overlaid on the 24-hour ceiling; even inside 24 hours, a manufacturer who sat on credible evidence for 20 hours while triaging would be on shaky ground.
Four artefacts matter most: ingestion timestamps from vulnerability intake channels (security.txt email, CVD portal, PSIRT tickets); timestamps of the first monitoring alert correlating the CVE to an in-market product; the name of the decision-maker who authorised the triage classification; and the version of the exploit evidence reviewed. These artefacts become the manufacturer’s defence if a market surveillance authority later alleges a late filing.
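A rough sketch of such an intake log, kept append-only with a hash chain, follows; the file path, the JSON-lines layout, and the field names are illustrative choices rather than anything the regulation or the guidance prescribes.

import hashlib, json
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("awareness_log.jsonl")  # illustrative path; keep it outside the ticketing system

def append_evidence(event: dict) -> str:
    """Append a triage-evidence entry whose hash chains to the previous entry.
    Altering or deleting an earlier line breaks every later hash, which is
    what makes the log usable as evidence of when awareness occurred."""
    prev_hash = "0" * 64
    if LOG.exists():
        last_line = LOG.read_text(encoding="utf-8").strip().splitlines()[-1]
        prev_hash = json.loads(last_line)["entry_hash"]
    entry = {
        "recorded_at_utc": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
        **event,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode("utf-8")
    ).hexdigest()
    with LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry, sort_keys=True) + "\n")
    return entry["entry_hash"]

append_evidence({
    "incident_id": "INC-2026-0042",                 # hypothetical identifiers throughout
    "channel": "security.txt mailbox",
    "evidence": "reproducible PoC against v3.2.1",
    "decision_maker": "PSIRT duty officer",
    "classification": "actively exploited, credible",
})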
The 24-hour clock is also unforgiving for a reason most engineering teams overlook: incidents do not respect business hours. A credible exploit report that lands in the PSIRT inbox at 23:00 on a Friday of a public holiday weekend still starts the clock. That means the workflow needs a 24/7 on-call path to at least one authorised SRP reporter, a rotation for Member State holiday coverage, and standing authorisation for that on-call reporter to submit the early warning without further escalation. Building that workflow after the deadline arrives is not a viable plan.
The Single Reporting Platform: How Notifications Flow
Article 16 of the CRA establishes the Single Reporting Platform (SRP), a centralized infrastructure that ENISA is building specifically to handle CRA notifications. The platform consolidates what would otherwise have been a coordination nightmare: 27 Member States with their own CSIRTs, each potentially expecting notifications in their own format. Manufacturers report once, and the routing happens behind the scenes.
According to ENISA’s official guidance, the SRP is scheduled to be operational by 11 September 2026, the same day reporting becomes mandatory. A testing period is expected before that date. ENISA awarded the build contract through a public tender (reference ENISA/2025/OP/0001), and the architecture is designed to support future integration with NIS2 and DORA reporting flows so that organizations subject to multiple regimes do not have to file the same incident through multiple channels.
ENISA has positioned the platform as one pillar of a broader push to strengthen European vulnerability coordination. In November 2025, Executive Director Juhan Lepassaar said, on the occasion of the agency’s elevation to CVE Program Root, that the move extends ENISA’s capacity to support “the EU’s ability to manage and coordinate cybersecurity vulnerabilities.” Read alongside the SRP timetable, the direction of travel is clear: ENISA intends to be both the front door for CRA notifications and the authoritative hub for EU vulnerability coordination, with the European Vulnerability Database (EUVD) sitting behind the SRP as the long-term public record of fixed issues.
Notifications are submitted to the CSIRT designated as coordinator in the Member State where the manufacturer has its main establishment in the Union. The information is made available simultaneously to ENISA. The receiving CSIRT then disseminates the notification to the CSIRTs of every other Member State where the affected product has been made available. There is a narrow exception: where dissemination would itself create cybersecurity risk — for example, by tipping off attackers to a vulnerability that is being actively exploited but not yet patched — a CSIRT may temporarily withhold dissemination, with justification provided to ENISA. The terms and conditions for this delay were further specified in Commission Delegated Regulation C(2025)8407, adopted on 11 December 2025, which narrows the circumstances under which ENISA can be kept in the dark about the full technical content of a notification.
For manufacturers established outside the EU, the regulation provides a fallback hierarchy under Article 14(7). The notification goes to the CSIRT in the Member State where the authorised representative is established, then by descending priority to the Member State of the importer placing the highest number of products on the market, then the distributor, and finally the Member State with the highest number of users. This hierarchy matters for compliance planning: a US-based vendor with no EU establishment needs to identify in advance which Member State will receive its notifications, because that decision cannot be made for the first time at hour 23 of an active incident.
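That advance determination can be captured in something as small as the following sketch, which assumes the manufacturer has already mapped its EU footprint; the data structure and field names are hypothetical, and the ordering simply follows the hierarchy described above.

from dataclasses import dataclass, field

@dataclass
class EUFootprint:
    """Snapshot of a manufacturer's presence in the Union (hypothetical fields)."""
    main_establishment: str | None = None          # Member State where product cybersecurity decisions are taken
    authorised_representative: str | None = None   # Member State of the authorised representative
    importer_volumes: dict[str, int] = field(default_factory=dict)    # Member State -> units placed
    distributor_volumes: dict[str, int] = field(default_factory=dict)
    user_counts: dict[str, int] = field(default_factory=dict)         # Member State -> users

def coordinator_member_state(fp: EUFootprint) -> str:
    """Resolve which Member State's coordinator CSIRT should receive Article 14 notifications."""
    if fp.main_establishment:
        return fp.main_establishment
    if fp.authorised_representative:
        return fp.authorised_representative
    for volumes in (fp.importer_volumes, fp.distributor_volumes, fp.user_counts):
        if volumes:
            return max(volumes, key=volumes.get)
    raise ValueError("No EU footprint recorded; make this determination before an incident, not during one")

# US vendor with no EU establishment but an authorised representative in Ireland:
print(coordinator_member_state(EUFootprint(authorised_representative="IE")))  # -> IE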
Which Products Are In Scope (And Why Yours Probably Is)
The scope of "products with digital elements" is intentionally broad. The European Commission has confirmed that approximately 90 percent of all networked products fall within the default category eligible for self-assessment. The remaining 10 percent are designated as "important" or "critical" products requiring third-party conformity assessment by a notified body. On 1 December 2025 the Commission published Implementing Regulation (EU) 2025/2392, which provides technical descriptions clarifying which products fall into the higher-risk categories.
Annex III of the CRA lists important products in two classes. Class I includes identity management software, password managers, browsers, antivirus, VPNs, network management systems, SIEMs, boot managers, public key infrastructure software, and physical network interfaces. Class II includes hypervisors, container runtime systems, firewalls intended for industrial use, intrusion detection and prevention systems, and tamper-resistant microprocessors. Annex IV designates critical products including hardware devices with security boxes, smart meter gateways, and smartcards or similar devices including secure elements.
For reporting purposes, the classification matters less than people often assume. The Article 14 obligation applies uniformly to every in-scope product regardless of its conformity assessment category. A firewall vendor and a smart light bulb manufacturer face the same 24-hour clock, the same SRP, and the same penalty exposure. Classification affects what manufacturers must do for CE marking by December 2027, not what they must do for vulnerability reporting in September 2026.
Open-source software occupies a special carve-out. Free and open-source software developed and supplied outside commercial activity is exempt. The CRA introduces a new legal category of "open-source software steward"; these stewards — typically foundations like Eclipse, Apache, or the Python Software Foundation — face a lighter-touch regime: documenting cybersecurity policy, reporting actively exploited vulnerabilities affecting their projects, and cooperating with market surveillance authorities. Critically, open-source stewards are exempt from administrative fines for any CRA infringement under Article 64(10)(b). Microenterprises and small enterprises also receive an exemption from fines specifically for missing the 24-hour early-warning deadline under Article 64(10)(a) — the exemption covers only that single deadline; the 72-hour notification and 14-day final report remain enforceable against them, and all other CRA obligations apply in full.
The Legacy Product Trap Most Vendors Are Missing
Article 69(3) of the regulation contains a derogation that has caught engineering teams off guard. While the main CRA obligations apply to products placed on the market from 11 December 2027 onward, the Article 14 reporting obligations apply to all products with digital elements that have been placed on the market before 11 December 2027. The regulation is explicit on the point, stating that the Article 14 obligations “shall apply to all products with digital elements” placed on the market before that December 2027 cutoff. The retroactive scope is the source of the September 2026 trap.
The practical implication is consequential. A manufacturer that shipped an industrial gateway in 2019 with an embedded OpenSSL library has a continuing reporting obligation if that gateway is still being made available on the EU market and if a vulnerability in its OpenSSL component becomes actively exploited. The vendor cannot report what it does not know about, but the regulation does not accept ignorance as a defense. To meet Article 14 timelines, manufacturers need component-level visibility into legacy fleets — the kind of visibility that only software bills of materials and continuous vulnerability monitoring can provide.
Industry practitioners have been direct about how hard this will be. Jan Wendenburg, Managing Director of the product security firm ONEKEY, said in a February 2026 statement that in real product tests “gaps often emerge, and many of them are difficult to resolve.” He warned that manufacturers should plan for substantial time, budget, and personnel investment, citing three specific failure patterns he sees repeatedly: vulnerabilities in code from partners outside the EU with no CRA awareness, purchased components with incomplete documentation, and open-source software with no maintained SBOM.
This creates an implicit dependency. Although the CRA’s explicit SBOM requirements under Annex I do not become enforceable until December 2027, the practical reality is that manufacturers cannot meet the September 2026 reporting obligation for legacy products without SBOMs and automated vulnerability tracking already in place. The deadline for SBOM readiness is, in effect, 15 months earlier than the formal SBOM deadline. A concrete scenario illustrates the point: if CISA adds a critical OpenSSL vulnerability to its Known Exploited Vulnerabilities catalog on 15 October 2026 and the affected version is embedded in an IoT gateway a vendor shipped in 2019, the vendor has 24 hours from the moment it becomes aware that the vulnerability is exploitable in its product. Without an SBOM, the vendor may never achieve that awareness in the first place, and cannot rely on ignorance as a defense against enforcement.
# SBOM-based readiness — why each item matters under the CRA
#
# WHY SBOMs at build time: Annex I Part II requires manufacturers to
# identify and document components. Without a build-time SBOM, the only
# way to discover a KEV hit in a legacy product is manual — too slow
# for a 24-hour clock. Automate SBOM generation in CI so the artefact
# ships with every release and is queryable after the fact.
#
# WHY continuous KEV/NVD/OSV matching: CISA KEV entries are the primary
# signal that a vulnerability is "actively exploited" in the wild. The
# moment a KEV entry lands for a CVE present in your SBOM, the Article
# 14 clock may already be running. Polling KEV daily is insufficient
# for after-hours events; poll the KEV JSON feed frequently, subscribe to
# CISA's update notifications, and trigger alerts via your SIEM or
# vulnerability management tooling.
#
# WHY immutable timestamps: Commission guidance Ares(2026)2319816 puts
# "awareness" at reasonable certainty after initial triage, not forensic
# closure. If a market surveillance authority later disputes the T0
# timestamp, your immutable intake log is the evidence. Use an
# append-only store (e.g., hash-chained log, SIEM with tamper protection,
# or a signed git commit trail) — never a mutable ticket field.
#
# WHY multiple authorised SRP reporters: Article 14(7) ties reporting to
# the manufacturer's main establishment, but a single named reporter is
# a single point of failure. If that person is unreachable at 03:00 on
# a Friday public holiday, the 24-hour clock does not pause. Designate
# at least three authorised reporters per main establishment and test
# the rotation quarterly.
SBOM_GENERATION: automated at build time (CycloneDX 1.6 or SPDX 2.3)
SBOM_COVERAGE: all in-market products including legacy fleet
KEV_MONITORING: CISA KEV JSON feed + NVD + OSV — alert SLA ≤ 1 hour
NVD_FEED: https://services.nvd.nist.gov/rest/json/cves/2.0 (NVD 2.0 API; legacy 1.1 feeds retired)
OSV_FEED: https://osv.dev/ (aggregates GitHub, PyPI, npm, Maven, Go)
AWARENESS_LOG: append-only, hashed, stored outside ticketing system
SRP_REPORTERS: ≥ 3 named, per main establishment, rotation tested
ON_CALL_PATH: 24/7, Member State holiday coverage confirmed
REPORTING_TEMPLATES: 24h / 72h / 14d / intermediate — pre-approved, filed in runbook
CSIRT_CONTACT: coordinator CSIRT identified, helpdesk contact saved offline
TABLETOP_COMPLETED: Friday 03:00 scenario, end-to-end SRP submission tested
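The KEV-monitoring line in that checklist can be sketched as a small matching job. The example below assumes a CycloneDX JSON SBOM on disk and uses the public CISA KEV JSON feed; the SBOM filename is hypothetical, and a real pipeline should match on CPE or package URL plus affected version ranges rather than the bare name comparison shown here.

import json
import urllib.request

KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def load_sbom_components(path: str) -> set[tuple[str, str]]:
    """Extract (name, version) pairs from a CycloneDX JSON SBOM."""
    with open(path, encoding="utf-8") as f:
        sbom = json.load(f)
    return {(c.get("name", ""), c.get("version", "")) for c in sbom.get("components", [])}

def fetch_kev() -> list[dict]:
    with urllib.request.urlopen(KEV_URL, timeout=30) as resp:
        return json.load(resp)["vulnerabilities"]

def kev_hits(components: set[tuple[str, str]]) -> list[dict]:
    """Naive match: a KEV 'product' name appears among the SBOM component names.
    A name match is only a trigger for human triage, which is where the
    awareness question in the Commission guidance starts to matter."""
    names = {name.lower() for name, _ in components}
    return [v for v in fetch_kev() if v.get("product", "").lower() in names]

for hit in kev_hits(load_sbom_components("gateway-2019-firmware.cdx.json")):  # hypothetical legacy SBOM
    print(hit["cveID"], hit["vulnerabilityName"], hit["dateAdded"])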
The Open Letter the EU Overruled
One angle usually missing from compliance coverage: the CRA’s 24-hour reporting rule was built over active, public objection from a sizable coalition of the security industry’s most-cited voices. In October 2023, during final trilogue negotiations, 50 cybersecurity experts published an open letter addressed to then-Commissioner Breton, the Spanish Presidency of the Council, and CRA rapporteur Nicola Danti. Signatories included representatives from Google, the Electronic Frontier Foundation, the CyberPeace Institute, ESET, Rapid7, Bugcrowd, and Trend Micro.
The letter’s central warning, aimed at what was then Article 11 of the draft (today’s Article 14), was stark. The signatories argued the regime would give “dozens of government agencies… a real-time database of software with unmitigated vulnerabilities.” They raised three concrete concerns that deserve attention on their own merits, separate from whether the EU’s final choice was right.
The first was a misuse risk. The letter argued that the absence of explicit restrictions on offensive use of CRA-disclosed vulnerabilities, combined with a lack of transparent oversight in most Member States, opens the door to intelligence, surveillance, and offensive-operations misuse by the agencies that receive the notifications. The second was a chilling effect on good-faith research: rushing disclosure before coordinated patching risks discouraging researchers from reporting to vendors at all if they believe their reports will immediately cascade into dozens of government notifications. The third was a data-security concern in its own right: even a leaked list of unpatched vulnerabilities, without technical detail, gives a capable attacker enough to reconstruct exploits — a risk already documented in how commercial surveillance vendors build and weaponize exploit chains from partial public information.
The final CRA text addresses some of these concerns partially. Recital 35a of the regulation excludes vulnerabilities identified by good-faith security research from the reporting requirement. The 24-hour early warning was scoped down to a minimum-information notification rather than a full technical write-up. The Commission’s December 2025 delegated act on dissemination delays (C(2025)8407) gave CSIRTs more room to withhold technical details where dissemination itself would create risk. Open-source stewards were excluded from administrative fines for any CRA infringement under Article 64(10)(b). Microenterprises and small enterprises were excluded from fines specifically for missing the 24-hour early-warning deadline under Article 64(10)(a) — the 72-hour and 14-day deadlines remain enforceable against them.
What the final text did not do is accept the letter’s core structural proposal: that Article 11/14 should apply only to mitigatable vulnerabilities within 72 hours of a patch becoming available. The EU retained the 24-hour early-warning rule over that objection. Manufacturers preparing for September 2026 should read the open letter not as a political artefact but as a predictive document. It describes the specific operational risks — pressure on CVD programmes, government aggregation of unpatched vulnerability data, the surveillance-adjacent uses of centralised notification platforms — that will shape real incidents under the new regime. Those risks do not disappear because the law has been enacted; they become the manufacturer’s risks to manage.
How the CRA Rewires Coordinated Vulnerability Disclosure
The CRA does not repeal coordinated vulnerability disclosure, but it changes the rules of engagement enough that most existing CVD and bug-bounty programmes will need to be adjusted before September 2026. Three questions matter: when does a researcher’s report trigger Article 14 reporting, how does the regulation treat good-faith research, and what is the voluntary reporting channel that sits alongside the mandatory regime.
On the first question, the trigger is exploitation in the wild, not the researcher’s report itself. A researcher submitting a proof-of-concept through a bug-bounty programme does not, by that submission alone, create an Article 14 reporting obligation. The reporting obligation attaches to vulnerabilities for which there is reliable evidence that a malicious actor has exploited them without permission. A researcher demonstrating exploitability in a controlled environment under a responsible-disclosure agreement is not a malicious actor, and Recital 35a of the regulation explicitly excludes vulnerabilities identified by good-faith security research from the reporting requirement. The operational risk is a mismatch between the researcher’s report and the evidence available to the manufacturer: if the bug-bounty submission arrives alongside indicators that the same vulnerability is already being exploited in production, the manufacturer is now on the 24-hour clock whether the researcher intended that or not.
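That distinction can be encoded as a simple triage aid, sketched below; the flags are illustrative, and the real judgment is the "reasonable certainty" standard discussed earlier, not a boolean checklist.

from dataclasses import dataclass

@dataclass
class IntakeEvidence:
    """What arrived alongside a vulnerability report (illustrative flags)."""
    good_faith_research: bool        # CVD or bug-bounty submission under a disclosure agreement
    exploitation_indicators: bool    # KEV entry, ransom note, incident-response findings, CSIRT advisory
    severe_incident: bool            # the security of the product itself has been compromised

def article14_clock_starts(e: IntakeEvidence) -> bool:
    """The trigger is exploitation in the wild or a severe incident,
    not the researcher's report by itself."""
    return e.exploitation_indicators or e.severe_incident

# A PoC from a bug-bounty researcher with no in-the-wild evidence: no clock.
print(article14_clock_starts(IntakeEvidence(True, False, False)))   # False
# The same PoC arriving alongside a CISA KEV entry for the CVE: clock running.
print(article14_clock_starts(IntakeEvidence(True, True, False)))    # True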
On the second question, the regulation pushes Member States in a research-friendly direction. Recital 35i encourages Member States to adopt guidelines for non-prosecution of security researchers and to reduce their exposure to civil liability, modelled on comparable US Department of Justice guidance. The CRA does not itself mandate bug bounty programmes, but it explicitly encourages manufacturers to consider them. For manufacturers, the practical implication is that a published CVD policy — usually a security.txt file at /.well-known/security.txt, a monitored intake address, and a clear safe-harbour statement — is close to mandatory in practice, because it is the mechanism that channels researcher reports into a controlled workflow rather than a chaotic public disclosure.
On the third question, Article 15 of the CRA creates a voluntary reporting channel that operates through the same Single Reporting Platform as the mandatory regime. Any natural or legal person — including security researchers, downstream users, integrators, and third parties — may voluntarily notify vulnerabilities, cyber threats, incidents, and near misses through the SRP. CSIRTs and ENISA are required to protect the confidentiality of voluntary notifications, and voluntary reporting cannot impose additional obligations on the reporter beyond what would otherwise apply. This is the CRA’s structural answer to one of the open-letter concerns: the platform is designed to accept researcher input, not only manufacturer filings, and to keep that input confidential. Manufacturers should treat the voluntary channel as a real signal during triage. A CSIRT that has received independent voluntary reports about a vulnerability in a product will be less patient with a late filing from the product’s manufacturer.
The adjustments this implies for an existing programme are concrete. Update the published CVD policy with a safe-harbour clause referencing Recital 35i. Ensure the bug-bounty intake pipeline captures exploitation evidence separately from the vulnerability report itself — the two trigger different clocks. Adjust researcher agreements so that public-disclosure timelines account for the possibility that evidence of active exploitation may force a filing earlier than the coordinated-disclosure date. And test the path from bug-bounty triage to SRP filing in a tabletop exercise before September.
Components, Third-Party Code, and Substantial Modification
Most products in scope of the CRA are not built entirely by the manufacturer placing them on the market. They include operating systems, cryptographic libraries, communication stacks, container runtimes, AI models, and dozens of open-source libraries. Two questions dominate the operational picture: who has the reporting obligation when a vulnerability sits in a component rather than in the manufacturer’s own code, and when does a software update count as a “substantial modification” that restarts the CRA compliance clock. The stakes are not theoretical — supply chain backdoors embedded in firmware have demonstrated exactly how far downstream a component vulnerability can travel before the manufacturer of the finished product becomes aware.
On the component question, the CRA takes a clear position. The manufacturer of the finished product placed on the market is responsible for the cybersecurity of that product in its entirety, including all integrated components. Vulnerability handling obligations apply to the product as a whole, not just to code the manufacturer wrote. A manufacturer that identifies a vulnerability in an integrated component is required under Article 13(6) to report it to the person or entity maintaining the component — which is how upstream open-source maintainers get pulled into the downstream notification loop. If the upstream maintainer is unresponsive or the component is unmaintained, the integrating manufacturer is expected to address the vulnerability by other means: disabling compromised functions, swapping the component, or developing and upstreaming a patch.
The reporting obligation under Article 14 does not split between the manufacturer and the component maintainer in any simple way. If an actively exploited vulnerability in an embedded OpenSSL version is being used against customers of a gateway product, the gateway manufacturer has the Article 14 obligation to ENISA and the designated CSIRT, regardless of who wrote the vulnerable code. OpenSSL itself, as an open-source project without commercial placing-on-the-market activity, sits outside the CRA’s manufacturer definition; the Eclipse/Apache/Python-style foundations that qualify as open-source stewards face a lighter-touch regime. The net effect is that commercial manufacturers bear the reporting burden for the entire stack they ship — an argument the industry has been making for years about why SBOMs and continuous upstream monitoring are non-optional.
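How that upstream flow gets tracked in practice can be sketched from the SBOM, assuming each affected component record carries a maintainer contact; the field names, the 14-day nudge interval, and the example contact address are hypothetical.

from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum

class Fallback(Enum):
    DISABLE_FUNCTION = "disable the compromised function"
    SWAP_COMPONENT = "replace the component"
    DEVELOP_AND_UPSTREAM_PATCH = "develop a fix and share it upstream"

@dataclass
class ComponentVulnerability:
    component: str                        # e.g. "openssl 1.1.1k", taken from the SBOM
    upstream_contact: str | None          # maintainer or steward contact, if known
    reported_upstream_on: date | None = None

def next_action(cv: ComponentVulnerability, today: date, nudge_after_days: int = 14) -> str:
    """Report to the component maintainer first; fall back to the manufacturer's
    own mitigation if upstream is unreachable or stays silent."""
    if cv.upstream_contact is None:
        options = ", ".join(f.value for f in Fallback)
        return f"No maintainer reachable: choose a fallback ({options}) and document the decision"
    if cv.reported_upstream_on is None:
        return f"Report the vulnerability to {cv.upstream_contact} and record the date"
    if today - cv.reported_upstream_on > timedelta(days=nudge_after_days):
        return "Upstream unresponsive: escalate to a fallback mitigation and document why"
    return "Awaiting upstream response; keep the SBOM entry flagged"

print(next_action(ComponentVulnerability("openssl 1.1.1k", "security@upstream.example",
                                         date(2026, 10, 1)), today=date(2026, 10, 20)))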
On substantial modification, the CRA aligns with the broader EU product-law tradition rather than inventing a new test. A substantial modification is a change that either alters the product’s intended purpose beyond what was foreseen in the initial risk assessment or changes the nature of the hazard or the level of cybersecurity risk. Routine security patches, bug fixes, and minor functional changes are not substantial modifications. The March 2026 draft guidance Ares(2026)2319816 proposed a four-question test that manufacturers should run against every release: does the update introduce new threat vectors, does it change the intended purpose, does it materially alter the product’s risk profile, and does it create new cybersecurity requirements not covered by the original assessment. An answer of yes to any of these triggers a new conformity assessment and, for products placed on the market before December 2027, pulls them into the full CRA regime from the date of the substantial modification.
The practical consequence is a governance burden on the release process. Every release needs a documented substantial-modification determination. That determination is what separates a routine patch from a regulatory trigger, and it is the evidence regulators will ask for if a modified legacy product is later the subject of enforcement action. Building the determination into the release checklist, rather than performing it ad hoc, is the right approach.
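That release-gate determination can be captured in a small, persisted record, sketched here with the four questions from the draft guidance; the class and field names are illustrative, and storing the result next to the release artefacts is a design choice rather than a requirement.

from dataclasses import dataclass, asdict
import json

@dataclass
class SubstantialModificationCheck:
    """Per-release determination against the four-question test in Ares(2026)2319816."""
    release: str
    new_threat_vectors: bool
    intended_purpose_changed: bool
    risk_profile_materially_altered: bool
    new_requirements_not_covered: bool
    assessor: str

    @property
    def substantial(self) -> bool:
        # Any "yes" makes the release a substantial modification and triggers
        # a new conformity assessment.
        return any((self.new_threat_vectors, self.intended_purpose_changed,
                    self.risk_profile_materially_altered, self.new_requirements_not_covered))

check = SubstantialModificationCheck(
    release="gateway-fw 4.2.0", new_threat_vectors=False, intended_purpose_changed=False,
    risk_profile_materially_altered=False, new_requirements_not_covered=False,
    assessor="Product security lead",
)
# Persist the determination alongside the release artefacts so it can be produced on request.
print(json.dumps({**asdict(check), "substantial_modification": check.substantial}, indent=2))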
When One Incident Triggers Four Regulators
A single exploited vulnerability in an EU-market product rarely triggers only the CRA. Depending on the product and the affected data, the same incident may generate parallel obligations under NIS2, GDPR, DORA, the AI Act, and sector-specific regimes — each with its own clock, its own authority, and its own format. The CRA does not replace any of these; it layers on top of them, and its notification duties apply without prejudice to reporting obligations under other Union law.
The practical consequence is that a manufacturer’s incident-response runbook needs a decision branch for each regime, not a single “file the incident report” checkbox. Consider a ransomware incident against a financial-services SaaS platform that leaks customer personal data and relies on an AI-driven fraud-detection component. Within 4 hours, the operator may owe a DORA major-ICT-incident notification to its competent financial authority. Within 24 hours, the manufacturer of the product owes a CRA early warning via the SRP; the operator owes a NIS2 early warning to the national CSIRT. Within 72 hours, GDPR Article 33 requires a personal-data breach notification to the competent data protection authority, the CRA 72-hour notification is due, and NIS2 requires its substantive update. Within 14 days, the CRA final report is due if a fix is available. Within 30 days, the NIS2 final report and the CRA severe-incident final report close out the regulatory record.
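One way to sketch that decision branch is below; the applicability flags are illustrative, the clocks are simplified to flat offsets from a single detection timestamp, and DORA's classification rules in particular are more nuanced than a constant four-hour offset.

from datetime import datetime, timedelta, timezone

def regulatory_clocks(detected_utc: datetime, *, cra: bool, nis2: bool,
                      gdpr_personal_data: bool, dora_financial_entity: bool) -> dict:
    """Earliest notification deadlines per regime for a single incident.
    Each regime keeps its own authority and format in 2026; this only surfaces
    which clock binds first so the runbook can be designed around it."""
    clocks = {}
    if dora_financial_entity:
        clocks["DORA initial notification"] = detected_utc + timedelta(hours=4)
    if cra:
        clocks["CRA early warning (SRP)"] = detected_utc + timedelta(hours=24)
    if nis2:
        clocks["NIS2 early warning (CSIRT)"] = detected_utc + timedelta(hours=24)
    if gdpr_personal_data:
        clocks["GDPR Art. 33 breach notification (DPA)"] = detected_utc + timedelta(hours=72)
    return dict(sorted(clocks.items(), key=lambda kv: kv[1]))

t0 = datetime(2026, 10, 3, 2, 15, tzinfo=timezone.utc)
for regime, due in regulatory_clocks(t0, cra=True, nis2=True,
                                     gdpr_personal_data=True, dora_financial_entity=True).items():
    print(f"{due.isoformat()}  {regime}")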
Engineering teams should design for the tightest applicable clock and layer the others on top. For a financial-services SaaS operator, that means building an incident-response runbook around the DORA 4-hour major-incident trigger; for every other manufacturer, the CRA and NIS2 24-hour clock is the binding constraint. The ENISA SRP architecture is designed to reduce, but not eliminate, this burden. ENISA has stated that the platform is future-proofed for integration with NIS2 and DORA reporting. In 2026, however, the three regimes still require separate submissions to separate authorities; a unified portal is a roadmap item, not a current capability. The Commission published its Digital Omnibus package on 19 November 2025, followed by a standalone cybersecurity package (COM(2026)13) on 20 January 2026, which together propose a single entry point for incident reporting across NIS2, DORA, eIDAS, CRA, and GDPR. These proposals would, if adopted, allow CRA severe incident reports to satisfy NIS2 obligations where they contain the required information, and would extend the GDPR breach notification window from 72 to 96 hours. However, the Digital Omnibus and cybersecurity package remain proposals under legislative review by the European Parliament and Council, and organisations must not delay CRA compliance planning on the assumption that the rules will be simplified before September 2026.
Penalty Structure and Enforcement Powers
Article 64 of the CRA establishes a three-tier penalty framework that draws direct comparison to GDPR enforcement. The highest tier applies to non-compliance with the essential cybersecurity requirements in Annex I and the core manufacturer obligations in Articles 13 and 14. Failure to comply with vulnerability reporting falls squarely in this top tier. The administrative fine ceiling is €15 million or 2.5 percent of total worldwide annual turnover for the preceding financial year, whichever is higher. For a vendor with €1 billion in global revenue, the percentage calculation produces a €25 million ceiling, well above the fixed amount.
The second tier covers other CRA obligations including importer and distributor duties, conformity assessment requirements, and obligations relating to the EU Declaration of Conformity. The ceiling for tier two is €10 million or 2 percent of global annual turnover. The third tier applies to providing incorrect, incomplete, or misleading information to authorities or notified bodies. The ceiling for tier three is €5 million or 1 percent of global annual turnover. In every case the higher of the two amounts applies.
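The "whichever is higher" rule is worth making explicit, because the percentage branch dominates for any sizeable vendor. A short sketch using the tier ceilings described above:

def cra_fine_ceiling(global_turnover_eur: float) -> dict[str, float]:
    """Maximum administrative fine per Article 64 tier: the higher of the
    fixed amount and the turnover percentage applies."""
    tiers = {
        "tier 1 (Annex I, Articles 13 and 14)": (15_000_000, 0.025),
        "tier 2 (other CRA obligations)":       (10_000_000, 0.020),
        "tier 3 (misleading information)":      (5_000_000,  0.010),
    }
    return {name: max(fixed, pct * global_turnover_eur) for name, (fixed, pct) in tiers.items()}

# A vendor with EUR 1 billion in global turnover: the 2.5 percent branch gives EUR 25 million.
for tier, ceiling in cra_fine_ceiling(1_000_000_000).items():
    print(f"{tier}: EUR {ceiling:,.0f}")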
Fines are not the only enforcement tool. Market surveillance authorities operate under the existing horizontal framework of Regulation (EU) 2019/1020 and have powers to order corrective measures, withdraw products from the market, prohibit further sales, or require product recalls. Authorities can run joint cross-border sweeps targeting specific product categories. For physical products, the CE mark itself becomes a customs checkpoint — products lacking valid conformity cannot clear EU borders. Reputational consequences compound the financial ones: enforcement actions are public, and losing market access in a 27-country bloc is often more damaging than the fine itself.
Enforcement begins at the Member State level, not in Brussels. Each Member State designates at least one market surveillance authority for the CRA, and it is that national authority — not ENISA, not the Commission — that investigates suspected breaches and issues fines. For multinational manufacturers, the determining factor is the “main establishment” in the Union, which the regulation defines as the Member State where decisions related to the cybersecurity of products are taken, not necessarily the legal headquarters. A vendor with a holding company in Luxembourg, engineering in Germany, and product-security leadership in the Netherlands will have its main establishment in the Netherlands for CRA purposes, and the Dutch authority will lead enforcement. Fines imposed by a national authority are subject to Member State administrative-law procedures for appeal, with ultimate recourse to the Court of Justice of the European Union for points of EU law interpretation. Cross-border coordination happens through the Administrative Cooperation Group (ADCO) and the CSIRTs network, which means that an enforcement action in one Member State can cascade into coordinated sweeps across the Union if the product category warrants it.
How Manufacturers Should Prepare Before September
The window between now and 11 September 2026 is shorter than it appears. The platform itself is still being built, the implementing acts are still being finalized, and the first standardisation deliverables are not expected until Q3 2026. Manufacturers cannot wait for full clarity before beginning preparation, because the operational changes required take months to implement and test.
Practical readiness has several layers. The intake layer requires a clear vulnerability disclosure channel — typically a published security.txt file at /.well-known/security.txt, a monitored security email address, and a coordinated vulnerability disclosure (CVD) policy that the public can find and use. The triage layer requires documented criteria for what counts as “actively exploited” and what counts as a “severe incident,” with named decision-makers and documented escalation paths that include after-hours coverage. The reporting layer requires pre-approved templates for the 24-hour, 72-hour, and 14-day submissions, plus identification of the CSIRT designated as coordinator in the manufacturer’s main establishment Member State.
The evidence layer is often overlooked but matters most when an incident occurs. Regulators will expect to see immutable timestamps showing when the manufacturer became aware, what triage decisions were made, who authorised the report, and what was filed. Tabletop exercises before the deadline test whether the workflow holds up under pressure — and whether the on-call engineer at 03:00 knows who to contact and how to file. Manufacturers should track the Commission’s draft guidance Ares(2026)2319816 closely. Although not legally binding (authoritative interpretation of EU law rests with the Court of Justice of the European Union), it signals how market surveillance authorities are likely to interpret grey areas including remote data processing solutions, the scope of free and open-source software, the determination of support periods, and the interplay between the CRA and other EU rules.
Two concrete artefacts are worth building before September 2026: a published security.txt file that satisfies the Article 13(17) single-point-of-contact requirement, and an internal triage evidence record that creates the timestamped audit trail regulators will ask for. The security.txt format is standardised in RFC 9116 and is the established convention for machine-readable vulnerability disclosure channels. Publishing it at the canonical path signals an operational CVD programme and is the mechanism through which researchers — and CSIRTs — know where to send reports. The triage evidence record is not explicitly mandated by any single article, but it is what gives the manufacturer a concrete defence if the coordinator CSIRT or a market surveillance authority later disputes when the 24-hour clock started.
# security.txt — publish at https://yourdomain.com/.well-known/security.txt
# RFC 9116 compliant. "Contact" and "Expires" are the only required fields.
# Satisfies the Article 13(17) single-point-of-contact requirement: the
# regulation specifies the contact must allow users to choose their preferred
# means of communication and cannot be limited to automated tools, so include
# at least one human-monitored email address alongside any web form.
# "Preferred-Languages" and "Policy" are strongly recommended: they tell
# researchers what safe-harbour protections exist, which is material under
# CRA Recital 35i (encourages Member States to protect good-faith researchers
# from criminal and civil liability, modelled on US DOJ guidance).
# Set "Expires" no more than one year out and update it before it lapses —
# an expired security.txt signals an unmaintained disclosure programme.
Contact: mailto:psirt@yourcompany.com
Contact: https://yourcompany.com/security/report
Expires: 2027-09-11T00:00:00.000Z
Preferred-Languages: en, de, fr
Policy: https://yourcompany.com/security/vulnerability-disclosure-policy
Canonical: https://yourcompany.com/.well-known/security.txt
Acknowledgments: https://yourcompany.com/security/hall-of-thanks
# -----------------------------------------------------------------------
# Article 14 triage evidence record — one file per qualifying event
# -----------------------------------------------------------------------
# WHY: Commission guidance Ares(2026)2319816 defines "awareness" as
# reasonable certainty after initial triage, not forensic closure.
# If a market surveillance authority later argues the clock started
# earlier than you claim, this timestamped record is the defence.
# Store it in an immutable log (append-only, hash-chained, or a signed
# commit trail). Never alter an existing entry; add corrections as new
# entries with a reference to the original.
INCIDENT_ID: INC-2026-XXXX
PRODUCT: [Name and all affected version strings]
CVE_OR_INTERNAL_ID: CVE-YYYY-XXXXX (or INTERNAL-YYYY-XXXXX if unassigned)
# T0 — intake timestamp: the moment evidence first arrived in any channel.
# This is NOT the awareness moment; it anchors the chain of evidence.
T0_INTAKE: 2026-09-XX 23:14 UTC
T0_SOURCE: [security.txt email / bug bounty / CSIRT notification / SIEM alert]
T0_RECEIVED_BY: [Name or team alias]
# T1 — awareness timestamp: when a named decision-maker concluded that
# active exploitation was credible. This is the CRA Article 14 clock start.
# The "evidence reviewed" field must be specific enough to be verifiable.
T1_AWARENESS: 2026-09-XX 23:47 UTC
T1_DECISION_MAKER: [Full name and job title]
T1_EVIDENCE_REVIEWED: [PoC URL / CISA KEV entry ID / ransom note reference / CSIRT advisory]
T1_CONCLUSION: Actively exploited / Severe incident / Not yet confirmed
# 24-hour early warning — confidential to ENISA + coordinator CSIRT.
# WHY confidential: Article 14(1) and Recital 35c keep unpatched technical
# detail confined to CSIRTs and ENISA until a fix is available. Filing
# the 24-hour report does NOT trigger public disclosure.
DEADLINE_24H: [T1_AWARENESS + 24 hours]
FILED_24H: [timestamp — must precede DEADLINE_24H]
FILED_BY: [Authorised SRP reporter name and backup]
SRP_REF_24H: [Platform-assigned reference number]
# 72-hour full notification — general nature of exploit, measures taken,
# user mitigations available. Indicate sensitivity level where applicable
# (Article 14(2)(b) explicitly requires this assessment).
DEADLINE_72H: [T1_AWARENESS + 72 hours]
FILED_72H: [timestamp]
SRP_REF_72H: [reference]
EXPLOIT_NATURE: [e.g., unauthenticated remote code execution via HTTP endpoint]
MEASURES_TAKEN: [Mitigating controls deployed; patch status]
USER_GUIDANCE_URL: [Published advisory URL, or "pending" with expected date]
SENSITIVITY_LEVEL: [High / Medium / Low — assess whether broader dissemination
to other CSIRTs should be delayed under C(2025)8407]
# 14-day final report — from the date a corrective measure becomes available.
# For severe incidents the window is 1 month from the 72-hour filing.
FIX_AVAILABLE: [Date patch or accepted workaround published]
DEADLINE_FINAL: [FIX_AVAILABLE + 14 days] (or 72H filing + 30 days for severe)
FILED_FINAL: [timestamp]
SRP_REF_FINAL: [reference]
VULNERABILITY_DETAIL: [Severity/CVSS, attack vector, impact scope]
ACTOR_ATTRIBUTION: [If known; "unknown" is acceptable]
CORRECTIVE_MEASURES: [Patch version, advisory URL, SBOM update published]
# Article 14(8) user notification — parallel to the cascade, not sequential.
# WHY it matters: if the manufacturer fails to notify users in a timely
# manner, the coordinator CSIRT may notify them directly, removing the
# manufacturer's control over public messaging entirely.
USER_NOTIFIED: [timestamp — target: within the 72-hour workflow]
NOTIFICATION_CHANNEL: [Published advisory / email to registered users / in-product alert]
NOTIFICATION_FORMAT: [Structured / machine-readable CSAF? Yes/No]
# Retention: Article 13(21) requires documentation accessible for 10 years
# after placement on market, or the support period, whichever is longer.
RETENTION_DEADLINE: [Product placed on market date + support period, or + 10 years]
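Both artefacts decay silently if nobody checks them. A short scheduled check for the security.txt above is sketched here; the domain is the placeholder from the example, and the 30-day warning threshold is an arbitrary choice.

import sys
import urllib.request
from datetime import datetime, timedelta, timezone

def check_security_txt(domain: str, warn_days: int = 30) -> list[str]:
    """Verify the published security.txt still has a Contact and has not expired (RFC 9116)."""
    url = f"https://{domain}/.well-known/security.txt"
    with urllib.request.urlopen(url, timeout=15) as resp:
        body = resp.read().decode("utf-8")
    fields = {}
    for line in body.splitlines():
        if ":" in line and not line.startswith("#"):
            key, _, value = line.partition(":")
            fields.setdefault(key.strip().lower(), value.strip())
    problems = []
    if "contact" not in fields:
        problems.append("missing required Contact field")
    if "expires" not in fields:
        problems.append("missing required Expires field")
    else:
        expires = datetime.fromisoformat(fields["expires"].replace("Z", "+00:00"))
        if expires < datetime.now(timezone.utc):
            problems.append(f"security.txt expired on {expires.date()}")
        elif expires < datetime.now(timezone.utc) + timedelta(days=warn_days):
            problems.append(f"security.txt expires soon ({expires.date()})")
    return problems

if __name__ == "__main__":
    issues = check_security_txt("yourcompany.com")   # placeholder domain from the example above
    print("OK" if not issues else "; ".join(issues))
    sys.exit(1 if issues else 0)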
Finally, manufacturers selling into the EU from outside the Union face one preparation task that cannot be delayed: appointment of an authorised representative. Without one, the coordinator CSIRT can only be identified through the Article 14(7) fallback hierarchy of importer, distributor, and user numbers, a determination that should not be made for the first time during an active incident. Several Member State CSIRTs, including Germany’s BSI and France’s ANSSI, are issuing early guidance specific to their jurisdictions, and the CSIRTs designated as coordinators are expected to provide helpdesk support especially for micro and small enterprises once the SRP comes online. Engaging now with the prospective coordinator CSIRT is cheaper than discovering its contact path during an incident.
One common question deserves a clear answer: can manufacturers rely on existing cybersecurity standards to demonstrate compliance? The answer is qualified. The CRA operates on a “presumption of conformity” model familiar from other New Legislative Framework regulations. Products that conform to harmonised standards listed in the Official Journal are presumed to meet the corresponding essential cybersecurity requirements. The first CRA harmonised standards are expected in Q3 2026, with the EN 40000 series under development for vertical product categories including browsers, password managers, antivirus, VPNs, network management systems, SIEMs, and boot managers. Until those standards are published, manufacturers can still rely on existing references as engineering guidance: Germany’s BSI TR-03183 series (Parts 1, 2, and 3) is explicitly aligned to the CRA and widely regarded as the best available technical checklist; IEC 62443 for industrial products; ISO/IEC 27034 for application security; and the EUCC certification scheme at “substantial” or “high” assurance levels. These do not confer a legal presumption of conformity today, but building to them now makes transition to the eventual harmonised standards substantially less painful.
Twelve Precise Details Even Veterans Miss
The Article 14 cascade and the €15 million ceiling capture the attention, but the regulation’s hardest obligations are scattered across recitals, individual paragraphs, and Commission guidance. What follows is a collection of twelve specific provisions that consistently surprise experienced compliance teams, legal counsel, and product-security leads. Each is cited to its source in Regulation (EU) 2024/2847 or the Commission’s draft guidance Ares(2026)2319816.
Key Takeaways
- The September 2026 deadline is the real first deadline: Article 14 reporting obligations become enforceable on 11 September 2026, 15 months before the often-cited December 2027 full-application date. Many engineering teams treat this as a 2027 problem and will be non-compliant for over a year by the time they begin preparing.
- The 24/72/14 cascade is non-negotiable: Early warning within 24 hours of awareness, full vulnerability notification within 72 hours, final report within 14 days of a corrective measure being available. Severe incidents follow the same 24h/72h schedule with a 30-day final report. The clock starts at awareness, not at confirmation.
- Awareness begins at reasonable certainty, not forensic closure: The Commission’s draft guidance Ares(2026)2319816 of 3 March 2026 settles the contested definition. Once an initial triage determines that active exploitation is credible, the 24-hour clock is running, whether or not the manufacturer has completed a full technical assessment.
- Legacy products are in scope: Article 69(3) extends Article 14 reporting to every product with digital elements already on the EU market before December 2027. A device shipped in 2018 still triggers the obligation if it is still being made available and a vulnerability becomes actively exploited.
- The 24-hour rule was enacted over a 50-expert industry objection: An open letter signed by representatives from Google, EFF, Rapid7, Bugcrowd, ESET, and others warned that the rule would aggregate unpatched-vulnerability data in government hands. The concerns were partly addressed by the final text but not resolved; the operational risks they identified remain live risks for manufacturers to manage.
- Penalties mirror GDPR scale: Up to €15 million or 2.5 percent of global annual turnover, whichever is higher, for failures to comply with reporting obligations or essential cybersecurity requirements. Market withdrawal, recalls, and CE mark loss compound the financial exposure.
- SBOMs are functionally mandatory by September 2026: Although explicit SBOM requirements activate in December 2027, a manufacturer cannot meet the 24-hour reporting clock for legacy fleets without component-level visibility already in production.
- Reporting flows through ENISA’s Single Reporting Platform: One submission reaches the designated CSIRT and ENISA simultaneously, with downstream dissemination to Member State CSIRTs handled by the platform. Identify the right CSIRT in advance based on the main establishment.
- Article 14(8) creates a separate user-notification obligation: Beyond the 24/72/14 regulatory cascade, manufacturers must inform impacted users about the vulnerability and any mitigations they can deploy. If the manufacturer delays, the designated CSIRT can publish that information directly, so user communication should be built into the 72-hour workflow.
- The manufacturer reports the whole stack, not just its own code: Article 14 attaches to the product in its entirety. A vulnerability in an integrated open-source component is the finished-product manufacturer’s obligation to notify under Article 14, even if the upstream maintainer is unresponsive. SBOMs and continuous upstream monitoring are the only realistic way to meet this.
- CVD programmes need a September 2026 refresh: Bug-bounty and coordinated-disclosure workflows should separate the researcher’s report from exploitation evidence, include a Recital 35i safe-harbour clause, and have a tested path from triage to SRP filing. Voluntary third-party reporting under Article 15 means a late manufacturer filing will often compete with earlier reports already sitting in the CSIRT’s queue.
- One incident can trigger four regimes: CRA, NIS2, GDPR, DORA, and the AI Act each have their own clocks and authorities. The tightest applicable clock (DORA’s 4-hour major-ICT-incident trigger, or the CRA/NIS2 24-hour early warning for everyone else) should drive the design of the incident-response runbook.
- Security updates outlive the product: Under Article 13(9), every security update must remain available for at least 10 years after it has been issued, or for the remainder of the support period, whichever is longer. A patch released in year 5 of a 5-year support period must be accessible until year 15. For industrial products with longer support periods, patches can outlive the product itself by a decade.
- The CRA forces downstream-to-upstream information flow: Under Article 13(6), a manufacturer that finds a vulnerability in an integrated component must report it to the component’s maintainer, and where the manufacturer develops a fix, must share the code or documentation back upstream in machine-readable format. A silent patch is out of compliance.
- "Known exploitable vulnerability" is a shipping prohibition: Annex I Part I (1)(a) requires that products are made available on the EU market without known exploitable vulnerabilities. From 11 December 2027, this converts the CISA KEV catalogue into a shipping gate, not just a benchmark — and it sits in Article 64 tier one for penalty purposes.
The Cyber Resilience Act represents the most significant shift in product cybersecurity regulation Europe has produced. The September 2026 reporting deadline is the moment that shift becomes operationally real. Vendors that wait for full clarity, finalised standards, or the December 2027 deadline will arrive at the moment of obligation without the SBOMs, the workflows, the templates, or the CSIRT contacts they need. The regulatory infrastructure — ENISA’s platform, the CSIRT designations, the conformity assessment bodies, the market surveillance authorities — is being assembled in real time around them. The 50-expert open letter, the delegated acts, the March 2026 draft guidance, and the Digital Omnibus proposals are all signals that the CRA is still a regime under construction. But the 24-hour clock is already set, and it starts ticking on 11 September 2026. The work to be ready needs to be happening now.
Frequently Asked Questions
- When does EU Cyber Resilience Act Article 14 become enforceable?
- Article 14 reporting obligations become enforceable on 11 September 2026 — 15 months before the often-cited December 2027 full-application deadline. This is the first CRA deadline that carries administrative penalties, affecting every manufacturer that sells products with digital elements on the EU market.
- What is the 24/72/14 reporting cascade under CRA Article 14?
- The cascade has three deadlines: an early warning notification within 24 hours of awareness, a full vulnerability notification within 72 hours of awareness, and a final report within 14 days of a corrective or mitigating measure becoming available. For severe incidents, the final report is due within one month of the 72-hour notification. The 24-hour and 72-hour clocks run from awareness, not from each other; the final-report clock runs from fix availability.
- Does the EU Cyber Resilience Act apply to products sold before December 2027?
- Yes. Article 69(3) extends Article 14 reporting to all products with digital elements already on the EU market before 11 December 2027. A device shipped in 2018 that contains a vulnerability added to the CISA KEV catalog after September 2026 triggers the obligation if the product is still being made available.
- What are the penalties for failing to comply with CRA Article 14?
- Non-compliance with Article 14 falls under the top tier of Article 64: up to €15 million or 2.5 percent of global annual turnover, whichever is higher. Market surveillance authorities can additionally order product withdrawal, market recalls, and loss of the CE mark.
- Where do manufacturers submit CRA Article 14 vulnerability notifications?
- Notifications go through ENISA’s Single Reporting Platform (SRP). A single submission simultaneously reaches the manufacturer’s coordinator CSIRT and ENISA. The coordinator CSIRT is determined by the manufacturer’s main establishment — the Member State where product cybersecurity decisions are taken, not necessarily the legal headquarters.
- Does a security researcher’s bug bounty report trigger CRA Article 14?
- No. The Article 14 trigger is active exploitation by a malicious actor without permission. Recital 35a explicitly excludes vulnerabilities identified by good-faith security research. A proof-of-concept submitted through a bug bounty programme does not start the 24-hour clock — unless the manufacturer also has evidence that the same vulnerability is being exploited in the wild.
- Are SBOMs required under the EU Cyber Resilience Act?
- Explicit SBOM requirements activate in December 2027 under Annex I. However, a Software Bill of Materials is functionally mandatory by September 2026: without component-level visibility, a manufacturer cannot detect that an actively exploited library version is embedded in a shipped product, and cannot meet the 24-hour reporting clock for legacy product fleets.