Global and Regional Cyber Security Regulations 2024-2025: Comprehensive review and impact analysis

The cybersecurity regulatory landscape is evolving rapidly around the world, a direct and increasingly forceful response by lawmakers to the growing scale, complexity and severity of global cyber threats. There is a clear and continuing trend of global proliferation and, in many cases, progressive fragmentation of regulatory requirements for digital security, data protection and risk management. For organizations, especially those that operate in multiple international markets or sit within complex global supply chains, this presents an increasingly serious challenge and a significant operational, financial and strategic burden. Recent surveys indicate that more than 76% of Chief Information Security Officers (CISOs) at large enterprises report that this fragmentation, and the sometimes conflicting requirements between jurisdictions, negatively affects their organization's ability to build consistent global cyber resilience and optimize security investments.

What are the key and most influential cybersecurity regulations introduced or coming into force in the European Union between 2024 and 2025, and what obligations do they impose on companies?

The European Union has consistently positioned itself as one of the world's most active and influential regulators of cybersecurity, data protection and the wider digital economy. In recent years, the EU has adopted, or is finalizing the implementation of, a series of comprehensive and often highly ambitious pieces of legislation aimed at systemically strengthening member states' digital resilience, protecting citizens and ensuring a safe and trusted digital single market. The following are the most important of these regulations, which will shape the obligations of businesses in 2024-2025 and beyond.

Artificial Intelligence Act (AI Act): This landmark piece of legislation, the world's first comprehensive regulation of artificial intelligence, formally entered into force on August 1, 2024. Most of its key provisions and obligations, however, will be enforced gradually, in successive phases staggered over 2025 and 2026, giving entities time to adapt. The AI Act introduces a risk-based approach, classifying AI systems into different categories depending on the potential threat they may pose to fundamental rights, health, security or democracy. The most stringent requirements and restrictions (including outright bans in some cases) apply to AI systems classified as high-risk, such as those used in critical infrastructure management, healthcare diagnostics and treatment, education and recruitment systems, or law enforcement and the administration of justice. Key responsibilities for providers and users of high-risk AI systems include ensuring that the systems are technically sound, accurate and reliable; managing the quality of training data (including minimizing bias); ensuring transparency of operation; enabling human oversight; implementing appropriate cybersecurity measures (in line with the principles of "privacy by design and by default" and "security by design"); reporting serious incidents; and maintaining detailed technical documentation and conformity assessments. The AI Act interacts closely with other key EU regulations, such as the NIS2 Directive, the Data Act and the Cybersecurity Act, creating a coherent legal framework for the development and deployment of secure and trustworthy artificial intelligence in the EU. The European Union Agency for Cybersecurity (ENISA) has already published the Framework for AI Cybersecurity Practices (FAICP) as practical, non-binding guidance to support the implementation of the Act's cybersecurity requirements. Crucially, the AI Act is extraterritorial in scope: it applies not only to EU entities, but also to those outside the EU who place AI systems on the EU market or whose AI systems affect individuals located in the EU. It also provides for very high financial penalties for non-compliance, reaching, in the most serious cases, up to €35 million or 7% of a company's total annual worldwide turnover from the previous fiscal year, whichever is higher.
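For a sense of how the "whichever is higher" penalty ceiling works, the short Python sketch below compares the fixed €35 million amount with 7% of annual worldwide turnover. The turnover figures are purely hypothetical and this is an illustration of the arithmetic, not legal advice.

```python
# Illustrative calculation of the AI Act's maximum penalty ceiling for the most
# serious infringements: EUR 35 million or 7% of total worldwide annual turnover,
# whichever is higher. The turnover figures below are hypothetical.

def max_ai_act_penalty(annual_worldwide_turnover_eur: float) -> float:
    """Return the upper penalty ceiling in EUR for the most serious AI Act breaches."""
    return max(35_000_000, 0.07 * annual_worldwide_turnover_eur)

if __name__ == "__main__":
    for turnover in (200_000_000, 1_000_000_000):  # hypothetical turnovers
        ceiling = max_ai_act_penalty(turnover)
        print(f"Turnover EUR {turnover:,}: penalty ceiling EUR {ceiling:,.0f}")
```

For a company with €200 million in turnover the fixed €35 million amount applies, while for €1 billion in turnover the 7% figure (€70 million) sets the ceiling.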

Network and Information Systems Directive 2 (NIS2): European Union member states were formally required to transpose this key directive into their national law by October 17, 2024. However, as of February 2025, as many as 19 member states (including, unfortunately, Poland) had not yet notified the European Commission of the full and correct transposition of the directive, which may lead to infringement proceedings for breach of EU law. NIS2 significantly expands the personal and material scope of the original 2016 NIS Directive, covering 18 sectors of the economy and public services deemed essential or important for the smooth functioning of the EU economy and society. These sectors include energy, transportation (all branches), banking and financial market infrastructure, healthcare (including manufacturers of medicines and medical devices), digital infrastructure (cloud providers, data centers, telecommunications networks), production and distribution of key products (e.g., chemicals, food), water and wastewater management, waste management, postal and courier services, as well as public administration at the central and regional levels. NIS2 imposes a number of specific obligations on covered entities (in both the public and private sectors), most notably the obligation to implement appropriate, risk-proportionate technical, operational and organizational measures to manage cyber risks. These measures must include, among other things, risk analysis, information security policies, incident management, business continuity assurance, supply chain security (including relationships with IT service providers), human resources security and the use of cryptography, as well as the obligation to report significant security incidents to the competent national authorities (CSIRTs – Computer Security Incident Response Teams) and, in some cases, to service recipients – as a rule through an early warning within 24 hours of becoming aware of the incident, a fuller incident notification within 72 hours, and a final report within one month. In the Netherlands, for example, new national legislation based on NIS2 is not expected to come into force until the third quarter of 2025. There is also considerable variation in how member states approach the detailed implementation of NIS2 requirements, such as the classification of specific entities into particular sectors, the procedures for registering those entities, or the level of sanctions for non-compliance.
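To make the reporting cadence concrete, the sketch below shows how a security team might compute the baseline NIS2 deadlines from the moment it becomes aware of a significant incident. The field names and the 30-day approximation of "one month" are our own simplifications, and national transpositions may impose additional or stricter requirements.

```python
# Minimal illustration of tracking NIS2 reporting deadlines for a significant
# incident: 24 h early warning, 72 h incident notification, final report within
# one month (approximated here as 30 days). Field names are illustrative only.
from datetime import datetime, timedelta, timezone

def nis2_reporting_deadlines(aware_at: datetime) -> dict:
    """Return baseline NIS2 reporting deadlines counted from the awareness time."""
    return {
        "early_warning": aware_at + timedelta(hours=24),
        "incident_notification": aware_at + timedelta(hours=72),
        "final_report": aware_at + timedelta(days=30),  # rough stand-in for "one month"
    }

if __name__ == "__main__":
    aware = datetime(2025, 3, 1, 14, 30, tzinfo=timezone.utc)  # hypothetical awareness time
    for stage, deadline in nis2_reporting_deadlines(aware).items():
        print(f"{stage}: {deadline.isoformat()}")
```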

DORA (Digital Operational Resilience Act): This comprehensive regulation, which unlike a directive is directly applicable in all EU member states, applies from January 17, 2025. DORA specifically targets the entire European Union financial sector (covering banks, insurance and reinsurance companies, investment firms, payment institutions, exchanges, clearing houses, etc.) and, very importantly, key providers of external ICT services to the sector (e.g., providers of cloud services, trading systems and analytics platforms). The main objective of DORA is to harmonize and strengthen the digital operational resilience of financial institutions across the EU. It imposes a number of stringent obligations on these entities under five key pillars:

  1. Comprehensive ICT Risk Management: Requires the establishment of a robust ICT risk management framework, the regular identification and assessment of risks, and the implementation of appropriate security measures.
  2. Reporting of serious ICT incidents: Unified and harmonized rules for classifying and reporting serious ICT incidents to competent supervisory authorities.
  3. Digital Operational Resilience Testing: Regular resilience testing, including, for the most critical institutions, an obligation to conduct advanced Threat-Led Penetration Testing (TLPT) at least once every three years.
  4. Third-Party Risk Management (TPRM) for external ICT service providers: Detailed requirements for due diligence in selecting providers, monitoring their performance, contract management, and contingency plans in case of provider failure.
  5. Sharing threat information and intelligence: Encourages the voluntary sharing of cyber threat and incident information within trusted communities.

Implementing all of the DORA requirements is currently one of the biggest challenges for financial institutions in the EU, especially in the areas of building a comprehensive TPRM framework (a simplified illustration follows below), understanding and conducting complex TLPT tests, and effectively integrating the new, highly detailed requirements into organizations' existing operational risk management and information security frameworks.
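As an illustration of the kind of recordkeeping the TPRM pillar implies, the sketch below models a single, highly simplified entry in a register of ICT third-party arrangements. The field names are illustrative assumptions only; the actual register-of-information templates are specified in implementing technical standards and are considerably more detailed.

```python
# Simplified sketch of one record in a DORA-style register of information on
# contractual arrangements with ICT third-party service providers. The fields
# below are an illustrative subset, not the official template.
from dataclasses import dataclass, asdict

@dataclass
class IctProviderArrangement:
    provider_name: str
    service_description: str
    supports_critical_function: bool   # critical/important functions attract stricter oversight
    contract_start: str                # ISO dates kept as strings for brevity
    exit_strategy_documented: bool
    last_risk_assessment: str

def needs_enhanced_oversight(arrangement: IctProviderArrangement) -> bool:
    """Flag arrangements supporting critical or important functions for closer review."""
    return arrangement.supports_critical_function

if __name__ == "__main__":
    cloud = IctProviderArrangement(
        provider_name="ExampleCloud (hypothetical)",
        service_description="Hosting of core banking workloads",
        supports_critical_function=True,
        contract_start="2024-06-01",
        exit_strategy_documented=True,
        last_risk_assessment="2025-01-10",
    )
    print(asdict(cloud), "enhanced oversight:", needs_enhanced_oversight(cloud))
```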

The Cyber Resilience Act (CRA): This is another important regulation, which formally entered into force on December 10, 2024. However, as with the AI Act, most of its key obligations for manufacturers and other economic operators will apply only after a transition period, i.e., from December 11, 2027. The CRA aims to establish horizontal cybersecurity requirements for a very wide range of products with digital elements (both hardware and software, including software components) placed on the European Union market. It imposes a number of significant new obligations primarily on manufacturers of these products, but also on importers and distributors. Key requirements include:

  • The obligation to ensure an adequate level of product security "by design" and "by default", starting at the design and development stage.
  • The obligation to effectively manage vulnerabilities throughout the defined product lifecycle, including the obligation to report actively exploited vulnerabilities in their products to ENISA within 24 hours of becoming aware of them (this particular requirement will apply much earlier, from September 11, 2026).
  • The obligation to provide users with regular security updates (patches) throughout the support period, which must reflect the expected lifetime of the product and, as a rule, be no shorter than 5 years.
  • The need to carry out assessments of compliance with CRA requirements (in some cases with the participation of notified bodies) and to affix the CE marking on products confirming this compliance.
  • The obligation to provide users with detailed technical documentation on product security, including, importantly, a Software Bill of Materials (SBOM), which is expected to increase transparency and facilitate the management of vulnerabilities in dependencies (a minimal illustrative example follows this list).
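The CRA itself does not prescribe a particular SBOM format; machine-readable formats such as CycloneDX or SPDX are commonly used in practice. The sketch below emits a deliberately minimal, CycloneDX-style document for a hypothetical product – real SBOMs are normally generated by build tooling and carry far richer metadata.

```python
# Deliberately minimal, CycloneDX-style SBOM for a hypothetical product with
# digital elements. In practice an SBOM is generated automatically from build
# manifests and includes identifiers, hashes, licenses and supplier data.
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "metadata": {"component": {"type": "application", "name": "example-firmware", "version": "2.4.0"}},
    "components": [
        {"type": "library", "name": "openssl", "version": "3.0.13"},
        {"type": "library", "name": "zlib", "version": "1.3.1"},
    ],
}

print(json.dumps(sbom, indent=2))
```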

GDPR (General Data Protection Regulation, known in Poland as RODO): Although the GDPR has been in effect since 2018, its enforcement by national Data Protection Authorities (DPAs) has remained stringent and consistent through 2024 and 2025, with fines for violations reaching record levels. The largest to date remains the €1.2 billion fine imposed on Meta in May 2023 by the Irish supervisory authority (Meta's lead authority in the EU) for transfers of European users' personal data to the United States in violation of EU law; in September 2023 the same authority fined TikTok €345 million for violations related to the processing of children's personal data and the lack of adequate safeguards in this regard. Regulators are also scrutinizing, in increasing detail, the use of artificial intelligence systems and biometric data for compliance with GDPR principles (e.g., data minimization, purpose limitation, legal basis). Rules on the cross-border transfer of personal data outside the European Economic Area (EEA) also remain in flux – the EU-U.S. Data Privacy Framework, which entered into force in 2023, replaced the Privacy Shield mechanism previously invalidated by the Court of Justice of the EU, but its legal stability is still subject to debate and potential future legal challenges. Throughout 2024, a total of more than €1.2 billion in fines was imposed in Europe for GDPR violations.

What are the most important federal and state legislative initiatives and regulatory changes in the United States related to cybersecurity and data protection between 2024 and 2025?

In the United States, as in the European Union, cybersecurity, privacy and technology risk management regulations are developing and evolving dynamically between 2024 and 2025. These efforts are taking place both at the federal level, through Congress and government agencies, and at the level of individual states, which frequently enact their own, often more stringent, rules.

CIRCIA (Cyber Incident Reporting for Critical Infrastructure Act of 2022): This important federal law, enacted in March 2022, requires US critical infrastructure operators to report major cyber incidents to the federal Cybersecurity and Infrastructure Security Agency (CISA). CISA is currently working to develop the detailed implementing regulations (final rule) for this law. According to the current plan, the final version of these rules is expected to be published at the end of 2025, and the incident reporting requirements themselves will most likely take effect in 2026, after a preparatory period for regulated entities. CISA's proposed regulations, submitted for public consultation in April 2024, include an obligation to report "major cyber incidents" (the precise definition of which is still under discussion) within 72 hours of detection and, more controversially, an obligation to report ransom payments made in ransomware attacks within just 24 hours of such payment. The CISA proposal has drawn criticism from industry and some members of Congress, mainly over concerns about the overly broad scope of entities and types of incidents covered, as well as the potential administrative burden on companies. It is estimated that CIRCIA's regulations could cover more than 300,000 entities across the 16 sectors considered critical infrastructure in the US.

Health Insurance Portability and Accountability Act (HIPAA) regulatory updates: In response to rapidly increasing threats to the security and privacy of electronic Protected Health Information (ePHI), the federal Department of Health and Human Services (HHS) proposed major updates to the HIPAA Security Rule in December 2024/January 2025. The proposed changes are aimed at raising the minimum security standards for HIPAA covered entities and their business associates. They include, among other things: mandatory multi-factor authentication (MFA) for any access to systems storing or processing ePHI; strengthened and clarified requirements for the encryption of ePHI, both at rest and in transit; uniform implementation rules for all security controls (eliminating the current, often confusing distinction between "required" and "addressable" controls – all would become required); a detailed inventory of technology assets and network maps; annual comprehensive HIPAA Security Rule compliance audits; and regular vulnerability scans (at least every six months) and penetration tests (at least annually) by all covered entities.
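The proposed rule specifies required outcomes (such as encryption of ePHI at rest and in transit) rather than particular products or libraries. Purely as an illustration of the kind of control involved, the sketch below encrypts a single record with the open-source cryptography library; it is not a HIPAA-compliant implementation – key management, access control, MFA and audit logging are deliberately out of scope.

```python
# Minimal illustration of encrypting an ePHI-like record at rest using the
# third-party "cryptography" package (pip install cryptography). This is NOT a
# HIPAA-compliant implementation: key management, access controls, MFA and
# audit logging are deliberately out of scope.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice: kept in a KMS/HSM, never stored with the data
cipher = Fernet(key)

record = b'{"patient_id": "HYPOTHETICAL-123", "diagnosis": "example"}'
encrypted = cipher.encrypt(record)   # authenticated symmetric encryption (AES-128-CBC + HMAC-SHA256)
decrypted = cipher.decrypt(encrypted)

assert decrypted == record
print("ciphertext length:", len(encrypted))
```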

Artificial Intelligence (AI) regulation: At both the federal and state levels, a growing number of legislative and regulatory initiatives address various aspects of the development, deployment and use of artificial intelligence systems. At the federal level, notable examples include the TAKE IT DOWN Act (passed in April 2025), which criminalizes the non-consensual dissemination of intimate AI-generated images (e.g., so-called deepfake pornography) and requires online platforms to remove them quickly. Also moving through Congress is the CREATE AI Act, a bill to create a National AI Research Resource (NAIRR) that would provide U.S. researchers with broad access to the computational resources, training data and tools needed to conduct innovative AI research. The National Institute of Standards and Technology (NIST) plays a key role in developing voluntary frameworks and guidelines for secure and trustworthy AI. In March 2025, NIST released a major report in its Trustworthy and Responsible AI series, which includes voluntary guidelines, standards and methodologies for, among other things, securing AI systems against tampering and attacks and minimizing the risk of bias. NIST is also working on the Cyber AI Profile concept (announced in April 2025), which aims to develop a specific set of cyber risk management approaches and controls to protect AI systems against existing and new AI-specific threats. In parallel, individual states, such as California (e.g., AB 2655, regulating deepfakes in the context of election campaigns) and New York (e.g., the AI Act bill, which would mandate bias audits of AI tools used in recruitment processes), are enacting their own, often very specific laws regulating various aspects of AI use.

Change in Federal Administration Approach and Potential Deregulation: It is worth noting a shift in the direction of U.S. federal policy on the regulation of new technologies, including AI, linked to the change of administration. In January 2025, an Executive Order (EO) was issued that repealed some of the Biden administration's earlier directives and placed a much greater emphasis on removing regulatory barriers to U.S. AI leadership and innovation. A subsequent EO, issued in February 2025, requires independent federal agencies (such as the Federal Trade Commission – FTC, the Securities and Exchange Commission – SEC, and the Federal Communications Commission – FCC) to first consult the White House (specifically, the Office of Information and Regulatory Affairs – OIRA) on their regulatory plans for the technology sector. In practice, this could slow the overall pace of new rulemaking or even lead to deregulation in certain areas, including the technology sector. The March 2025 EO titled "Achieving Efficiency Through State and Local Preparedness" appears to shift some of the responsibility for building resilience and preparedness against cyber attacks to the state and local government level, which could lead to further fragmentation of the regulatory approach in the US.

Product security regulations (including IoT devices): The United States is also taking steps to strengthen the security of digital products, including Internet of Things (IoT) devices. PADFAA (Protecting Americans’ Data from Foreign Adversaries Act) is a federal law that was enacted in April 2024 and went into effect in June 2024. It prohibits data brokers from selling, transferring or otherwise disclosing sensitive personal data of U.S. citizens (such as geolocation data, biometric data, health information) to countries deemed adversaries by the U.S. government (currently China, Russia, Iran and North Korea) and entities controlled by those countries. In turn, the IoT Cybersecurity Improvement Act of 2020 required NIST to develop and regularly update federal guidelines for minimum cybersecurity standards for IoT devices acquired and used by federal government agencies. NIST is currently (in 2025) in the midst of a five-year revision of these key guidelines (including NIST documents IR 8259, NIST SP 800-213), and plans for the update include consideration of the specifics of industrial IoT (IIoT) security and analysis of the complex relationship between privacy and cybersecurity of IoT devices. Another important initiative is the voluntary cybersecurity labeling program for consumer IoT devices, known as the US Cyber Trust Mark, launched in January 2025. This program is administered by the Federal Communications Commission (FCC) and is based on security standards developed by NIST. It is designed to help consumers identify and choose safer IoT products available on the market. There is also the possibility of extending this labeling program to other device categories in the future.

CISA Directives and other federal regulations for the public sector: CISA regularly issues Binding Operational Directives (BODs) and, in emergency situations, Emergency Directives (EDs). These directives impose specific, often very detailed and time-bound obligations on U.S. federal agencies to implement certain cybersecurity measures, patch vulnerabilities or respond to specific threats. One example is BOD 25-01, issued in December 2024, which requires agencies to implement a set of secure configuration practices for the cloud services they use. In addition, the White House Office of Management and Budget (OMB) annually publishes detailed implementation guidance for the Federal Information Security Modernization Act (FISMA), which outlines reporting and auditing requirements for all federal agencies regarding information security.


Key Takeaways:

The need for proactive compliance and risk management: In the face of such dynamic and complex regulatory changes, it becomes critical for all organizations not only to stay abreast of evolving regulations across jurisdictions, but more importantly to proactively manage cyber and privacy risks and implement appropriate, integrated technical, organizational and procedural measures to ensure sustainable compliance.

Global trend toward tighter and fragmented regulation: Both the EU and the U.S. are intensifying legislative efforts in response to growing cyber threats, leading to an increasing number of requirements and potential challenges for organizations operating globally.

European Union leading the way in creating a comprehensive regulatory framework: The EU is implementing landmark regulations such as the AI Act (risk-based approach for AI), NIS2 (expanded obligations for critical sectors), DORA (financial sector operational resilience) and CRA (digital product security), while continuing to rigorously enforce GDPR, with significant impact on global standards.

Dynamic regulatory developments in the United States: The U.S. is introducing new federal regulations (e.g., CIRCIA for incident reporting, HIPAA updates for the health sector) and numerous state initiatives, particularly in the area of artificial intelligence; there are also some changes in the federal administration’s approach, which may lead to deregulation in some areas.

Growing emphasis on product and IoT security: There is a strong trend in both the EU (through the Cyber Resilience Act – CRA) and the U.S. (through the IoT Cybersecurity Improvement Act and the U.S. Cyber Trust Mark program) toward regulating the security of products with digital elements, including Internet of Things devices, from the design stage and throughout their life cycle.

Extraterritoriality of regulations and high financial penalties: Many new and existing regulations (e.g., the AI Act in the EU, GDPR) are characterized by extraterritorial reach, affecting organizations around the world, and provide for very severe financial penalties for non-compliance with imposed obligations, forcing companies to take a global approach to compliance.

About the author:
Łukasz Szymański

Łukasz is an experienced professional with a long-standing career in the IT industry. As Chief Operating Officer, he focuses on optimizing business processes, managing operations, and supporting the long-term growth of the company. His versatile skills encompass both technical and business aspects, as evidenced by his educational background in computer science and management.

In his work, Łukasz adheres to the principles of efficiency, innovation, and continuous improvement. His approach to operational management is grounded in strategic thinking and leveraging the latest technologies to streamline company operations. He is known for effectively aligning business goals with technological capabilities.

Łukasz is, above all, a practitioner. He built his expertise from the ground up, starting his career as a UNIX/AIX systems administrator. This hands-on technical knowledge serves as a solid foundation for his current role, enabling him to deeply understand the technical aspects of IT projects.

He is particularly interested in business process automation, cloud technology development, and implementing advanced analytics solutions. Łukasz focuses on utilizing these technologies to enhance operational efficiency and drive innovation within the company.

He is actively involved in team development, fostering a culture of continuous learning and adaptation to changing market conditions. Łukasz believes that the key to success in the dynamic IT world lies in flexibility, agility, and the ability to anticipate and respond to future client needs.