News

ENISA publishes AI-related cybersecurity risks

Framework for Good Cybersecurity Practices for AI: Overview of the Framework

The proposed FAICP framework is a simple approach that guides NCAs, individual AI stakeholders and the research community on how to use existing cybersecurity practices, which additional cybersecurity activities are needed to address the specificities of AI, and which further practices are required when AI systems are employed in specific sectors (e.g. health, energy, telecom). 
The framework was developed using the following principles:

•    Inclusive. Uses past experience and builds upon it.
•    Holistic. Considers the AI systems within the ICT infrastructure and embraces all cybersecurity practices needed around and within the AI systems and their individual components.
•    Expandable. Its generic and yet embracing structure can include future developments in all three layers.
•    Multi-use. Useful to AI stakeholders independently of the sector.
•    International. Includes European and international efforts, standards and recommendations.

Layer I (cybersecurity foundations). The basic cybersecurity knowledge and practices that need to be applied to all ICT environments that host, operate, develop, integrate, maintain, supply or provide AI systems. The existing cybersecurity good practices presented in this layer can be used to ensure the security of the ICT environment that hosts the AI systems.

Layer II (AI-specific). Cybersecurity practices needed for addressing the specificities of the AI components with a view on their life cycle, properties, threats and security controls, which would be applicable regardless of the industry sector.

Layer III (Sectoral AI). Various best practices that can be used by sectoral stakeholders to secure their AI systems. High-risk AI systems (such as those that process personal data) have been identified in the AI Act and are listed in this layer to raise operators' awareness and encourage them to adopt good cybersecurity practices.

Security management 
Risk management is the basic cybersecurity practice for ensuring that an enterprise is secure: it identifies and evaluates threats, vulnerabilities and potential impacts, and measures the resulting risks. According to the NIS and NIS 2 directives, all essential and important entities for the functioning of society need to assess and mitigate their risks. Therefore, the first step in securing AI systems and their life cycle is to operate in a secure environment, i.e. to secure the ICT infrastructure that hosts the AI systems. 
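In practice, the risk measurement step described above is often reduced to scoring each identified threat by likelihood and impact. A minimal sketch is given below; the threat entries, the 1-5 scales and the risk = likelihood × impact formula are illustrative assumptions for this example, not part of the ENISA framework itself:

```python
# Minimal risk-register sketch: score = likelihood * impact on 1-5 scales.
# Threat entries and scales are illustrative assumptions.

def risk_score(likelihood: int, impact: int) -> int:
    """Combine likelihood and impact (each 1-5) into a single risk score."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be on a 1-5 scale")
    return likelihood * impact

register = [
    {"threat": "denial of service", "likelihood": 4, "impact": 4},
    {"threat": "operator misconfiguration", "likelihood": 3, "impact": 3},
    {"threat": "power outage", "likelihood": 2, "impact": 5},
]

# Rank threats so mitigation effort goes to the highest risks first.
ranked = sorted(
    register,
    key=lambda e: risk_score(e["likelihood"], e["impact"]),
    reverse=True,
)
for entry in ranked:
    print(entry["threat"], risk_score(entry["likelihood"], entry["impact"]))
```

Ranking the register this way gives a simple, repeatable basis for deciding which risks to mitigate first.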
The various types of threats to ICT infrastructures are listed below. 

•    Adversarial threats. These stem from the malicious intent of individuals, groups, organisations or nations (e.g. denial-of-service attacks, unauthorised access, identity masquerading).
•    Accidental threats. These are caused accidentally or through legitimate components. Human errors are a typical accidental threat. Usually, they occur during the configuration or operation of devices or information systems, or the execution of processes.
•    Environmental threats. These include natural disasters (floods, earthquakes), human-caused disasters (fire, explosions) and failures of supporting infrastructures (power outage, communication loss).
•    Vulnerability. This is an existing weakness that might be exploited by an attacker.

Want to read more? Download the full document from the ENISA website: https://www.enisa.europa.eu/publications/multilayer-framework-for-good-cybersecurity-practices-for-ai 
