AI Act
Your path to secure and legally compliant AI usage
We help you effectively implement the requirements of the EU AI Act in your company.

Key Facts at a Glance
The AI Act (EU AI Act) is the first comprehensive legislation by the European Union to regulate artificial intelligence.
It sets binding rules for the development, use, and provision of AI systems – aiming to ensure safety, transparency, and the protection of fundamental rights.
The regulation applies to companies within the EU, as well as to providers and users of AI outside the EU whose systems are used within the EU.
The AI Act classifies AI systems according to their risk potential. This classification is crucial, as it determines which legal requirements a company must meet.
The higher the risk to safety, fundamental rights, or society, the stricter the requirements.
The aim is to ensure that AI applications are trustworthy, operate transparently, and do not unlawfully infringe on people's rights.
- Prohibited AI: Systems that use, for example, manipulative techniques or social scoring of individuals.
- High-risk AI: Applications in sensitive areas such as critical infrastructure, healthcare, or the judiciary, which are subject to strict requirements.
- Limited risk: Systems with transparency obligations, such as chatbots or deepfakes (labeling required).
- Minimal risk: AI applications without specific legal requirements, e.g., spam filters.
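The four risk tiers above can be modeled as a simple lookup. The following is an illustrative sketch, not part of the regulation itself: the category names, example use cases, and obligation summaries are our own shorthand for the tiers described above.

```python
from enum import Enum

class RiskCategory(Enum):
    """The four risk tiers of the EU AI Act (illustrative labels)."""
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited risk"
    MINIMAL_RISK = "minimal risk"

# Illustrative mapping from example use cases to risk tiers,
# mirroring the categories listed above.
EXAMPLE_CLASSIFICATION = {
    "social scoring of individuals": RiskCategory.PROHIBITED,
    "diagnostic support in healthcare": RiskCategory.HIGH_RISK,
    "customer service chatbot": RiskCategory.LIMITED_RISK,
    "spam filter": RiskCategory.MINIMAL_RISK,
}

def obligations(category: RiskCategory) -> str:
    """Rough one-line summary of the obligations attached to each tier."""
    return {
        RiskCategory.PROHIBITED: "use is banned",
        RiskCategory.HIGH_RISK: "strict requirements (risk management, documentation, oversight)",
        RiskCategory.LIMITED_RISK: "transparency obligations (e.g. labeling)",
        RiskCategory.MINIMAL_RISK: "no specific legal requirements",
    }[category]
```

The key point the sketch illustrates: the category a system falls into, not the system itself, determines which obligations apply.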
How Implementation of the Regulation Works
Implementing the AI Act is not a one-time project, but a continuous process. Companies must first identify, document, and classify all AI systems they use based on their risk level. A thorough assessment at the beginning saves significant effort later during audits and market supervision checks.
The next step is to determine which technical, organizational, and legal measures are necessary – from ensuring data quality and meeting transparency obligations to establishing effective monitoring processes.
It is equally important to define responsible roles within the company and assign internal responsibilities clearly. This also includes targeted training for all affected employees so they understand the requirements and risks involved in using AI.
Practical implementation takes place in three core steps:

1. AI Inventory & Classification
   - Recording all AI systems in use
   - Classification according to the EU AI Act risk categories

2. Risk Management & Documentation
   - Preparation of a risk assessment and technical documentation

3. Technical and Organizational Measures & Training
   - Implementation of transparency obligations and monitoring processes
   - Targeted employee training – for example, via our own secopan e-learning platform with a specially developed course on the AI Act
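In practice, the steps above come together in a per-system inventory record. A minimal sketch of what such a record might track, assuming hypothetical field names of our own choosing rather than any prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in a company's AI inventory (illustrative fields only)."""
    name: str
    risk_category: str                   # step 1: classification result
    risk_assessment_done: bool = False   # step 2: risk assessment
    documentation_done: bool = False     # step 2: technical documentation
    transparency_measures: bool = False  # step 3: transparency obligations
    staff_trained: bool = False          # step 3: employee training

def open_action_items(record: AISystemRecord) -> list[str]:
    """List which implementation steps are still outstanding for a system."""
    checks = {
        "risk assessment": record.risk_assessment_done,
        "technical documentation": record.documentation_done,
        "transparency measures": record.transparency_measures,
        "employee training": record.staff_trained,
    }
    return [item for item, done in checks.items() if not done]

# Example: a newly inventoried chatbot with nothing implemented yet
chatbot = AISystemRecord(name="support chatbot", risk_category="limited risk")
print(open_action_items(chatbot))
```

A record like this also doubles as evidence during audits and market supervision checks, which is why a thorough inventory at the start saves effort later.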
Our Support for You
We support you from the analysis of your AI applications to the full implementation of the AI Act:
- Gap Analysis: Identifying which requirements are already met and where action is needed
- Risk Classification & Documentation: Assistance with the legally compliant classification of your AI systems
- Technical Implementation: Consulting on transparency obligations, data quality, and security measures
- Training: Raising employee awareness for the responsible use of AI
- Audit and Review Support: Preparation for external inspections
Your benefit: We combine expertise in data protection, IT security, and compliance – for trustworthy and legally compliant use of AI.