{"id":290589,"date":"2026-01-17T13:04:10","date_gmt":"2026-01-17T12:04:10","guid":{"rendered":"https:\/\/lawandmore.eu\/?p=198615"},"modified":"2026-03-10T22:20:25","modified_gmt":"2026-03-10T21:20:25","slug":"prohibited-ai-practices-from-february-2025-what-dutch-businesses-must-know-2","status":"publish","type":"post","link":"https:\/\/highpowerlasertherapy.com\/law\/prohibited-ai-practices-from-february-2025-what-dutch-businesses-must-know-2\/","title":{"rendered":"Prohibited AI Practices: What Dutch Businesses Must Know"},"content":{"rendered":"<p>The European AI Act introduced major changes on 2 February 2025, making certain <a href=\"https:\/\/lawandmore.eu\/blog\/the-legal-side-of-artificial-intelligence-in-the-eu-ai-act-2025\/\" target=\"_blank\" rel=\"noopener\">AI practices<\/a> illegal across the EU. <a href=\"https:\/\/lawandmore.eu\/blog\/legal-advice-dutch-business-law-complete-guide\/\" target=\"_blank\" rel=\"noopener\">Dutch businesses<\/a> must now comply with strict rules about which AI systems they can use.<\/p>\n<p>Companies that fail to remove banned AI tools face heavy fines and potential legal action from anyone who suffers harm.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/lawandmore.eu\/wp-content\/uploads\/2026\/01\/v2-161uwc-9uxxt.jpg\" alt=\"A group of business professionals in a modern office having a focused discussion around a conference table with digital screens and city views in the background.\" title=\"\"><\/p>\n<p><strong>As of February 2025, AI systems that manipulate people, use social scoring, categorise individuals based on biometric data, or employ real-time facial recognition in public spaces are now prohibited in the Netherlands.<\/strong> These bans also cover emotion recognition technology in workplaces and schools, along with the untargeted collection of facial images from the internet to build recognition databases.<\/p>\n<p>The rules apply immediately, giving businesses no grace period to phase out these systems. 
Understanding which AI practices are banned and what steps your organisation must take is essential to avoid penalties.<\/p>\n<p>This guide explains the specific prohibitions, outlines your <a href=\"https:\/\/lawandmore.eu\/blog\/navigating-the-eu-ai-act-a-guide-for-your-business\/\" target=\"_blank\" rel=\"noopener\">compliance duties<\/a> as a Dutch business, and covers the training requirements needed to keep your operations legal under the new law.<\/p>\n<h2>Overview of the February 2025 Prohibitions<\/h2>\n<p><img decoding=\"async\" src=\"https:\/\/lawandmore.eu\/wp-content\/uploads\/2026\/01\/v2-161uwx-tuixr.jpg\" alt=\"A group of business professionals in a modern office discussing AI regulations with laptops and documents on a conference table, with a cityscape featuring Dutch architecture visible through large windows.\" title=\"\"><\/p>\n<p>Regulation (EU) 2024\/1689, known as the <a href=\"https:\/\/lawandmore.eu\/blog\/eu-artificial-intelligence-act-ai-act\/\" target=\"_blank\" rel=\"noopener\">EU AI Act<\/a>, introduced specific prohibitions on certain artificial intelligence practices that became enforceable on 2 February 2025. These rules apply to your Dutch business if you develop, deploy, or use <a href=\"https:\/\/lawandmore.eu\/blog\/ai-in-practice-who-is-liable-for-errors-made-by-artificial-intelligence\/\" target=\"_blank\" rel=\"noopener\">AI system<\/a>s within the Netherlands or the broader European Union.<\/p>\n<h3>Scope of Prohibited AI Practices<\/h3>\n<p>The AI Act bans specific AI practices that pose unacceptable risks to fundamental rights and European values. 
Article 5 of Regulation (EU) 2024\/1689 identifies these prohibited practices based on their potential harm rather than the technology itself.<\/p>\n<p>The main prohibited categories include:<\/p>\n<ul>\n<li><strong>Manipulative AI systems<\/strong> that exploit vulnerabilities of specific groups<\/li>\n<li><strong>Social scoring systems<\/strong> operated by public or private actors<\/li>\n<li><strong>Real-time remote biometric identification<\/strong> in publicly accessible spaces for <a class=\"wpil_keyword_link\" href=\"https:\/\/lawandmore.eu\/\" title=\"law\" data-wpil-keyword-link=\"linked\" data-wpil-monitor-id=\"1154\" target=\"_blank\" rel=\"noopener\">law<\/a> enforcement (with limited exceptions)<\/li>\n<li><strong>Biometric categorisation systems<\/strong> that infer sensitive attributes<\/li>\n<li><strong>Untargeted scraping<\/strong> of facial images from the internet or CCTV footage<\/li>\n<\/ul>\n<p>Your business falls under these prohibitions whether you operate as an AI provider, deployer, or distributor in the Netherlands. The Dutch Data Protection Authority works alongside other European Union enforcement bodies to ensure compliance.<\/p>\n<p>Non-compliance carries fines of up to \u20ac35 million or 7% of your worldwide annual turnover, whichever is higher.<\/p>\n<h3>Effective Dates and Transition Periods<\/h3>\n<p>The prohibitions under Article 5 became applicable on 2 February 2025. This means your business must comply with these rules immediately if you use or develop relevant AI systems.<\/p>\n<p>The enforcement framework follows a staggered timeline. Full penalties, governance structures, and confidentiality provisions take effect from 2 August 2025.<\/p>\n<p>This gives your organisation a six-month window where the rules apply but the complete enforcement mechanisms are still being established. 
The European Commission published official guidelines on 4 February 2025 to clarify how these prohibitions work in practice.<\/p>\n<p>Market surveillance authorities, including the Dutch Data Protection Authority, gained responsibility for monitoring and enforcing these rules from the February start date.<\/p>\n<h3>Key Definitions: AI Systems and Practices<\/h3>\n<p>Under the AI Act, an <strong>AI system<\/strong> means software that uses machine learning, logic-based approaches, or statistical methods to generate outputs like predictions, recommendations, or decisions. This definition is deliberately broad to cover current and emerging technologies.<\/p>\n<p>A <strong>prohibited AI practice<\/strong> refers to specific ways of deploying or using artificial intelligence that the European Union has deemed unacceptable. The prohibition applies to the practice itself, not the underlying technology.<\/p>\n<p>This means you could use the same AI system for permitted purposes whilst being banned from certain applications. Your business must distinguish between the AI system (the technology) and its practice (how you use it).<\/p>\n<p>An AI system for facial recognition might be lawful for building access control but prohibited for real-time surveillance in public spaces. The context and application determine legality under Regulation (EU) 2024\/1689.<\/p>\n<h2>Detailed List of Banned AI Practices<\/h2>\n<p><img decoding=\"async\" src=\"https:\/\/lawandmore.eu\/wp-content\/uploads\/2026\/01\/v2-161uxr-p9tbn.jpg\" alt=\"A group of businesspeople in a modern office discussing AI regulations around a table with digital devices and a large screen displaying AI-related graphics.\" title=\"\"><\/p>\n<p>The EU AI Act prohibits specific AI applications that pose unacceptable risks to fundamental rights and safety. 
These banned practices fall into three main categories: systems that manipulate human behaviour, those that <a href=\"https:\/\/lawandmore.eu\/blog\/unfair-commercial-practices-explained\/\" target=\"_blank\" rel=\"noopener\">exploit vulnerable groups<\/a>, and mechanisms that enable mass surveillance through social scoring.<\/p>\n<h3>Manipulative and Deceptive Systems<\/h3>\n<p>The AI Act bans any AI system that uses subliminal techniques to materially distort human behaviour in ways that cause harm. These are methods that work beyond your conscious awareness to influence your decisions or actions.<\/p>\n<p><strong>Prohibited manipulative practices include:<\/strong><\/p>\n<ul>\n<li>AI systems that deploy subliminal components you cannot detect<\/li>\n<li>Systems designed to exploit age-related vulnerabilities in children<\/li>\n<li>Technologies that manipulate behaviour to cause physical or psychological harm<\/li>\n<\/ul>\n<p>You cannot use AI to deliberately <a href=\"https:\/\/lawandmore.eu\/blog\/chatbots-copyright-and-compliance-the-legal-future-of-ai-tools\/\" target=\"_blank\" rel=\"noopener\">deceive people<\/a> in ways that lead to <a href=\"https:\/\/lawandmore.eu\/blog\/ai-and-criminal-law-can-an-algorithm-be-partly-responsible\/\" target=\"_blank\" rel=\"noopener\">significant harm<\/a>. This means your business cannot deploy chatbots or virtual assistants that mislead users about their AI nature when this deception could result in damage.<\/p>\n<p>The ban extends to AI systems that use dark patterns or psychological manipulation to push users towards harmful choices. 
The <a class=\"wpil_keyword_link\" href=\"https:\/\/highpowerlasertherapy.com\/law\/\" title=\"law\" data-wpil-keyword-link=\"linked\" data-wpil-monitor-id=\"1616\">law<\/a> protects individuals from AI practices that bypass rational decision-making.<\/p>\n<p>If your AI system influences behaviour through hidden manipulation rather than transparent information, it violates the Act.<\/p>\n<h3>Exploitation of Vulnerable Individuals<\/h3>\n<p>AI systems that exploit vulnerabilities related to age, disability, or social or economic circumstances are strictly prohibited. You cannot use AI to take advantage of people who lack the capacity to understand or resist manipulation.<\/p>\n<p>This ban covers AI that targets children&#8217;s inexperience or elderly individuals&#8217; cognitive decline. Your business cannot deploy systems that exploit physical or mental disabilities to manipulate behaviour.<\/p>\n<p>AI that preys on economic desperation or social isolation also falls under this prohibition.<\/p>\n<p><strong>Protected vulnerable groups include:<\/strong><\/p>\n<ul>\n<li>Children and minors<\/li>\n<li>Elderly persons with cognitive impairments<\/li>\n<li>People with physical or mental disabilities<\/li>\n<li>Individuals facing economic hardship<\/li>\n<\/ul>\n<p>The prohibition applies when your AI system causes or is likely to cause significant harm through exploitation. You must assess whether your AI applications could unfairly target vulnerable populations.<\/p>\n<h3>Social Scoring and Behavioural Profiling<\/h3>\n<p>The AI Act bans AI systems that evaluate or classify people based on their <a href=\"https:\/\/lawandmore.eu\/blog\/ai-and-criminal-law-who-is-responsible-when-a-machine-commits-a-crime\/\" target=\"_blank\" rel=\"noopener\">social behaviour<\/a> or personal characteristics. 
You cannot use social scoring mechanisms that lead to unfair treatment or harm.<\/p>\n<p>Social scoring involves assessing individuals based on their conduct, relationships, or predicted behaviour. This creates profiles that determine access to services, opportunities, or rights.<\/p>\n<p><strong>Banned social scoring activities include:<\/strong><\/p>\n<ul>\n<li>Rating citizens based on social or professional behaviour<\/li>\n<li>Creating trustworthiness scores from personal data<\/li>\n<li>Predicting criminal behaviour based on profiling<\/li>\n<li>Denying services based on behavioural assessments<\/li>\n<\/ul>\n<p>You cannot implement AI systems that assign scores determining whether someone receives healthcare, education, employment, or financial services. The prohibition applies regardless of whether you operate these systems as a public authority or private business.<\/p>\n<p>Real-time remote <a href=\"https:\/\/lawandmore.eu\/blog\/processing-biometric-data-explained\/\" target=\"_blank\" rel=\"noopener\">biometric identification<\/a> in public spaces also falls under strict prohibitions with limited exceptions for law enforcement.<\/p>\n<h2>Prohibitions on Biometric, Facial Recognition, and Emotion AI<\/h2>\n<p>The EU AI Act bans several types of biometric and facial recognition systems that pose risks to <a href=\"https:\/\/lawandmore.eu\/blog\/data-privacy-in-2025-how-the-gdpr-is-evolving-with-ai-and-big-data\/\" target=\"_blank\" rel=\"noopener\">fundamental rights<\/a>. 
These prohibitions affect how Dutch businesses can use <a href=\"https:\/\/lawandmore.eu\/blog\/understanding-dutch-data-privacy-laws\/\" target=\"_blank\" rel=\"noopener\">biometric data<\/a>, facial recognition databases, emotion recognition tools, and real-time identification systems in public spaces.<\/p>\n<h3>Biometric Categorisation and Data Use<\/h3>\n<p>You cannot use AI systems that categorise people based on their biometric data to infer their race, political opinions, trade union membership, religious beliefs, or sexual orientation. This ban applies when you place these systems on the market, put them into service, or use them for this specific purpose.<\/p>\n<p>The prohibition targets systems that make <a href=\"https:\/\/lawandmore.eu\/blog\/understanding-general-data-protection-law\/\" target=\"_blank\" rel=\"noopener\">sensitive inferences<\/a> about individuals through biometric analysis. Your business faces fines of up to 7% of worldwide annual turnover for non-compliance.<\/p>\n<p>There is one exception to this rule. You can label or filter lawfully acquired biometric datasets, such as images, based on biometric data in law enforcement contexts.<\/p>\n<p>Outside of law enforcement, these categorisation practices remain completely banned.<\/p>\n<h3>Restrictions on Facial Recognition Technologies<\/h3>\n<p>You cannot create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage. This prohibition became enforceable on 2 February 2025.<\/p>\n<p>Four conditions must exist for this ban to apply:<\/p>\n<ul>\n<li>The practice involves placing, servicing, or using an AI system<\/li>\n<li>The purpose is to create or expand facial recognition databases<\/li>\n<li>The method uses untargeted scraping of facial images<\/li>\n<li>The data sources are the internet or CCTV footage<\/li>\n<\/ul>\n<p>The key word is &#8220;untargeted&#8221;. 
You cannot collect facial images in a broad, indiscriminate manner.<\/p>\n<p>This means you cannot deploy cameras in supermarkets or public areas that automatically scrape faces to build recognition databases without specific targeting criteria.<\/p>\n<h3>Emotion Recognition in Employment and Education<\/h3>\n<p>You cannot use AI systems to infer emotions of people in workplaces and educational institutions. This ban directly affects HR departments, schools, and training facilities that might consider emotion detection tools.<\/p>\n<p>The prohibition covers any AI system designed to read or interpret emotional states in these specific settings. Whether you use facial analysis, voice pattern recognition, or other biometric indicators makes no difference.<\/p>\n<p>There is one narrow exception. You may use emotion recognition AI if it serves medical or safety purposes.<\/p>\n<p>For example, monitoring driver alertness for safety reasons or detecting patient distress in healthcare settings may qualify under this exception.<\/p>\n<h3>Limitations on Remote Biometric Identification<\/h3>\n<p>You cannot use real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, except in strictly defined circumstances. These exceptions include searching for victims of abduction or trafficking, preventing imminent threats to life or terrorist attacks, and locating suspects of serious crimes punishable by at least four years imprisonment.<\/p>\n<p>Each use requires <a href=\"https:\/\/lawandmore.eu\/blog\/police-interrogation-know-your-rights-in-nl\/\" target=\"_blank\" rel=\"noopener\">prior authorisation<\/a> from a <a href=\"https:\/\/lawandmore.eu\/news-en\/dutch-judicial-system-innovating\/\" target=\"_blank\" rel=\"noopener\">judicial authority<\/a> or independent administrative body. 
In urgent situations, you may commence use without authorisation, but you must request approval within 24 hours.<\/p>\n<p>If authorities reject the authorisation, you must immediately stop using the system and delete all data. The Netherlands may establish its own detailed rules for these exceptions within national law.<\/p>\n<p>Your business must notify the relevant market surveillance authority and national data protection authority of each use. Member States submit annual reports on real-time remote biometric identification usage to the European Commission.<\/p>\n<h2>Compliance Obligations for Dutch Businesses<\/h2>\n<p>Dutch businesses must take concrete steps to achieve AI compliance by identifying banned systems, removing them from operations, and maintaining proper documentation. The <a href=\"https:\/\/lawandmore.eu\/blog\/dutch-data-protection-authority\/\" target=\"_blank\" rel=\"noopener\">Dutch DPA<\/a> and Dutch Authority for Digital Infrastructure will enforce these requirements with significant penalties for violations.<\/p>\n<h3>Identifying Prohibited AI Within Operations<\/h3>\n<p>You need to conduct thorough risk assessments across your organisation to locate AI systems that might violate the new rules. This means examining any technology that uses manipulation techniques, exploits vulnerabilities, implements social scoring, or makes real-time biometric identifications in public spaces.<\/p>\n<p>Start by creating an inventory of all AI systems currently in use. Document what each system does, how it processes data, and who it affects.<\/p>\n<p>Pay special attention to systems that interact with customers, employees, or the public. Your risk assessments should evaluate whether any system could harm fundamental rights.<\/p>\n<p>This includes checking if AI systems make decisions about people based on behaviour, personal characteristics, or social connections. 
Systems that track or profile individuals without transparency require immediate review.<\/p>\n<p>Work with your technical teams to understand how each AI system operates. You cannot rely solely on vendor descriptions.<\/p>\n<p>The Dutch DPA has made clear that you hold responsibility for compliance regardless of whether you built the system or purchased it from a third party.<\/p>\n<h3>Phasing Out or Removing Banned AI Systems<\/h3>\n<p>You must stop using prohibited AI practices immediately. The law allows no grace period for banned systems.<\/p>\n<p>If your risk assessment identifies a prohibited practice, you need to deactivate it straight away. Create a removal plan that addresses both technical and operational impacts.<\/p>\n<p>Some systems might be embedded in larger platforms or connected to multiple processes. You&#8217;ll need to ensure that removing the prohibited elements doesn&#8217;t disrupt essential business functions.<\/p>\n<p>Before shutting down a system, identify how you&#8217;ll handle the tasks it previously performed. This might mean reverting to manual processes, implementing different technology, or redesigning workflows entirely.<\/p>\n<p>Document every step of the removal process. Record when you discovered the prohibited practice, what actions you took, and when the system was fully deactivated.<\/p>\n<p>Regulators will expect clear evidence that you acted promptly once you identified the violation.<\/p>\n<h3>Record-Keeping and Documentation<\/h3>\n<p>You must maintain technical documentation that proves compliance with the prohibition rules. 
This includes records of your risk assessments, decisions about AI systems, and any changes you made to achieve compliance.<\/p>\n<p>Your documentation should cover:<\/p>\n<ul>\n<li><strong>System inventories<\/strong> with descriptions of each AI application<\/li>\n<li><strong>Risk assessment reports<\/strong> showing your analysis of potential violations<\/li>\n<li><strong>Removal records<\/strong> detailing when and how you deactivated prohibited systems<\/li>\n<li><strong>Fundamental rights impact assessments<\/strong> for any borderline cases<\/li>\n<\/ul>\n<p>Keep these records for at least five years. The Dutch DPA can request them during inspections or investigations.<\/p>\n<p>Missing or incomplete documentation can result in fines up to \u20ac7,500,000 or 1% of annual global turnover. Your technical documentation must be detailed enough that regulators can understand what your AI systems do and why they comply with the law.<\/p>\n<p>Generic descriptions or marketing materials don&#8217;t meet this requirement. Include technical specifications, data flows, and decision-making processes.<\/p>\n<p>Update your documentation whenever you modify AI systems or deploy new ones. You need ongoing processes to ensure that future systems don&#8217;t introduce prohibited practices.<\/p>\n<h2>AI Literacy Requirements and Staff Training<\/h2>\n<p>As of 2 February 2025, Dutch businesses that provide or deploy AI systems must ensure their staff possess adequate AI literacy. 
This obligation applies regardless of whether you use high-risk or lower-risk AI systems.<\/p>\n<p>Documented training programmes must be tailored to your employees&#8217; roles and technical backgrounds.<\/p>\n<h3>The AI Literacy Obligation Explained<\/h3>\n<p>Article 4 of the EU AI Act imposes a <a href=\"https:\/\/lawandmore.eu\/blog\/legal-compliance-requirements\/\" target=\"_blank\" rel=\"noopener\">legal requirement<\/a> on all AI providers and deployers to ensure sufficient AI literacy amongst their personnel. You must take measures to guarantee that staff and contractors dealing with AI systems understand the technology they work with.<\/p>\n<p>The AI literacy obligation covers anyone who operates, uses, or makes decisions about AI systems on your behalf. This includes technical staff who build or maintain AI systems, as well as non-technical employees who use AI tools in their daily work.<\/p>\n<p>You must tailor AI literacy training to each person&#8217;s role, technical knowledge, experience, education, and the specific context in which they use AI systems. A software developer implementing AI models requires different training than a customer service representative using AI-powered chatbots.<\/p>\n<p>The obligation requires you to make your &#8220;best efforts&#8221; to ensure adequate literacy levels. You cannot simply offer token training and claim compliance.<\/p>\n<h3>Developing Effective Training Programmes<\/h3>\n<p>Your AI literacy programme should address the specific AI systems your organisation uses rather than providing generic information. 
Staff need to understand how the particular AI tools they work with function, their limitations, and potential risks.<\/p>\n<p>Training content should cover:<\/p>\n<ul>\n<li>Basic AI concepts relevant to your systems<\/li>\n<li>How to operate AI tools correctly<\/li>\n<li>Potential biases and errors in AI outputs<\/li>\n<li>When human oversight is necessary<\/li>\n<li>Your organisation&#8217;s AI policies and procedures<\/li>\n<\/ul>\n<p>You must consider each employee&#8217;s starting knowledge level. Technical staff may need advanced training on model development and testing, whilst general users require practical guidance on interpreting AI outputs and recognising when results seem incorrect.<\/p>\n<p>Training delivery can take various forms, including workshops, online courses, on-the-job coaching, or external programmes. The format matters less than ensuring staff actually acquire the necessary knowledge and skills.<\/p>\n<h3>Documenting AI Literacy Activities<\/h3>\n<p>You must maintain records of your AI literacy training activities to demonstrate compliance with the EU AI Act. This documentation serves as evidence that you have fulfilled your <a href=\"https:\/\/lawandmore.eu\/blog\/liability-in-dutch-law-key-considerations-for-2025\/\" target=\"_blank\" rel=\"noopener\">legal obligations<\/a>.<\/p>\n<p>Your records should include:<\/p>\n<ul>\n<li>Training programme content and materials<\/li>\n<li>Lists of employees who completed training<\/li>\n<li>Dates and duration of training sessions<\/li>\n<li>Assessment results (if applicable)<\/li>\n<li>How you determined appropriate literacy levels for different roles<\/li>\n<\/ul>\n<p>Document your process for identifying AI literacy needs across your organisation. 
This includes how you assessed which staff require training and what specific knowledge they need based on their interaction with AI systems.<\/p>\n<p>Keep records of how you update training as your AI systems change or as new staff join your organisation. AI literacy is not a one-time requirement but an ongoing obligation that must adapt to your evolving use of AI technology.<\/p>\n<h2>Supervision, Enforcement, and Penalties<\/h2>\n<p>The EU AI Act establishes a multi-layered enforcement structure with significant <a href=\"https:\/\/lawandmore.eu\/blog\/intellectual-property-enforcement-netherlands-2025\/\" target=\"_blank\" rel=\"noopener\">financial penalties<\/a> for non-compliance. Dutch businesses face oversight from both national and European authorities, with fines reaching up to 7% of worldwide annual turnover for prohibited AI practices.<\/p>\n<h3>Dutch and EU Supervisory Authorities<\/h3>\n<p>The Dutch Data Protection Authority serves as the primary national supervisory body for AI Act enforcement in the Netherlands. This authority works alongside other designated Dutch market surveillance bodies to monitor compliance with the regulation.<\/p>\n<p>At the European level, the European Commission oversees the AI Act&#8217;s implementation across all member states. The European AI Office, established within the Commission, coordinates enforcement activities and provides technical expertise.<\/p>\n<p>The European Data Protection Board supports consistent application of the Act, particularly where AI systems involve personal data processing. 
These authorities collaborate to ensure uniform enforcement across the European Union.<\/p>\n<p>If you operate AI systems in multiple EU countries, you may face scrutiny from supervisory bodies in each jurisdiction where your systems are deployed or have effect.<\/p>\n<h3>Sanctions and Civil Liability<\/h3>\n<p>Violations of Article 5&#8217;s prohibited practices carry penalties of up to \u20ac35 million or 7% of your total worldwide annual turnover, whichever is higher. These fines apply from 2 February 2025.<\/p>\n<p>The Court of Justice of the EU (CJEU) holds ultimate authority to interpret the AI Act&#8217;s provisions. National courts may refer questions to the CJEU when disputes arise about what constitutes a prohibited practice.<\/p>\n<p>Beyond administrative fines, you may face civil liability claims from individuals or organisations harmed by non-compliant AI systems. Member states can establish additional penalties under their national law.<\/p>\n<h3>Guidance from European and National Bodies<\/h3>\n<p>The European Commission published <a href=\"https:\/\/lawandmore.eu\/blog\/european-commission-demands\/\" target=\"_blank\" rel=\"noopener\">guidelines on prohibited<\/a> AI practices on 4 February 2025. These documents explain how authorities will interpret Article 5&#8217;s prohibitions and provide practical examples for compliance.<\/p>\n<p>Whilst these guidelines are non-binding, they offer valuable insight into enforcement priorities. Only the CJEU can provide authoritative legal interpretations.<\/p>\n<p>Dutch supervisory authorities may issue additional national guidance tailored to local business contexts. 
You should monitor updates from both European and Dutch bodies as enforcement approaches evolve.<\/p>\n<p>The guidelines are available in all EU languages to support cross-border understanding.<\/p>\n<h2>Frequently Asked Questions<\/h2>\n<p>The European AI Act brings specific requirements and restrictions that affect Dutch businesses starting February 2025. These regulations ban certain AI practices, require <a href=\"https:\/\/lawandmore.eu\/blog\/legal-compliance-risk-avoid-costly-mistakes\/\" target=\"_blank\" rel=\"noopener\">compliance measures<\/a>, and establish penalties for violations.<\/p>\n<h3>What are the new regulations concerning artificial intelligence use in Dutch companies as of February 2025?<\/h3>\n<p>The European AI Act entered into force on 1 August 2024, with the first set of prohibitions taking effect on 2 February 2025. These regulations apply to all Dutch businesses that develop, deploy, or use AI systems within the European Union.<\/p>\n<p>The Act bans manipulative AI systems that use subliminal techniques to influence decisions people wouldn&#8217;t normally make. AI systems that exploit vulnerabilities based on age, disability, or socioeconomic status are also prohibited.<\/p>\n<p>Social scoring systems that judge individuals based on their social behaviour or personal characteristics face a complete ban. The regulations also prohibit AI-based risk assessments that predict crimes solely through profiling or personality trait analysis rather than facts related to criminal activity.<\/p>\n<p>Biometric categorisation systems that infer race, political views, union membership, religious beliefs, or sexual orientation from facial images or fingerprints are not allowed. 
Real-time remote biometric identification in public spaces is banned except in strictly defined situations with specific <a href=\"https:\/\/lawandmore.eu\/blog\/how-art-law-is-evolving-in-the-dutch-market-2025\/\" target=\"_blank\" rel=\"noopener\">legal authorisation<\/a>.<\/p>\n<h3>How do the February 2025 Dutch AI guidelines impact data privacy and protection for consumers?<\/h3>\n<p>The AI Act works alongside existing data protection regulations, including the GDPR. The Dutch Data Protection Authority provides guidance on how AI systems must comply with data protection rules, particularly around automated decision-making.<\/p>\n<p>Untargeted scraping of facial images from the internet or camera footage to create or expand facial recognition databases is prohibited. This restriction protects individuals&#8217; privacy rights and prevents the unauthorised collection of biometric data.<\/p>\n<p>AI systems must respect fundamental rights throughout their lifecycle. Public authorities and entities providing public services must conduct a fundamental rights impact assessment when using high-risk AI systems.<\/p>\n<h3>Which AI-based activities are Dutch businesses no longer permitted to engage in due to the recent legislative changes?<\/h3>\n<p>Your business cannot use emotion recognition systems in workplace or educational settings. These systems are banned regardless of the claimed benefits or use cases.<\/p>\n<p>You cannot deploy AI systems that manipulate people through subliminal techniques or exploit their vulnerabilities. This includes systems that target individuals based on their disability, social status, or age to influence their behaviour.<\/p>\n<p>Biometric categorisation systems that classify people based on sensitive characteristics are prohibited. You cannot use AI to infer someone&#8217;s race, political opinions, religious beliefs, or sexual orientation from their biometric data.<\/p>\n<p>Social scoring systems are banned entirely. 
You cannot implement AI that evaluates or classifies people based on their social behaviour or personal characteristics.<\/p>\n<p>Real-time remote biometric identification in publicly accessible spaces is prohibited when used for law enforcement purposes. Law enforcement can only use such systems in narrowly defined circumstances with proper legal authorisation.<\/p>\n<h3>What steps must Dutch corporations take to align their AI systems with the compliance standards set in February 2025?<\/h3>\n<p>You must identify all AI systems your organisation currently uses or deploys. This inventory should categorise each system according to the risk classifications defined in the AI Act.<\/p>\n<p>Review your AI systems against the list of prohibited practices. If you use any banned systems, you must phase them out immediately to avoid regulatory penalties.<\/p>\n<p>Document your AI systems&#8217; purposes, data sources, and decision-making processes. This documentation helps demonstrate compliance and supports any required impact assessments.<\/p>\n<p>For high-risk AI systems, you need additional compliance measures. Public authorities must register high-risk systems in the European database by 2 August 2026 and conduct fundamental rights impact assessments.<\/p>\n<p>Train your staff on the new requirements and establish internal processes for ongoing compliance monitoring. Your compliance programme should include regular reviews of AI systems to ensure they remain within regulatory boundaries.<\/p>\n<h3>Are there any sector-specific prohibitions on AI that Dutch firms should be aware of following the new regulations?<\/h3>\n<p>Education sector organisations face specific restrictions. You cannot use emotion recognition systems to monitor students or assess their emotional states, regardless of the intended educational benefit.<\/p>\n<p>Workplace environments have similar limitations. 
Employers cannot deploy emotion recognition AI to monitor employees&#8217; emotional states or reactions during work activities.<\/p>\n<p>Law enforcement agencies have limited exceptions for biometric identification systems. These exceptions require strict legal authorisation and apply only to specific situations such as searching for kidnapping victims, preventing imminent threats, or tracking suspects of serious crimes.<\/p>\n<p>Public authorities providing public services face additional requirements beyond the general prohibitions. You must conduct fundamental rights impact assessments for all high-risk AI systems and register these systems in the European database.<\/p>\n<h3>What are the penalties for non-compliance with the Dutch AI restrictions introduced in February 2025?<\/h3>\n<p>Regulators can impose substantial fines for non-compliance with the AI Act. The penalty structure follows a tiered approach based on the severity of the violation.<\/p>\n<p>Using prohibited AI systems represents the most serious violation category. Your organisation risks significant financial penalties if you continue operating banned systems after the February 2025 deadline.<\/p>\n<p>The exact penalty amounts depend on whether your organisation qualifies as a small and medium-sized enterprise or a larger company. Regulatory authorities consider factors such as the nature of the violation, its duration, and any harm caused.<\/p>\n<p>Beyond financial penalties, non-compliance can damage your organisation&#8217;s reputation. Individuals harmed by prohibited AI systems may pursue additional remedies through civil proceedings.<\/p>\n<p>The Dutch Data Protection Authority and other regulatory bodies have enforcement powers. 
They can investigate complaints, conduct audits, and require immediate cessation of prohibited practices.<\/p>\n","protected":false}}