What the law says

The AI Act does not explicitly address liberal professions, but liberal professionals still have to deal with it.

Artificial Intelligence is reshaping how liberal professions operate — from law and healthcare to engineering and consulting. While AI offers unprecedented opportunities to enhance efficiency and productivity, it also raises serious legal, ethical, and professional challenges.

The EU AI Act introduces a risk-based framework that doesn’t only target AI developers: users — or “deployers” — are also subject to specific obligations, especially when using high-risk systems. Liberal professionals, often working autonomously and handling sensitive data, are particularly exposed to these new requirements.

A key challenge lies in identifying and classifying risks under the Act, which offers no tools tailored to small entities or independent professionals.
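
To make the classification challenge concrete, here is a minimal sketch in Python of what a first-pass risk triage could look like for a small practice. The four tiers follow the Act's risk-based structure; the keyword lists and the triage logic are illustrative assumptions only, not an official or reliable classification tool.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers under the EU AI Act's risk-based framework."""
    UNACCEPTABLE = "prohibited practice (Art. 5)"
    HIGH = "high-risk system (Art. 6 and Annex III)"
    LIMITED = "limited risk: transparency duties (Art. 50)"
    MINIMAL = "minimal risk: no specific obligations"

# Hypothetical keyword lists for a first-pass triage. A real assessment
# must be made against the Act's actual provisions and annexes.
PROHIBITED_TERMS = ("social scoring", "subliminal manipulation")
HIGH_RISK_TERMS = ("justice", "healthcare", "education", "employment",
                   "credit scoring", "essential public services")
TRANSPARENCY_TERMS = ("chatbot", "deepfake", "synthetic content")

def triage(use_case: str) -> RiskTier:
    """Rough first-pass classification of a described AI use case."""
    desc = use_case.lower()
    if any(term in desc for term in PROHIBITED_TERMS):
        return RiskTier.UNACCEPTABLE
    if any(term in desc for term in HIGH_RISK_TERMS):
        return RiskTier.HIGH
    if any(term in desc for term in TRANSPARENCY_TERMS):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("chatbot answering routine client questions").value)
# -> limited risk: transparency duties (Art. 50)
print(triage("AI tool ranking candidates in an employment dispute").value)
# -> high-risk system (Art. 6 and Annex III)
```

Even a toy version makes the gap visible: keyword matching cannot capture the Act's actual criteria, which is exactly where small practices currently lack support.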

Moreover, the Fundamental Rights Impact Assessment (FRIA) becomes mandatory from 2 August 2026 for certain deployers, including professionals providing public services such as healthcare, justice, and education (Article 27, elaborated in Recital 96).

Yet beyond compliance, there’s a broader mission: preserving professional secrecy, ensuring human oversight, and maintaining the public’s trust in liberal professions in an era of algorithmic decision-making.

To support this transition, we propose the AI-VIALP — a Voluntary AI Impact Assessment method designed for liberal professionals. Inspired by the DPIA model under the GDPR, AI-VIALP provides a seven-step methodology to assess and mitigate legal, ethical, and reputational risks in a clear, structured way — without waiting for obligations to become enforceable.
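
Because AI-VIALP is an ordered methodology, its record-keeping can be sketched as a simple checklist structure. The seven step names below are hypothetical placeholders loosely modeled on the DPIA workflow that inspired the method; the actual AI-VIALP steps are not enumerated in this article.

```python
from dataclasses import dataclass, field

# Illustrative step names only: placeholders loosely mirroring the
# GDPR DPIA workflow, not the actual AI-VIALP steps.
STEPS = (
    "Describe the AI system and its intended use",
    "Map the data and the persons affected",
    "Identify legal, ethical and reputational risks",
    "Rate each risk (likelihood x severity)",
    "Define mitigation measures and human oversight",
    "Document professional-secrecy safeguards",
    "Review, sign off and schedule reassessment",
)

@dataclass
class Assessment:
    """A voluntary impact assessment as an ordered, auditable checklist."""
    system_name: str
    findings: dict[str, str] = field(default_factory=dict)

    def record(self, step: str, note: str) -> None:
        if step not in STEPS:
            raise ValueError(f"Unknown step: {step}")
        self.findings[step] = note

    def is_complete(self) -> bool:
        # Complete only when every one of the seven steps is documented.
        return all(step in self.findings for step in STEPS)

a = Assessment("contract-review assistant")
a.record(STEPS[0], "LLM flags unusual clauses; lawyer reviews every output")
print(a.is_complete())  # False: six steps still undocumented
```

The design choice mirrors the DPIA logic: an assessment counts as complete only when every step is documented, leaving an audit trail the professional can show to clients or regulators.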