High Risk or Not? How to Correctly Classify Your AI-Supported Customer Service
Does your AI chatbot fall under "high risk"? Learn about the risk categories of the AI Act and how to properly classify your AI customer service to avoid unnecessary effort.
Legal Notice
The content of this article is for general informational purposes only and does not constitute legal advice. Although we prepare this information with the utmost care, we make no guarantee of its accuracy, completeness, or timeliness. For binding advice on your specific situation, please consult a qualified attorney.
The Most Important Question: High Risk or Not?
When introducing an AI system in a company, one question is more crucial than all others: Does the system fall into the "high-risk" category? The answer to this question determines the effort, the costs, and your legal risk. A high-risk system is subject to extremely strict and costly requirements; a system with limited risk, by contrast, faces only manageable transparency obligations.
The good news first: Simple chat and voice assistants, whose main purpose is to answer customer inquiries, are generally classified as "limited risk" under the EU AI Act.
Things become more complex, however, when more advanced functions come into play. Could sentiment analysis or automatic urgency detection turn your AI assistant into a high-risk system?
When Does a Communication Tool Become a High-Risk System?
Annex III of the AI Act lists the areas in which AI systems are potentially considered high-risk. Two points are particularly relevant for customer service: systems that decide on access to essential services, and systems that carry out profiling of natural persons.
This is where the typical concerns arise:
Does sentiment analysis ("the caller sounds angry") count as prohibited "emotion recognition" or as risky "profiling"?
If the AI detects urgency ("urgent matter"), does it then decide on access to an essential service?
These concerns are understandable, but a closer look shows why they are unfounded in most cases.
The Key: Is the AI Just an Assistant for Humans?
The AI Act provides targeted exceptions under which a system is not considered high-risk even though its area of use is listed in Annex III. Two of these exceptions are crucial for customer service:
The AI is meant to enhance the outcome of a previously completed human activity.
The AI performs a preparatory task for an evaluation that is ultimately made by a human.
This is exactly where the key lies: the functions of a well-designed AI assistant are tailored precisely to these exceptions. Sentiment analysis does not make an autonomous decision. It performs a preparatory task by providing a human employee with an additional data point (e.g., "the caller seems upset"). The employee can use this information to handle the conversation better. The AI thus enhances the outcome of the human's work without taking control.
This principle of "Human-in-the-Loop Augmentation" is the decisive factor in avoiding high-risk classification. As long as the AI supports and does not replace humans, it generally remains a system with limited risk.
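To make this division of roles concrete, here is a minimal, purely illustrative Python sketch of the pattern. All names (CustomerInquiry, annotate_inquiry, present_to_agent) are hypothetical, and the simple keyword matching merely stands in for a real sentiment or urgency model. The point is the architecture: the AI attaches advisory data points, and the decision itself stays with a human agent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CustomerInquiry:
    text: str
    # Advisory annotations added by the AI -- data points, not decisions
    sentiment_hint: Optional[str] = None
    urgency_hint: Optional[str] = None

def annotate_inquiry(inquiry: CustomerInquiry) -> CustomerInquiry:
    """Preparatory task: attach hints for the human agent, decide nothing."""
    lowered = inquiry.text.lower()
    if any(word in lowered for word in ("angry", "furious", "unacceptable")):
        inquiry.sentiment_hint = "caller seems upset"
    if any(word in lowered for word in ("urgent", "immediately")):
        inquiry.urgency_hint = "wording suggests urgency"
    return inquiry

def present_to_agent(inquiry: CustomerInquiry) -> str:
    """The human sees the hints and makes the actual decision.

    In a real system this would be an agent-facing UI; the key point is
    that the AI never grants or denies access to a service on its own.
    """
    hints = [h for h in (inquiry.sentiment_hint, inquiry.urgency_hint) if h]
    return f"For human review -- hints: {', '.join(hints) if hints else 'none'}"

print(present_to_agent(annotate_inquiry(
    CustomerInquiry("This is urgent, I am really angry about my bill!")
)))
```

The design choice that matters here is that annotate_inquiry contains no code path that routes, prioritizes, or rejects anything on its own; it only enriches what the human employee sees.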
Deliberately Stay on the Safe Side
The conclusion for you as an entrepreneur is clear: You can use innovative AI features in customer service without having to bear the massive burden of high-risk compliance.
The decisive factor is choosing a provider whose product is designed from the ground up as an assistance system for humans. An AI assistant like Safina, which processes information and presents it to a human for the decision, is deliberately designed so that it does not meet the criteria of a high-risk system. This way, you can benefit from AI while staying on the legally safe side.