Legal Notice
The contents of this article are for general informational purposes only and do not constitute legal advice. Although we prepare this information with the utmost care, we make no guarantees regarding its accuracy, completeness, or currency. For binding advice on your specific situation, please consult a qualified attorney.
The Key Question: High-Risk or Not?
When introducing an AI system in your business, one question matters more than any other: does the system fall into the “high-risk” category? The answer determines the effort, cost, and legal risk involved. A high-risk system is subject to extensive and costly requirements. A limited-risk system, on the other hand, faces only manageable transparency obligations.
The good news first: simple chat and voice assistants whose primary purpose is answering customer inquiries are generally classified as “limited risk” under the EU AI Act.
However, complexity arises when more advanced features come into play. Could sentiment analysis or automatic urgency detection turn your AI assistant into a high-risk system?
When Does a Communication Tool Become a High-Risk System?
Annex III of the AI Act lists the areas in which AI systems are potentially considered high-risk. For customer service, two categories are particularly relevant: systems that decide on access to essential services, and systems that perform profiling of natural persons.
This is where the typical concerns arise:
- Is sentiment analysis (“caller sounds upset”) considered prohibited “emotion recognition” or risky “profiling”?
- If the AI detects an urgency (“urgent matter”), does it then decide on access to an essential service?
These concerns are understandable, but a closer look shows why they are unfounded in most cases.
The Key: Is the AI Just an Assistant to Humans?
The AI Act provides specific exceptions under which a system is not considered high-risk even though it falls into one of the areas listed in Annex III. Two of these exceptions are crucial for customer service:
- The AI serves to improve the outcome of a previously completed human activity.
- The AI performs a preparatory task for an assessment that is ultimately carried out by a human.
This is exactly where the key lies: the features of a well-designed AI assistant are deliberately aligned with these exceptions. Sentiment analysis does not make an autonomous decision. It performs a preparatory task by giving a human agent an additional data point (e.g., “caller seems upset”). The agent can use this information to handle the conversation more effectively. The AI thus improves the outcome of the human activity without taking over control.
This principle of “human-in-the-loop augmentation” is the decisive factor in avoiding high-risk classification. As long as the AI supports humans rather than replacing them, it generally remains a limited-risk system.
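To make the principle more concrete, here is a minimal, purely illustrative sketch of what such a design can look like in practice. This is not Safina’s actual implementation, and all names and checks are hypothetical; it simply shows the pattern of the AI producing preparatory annotations while any decision that affects the customer is recorded only through a human agent’s action.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CallAnnotation:
    """Preparatory data points the AI attaches to a conversation.
    They are informational only and trigger no action by themselves."""
    sentiment: Optional[str] = None      # e.g. "caller seems upset"
    urgency_hint: Optional[str] = None   # e.g. "caller flags the matter as urgent"

@dataclass
class Ticket:
    caller: str
    transcript: str
    annotation: CallAnnotation = field(default_factory=CallAnnotation)
    resolution: Optional[str] = None     # set only via the human step below

def annotate(ticket: Ticket) -> Ticket:
    """AI step (preparatory task): enriches the ticket with hints,
    but never resolves it, routes the caller, or denies a service."""
    text = ticket.transcript.lower()
    if "frustrat" in text or "angry" in text:
        ticket.annotation.sentiment = "caller seems upset"
    if "urgent" in text:
        ticket.annotation.urgency_hint = "caller flags the matter as urgent"
    return ticket

def resolve(ticket: Ticket, human_agent: str, decision: str) -> Ticket:
    """Human step: the decision affecting the customer is recorded
    only when a named human agent makes it."""
    ticket.resolution = f"{decision} (decided by {human_agent})"
    return ticket
```

The point of the structure is that the AI’s output lands in the annotation fields, while the resolution is written exclusively by the human step, mirroring the “preparatory task” and “improving a human activity” exceptions described above.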
Staying on the Safe Side by Design
The takeaway for you as a business owner is clear: you can use innovative AI features in customer service without taking on the massive burden of high-risk compliance.
The decisive factor is choosing a provider whose product is designed from the ground up as an assistance system for humans. An AI assistant like Safina, for example, which prepares information and presents it to a human for decision-making, is deliberately designed so that it does not meet the criteria for a high-risk system. This way, you can leverage the benefits of AI while staying on the legally safe side.