High-Risk or Not? How to Correctly Classify Your AI-Powered Customer Service

Does your AI chatbot fall under 'high-risk'? Learn about the AI Act's risk categories and how to correctly classify your AI customer service to avoid unnecessary effort.

Karsten Kreh

The contents of this article are for general informational purposes only and do not constitute legal advice. Although we prepare this information with the utmost care, we make no guarantees regarding its accuracy, completeness, or currency. For binding advice on your specific situation, please consult a qualified attorney.

The Key Question: High-Risk or Not?

When introducing an AI system in your business, one question matters more than any other: does the system fall into the “high-risk” category? The answer determines the effort, cost, and legal risk involved. A high-risk system is subject to extremely strict and expensive requirements. A limited-risk system, on the other hand, only has manageable transparency obligations.

The good news first: simple chat and voice assistants whose primary purpose is answering customer inquiries are generally classified as “limited risk” under the EU AI Act.

However, complexity arises when more advanced features come into play. Could sentiment analysis or automatic urgency detection turn your AI assistant into a high-risk system?

When Does a Communication Tool Become a High-Risk System?

The AI Act lists, in its Annex III, the areas in which AI systems are potentially considered high-risk. For customer service, two points are particularly relevant: systems that decide on access to essential services, and systems that perform profiling of natural persons.

This is where the typical concerns arise:

  • Is sentiment analysis (“caller sounds upset”) considered prohibited “emotion recognition” or risky “profiling”?
  • If the AI detects an urgency (“urgent matter”), does it then decide on access to an essential service?

These concerns are understandable, but a closer look shows why they are unfounded in most cases.

The Key: Is the AI Just an Assistant to Humans?

The AI Act provides specific exceptions for when a system is not considered high-risk despite being potentially listed in Annex III. Two of these exceptions are crucial for customer service:

  1. The AI serves to improve the outcome of a previously completed human activity.
  2. The AI performs a preparatory task for an assessment that is ultimately carried out by a human.

This is exactly where the key lies. The features of a well-designed AI assistant are specifically aligned with these exceptions. Sentiment analysis does not make an autonomous decision. It performs a preparatory task by providing a human agent with an additional data point (e.g., “caller seems upset”). The agent can use this information to handle the conversation more effectively. The AI thus improves the outcome of the human activity without taking over control.

This principle of “human-in-the-loop augmentation” is the decisive factor in avoiding high-risk classification. As long as the AI supports humans rather than replacing them, it generally remains a limited-risk system.
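The pattern described above can be made concrete with a minimal sketch (hypothetical names, not Safina's actual implementation, assuming a simple call-record data model): the AI only attaches advisory data points such as mood and urgency, while the decision field can only ever be written by a human agent.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class AIInsight:
    """Advisory annotations only -- data points, never a decision."""
    caller_mood: str   # e.g. "upset", "neutral"
    urgency: str       # e.g. "high", "low"


@dataclass
class CallRecord:
    caller: str
    summary: str
    insight: AIInsight
    human_decision: Optional[str] = None  # stays None until a human agent acts


def annotate_call(caller: str, summary: str, mood: str, urgency: str) -> CallRecord:
    """Preparatory task: the AI attaches context for a human to review."""
    return CallRecord(caller, summary, AIInsight(mood, urgency))


def agent_decides(record: CallRecord, decision: str) -> CallRecord:
    """The assessment itself is carried out by a human."""
    record.human_decision = decision
    return record


# The AI prepares; the human decides.
call = annotate_call("Emma Martin", "Questions about the offer", "upset", "high")
assert call.human_decision is None  # no autonomous decision has been made
call = agent_decides(call, "call back first")
```

The design choice worth noting: because `AIInsight` is frozen and carries no decision field, the system structurally cannot act on its own analysis, which mirrors the "preparatory task" exception.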


Staying on the Safe Side by Design

The takeaway for you as a business owner is clear: you can use innovative AI features in customer service without taking on the massive burden of high-risk compliance.

The decisive factor is choosing a provider whose product is designed from the ground up as an assistance system for humans. An AI assistant like Safina, for example, which prepares information and presents it to a human for decision-making, is deliberately designed so that it does not meet the criteria for a high-risk system. This way, you can leverage the benefits of AI while staying on the legally safe side.

[App screenshot: Safina call overview — "Safina handled 51 calls this week," with calls classified as Trustworthy (46), Suspicious (4), and Dangerous (1), above a list of recent calls with short AI-generated summaries.]

[App screenshot: call detail for Emma Martin — call summary, key points, AI Insights (caller mood: very good; urgency: low), and the audio transcript, all presented for a human to act on.]
