Portfolio

comevis
Thinking
We listen closely to your brand: from exploration to tracking.
__________________________________
Methodology: comevis Sonic Profiling

comevis
Design
We shape your sound:
From audio branding and sound architecture to the corporate voice.
__________________________________
Methodology: comevis Sonic Coding

comevis
Make it Real
We go further:
In our Science Labs and our studios.
__________________________________
Methodology: comevis Sonic Producing

comevis Care
Performance Corporate Voice & Soundservices
Licensing (GEMA-free), services, production management, QA, jour fixe, check, report

Why “Tech is OK” is Not Enough
Platform reports and technical KPIs only show part of the picture. The success of your voicebot is determined by the user experience.
Good configuration ≠ good experience
Technical KPIs may look fine, yet users abandon the process because of detours, repetition, or unclear guidance.
AI testing AI = blind spots
Automated tests miss real user logic, dialects, and the typical impatience of real callers.
Audio & timing kill UX
Latency, artifacts, and poor intelligibility in noisy environments lead to frustration and abandonment.
Methodology That Reflects User Reality
Platform-independent UX quality assurance that not only tests your bot, but also systematically improves it.
Overview of the voicebot experience – understandable and relevant for decision-making.
- Results-oriented assessment
- Prioritized next steps
- Identified quick wins
Comprehensibility where it counts: in the office, at the train station, in the car.
- Ambient noise
- Hands-free & echo
- Speech intelligibility
Real people test like real users—with impatience, dialect, and context changes.
- Persona-based testing
- Edge cases & stress situations
- Dialect & tempo variations

From Check to Improvement
1. Kickoff & Target Vision
Joint definition of critical journeys, user groups, and success criteria.
2. Baseline Check
Initial measurement using human tests, real-world intelligibility, and experience metrics.
3. Monitoring Cycle
Regular checks with trend analysis, regression testing, and prioritized recommendations.
4. Implement Quick Wins
Implementation of the most effective improvements, with the impact measured in the next cycle.
A structured process instead of reactive individual measures.
Management-Ready & Operationally Usable
Clear results instead of lengthy PDFs: scorecards, evidence, and backlogs,
so that decisions can be made quickly and measures implemented efficiently.
Experience Scorecard
- Traffic-light status of the overall experience
- Key friction points in the dialogue
- Audio and timing risks
- Prioritized measures (impact × effort)
Real User Tests
- Reproducible test scenarios
- Transcript and call examples
- "This is how it feels" moments
Prioritized Measures
Impact-first: measures with the greatest effect on user and service KPIs.
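As a minimal illustration of what an impact × effort prioritization can look like in practice, the sketch below ranks candidate measures so that high-impact, low-effort quick wins surface first. The measure names and scores are invented examples, not comevis data.

```python
# Hypothetical sketch: ranking improvement measures impact-first.
# Names and 1-5 scores are invented for illustration only.
measures = [
    {"name": "Shorten greeting prompt", "impact": 4, "effort": 1},
    {"name": "Add barge-in support", "impact": 5, "effort": 4},
    {"name": "Re-record noisy prompts", "impact": 3, "effort": 2},
]

# Sort by impact descending, then effort ascending, so the most
# effective and cheapest measures rise to the top of the backlog.
ranked = sorted(measures, key=lambda m: (-m["impact"], m["effort"]))

for m in ranked:
    print(f'{m["name"]}: impact={m["impact"]}, effort={m["effort"]}')
```

In a real scorecard, the impact and effort values would come from the baseline check and review calls rather than fixed numbers.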

Plannable in Retainer
The scope depends on journeys, languages, and relevance.
We will find the right setup during the initial consultation.
Monitoring Light
For stable systems that need to stay that way
- Monthly quick check
- Core journeys covered
- Experience scorecard
- 1 review call per month
Monitoring Pro
Standard for continuous improvement
- Everything from Monitoring Light
- Regular journey checks
- Real-world tests
- Trend & regression analysis
- Prioritized backlog
- Bi-weekly review calls
Monitoring Enterprise
For critical hotlines & high volumes
- Everything from Monitoring Pro
- Release gates & more frequent checks
- Trust-failure deep dives
- Close stakeholder rhythm
- Custom reporting
Best Practices
A selection for inspiration
