QBox Enterprise, the latest addition to the QBox chatbot testing solution, is now available to customers. It addresses the challenge of managing large-scale conversational platforms that handle thousands of ‘chats’ daily: QBox Enterprise tags and ranks every response, identifying those that have likely performed poorly.
This dramatically reduces the time and effort data scientists and NLP model analysts spend working out where training priorities lie and pinpointing areas that need urgent attention.
You can set QBox to monitor live models and customer interactions from your preferred NLP service provider.
Automatic sampling for data model managers: QBox’s proprietary algorithm uses scoring and classification to select an unbiased, optimal sample of customer interactions and flag them for review.
QBox uses an intelligent, self-learning scoring algorithm that scores each interaction and automatically classifies it as ‘correct’, ‘likely correct’, ‘likely incorrect’ or ‘incorrect’.
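As an illustration only (QBox’s actual algorithm is proprietary and self-learning), a score-to-label mapping of this kind might look like the following sketch, where the function name and threshold values are hypothetical:

```python
# Hypothetical sketch of a score-to-label classifier; the thresholds
# and names are illustrative, not QBox's actual logic.

def classify_interaction(score: float) -> str:
    """Map a confidence score in [0, 1] to one of four review labels."""
    if score >= 0.9:
        return "correct"
    if score >= 0.7:
        return "likely correct"
    if score >= 0.4:
        return "likely incorrect"
    return "incorrect"

# Interactions labelled anything other than "correct" could then be
# queued for human review, with "incorrect" given highest priority.
labels = [classify_interaction(s) for s in (0.95, 0.75, 0.5, 0.1)]
```

A real system would learn and adjust these boundaries over time rather than hard-coding them, but the four-way labelling shown here matches the categories described above.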
Automatic processing to measure, compare and help fix wrong predictions: QBox isolates interactions marked ‘incorrect’ and runs a benchmark test, flagging where the model needs fixing. It then compares the corrected model against those incorrect interactions, while also checking for global regressions elsewhere in the model.
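The fix-and-regression-check loop described above can be sketched as follows. This is a minimal illustration under assumed interfaces (the function names and data structures are hypothetical, not QBox’s API): two model versions are run over the same benchmark, and the comparison reports both what the correction fixed and any global regressions it introduced.

```python
# Illustrative sketch of comparing a baseline model against a corrected
# one over a benchmark set; names and structures are hypothetical.

def regression_report(benchmark, baseline_predict, corrected_predict):
    """Compare two model versions over (utterance, expected_intent) pairs.

    Returns the utterances the corrected model fixed, and those it
    broke (i.e. global regressions).
    """
    fixed, regressed = [], []
    for utterance, expected in benchmark:
        before_ok = baseline_predict(utterance) == expected
        after_ok = corrected_predict(utterance) == expected
        if not before_ok and after_ok:
            fixed.append(utterance)      # previously wrong, now right
        elif before_ok and not after_ok:
            regressed.append(utterance)  # was right, now wrong: regression

    return fixed, regressed
```

Running such a comparison after every model correction is what catches the case where a fix to one intent silently degrades another.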
If you work with natural-language data models and you’re looking to quickly and easily understand, analyse, and improve the performance and results of chatbots and conversational AI platforms, register for a free QBox trial today.