🎙️
Pronunciation is screened, not assessed
Cocovox checks pronunciation using transcription confidence and grapheme-substitution heuristics. It is a practice check, not a clinical articulation assessment. It cannot hear phoneme-level errors the way a speech-language pathologist can, and speech-to-text sometimes normalizes errors away (for example, "wabbit" may be transcribed as "rabbit").
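To make the idea of a grapheme-substitution heuristic concrete, here is a minimal sketch in Python. The substitution table, function name, and same-length simplification are all illustrative assumptions, not Cocovox's actual implementation; note too that if speech-to-text normalizes "wabbit" to "rabbit" before this step, the heuristic never sees the error at all.

```python
# Illustrative subset of common child-speech grapheme confusions
# (hypothetical table, not Cocovox's real data).
CONFUSABLE = {"w": {"r", "l"}, "d": {"th"}, "f": {"th"}, "t": {"k"}}

def substitution_flags(expected: str, heard: str) -> list[tuple[str, str]]:
    """Return (expected_grapheme, heard_grapheme) pairs that match a
    known substitution pattern, comparing letter by letter."""
    flags = []
    if len(expected) != len(heard):
        return flags  # simplification: only same-length words in this sketch
    for e, h in zip(expected, heard):
        if e != h and e in CONFUSABLE.get(h, set()):
            flags.append((e, h))
    return flags

# "wabbit" for "rabbit": the r/w swap is flagged as a likely substitution.
print(substitution_flags("rabbit", "wabbit"))  # [('r', 'w')]
```

A check like this can only flag patterns it already knows about, which is one reason it screens rather than assesses.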
🗣️
Fluency profiles are unvalidated estimates
Fluency severity bands are built from algorithmic screening estimates — they have not been validated against clinician-rated fluency samples. They may misclassify bilingual code-switching, dialectal features, or second-language learners. Use them as a nudge to look closer, never as a diagnosis.
🌍
Accents and dialects
Speech models were trained on mainstream American and British English. They are less accurate for regional dialects, African American English, non-native accents, and code-switching across languages. A lower score does not mean worse speech — it can mean the model was not trained on voices like yours.
🔊
Background noise and microphone quality
Transcription confidence drops sharply with background noise, echo, low-quality microphones, or distance from the device. If a score feels wrong, that is often why. We never penalize a student for a bad microphone — but we cannot always tell when it is happening.
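One common way to avoid penalizing a student for a bad recording is to gate scoring on transcription confidence. The sketch below is a hypothetical illustration of that idea; the threshold, names, and return shape are assumptions, not Cocovox's actual code.

```python
def practice_result(transcript: str, confidence: float) -> dict:
    """Score a practice attempt only when transcription confidence is
    usable; otherwise surface an honest 'not enough data' state instead
    of a misleading low score."""
    if confidence < 0.6:  # likely noise, echo, or a weak microphone
        return {"state": "not_enough_data"}
    return {"state": "scored", "transcript": transcript}

print(practice_result("rabbit", 0.35))  # {'state': 'not_enough_data'}
print(practice_result("rabbit", 0.92)["state"])  # scored
```

The hard part, as noted above, is that confidence does not always drop when conditions are bad, so gating catches some but not all microphone problems.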
👶
Young children and emerging speech
Transcription models are trained on adult speech. They are less reliable for children under six, emerging speakers, and anyone whose articulation is in active development. Practice scores for these users should be weighted toward encouragement, not measurement.
🧠
AI-drafted narratives can be wrong
Weekly parent digests, conversation starters, celebration messages, and IEP goal suggestions may be AI-drafted. A real person should read them before treating them as truth. Every AI-drafted surface is marked with a badge — look for it, and trust your own judgment over the draft.
📏
Stars are not clinical accuracy
The star ratings you see on pronunciation practice are designed to reward effort. They are not the same as a clinical accuracy score, and we intentionally do not show raw percentage numbers to children, parents, or teachers. Speech-language pathologists working inside Cocovox see the underlying numbers, along with a reminder that they are algorithm-estimated.
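The distinction between effort-rewarding stars and raw accuracy can be sketched as a simple banding function. Everything here is hypothetical — the band edges, the effort bonus, and the one-star floor are illustrative assumptions, since Cocovox's real mapping is not published.

```python
def stars(estimated_accuracy: float, attempts: int) -> int:
    """Map an algorithm-estimated accuracy (0.0-1.0) to 1-3 stars,
    nudged upward for repeated attempts so effort is rewarded."""
    # Small bonus per extra attempt, capped, so persistence counts.
    score = estimated_accuracy + 0.05 * min(attempts - 1, 3)
    if score >= 0.8:
        return 3
    if score >= 0.5:
        return 2
    return 1  # floor at one star: practice stays encouraging

print(stars(0.45, 1))  # 1 star
print(stars(0.45, 4))  # 2 stars: same accuracy, more attempts
```

A mapping like this is why a star count cannot be read back as a clinical accuracy percentage.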
🧪
No published validation study — yet
We do not currently cite a peer-reviewed study validating Cocovox against clinician judgment. This is something we are working on. Until then, Cocovox should complement — not replace — the work of a qualified educator or clinician.
⚠️
Known failure modes
AI features can go down, slow down, or return nothing. When that happens we show a plain message rather than fake a result. If you see "Not enough data yet" or a similar state, that is honesty, not a bug.