How Cocovox Protects Your Child
What we filter
- Profanity and inappropriate language
- Slurs and hate speech
- Personal information (Social Security numbers, email addresses, phone numbers)
- Encouragement of self-harm or violence
What we can't filter
- Misinformation phrased in clean language
- Emotionally manipulative content
- Age-inappropriate complexity
- Subtle bias or stereotypes
How it works
- Every AI response is scanned before your child sees it
- If something is caught, the response is rewritten with an age-appropriate message before your child sees it
- Safety events are logged and visible in your parent dashboard
- Safety features are ON by default and can only be changed by an administrator
What you can do
- Check the Safety section in your dashboard for event history
- Contact us if you see something that should have been filtered but wasn't
- Review your child's conversation history at any time
For clinical professionals
Your clinical vocabulary allowlist automatically includes diagnostic terminology (e.g., spastic, flaccid, retardation). Violence-related language and personal information remain filtered in all contexts -- therapeutic discussions involving these topics should use your clinical documentation tools rather than Cocovox.