
Understanding AI Safety: How We Protect Our Users

AI chatbots are powerful tools, but with that power comes real responsibility. At Domkop AI, we take user safety seriously. Here is how we protect the people who use our platform.

Crisis Detection

Sometimes people reach out to AI when they are in distress. They might be experiencing a mental health crisis, feeling unsafe, or dealing with a difficult situation. Our AI includes crisis detection that identifies when a user might need immediate help.

When crisis signals are detected, Domkop AI responds with empathy and provides local South African helpline numbers. This includes the South African Depression and Anxiety Group (SADAG) at 0800 567 567, Lifeline South Africa, and other relevant resources.

Crucially, these resources are provided in the language the user is communicating in. If someone reaches out in Afrikaans, they get help information in Afrikaans. If they are chatting in English, the response is in English. Because in a crisis, language barriers should not stand between someone and help.

Content Safety

Domkop AI has built-in content filters that prevent the generation of harmful content. This includes filtering for dangerous instructions, harmful misinformation, and content that could put users at risk.

This is not censorship. Our AI will readily discuss difficult topics, answer hard questions, and engage in robust debate. But there are clear lines around content that could cause real harm, and our safety systems enforce those lines.

Privacy First

Your conversations with Domkop AI are your conversations. Here is our straightforward privacy approach:

We do not sell your data to advertisers. We do not share your conversations with third parties. We do not use your personal conversations to train our models without explicit consent. Your data stays secure.

We use encryption for data in transit and at rest. We follow responsible data practices and are transparent about how your information is handled.

Bilingual Safety

South Africa's multilingual reality creates unique safety challenges. Harmful content or crisis signals might appear in English, Afrikaans, isiZulu, or a mix of languages. Our safety systems work across languages, not just in English.

This is a genuine differentiator. Most international AI platforms have safety systems trained primarily on English content. They might miss warning signs expressed in other languages. Domkop AI's safety features understand the languages our users actually speak.

Age Considerations

AI tools are increasingly used by young people for homework help, creative writing, and general questions. Domkop AI's content safety features are designed to be appropriate for users of all ages while still being genuinely useful.

We do not dumb down the AI for younger users, but we do ensure it does not produce content that is inappropriate for them.

Transparency

We believe in being honest about what AI can and cannot do. Domkop AI will tell you when it is uncertain about something rather than presenting made-up facts with confidence. And when it gets something wrong, we want users to be able to flag it so we can improve.

Continuous Improvement

Safety is not a one-time implementation. It is an ongoing process. We regularly review and update our safety features based on user feedback, new research, and evolving best practices. Our team monitors for new types of harmful content and updates our filters accordingly.

The Balance

Good AI safety is about finding the right balance. Too restrictive and the AI becomes useless. Too permissive and users could be harmed. We aim for a balance that keeps users safe while letting them get genuine value from the technology.

If you ever encounter something concerning while using Domkop AI, contact us at info@creativecrewstudio.co.uk. We take every report seriously.

Ready to try Domkop AI?

Start chatting for free. No credit card needed.
