How we handle your data
Verification infrastructure sits between your most sensitive documents and the decisions that depend on them. The governance model has to be at least as rigorous as the verification itself. These are not aspirational principles. They are how the platform operates today.
Operating principles
Data never trains a model
Your documents are processed for verification only. No client data is used to train, fine-tune, or improve any language model. This is a contractual commitment, not a setting.
Full audit trail on every deliverable
Every claim, every source, every probability calculation is logged and attached to the output. If a regulator, auditor, or board member asks how a conclusion was reached, the answer is in the trail.
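As an illustration only — the field names below are hypothetical, not the platform's actual schema — an audit trail of this kind can be thought of as an ordered list of structured entries, one per verification event:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEntry:
    """One verification event attached to a deliverable (illustrative schema)."""
    claim: str       # the statement being verified
    source_id: str   # e.g. a DOI, case citation, or filing reference
    check: str       # which verification rule was applied
    passed: bool
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Building the trail is just accumulating entries in order, so the
# full chain of reasoning can be replayed for an auditor later.
trail = [
    AuditEntry(
        claim="Study reports a 12% reduction in relapse rate",
        source_id="10.1000/example-doi",
        check="doi_resolves_and_matches_citation",
        passed=True,
    )
]
```

Because every entry is timestamped and tied to a named check, "how was this conclusion reached?" becomes a lookup, not a reconstruction.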
Source provenance, not model confidence
We do not rely on a model's self-reported confidence. Every verified claim is traced to a retrievable, citable source: a DOI-resolved publication, a reported judgment, an audited filing, or a documented data point.
Separation of generation and verification
The model that generates content is never the same process that verifies it. Verification uses independent evidence retrieval, separate source resolution, and deterministic validation logic in Python. The verification layer cannot be overridden by the generation layer.
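To make the separation concrete, here is a deliberately simplified example of what "deterministic validation logic" means: the check is a pure function over the claim and independently retrieved evidence, with no model call anywhere in it. (The rule and values are illustrative, not the platform's actual rule set.)

```python
def verify_numeric_claim(claim_value: float, evidence_value: float,
                         tolerance: float = 0.0) -> bool:
    """Deterministic rule: a numeric claim passes only if it matches
    independently retrieved evidence within a stated tolerance.
    No model output is consulted, so the generation layer has no
    way to influence the result."""
    return abs(claim_value - evidence_value) <= tolerance

# The generated draft says 12.0%; the evidence pipeline retrieved 12.0%.
verify_numeric_claim(12.0, 12.0)   # True
# The draft says 12.0% but the source actually reports 11.4%.
verify_numeric_claim(12.0, 11.4)   # False
```

The same inputs always produce the same verdict, which is what makes the trail auditable.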
No black-box outputs
Every deliverable includes a methodology disclosure: which models were used, which databases were searched, which verification steps were applied, and what the limitations are. Clients see the working, not just the answer.
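For readers who want a feel for the shape of such a disclosure, here is an illustrative sketch (the structure and wording are hypothetical; the database names are drawn from the AI disclosure section below):

```python
disclosure = {
    "models": ["Anthropic Claude (evidence synthesis, prose)"],
    "databases_searched": ["OpenAlex", "PubMed"],
    "verification_steps": ["DOI resolution", "numeric cross-check"],
    "limitations": ["Sources published after the search date are not covered"],
}

def render_disclosure(d: dict) -> str:
    """Render the disclosure as the plain-text appendix attached
    to a deliverable."""
    lines = []
    for section, items in d.items():
        lines.append(section.replace("_", " ").title() + ":")
        lines.extend(f"  - {item}" for item in items)
    return "\n".join(lines)
```

The limitations section is as important as the tool list: a disclosure that omits what was *not* checked is not a full disclosure.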
Human review at every decision point
AI outputs are reviewed by a human before delivery. High-risk actions, such as publishing, changing assumptions, or signing off on a regulatory claim, require named human approval. The model proposes. A person decides.
Regulatory alignment
brfcase operates across multiple regulated industries. The platform is designed to sit within existing compliance frameworks, not to create new ones.
POPIA (South Africa)
Personal information is minimised, encrypted at rest and in transit, and processed only for the stated verification purpose. No cross-border transfer without appropriate safeguards.
GDPR (EU/UK)
Data processing is lawful, purpose-limited, and minimised. Data subjects retain their rights, including access, rectification, and erasure. Records of processing activities are maintained.
FSCA / financial services
Outputs used in regulated financial contexts carry audit trails sufficient for regulatory inspection. Assumptions and source data are traceable.
HPCSA / clinical research
Clinical documents are verified against peer-reviewed literature. Ethical clearance numbers, informed consent status, and institutional approvals are validated and disclosed.
AI disclosure
brfcase uses large language models (primarily Anthropic Claude) for evidence synthesis and prose generation. Statistical analysis is performed in Python and R. Literature retrieval uses OpenAlex, PubMed, legal databases, financial reporting platforms, and Perplexity. Verification logic is deterministic and rule-based, not model-generated.
Every deliverable includes a disclosure of which tools and models were used. We recommend that clients include an appropriate AI-use disclosure in their own submissions where required by their institution, journal, or regulator.
If your organisation has specific compliance requirements beyond what is described here, we are happy to discuss them.
Get in touch