We treat AI infrastructure as a conduit, not a datastore. We transmit your data, process it, and do not retain it. This document explains exactly how that works in practice.
BareAI operates a zero-retention policy for all inference data. Prompts, responses, and uploaded documents are processed in memory and discarded when the request concludes. No conversation content is ever written to a database or log file on our infrastructure.
A redaction layer runs inside the Cloudflare Worker. If the API encounters an error, sensitive fields — including message content and access credentials — are replaced with [REDACTED] before any error log is written. Your conversation content never enters our error reporting pipeline.
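A redaction layer like the one described can be sketched as a pure function that walks an error payload and blanks out sensitive keys before anything is logged. This is an illustrative sketch only — the key list (`SENSITIVE_KEYS`) and function name are assumptions, not BareAI's actual implementation:

```typescript
// Hypothetical key list — illustrative, not BareAI's actual field names.
const SENSITIVE_KEYS = new Set(["content", "messages", "apiKey", "authorization"]);

// Recursively replace sensitive fields with "[REDACTED]" so conversation
// content and credentials never reach the error-reporting pipeline.
function redactError(payload: unknown): unknown {
  if (Array.isArray(payload)) return payload.map(redactError);
  if (payload !== null && typeof payload === "object") {
    const out: Record<string, unknown> = {};
    for (const [key, value] of Object.entries(payload)) {
      out[key] = SENSITIVE_KEYS.has(key) ? "[REDACTED]" : redactError(value);
    }
    return out;
  }
  return payload; // primitives (status codes, timestamps) pass through
}
```

Running the redaction before serialising the log entry means the non-sensitive diagnostic fields (status codes, request IDs) survive intact while message bodies do not.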
Identity management and inference are architecturally separated. Cloudflare KV tracks your account identifier and token balance for rate-limiting purposes. This datastore has no technical connection to the inference path — your identity is never joined to your conversation content.
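The separation above implies the KV store holds only an account identifier and a numeric balance. A minimal sketch of such a token-balance check, assuming a hypothetical `balance:` key scheme and `checkAndDebit` helper (neither is BareAI's actual schema):

```typescript
// Minimal interface matching the subset of KV operations the sketch needs.
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}

// Debit `cost` tokens from the account if the balance allows it.
// Note what the store sees: an opaque account ID and a number —
// never any conversation content.
async function checkAndDebit(kv: KVLike, accountId: string, cost: number): Promise<boolean> {
  const key = `balance:${accountId}`; // assumed key scheme
  const raw = await kv.get(key);
  const balance = raw === null ? 0 : Number(raw);
  if (balance < cost) return false; // over the fair-use limit; reject
  await kv.put(key, String(balance - cost));
  return true;
}
```

Because the rate-limiter's inputs are just an identifier and a cost, nothing in this path can join identity to prompt or response data.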
We retain minimal technical metadata to maintain service reliability and enforce fair-use limits. The following data is stored:
We do not use your data to train, fine-tune, or evaluate any AI model — our own or any third party's. Your inputs belong to you.
The BareAI API runs on Cloudflare Workers. Inference requests are routed to upstream model providers for execution. Where a provider supports it, requests are configured so that the provider neither retains inputs nor uses them for training.
We do not sell, share, or monetise user data in any form.
Users are solely responsible for the content of their inputs and the use of resulting outputs. BareAI provides direct model access and does not apply post-processing or content filtering to API responses.