Priv Documentation

Everything you need to know about using Priv for private, encrypted AI conversations and image generation.

Overview

Priv is a multi-modal AI platform built on a simple premise: your AI conversations are your business, not ours. While other platforms ask you to trust that they handle your data responsibly, Priv uses client-side encryption to make it mathematically impossible for us to read your prompts, conversations, or generated content.

The platform supports text chat with leading language models from OpenAI, Anthropic, and DeepSeek, as well as image generation through DALL-E 3 and Flux Pro. Every conversation title, every message, and every image prompt is encrypted in your browser using AES-256-GCM before being sent to our servers. We store only ciphertext.

Priv also provides an OpenAI-compatible REST API, meaning any application or library that works with OpenAI can work with Priv by changing a single URL. You get the same models, the same streaming support, and the same response format — with encrypted storage on the backend.

What makes Priv different

  • Real encryption, not promises. Your data is encrypted in your browser before it ever reaches our servers. We literally cannot read it.
  • Multi-model access. Nine models across text, image, and code categories. Switch between them freely.
  • No API key management. We hold the API keys for all providers. You just sign in and start using AI.
  • OpenAI-compatible API. Drop-in replacement for any application that uses the OpenAI SDK or API format.
  • Free tier included. 10 text messages and 5 image generations per day, no credit card required.

How It Works

When you create an account on Priv, you choose a password. This password never leaves your browser — instead, it's used to derive an encryption key using PBKDF2 with 600,000 iterations of SHA-256. The resulting 256-bit key is stored in memory as a non-extractable CryptoKey object. It cannot be read, copied, or exported — even by JavaScript running on the page.

Every time you create a conversation, send a message, or write an image prompt, the content is encrypted in your browser using AES-256-GCM with a random 96-bit initialization vector. The encrypted data is then sent to our server, which stores it without ever seeing the plaintext.

When you open a conversation, the process reverses: encrypted data is fetched from the server, decrypted in your browser using your key, and displayed on screen.
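The encrypt/store/decrypt cycle above can be sketched with Node's built-in crypto module (in the browser, Priv uses the Web Crypto API, but the parameters are the same; function names here are illustrative, not Priv's actual code):

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

// Encrypt plaintext with AES-256-GCM under a fresh random 96-bit IV,
// producing the "base64(iv):base64(ciphertext)" storage format.
// GCM's 16-byte authentication tag is appended to the ciphertext here.
function encrypt(key: Buffer, plaintext: string): string {
  const iv = randomBytes(12); // 96-bit IV, unique per encryption
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([
    cipher.update(plaintext, "utf8"),
    cipher.final(),
    cipher.getAuthTag(),
  ]);
  return `${iv.toString("base64")}:${ct.toString("base64")}`;
}

// The reverse: split the stored string, verify the auth tag, recover plaintext.
function decrypt(key: Buffer, stored: string): string {
  const [ivB64, ctB64] = stored.split(":");
  const iv = Buffer.from(ivB64, "base64");
  const blob = Buffer.from(ctB64, "base64");
  const ct = blob.subarray(0, blob.length - 16);
  const tag = blob.subarray(blob.length - 16);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}

const key = randomBytes(32); // in Priv this comes from PBKDF2, not randomness
const stored = encrypt(key, "Trip planning ideas");
console.log(decrypt(key, stored)); // → "Trip planning ideas"
```

In the real app the key is a non-extractable CryptoKey held only in memory; a random buffer stands in for it here.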

What the server sees

Our server sees exactly two things: your email address (for authentication) and encrypted blobs of data. A conversation title looks like kF8x2Q==:mN3vR7yBw... to us — a base64-encoded IV followed by the ciphertext. We have no way to turn this back into "Trip planning ideas."

What happens when you send a message to AI

This is the one point where plaintext exists transiently. Your browser decrypts your message history and sends the plaintext messages to our server, which forwards them to the AI provider (OpenAI, Anthropic, or DeepSeek) for a response. The AI's response streams back to your browser, where it's displayed and then encrypted before being saved. The plaintext exists only in server memory during the API call — it is never written to disk or logged.

What happens when you close the tab

Your encryption key exists only in browser memory. When you close the tab, navigate away, or lock your vault, the key reference is set to null and garbage collected. There is nothing stored in localStorage, cookies, or IndexedDB that could be used to recover it. To decrypt your data again, you need to enter your password, which re-derives the key from scratch.

Getting Started

Getting started with Priv takes about 30 seconds.

  1. Go to app.privai.net and click "Create Account."
  2. Enter your email and choose a password. Your password is your encryption key — it's used to derive the AES-256-GCM key that protects all your data. Choose something strong. If you lose it, we cannot recover your data.
  3. Start chatting. Select a model from the dropdown and type your first message. Your conversation will be encrypted and stored automatically.

Important: Your password is your encryption key. We do not store it and cannot reset it. If you forget your password, your encrypted data is permanently inaccessible. This is by design — it's what makes the system zero-knowledge.

Switching models

You can switch between models at any time using the dropdown at the top of the chat view or on the welcome screen. Each model has different strengths — GPT-4o is great for general conversation, Claude excels at nuanced analysis, and DeepSeek offers strong performance at lower cost.

Text Chat

Text chat is the core of Priv. You can have multiple conversations, each encrypted independently, with full streaming support.

Available models

Model            Provider   Context  Best For
GPT-4o           OpenAI     128K     General purpose, reasoning
GPT-4o Mini      OpenAI     128K     Fast responses, simple tasks
Claude Sonnet 4  Anthropic  200K     Analysis, writing, nuance
Claude Haiku 4.5 Anthropic  200K     Quick tasks, large context
DeepSeek V3      DeepSeek   64K      Cost-effective, multilingual
GPT-4o (Code)    OpenAI     128K     Programming, debugging
DeepSeek Coder   DeepSeek   64K      Code generation

Streaming

All text models support real-time streaming via Server-Sent Events (SSE). Responses appear token-by-token as they're generated, so you don't have to wait for the full response before reading. An animated cursor indicates when the model is still generating.
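Each SSE event carries an OpenAI-style chunk on a `data:` line, with `data: [DONE]` marking the end of the stream. A minimal parser for that wire format, assuming the standard `choices[0].delta.content` chunk shape, looks like:

```typescript
// Extract generated tokens from OpenAI-style SSE text. Each event line looks
// like `data: {"choices":[{"delta":{"content":"Hel"}}]}`; the stream ends
// with `data: [DONE]`. Priv mirrors this standard format.
function extractTokens(sse: string): string[] {
  const tokens: string[] = [];
  for (const line of sse.split("\n")) {
    if (!line.startsWith("data: ")) continue; // skip blank lines and comments
    const payload = line.slice("data: ".length).trim();
    if (payload === "[DONE]") break; // end-of-stream sentinel
    const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
    if (typeof delta === "string") tokens.push(delta);
  }
  return tokens;
}

const sample = [
  'data: {"choices":[{"delta":{"content":"Hel"}}]}',
  'data: {"choices":[{"delta":{"content":"lo"}}]}',
  "data: [DONE]",
].join("\n");
console.log(extractTokens(sample).join("")); // → "Hello"
```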

Conversation management

Conversations are listed in the sidebar, sorted by most recently updated. You can search conversations by title, delete them with a two-click confirmation, and navigate between them freely. All conversation titles and messages are encrypted — the sidebar displays decrypted titles in your browser.

Image Generation

Priv supports image generation through two models, accessible from the Images page in the sidebar.

Available models

Model     Provider  Sizes                            Notes
DALL-E 3  OpenAI    1024x1024, 1792x1024, 1024x1792  High quality, prompt adherence
Flux Pro  fal.ai    1024x1024, 1792x1024, 1024x1792  Fast generation, artistic style

Generated images appear in a gallery grid below the prompt area. Hover over any image to see the original prompt. Click "Open" to view the full-resolution image in a new tab.

Image prompts are encrypted at rest on our server. The images themselves are hosted by the AI provider (OpenAI or fal.ai) and are subject to their retention policies.

API Reference

Priv exposes an OpenAI-compatible REST API that works with any client library designed for OpenAI. Change the base URL and API key, and everything else stays the same.

Base URL

https://app.privai.net/api/v1

Authentication

All API requests require a Bearer token in the Authorization header. API keys use the format priv-xxxx... and are authenticated via SHA-256 hash lookup.

Authorization: Bearer priv-xxxx...
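The hash-lookup step can be sketched as follows. The store and names are hypothetical, but the idea is that only SHA-256 digests of keys are kept server-side, so a leaked table contains no usable keys:

```typescript
import { createHash } from "crypto";

// Hash-based API key lookup sketch (illustrative, not Priv's server code):
// the server stores sha256(key) and looks up the digest of the presented key.
const sha256Hex = (s: string) => createHash("sha256").update(s).digest("hex");

// Simulated server-side store: digest -> user id (hypothetical data)
const keyStore = new Map<string, string>([
  [sha256Hex("priv-demo-key"), "user-123"],
]);

function authenticate(bearerToken: string): string | null {
  return keyStore.get(sha256Hex(bearerToken)) ?? null;
}

console.log(authenticate("priv-demo-key")); // → "user-123"
console.log(authenticate("priv-wrong-key")); // → null
```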

Endpoints

Method  Path                 Description
POST    /chat/completions    Chat completion (streaming + non-streaming)
POST    /images/generations  Generate images
GET     /models              List available models
POST    /embeddings          Generate text embeddings

Chat completion example

# Non-streaming
curl https://app.privai.net/api/v1/chat/completions \
  -H "Authorization: Bearer priv-xxxx..." \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      {"role": "user", "content": "What is zero-knowledge encryption?"}
    ]
  }'

# Streaming
curl https://app.privai.net/api/v1/chat/completions \
  -H "Authorization: Bearer priv-xxxx..." \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4-20250514",
    "messages": [
      {"role": "user", "content": "Explain PBKDF2"}
    ],
    "stream": true
  }'

SDK compatibility

The API is compatible with the OpenAI Python and Node.js SDKs, LangChain, LlamaIndex, and any other library that targets the OpenAI API format. Simply set the base URL to https://app.privai.net/api/v1 and use your Priv API key.
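For example, a minimal TypeScript client needs nothing beyond fetch (Node 18+ or any browser). The helper names and error handling here are illustrative, not part of the API:

```typescript
// Illustrative client for Priv's OpenAI-compatible chat endpoint.
const BASE_URL = "https://app.privai.net/api/v1";

// Pure helper that assembles the request; separated out so it can be
// inspected/tested without making a network call.
function buildChatRequest(apiKey: string, model: string, content: string) {
  return {
    url: `${BASE_URL}/chat/completions`,
    init: {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ model, messages: [{ role: "user", content }] }),
    },
  };
}

// Send one user message and return the assistant's reply.
async function chat(apiKey: string, model: string, content: string): Promise<string> {
  const { url, init } = buildChatRequest(apiKey, model, content);
  const res = await fetch(url, init);
  if (!res.ok) throw new Error(`Priv API error: HTTP ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content; // standard OpenAI response shape
}
```

With the official OpenAI Node SDK, the equivalent is constructing the client with a custom base URL: `new OpenAI({ apiKey: "priv-xxxx...", baseURL: "https://app.privai.net/api/v1" })`.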

Rate limits

Tier          Text/Day   Images/Day  Max Context
Free          10         5           32K tokens
Pro ($15/mo)  Unlimited  500         200K tokens

Privacy Architecture

This section describes the technical details of how Priv's encryption works. It's written for developers and security-minded users who want to verify our claims.

Key derivation

When you enter your password, the browser uses the Web Crypto API to run PBKDF2 with the following parameters:

  • Hash function: SHA-256
  • Iterations: 600,000
  • Salt: 32 random bytes (generated on registration, stored on server)
  • Output: 256-bit key, imported as a non-extractable CryptoKey for AES-256-GCM

The extractable flag is explicitly set to false when importing the derived key. This means even JavaScript running on the page cannot read the key material — it can only use the key for encrypt and decrypt operations through the Web Crypto API.
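The derivation itself can be reproduced outside the browser, e.g. with Node's built-in PBKDF2 (a sketch, not Priv's code — the browser uses crypto.subtle.deriveKey with the same parameters and marks the result non-extractable):

```typescript
import { pbkdf2Sync, randomBytes } from "crypto";

// PBKDF2-SHA256, 600,000 iterations, 256-bit output — the same parameters
// the docs describe. Same password + salt always yields the same key.
function deriveKey(password: string, salt: Buffer): Buffer {
  return pbkdf2Sync(password, salt, 600_000, 32, "sha256"); // 32 bytes = 256 bits
}

const salt = randomBytes(32); // generated once at registration, stored server-side
const key = deriveKey("correct horse battery staple", salt);
console.log(key.length); // → 32
```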

Encryption

Each piece of data is encrypted independently using AES-256-GCM with a fresh 96-bit (12-byte) initialization vector. The IV is generated using crypto.getRandomValues() and prepended to the ciphertext. The stored format is:

base64(iv) + ":" + base64(ciphertext)

AES-256-GCM provides both confidentiality and authentication. If anyone tampers with the ciphertext, decryption will fail rather than producing corrupted plaintext.
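You can check the authentication property directly: flip one bit of the ciphertext and GCM decryption throws instead of returning garbage. A sketch using Node's crypto (in the browser, crypto.subtle.decrypt would reject with a DOMException):

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

// Demonstrates GCM's integrity check: one flipped ciphertext bit makes
// decryption fail outright rather than yield corrupted plaintext.
const key = randomBytes(32);
const iv = randomBytes(12);
const cipher = createCipheriv("aes-256-gcm", key, iv);
const ct = Buffer.concat([cipher.update("secret note", "utf8"), cipher.final()]);
const tag = cipher.getAuthTag();

ct[0] ^= 0x01; // attacker flips one bit of the stored ciphertext

let detected = false;
const decipher = createDecipheriv("aes-256-gcm", key, iv);
decipher.setAuthTag(tag);
try {
  Buffer.concat([decipher.update(ct), decipher.final()]); // final() verifies the tag
} catch {
  detected = true; // authentication failure surfaces as an exception
}
console.log(detected ? "tampering detected: decryption failed" : "unexpected: decryption succeeded");
```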

What's encrypted

  • Conversation titles
  • All message content (user and assistant messages)
  • Image generation prompts
  • Conversation metadata

What's not encrypted

  • Your email address (needed for authentication)
  • Timestamps (needed for sorting)
  • Model identifiers (needed for routing)
  • Generated image URLs (hosted by the AI provider)

The one trade-off

To generate AI responses, your messages must be sent to the AI provider in plaintext. The flow is: the browser decrypts messages locally and sends the plaintext to the Priv server, which proxies it to OpenAI/Anthropic/DeepSeek. The response streams back and is encrypted in your browser before storage. The plaintext exists in server memory only during the API call — it is never written to disk, logged, or retained.

This is an inherent trade-off of any AI platform that uses third-party models. The alternative would be running models locally, which is not feasible for frontier models like GPT-4o or Claude. We chose to be transparent about this rather than pretend it doesn't happen.

FAQ

What happens if I forget my password?

Your data is permanently inaccessible. We do not store your password or encryption key, so there is no "reset password" flow that recovers your data. You can create a new account, but your old conversations cannot be decrypted. This is the fundamental trade-off of zero-knowledge encryption.

Can you read my conversations?

No. Our server stores only encrypted blobs. We don't have your password or encryption key, and AES-256-GCM cannot be broken with current technology. Even if our database were leaked, your conversations would be unintelligible ciphertext.

Is my data safe if your servers are compromised?

Yes. An attacker who gains full access to our database would see encrypted data, email addresses, and timestamps. Without your password to derive the decryption key, the conversation content is useless. Each user's data is encrypted with their own unique key.

Do you log my prompts?

No. Your prompts pass through our server transiently to reach the AI provider, but they are not written to disk, logged, or stored in plaintext. The only stored version is the encrypted ciphertext in the database.

Can I use Priv with the OpenAI SDK?

Yes. Set the base URL to https://app.privai.net/api/v1 and use your Priv API key. The API follows the OpenAI specification exactly, including streaming, function calling, and all response formats.

What models are available?

Nine models: GPT-4o, GPT-4o Mini, Claude Sonnet 4, Claude Haiku 4.5, DeepSeek V3, GPT-4o (Code), DeepSeek Coder, DALL-E 3, and Flux Pro. All are included in both free and pro tiers.

How is this different from Venice.ai?

Venice claims privacy but does not implement client-side encryption. Your data is decrypted on their servers and you must trust that they handle it properly. Priv uses AES-256-GCM encryption in your browser — our servers only ever see ciphertext. The difference is mathematical proof versus a promise.