The Assisters API is OpenAI-compatible and available to Pro subscribers at https://assisters.dev/api/v1. Use the openai npm or Python package with a changed base URL — no new SDK required. Available endpoints: chat completions, embeddings, models, moderation, audio transcriptions, and rerank.
Integration at a glance:
The Assisters API is an OpenAI-compatible REST API provided by Assisters (assisters.dev). It gives developers programmatic access to AI text generation, vector embeddings, content moderation, audio transcription, and relevance reranking — all from a single endpoint with flat-fee Pro pricing.
Because it follows the OpenAI API spec exactly, you do not need to learn a new SDK or change your existing code structure. For developers already using the OpenAI SDK, migration is a single environment variable change.
| Endpoint | Purpose | Streaming |
|---|---|---|
| POST /chat/completions | Text generation, chat, summarization | Yes |
| POST /embeddings | Convert text to float vectors | No |
| GET /models | List available models | No |
| POST /moderations | Content safety classification | No |
| POST /audio/transcriptions | Speech-to-text | No |
| POST /rerank | Rank results by relevance | No |
| Factor | Assisters API | OpenAI API |
|---|---|---|
| Pricing | $9/month flat (Pro) | Per-token billing |
| SDK compatibility | Full OpenAI SDK | Native |
| Chat completions | Yes | Yes |
| Streaming | Yes | Yes |
| Embeddings | Yes | Yes |
| Image generation | No | Yes (DALL-E) |
| Fine-tuning | No | Yes |
| Function calling | Check current docs | Yes |
| Moderation endpoint | Yes | Yes |
| Audio transcription | Yes | Yes |
| Reranking | Yes | No (separate providers) |
| Cost predictability | High | Variable |
1. Developers building content apps: blog generators, email writers, summarizers, chatbots. The chat completions endpoint handles all of these, with streaming support for real-time output.
2. Teams building semantic search: the embeddings endpoint produces vector representations of text that can be stored in pgvector, Pinecone, or any vector database. Combine it with the rerank endpoint for high-quality RAG pipelines.
3. Developers who need content moderation: the moderation endpoint classifies potentially harmful content before it reaches your users or is stored in your database. A single API call for safety filtering.
4. Product builders who want cost predictability: at $9/month flat, you know your AI infrastructure cost before the month starts. No surprise bills when traffic spikes.
Sign up at assisters.dev, start the 14-day Pro trial (credit card required, not charged for 14 days), go to Dashboard then API Settings, and generate your key. Store it in your project's environment variables — never commit it to source control.
Install the standard openai npm package (or pip package for Python). No Assisters-specific package needed.
The only change from a standard OpenAI setup is the base URL. Your key is read from the environment:
```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://assisters.dev/api/v1',
  apiKey: process.env.ASSISTERS_API_KEY,
});
```
Python equivalent:
```python
from openai import OpenAI
import os

client = OpenAI(
    base_url="https://assisters.dev/api/v1",
    api_key=os.environ["ASSISTERS_API_KEY"],
)
```
Basic non-streaming request:
```typescript
const response = await client.chat.completions.create({
  model: 'assisters-chat-v1',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Summarize TypeScript benefits in 3 bullet points.' },
  ],
  max_tokens: 300,
});

console.log(response.choices[0].message.content);
```
Streaming (tokens arrive in real time):
```typescript
const stream = await client.chat.completions.create({
  model: 'assisters-chat-v1',
  messages: [{ role: 'user', content: 'Write a blog intro about remote work.' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
}
```
Next.js streaming API route:
```typescript
import OpenAI from 'openai';
import { OpenAIStream, StreamingTextResponse } from 'ai';

const client = new OpenAI({
  baseURL: 'https://assisters.dev/api/v1',
  apiKey: process.env.ASSISTERS_API_KEY,
});

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const response = await client.chat.completions.create({
    model: 'assisters-chat-v1',
    messages: [{ role: 'user', content: prompt }],
    stream: true,
  });

  return new StreamingTextResponse(OpenAIStream(response));
}
```
Creating an embedding:

```typescript
// Confirm the embeddings model ID via GET /models before shipping.
const result = await client.embeddings.create({
  model: 'assisters-chat-v1',
  input: 'How do I reset my password?',
});

const vector = result.data[0].embedding; // float[]
```
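Once you have vectors, relevance is typically measured with cosine similarity. A minimal sketch (`cosineSimilarity` is a hypothetical helper, not part of the SDK; vector databases like pgvector compute this for you server-side):

```typescript
// Cosine similarity between two embedding vectors:
// 1.0 = same direction, 0 = unrelated, -1.0 = opposite.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

This is handy for small in-memory comparisons; for anything beyond a few thousand documents, use a vector database index instead.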
RAG pattern — semantic search with pgvector:
```typescript
const qEmbed = await client.embeddings.create({
  model: 'assisters-chat-v1',
  input: userQuery,
});

const docs = await supabase.rpc('match_documents', {
  query_embedding: qEmbed.data[0].embedding,
  match_threshold: 0.78,
  match_count: 5,
});

const answer = await client.chat.completions.create({
  model: 'assisters-chat-v1',
  messages: [
    {
      role: 'system',
      content:
        'Answer using this context: ' +
        docs.data.map((d: { content: string }) => d.content).join(' '),
    },
    { role: 'user', content: userQuery },
  ],
});
```
Moderation check:

```typescript
const check = await client.moderations.create({ input: userContent });

if (check.results[0].flagged) {
  // reject or queue for manual review
}
```
Listing available models:

```typescript
const models = await client.models.list();
models.data.forEach((m) => console.log(m.id));
```
Pattern 1: AI-powered blog writing tool. Chat completions stream into the editor; the user reviews and publishes.
Pattern 2: Customer support chatbot. Embeddings for the knowledge base, rerank for the best matches, chat completions with the retrieved context.
Pattern 3: User content safety pipeline. The user submits content and the moderation endpoint checks it; if clean, store and display; if flagged, queue for review.
Pattern 4: Semantic search. Index documents via embeddings, store them in pgvector, and query with embeddings plus cosine similarity.
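The rerank endpoint has no dedicated method on the OpenAI SDK, so call it over HTTP. A sketch assuming a Cohere-style request/response shape and a hypothetical model ID (`assisters-rerank-v1`); check the current docs at assisters.dev for the exact field names:

```typescript
interface RerankResult {
  index: number;          // position of the document in the input array
  relevance_score: number; // higher = more relevant to the query
}

// Pure helper: order results by descending relevance.
function byRelevance(results: RerankResult[]): RerankResult[] {
  return [...results].sort((a, b) => b.relevance_score - a.relevance_score);
}

async function rerank(query: string, documents: string[]): Promise<RerankResult[]> {
  const res = await fetch('https://assisters.dev/api/v1/rerank', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.ASSISTERS_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'assisters-rerank-v1', // hypothetical ID; list models via GET /models
      query,
      documents,
      top_n: 3,
    }),
  });
  const { results } = await res.json();
  return byRelevance(results);
}
```

In a RAG pipeline this sits between the vector search (fetch, say, 20 candidates) and the chat completion (keep the top 3).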
Q: What do I need to install?
A: Just the openai npm package (or Python equivalent). Set baseURL to https://assisters.dev/api/v1 in the constructor. No other changes needed for existing OpenAI integrations.
Q: How are request timeouts handled?
A: Standard HTTP timeouts apply. For long generation requests, set a timeout of 30–60 seconds on your HTTP client. Streaming responses begin faster than waiting for the full completion.
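With the openai npm package, the timeout and retry count are standard client options, so no extra HTTP plumbing is needed:

```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://assisters.dev/api/v1',
  apiKey: process.env.ASSISTERS_API_KEY,
  timeout: 60_000, // abort any request after 60 seconds
  maxRetries: 2,   // retry transient failures twice before throwing
});
```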
Q: Can I call the API from the browser?
A: Never expose your API key in client-side JavaScript. Always call the Assisters API from server-side code (API routes, server actions, serverless functions). If you need client-side AI, proxy through your own backend.
Q: How should I handle errors?
A: The API returns standard HTTP error codes. Catch OpenAI.APIError in TypeScript or openai.APIError in Python. Common errors: 401 (invalid key), 429 (rate limit), 500 (server error). Implement exponential backoff for retries.
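A minimal retry sketch: a generic wrapper (`withRetry` is a hypothetical helper, not an SDK feature) that retries 429s and 5xx responses with exponential backoff and fails fast on everything else:

```typescript
// Delay before retry attempt n: 1s, 2s, 4s, ...
const backoffMs = (attempt: number) => 1000 * 2 ** attempt;

// Wraps any SDK call; OpenAI.APIError exposes the HTTP status as .status.
async function withRetry<T>(call: () => Promise<T>, maxAttempts = 4): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await call();
    } catch (err: any) {
      const status = err?.status ?? 0;
      const retryable = status === 429 || status >= 500;
      if (!retryable || attempt >= maxAttempts - 1) throw err; // e.g. 401: fail fast
      await new Promise((resolve) => setTimeout(resolve, backoffMs(attempt)));
    }
  }
}
```

Usage: `const res = await withRetry(() => client.chat.completions.create({ ... }));`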
Q: Does it work with LangChain?
A: Yes. LangChain's OpenAI integration accepts a custom baseURL. Initialize ChatOpenAI with the Assisters base URL and your API key stored as an environment variable.
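With the @langchain/openai package, the base URL is passed through the `configuration` option to the underlying OpenAI client:

```typescript
import { ChatOpenAI } from '@langchain/openai';

const model = new ChatOpenAI({
  model: 'assisters-chat-v1',
  apiKey: process.env.ASSISTERS_API_KEY,
  configuration: { baseURL: 'https://assisters.dev/api/v1' },
});

const reply = await model.invoke('Summarize TypeScript benefits in one sentence.');
console.log(reply.content);
```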
Q: Is there a batch or async job API?
A: Check the current API documentation at assisters.dev for async job APIs. For long-running generation tasks, streaming is the recommended approach.
The Assisters API is the straightforward choice for developers who want OpenAI-compatible AI infrastructure at a predictable flat cost. The drop-in SDK compatibility removes migration friction, and the breadth of endpoints (chat, embeddings, moderation, transcription, reranking) covers most standard AI app requirements from a single provider.
The limitation to plan around: for specialized tasks requiring GPT-4o's reasoning, image generation, or fine-tuning, the OpenAI API offers more advanced capabilities. For the majority of production AI app use cases, Assisters delivers at a lower and more predictable cost.
Get your API key and start building at Assisters — 14-day Pro trial, cancel anytime.
Also see: Assisters AI for Developers Review | Assisters vs ChatGPT 2026 | Best AI Tools for Freelancers 2026