Overview
The NanoGPT API allows you to generate text, images, and video using any available AI model. Our text-generation implementation generally follows the OpenAI standard. We also support TEE-backed models with attestation/signature verification for stronger integrity and data-in-use protection. Note that TEE does not by itself guarantee end-to-end confidentiality or zero logging across every network/provider hop; guarantees are provider- and path-specific, and depending on transport and provider architecture, plaintext may still exist at gateway/proxy layers outside the enclave. For verification details, see TEE Model Verification.

All examples in this documentation also work on our alternative domains. Just replace the base URL
https://nano-gpt.com with your preferred domain: ai.bitcoin.com, bcashgpt.com, or cake.nano-gpt.com. Only the base URL changes; endpoints and request formats remain the same.

Main API Endpoints
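Because only the base URL differs between domains, switching is a one-line change. A minimal sketch (the endpoint path is the chat completions path from the table below; the helper name is ours):

```python
# Any of these bases serve the same API; endpoints and payloads are identical.
BASE_URLS = [
    "https://nano-gpt.com",
    "https://ai.bitcoin.com",
    "https://bcashgpt.com",
    "https://cake.nano-gpt.com",
]

def endpoint(base, path="/api/v1/chat/completions"):
    """Join a base domain with an API path, tolerating a trailing slash."""
    return base.rstrip("/") + path
```

Requests built this way are otherwise identical regardless of which base you pick.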
| Endpoint | Purpose |
|---|---|
| POST /api/v1/chat/completions | OpenAI-compatible chat generation with optional web search via model suffixes like :online |
| POST /api/web | Direct Web Search API with explicit query, filters, and output mode control |
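The :online suffix mentioned in the table is appended to the model ID itself. A small sketch of that convention (the model ID is a placeholder; only the suffix comes from the table above):

```python
def with_web_search(model_id):
    """Append the :online suffix to enable web search for a chat model.

    The suffix convention is taken from the endpoint table; the model ID
    passed in is whatever model you would normally request.
    """
    if model_id.endswith(":online"):
        return model_id  # already enabled, avoid doubling the suffix
    return model_id + ":online"
```

The suffixed ID is then used in place of the plain model ID in a normal chat completions request.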
Chat Completion Example
Here’s a simple Python example using our OpenAI-compatible chat completions endpoint:
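A minimal sketch using only the standard library (the model ID and API key are placeholders; the request and response shapes follow the OpenAI chat completions format):

```python
import json
import os
import urllib.request

BASE_URL = "https://nano-gpt.com/api/v1"

def build_chat_request(prompt, model="gpt-4o", api_key="YOUR_API_KEY"):
    """Build an OpenAI-style chat completions request.

    The model ID here is a placeholder; use any model available on NanoGPT.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    # Read the key from the environment rather than hard-coding it.
    req = build_chat_request("Hello!", api_key=os.environ["NANOGPT_API_KEY"])
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible responses put the text at choices[0].message.content.
    print(body["choices"][0]["message"]["content"])
```

The same request works against any of the alternative domains by changing BASE_URL only.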
Quick Start
The quickest way to get started with our API is to explore our Endpoint Examples. Each endpoint page provides comprehensive documentation with request/response formats and example code. The Chat Completion endpoint is a great starting point for text generation.

Documentation Sections
For detailed documentation on each feature, please refer to the following sections:
- Text Generation - Complete guide to text generation APIs, including OpenAI-compatible endpoints and legacy options
- Image Generation - Learn how to generate images using models such as Recraft, Flux, and Stable Diffusion
- Video Generation - Create high-quality videos with our video generation API
- TEE Model Verification - Verify attestation and signatures for TEE-backed models
What’s New
- Wavespeed video models now accept direct `imageUrl` inputs, so you can reference publicly hosted images without converting them to base64 first.
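As a sketch of what that change means for request bodies: the `imageUrl` field name comes from the note above, but the surrounding payload shape (the `prompt` and base64 `image` fields) is illustrative only, not the documented schema.

```python
import base64

def payload_with_base64(image_bytes, prompt):
    """Old-style payload: inline the image as base64 (field names illustrative)."""
    return {
        "prompt": prompt,
        "image": base64.b64encode(image_bytes).decode("ascii"),
    }

def payload_with_url(image_url, prompt):
    """New-style payload: pass a publicly hosted image URL via imageUrl."""
    return {
        "prompt": prompt,
        "imageUrl": image_url,
    }
```

The URL variant avoids both the encoding step and the payload size inflation of base64.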