Here’s an example using the OpenAI-compatible endpoint in Python:
```python
import base64

import requests

API_KEY = "YOUR_API_KEY"

def generate_image(prompt, model="hidream", size="1024x1024"):
    response = requests.post(
        "https://nano-gpt.com/v1/images/generations",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json"
        },
        json={
            "model": model,
            "prompt": prompt,
            "n": 1,
            "size": size,
            "response_format": "b64_json"
        }
    )
    response.raise_for_status()
    return response.json()

# Example usage
prompt = "A serene landscape with mountains and a lake at sunset, digital art style"
result = generate_image(prompt)

# Decode the base64-encoded image and save it to disk
image_bytes = base64.b64decode(result["data"][0]["b64_json"])
with open("generated_image.png", "wb") as f:
    f.write(image_bytes)

print("Image generated successfully!")
print("Image saved as 'generated_image.png'")
```
For more detailed examples and other image generation options, check out our Image Generation Guide.
Web Search
You can use web search in two ways:
POST /api/v1/chat/completions with suffixes like :online and :online/linkup-deep
POST /api/web for direct search control (query, output type, filters)
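The second option can be sketched as follows. This is a minimal sketch assuming the endpoint accepts the same Bearer authentication as chat completions and a JSON body with a `query` field; the `outputType` field name is an assumption here (the exact payload shape, including output types and filters, is covered in the Direct Web Search API reference):

```python
import requests

API_KEY = "YOUR_API_KEY"
SEARCH_URL = "https://nano-gpt.com/api/web"

def web_search(query, output_type="sourcedAnswer"):
    """POST a direct web search. Field names other than "query" are assumptions;
    see the Direct Web Search API reference for the authoritative schema."""
    response = requests.post(
        SEARCH_URL,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        json={
            "query": query,
            "outputType": output_type,  # assumed name for the output-type field
        },
    )
    response.raise_for_status()
    return response.json()

# Example usage (requires a valid API key):
# result = web_search("latest AI announcements this week")
# print(result)
```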
Enable real-time web search in chat completions by adding suffixes to the model name:
```python
import requests

BASE_URL = "https://nano-gpt.com/api/v1"
API_KEY = "YOUR_API_KEY"

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json"
}

# Standard web search ($0.006 per request)
data = {
    "model": "openai/gpt-5.2:online",
    "messages": [
        {"role": "user", "content": "What are the latest AI announcements this week?"}
    ],
    "stream": False
}

response = requests.post(
    f"{BASE_URL}/chat/completions",
    headers=headers,
    json=data
)
print("Response:", response.json()['choices'][0]['message']['content'])

# Deep web search for comprehensive research ($0.06 per request)
deep_data = {
    "model": "anthropic/claude-opus-4.5:online/linkup-deep",
    "messages": [
        {"role": "user", "content": "Provide a detailed analysis of recent breakthroughs in quantum computing"}
    ]
}

deep_response = requests.post(
    f"{BASE_URL}/chat/completions",
    headers=headers,
    json=deep_data
)
print("Deep response:", deep_response.json()['choices'][0]['message']['content'])
```
Web search works with all models and provides:
Access to real-time information (updated within the last minute)
10x improvement in factuality
Standard search: 10 results, returned quickly
Deep search: Iterative searching for comprehensive information
For direct endpoint usage (including sourcedAnswer, structured output, date/domain filters, and auth header variants), see the Direct Web Search API reference. Check out our Chat Completion Guide for more examples.
TEE Model Verification
NanoGPT supports TEE-backed models with attestation/signature verification for stronger integrity and data-in-use isolation. Confidentiality and logging outcomes are provider-specific, and plaintext may still exist at gateway/proxy layers outside the enclave, depending on transport and provider architecture. You can fetch attestation reports and signatures for chat completions made with these models.
After making a chat request with a TEE model, you can get its signature:
```bash
# First, make a chat request (see Text Generation accordion or TEE Verification guide)
# Then, use the request_id from the chat response:
curl "https://nano-gpt.com/api/v1/tee/signature/YOUR_CHAT_REQUEST_ID?model=TEE/hermes-3-llama-3.1-70b&signing_algo=ecdsa" \
  -H "Authorization: Bearer YOUR_API_KEY"
```
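The same signature fetch can be written in Python, mirroring the curl request above (the endpoint, query parameters, and auth header are taken directly from it; the helper function name is just for illustration):

```python
import requests

API_KEY = "YOUR_API_KEY"
BASE_URL = "https://nano-gpt.com/api/v1"

def get_tee_signature(request_id,
                      model="TEE/hermes-3-llama-3.1-70b",
                      signing_algo="ecdsa"):
    """Fetch the TEE signature for a completed chat request by its request_id."""
    response = requests.get(
        f"{BASE_URL}/tee/signature/{request_id}",
        headers={"Authorization": f"Bearer {API_KEY}"},
        params={"model": model, "signing_algo": signing_algo},
    )
    response.raise_for_status()
    return response.json()

# Example usage (requires the request_id returned by a TEE chat completion):
# signature = get_tee_signature("YOUR_CHAT_REQUEST_ID")
# print(signature)
```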