Here’s an example using our image generation endpoint with the Recraft model:
import requests
import json
import base64
from PIL import Image
import io

BASE_URL = "https://nano-gpt.com/api"
API_KEY = "YOUR_API_KEY"

headers = {
    "x-api-key": API_KEY,
    "Content-Type": "application/json"
}

def generate_image(prompt, model="recraft-v3", width=1024, height=1024):
    """
    Generate an image using the Recraft model.
    """
    data = {
        "prompt": prompt,
        "model": model,
        "width": width,
        "height": height,
        "negative_prompt": "blurry, bad quality, distorted, deformed",
        "nImages": 1,
        "num_steps": 30,
        "resolution": "1024x1024",
        "sampler_name": "DPM++ 2M Karras",
        "scale": 7.5
    }

    response = requests.post(
        f"{BASE_URL}/generate-image",
        headers=headers,
        json=data
    )

    if response.status_code != 200:
        raise Exception(f"Error: {response.status_code}")

    result = response.json()

    # Decode and save the image
    image_data = base64.b64decode(result['image'])
    image = Image.open(io.BytesIO(image_data))
    image.save("generated_image.png")

    return result

# Example usage
prompt = "A serene landscape with mountains and a lake at sunset, digital art style"

try:
    result = generate_image(prompt)
    print("Image generated successfully!")
    print("Cost:", result.get('cost', 'N/A'))
    print("Image saved as 'generated_image.png'")
except Exception as e:
    print(f"Error: {str(e)}")
For more detailed examples and other image generation options, check out our Image Generation Guide.
Web Search
Enable real-time web search for any model by adding suffixes to the model name:
import requests
import json

BASE_URL = "https://nano-gpt.com/api/v1"
API_KEY = "YOUR_API_KEY"

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json"
}

# Standard web search ($0.006 per request)
data = {
    "model": "chatgpt-4o-latest:online",
    "messages": [
        {"role": "user", "content": "What are the latest AI announcements this week?"}
    ],
    "stream": False
}

response = requests.post(
    f"{BASE_URL}/chat/completions",
    headers=headers,
    json=data
)

print("Response:", response.json()['choices'][0]['message']['content'])

# Deep web search for comprehensive research ($0.06 per request)
deep_data = {
    "model": "claude-3-5-sonnet-20241022:online/linkup-deep",
    "messages": [
        {"role": "user", "content": "Provide a detailed analysis of recent breakthroughs in quantum computing"}
    ]
}

deep_response = requests.post(
    f"{BASE_URL}/chat/completions",
    headers=headers,
    json=deep_data
)

print("Deep response:", deep_response.json()['choices'][0]['message']['content'])
Web search works with all models and provides:
Access to real-time information (including sources updated less than a minute ago)
10x improvement in factuality
Standard search: returns the top 10 results quickly
Deep search: Iterative searching for comprehensive information
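Because the two search modes are just model-name suffixes, a small helper can make the choice explicit in your code. The sketch below uses a hypothetical with_web_search function name; the suffixes and prices are the ones shown above:

def with_web_search(model: str, deep: bool = False) -> str:
    """Append the web-search suffix to any model name.

    ":online" enables standard search ($0.006 per request);
    ":online/linkup-deep" enables deep, iterative search ($0.06 per request).
    """
    return f"{model}:online/linkup-deep" if deep else f"{model}:online"

# Example usage
print(with_web_search("chatgpt-4o-latest"))                       # chatgpt-4o-latest:online
print(with_web_search("claude-3-5-sonnet-20241022", deep=True))   # claude-3-5-sonnet-20241022:online/linkup-deep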
For more advanced web search capabilities, including structured output, domain filtering, and date filtering, see the Web Search API. Check out our Chat Completion Guide for more examples.
TEE Model Verification
NanoGPT supports TEE-backed models for verifiable privacy. You can fetch attestation reports and signatures for chat completions made with these models.
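Here's how you might fetch an attestation report. This is a minimal sketch: the attestation endpoint path below is an assumption modeled on the signature endpoint shown next, so confirm the exact route and parameters in the TEE Verification guide:

import requests

# Assumed attestation endpoint, mirroring the signature endpoint below;
# verify the exact path and parameters in the TEE Verification guide.
ATTESTATION_URL = "https://nano-gpt.com/api/v1/tee/attestation"

response = requests.get(
    ATTESTATION_URL,
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    params={"model": "TEE/hermes-3-llama-3.1-70b"}
)
print(response.json())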
After making a chat request with a TEE model, you can get its signature:
# First, make a chat request (see Text Generation accordion or TEE Verification guide)
# Then, use the request_id from the chat response:
curl "https://nano-gpt.com/api/v1/tee/signature/YOUR_CHAT_REQUEST_ID?model=TEE/hermes-3-llama-3.1-70b&signing_algo=ecdsa" \
  -H "Authorization: Bearer YOUR_API_KEY"
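The same request can be made from Python; this is a minimal sketch of the curl call above, and it simply prints the raw JSON so you can inspect the returned signature fields:

import requests

request_id = "YOUR_CHAT_REQUEST_ID"  # returned by the original chat completion

response = requests.get(
    f"https://nano-gpt.com/api/v1/tee/signature/{request_id}",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    params={"model": "TEE/hermes-3-llama-3.1-70b", "signing_algo": "ecdsa"}
)
print(response.json())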