POST /generate-video

cURL
curl --request POST \
  --url https://nano-gpt.com/api/generate-video \
  --header 'Content-Type: application/json' \
  --header 'x-api-key: <api-key>' \
  --data '
{
  "model": "<string>",
  "prompt": "A serene lake at sunset with gentle ripples on the water",
  "script": "<string>",
  "conversationUUID": "<string>",
  "projectId": "<string>",
  "framework": "default",
  "shortRequestEnhancer": false,
  "targetLengthInWords": 70,
  "targetLengthInSeconds": 123,
  "directorNotes": "Warm, cozy lighting with focus on people interacting",
  "aspectRatio": "9:16",
  "scriptConfig": {
    "style": "default",
    "targetLengthInSeconds": 30
  },
  "imageConfig": {
    "model": "hidream_dev",
    "loraConfig": {
      "loraSlug": "ghibsky-comic-book"
    }
  },
  "videoConfig": {
    "enabled": true,
    "model": "kling_v2_1_std_5s"
  },
  "voiceoverConfig": {
    "enabled": true,
    "voiceId": "zWDA589rUKXuLnPRDtAG"
  },
  "captionsConfig": {
    "captionsEnabled": true,
    "captionsStyle": "tiktok"
  },
  "effectsConfig": {
    "transition": "fade",
    "floating": true
  },
  "musicConfig": {
    "enabled": true,
    "musicSlug": "gentle_ambient_loop",
    "volume": 0.3,
    "loop": true
  },
  "voice": "pNInz6obpgDQGcFmaJgB",
  "captionsShow": true,
  "captionsStyle": "default",
  "effects": {
    "transition": "fade",
    "floating": false
  },
  "quality": "medium",
  "motion": {
    "enabled": false,
    "strength": 3
  },
  "music": "video-creation/music/dramatic_cinematic_score.mp3",
  "duration": "5s",
  "aspect_ratio": "16:9",
  "negative_prompt": "blur, distort, and low quality",
  "cfg_scale": 0.5,
  "imageDataUrl": "data:image/jpeg;base64,/9j/4AAQ...",
  "imageUrl": "https://images.unsplash.com/photo-1504196606672-aef5c9cefc92?w=1024",
  "imageAttachmentId": "<string>",
  "videoUrl": "<string>",
  "videoDataUrl": "<string>",
  "video": "<string>",
  "videoAttachmentId": "<string>",
  "prompt_optimizer": true,
  "num_inference_steps": 30,
  "pro_mode": false,
  "resolution": "720p",
  "num_frames": 81,
  "frames_per_second": 16,
  "seed": 123,
  "enable_safety_checker": true,
  "showExplicitContent": false,
  "enable_prompt_expansion": true,
  "acceleration": true,
  "shift": 123,
  "age_slider": 18,
  "audioEnabled": false,
  "video_quality": "Standard",
  "aspect": "Portrait"
}
'
Example response:

{
  "runId": "<string>",
  "status": "pending",
  "model": "<string>",
  "projectId": "<string>",
  "cost": 123,
  "paymentSource": "<string>",
  "remainingBalance": 123,
  "prechargeLabel": "<string>"
}
Image-conditioned models accept either imageDataUrl (base64) or a public imageUrl. The service uses the explicit value you provide before checking any saved attachments.

Overview

POST /generate-video submits an asynchronous job to create, extend, or edit a video. The endpoint responds immediately with runId, id, model, and status: "pending". runId and id are the same NanoGPT job identifier (format vid_...). Poll the unified Video Status endpoint with that job ID until you receive final assets. Duration-based billing is assessed after completion. Errors include descriptive JSON payloads. Surface the error.message (and HTTP status) to help users correct content-policy or validation issues.
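The submit-then-poll flow described above can be sketched in a few lines of Python using only the standard library. This is an illustrative sketch, not an official client: the HTTP call is factored into an injectable `call` function so you can swap in your own transport, and YOUR_API_KEY is a placeholder.

```python
import json
import time
import urllib.request

BASE = "https://nano-gpt.com/api"

def _call(url, api_key, payload=None):
    """POST JSON when a payload is given, otherwise GET; decode the JSON reply."""
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(
        url,
        data=data,
        headers={"x-api-key": api_key, "Content-Type": "application/json"},
        method="POST" if payload is not None else "GET",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def submit_video_job(api_key, payload, call=_call):
    """Submit a job; the endpoint answers immediately with a pending runId."""
    body = call(f"{BASE}/generate-video", api_key, payload)
    return body["runId"]  # identical to body["id"], format vid_...

def poll_until_done(api_key, run_id, call=_call, interval=5.0):
    """Poll the unified status endpoint until a terminal state is reached."""
    while True:
        body = call(f"{BASE}/video/status?requestId={run_id}", api_key)
        if body.get("status") in ("COMPLETED", "FAILED"):
            return body
        time.sleep(interval)
```

`urllib.request.urlopen` raises on non-200 responses; in production you would catch `urllib.error.HTTPError`, read its body, and surface the error.message as recommended above.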

Extend Workflows

  • Midjourney extend (task-based): use POST /api/generate-video/extend with runId (preferred) or taskId (legacy alias) plus index (0-3). This flow does not accept video, videoUrl, videoDataUrl, or videoAttachmentId.
  • Source video extend (extend models): use POST /api/generate-video with an extend model plus prompt and a source video (videoUrl, videoDataUrl, or videoAttachmentId). video is only accepted by select models (for example, wan-wavespeed-25-extend). Max source video length: 120 seconds.
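The two extend flows take different payload shapes, which the following sketch contrasts (the job ID and URL are illustrative placeholders, not real values):

```python
# Midjourney extend (task-based): job ID plus which of the four outputs to extend.
midjourney_extend = {
    "runId": "vid_m1abc123def456",  # illustrative; taskId is a legacy alias
    "index": 2,                     # 0-3
}

# Source-video extend: an extend model, a prompt, and one source-video field.
source_extend = {
    "model": "wan-wavespeed-25-extend",
    "prompt": "Continue the camera push toward the shoreline",
    "videoUrl": "https://example.com/source-clip.mp4",  # source must be <= 120 s
}

# The task-based flow must not carry any source-video field.
forbidden = {"video", "videoUrl", "videoDataUrl", "videoAttachmentId"}
assert not forbidden & midjourney_extend.keys()
```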

Request Schema

Only include the fields required by your chosen model. Unknown keys are ignored, but some models fail when extra media fields are present.

Core Fields

| field | type | required | details |
| --- | --- | --- | --- |
| model | string | yes | Video model ID. Model availability changes; discover models via GET /api/v1/models?detailed=true. |
| conversationUUID | string | no | Attach the request to a conversation thread. |
| prompt | string | conditional | Required for text-to-video and edit models unless a structured script is supplied. |
| negative_prompt | string | no | Suppresses specific content. Respected by Veo, Wan, Runway, Pixverse, and other models noted below. |
| script | string | conditional | LongStories models accept full scripts instead of relying on prompt. |
| storyConfig | object | conditional | LongStories structured payload (e.g. scenes, narration, voice). |
| animation | boolean | no | Enables animation for LongStories outputs. |
| language | string | no | Output language for LongStories. |
| characters | array | no | Character definitions for LongStories. |
| duration | string | conditional | Seconds as a string ("5", "8", "60"). Limits vary per model; see individual entries. |
| seconds | string | conditional | Sora-specific duration selector ("4", "8", "12"). |
| aspect_ratio | string | conditional | Ratios such as 16:9, 9:16, 1:1, 3:4, 4:3, 21:9, auto. |
| orientation | string | conditional | landscape or portrait for Sora and Wan text/image flows. |
| resolution | string | conditional | Resolution tokens (480p, 580p, 720p, 1080p, 1792x1024, 2k, 4k). |
| size | string | conditional | Output size preset (supported by select models). |
| mode | string | no | Operation mode: text-to-video, image-to-video, reference-to-video, video-edit. |
| generateAudio | boolean | no | Adds AI audio on Veo 3 and Lightricks models. Defaults to false. |
| enhancePrompt | boolean | no | Optional Veo 3 prompt optimizer. Defaults to false. |
| pro_mode / pro | boolean | no | High-quality toggle for Sora and Hunyuan families. Defaults to false. |
| enable_prompt_expansion | boolean | no | Prompt booster for Wan/Seedance/Minimax variants. Disabled by default. |
| enable_safety_checker | boolean | no | Optional safety checker toggle (supported by select models). |
| camera_fix / camera_fixed / cameraFixed | boolean | no | Locks the virtual camera for Seedance and Wan variants. |
| seed | number or string | no | Deterministic seed when supported (Veo, Wan, Pixverse). |
| voiceId | string | conditional | Alternate voice selector for lipsync models. |
| voice_id | string | conditional | Required by kling-lipsync-t2v. |
| voice_language | string | conditional | en or zh for kling-lipsync-t2v. |
| voice_speed | number | conditional | Range 0.8-2.0 for kling-lipsync-t2v. |
| videoDuration / billedDuration | number | no | Optional overrides for upscaler billing calculations. |
| adjust_fps_for_interpolation | boolean | no | Optional toggle for interpolation-aware upscaling. Defaults to false. |
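As a concrete use of the voice fields above, a kling-lipsync-t2v request might look like this sketch (the voice ID is a placeholder you must replace):

```python
lipsync_payload = {
    "model": "kling-lipsync-t2v",
    "prompt": "A news anchor delivering a calm weather update",
    "voice_id": "your-voice-id",  # placeholder; supply a real voice ID
    "voice_language": "en",       # en or zh
    "voice_speed": 1.2,           # allowed range 0.8-2.0
}

assert 0.8 <= lipsync_payload["voice_speed"] <= 2.0
```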

Media Inputs

| field | type | required | details |
| --- | --- | --- | --- |
| imageDataUrl | string | conditional | Base64-encoded data URL. Recommended for private assets or files larger than 4 MB. |
| imageUrl | string | conditional | HTTPS link to a source image. |
| imageAttachmentId | string | conditional | Reference to a library-stored image. |
| image | string | conditional | Alternate image field accepted by select models. Prefer imageUrl unless the model explicitly requires image. |
| reference_image | string | conditional | Optional still image guiding runwayml-gen4-aleph. |
| referenceImages | array | conditional | Multiple reference images for reference-to-video flows. |
| referenceVideos | array | conditional | Multiple reference videos. |
| audioDataUrl | string | conditional | Base64 data URL for audio-driven models. |
| audioDuration | number | conditional | Duration of provided audio in seconds. |
| audioUrl | string | conditional | HTTPS audio input. |
| audio | string | conditional | Alternate audio field accepted by select models. Prefer audioUrl unless the model explicitly requires audio. |
| videoUrl | string | conditional | HTTPS link to a source video (edit, extend, upscaler, or lipsync jobs). |
| videoDataUrl | string | conditional | Base64 data URL for a source video. |
| video | string | conditional | Alternate video field accepted by select models. Prefer videoUrl unless the model explicitly requires video. |
| videoAttachmentId | string | conditional | Reference to a library-stored video. |
| swapImage | string | conditional | Swap image (face-swap models). |
| targetVideo | string | conditional | Target video (face-swap models). |
| targetFaceIndex | number | no | Optional face index (face-swap models). |

Provide only the media fields that your target model expects. Extra media inputs often trigger validation errors. Prefer videoUrl (camelCase) for source videos; only send video when the model explicitly requires it.
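One way to enforce the single-media-field rule client-side is a small picker like the sketch below. Note the data-URL-before-URL ordering is this sketch's own choice; the documentation only guarantees that an explicit value wins over a saved attachment.

```python
def image_input(data_url=None, url=None, attachment_id=None):
    """Return a payload fragment with at most one image field set.

    Explicit values take precedence over a library attachment; choosing
    the data URL ahead of the URL is an assumption of this helper.
    """
    if data_url:
        return {"imageDataUrl": data_url}
    if url:
        return {"imageUrl": url}
    if attachment_id:
        return {"imageAttachmentId": attachment_id}
    return {}

payload = {"model": "veo2-video", "prompt": "A lake at dusk"}
payload.update(image_input(url="https://example.com/still.jpg"))
```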

Advanced Controls

| field | type | models |
| --- | --- | --- |
| num_frames | integer | Wan 2.2 families, Seedance 22 5B, Wan image-to-video. |
| frames_per_second | integer | Wan 2.2 5B. |
| num_inference_steps | integer | Wan 2.2 families. |
| guidance_scale | number | Wan 2.2 5B. |
| shift | number | Wan 2.2 5B. |
| interpolator_model | string | Wan 2.2 5B. |
| num_interpolated_frames | integer | Wan 2.2 5B. |
| movementAmplitude | string | Select models (for example auto, small, medium, large). |
| motion | string | Select models (for example low, high). |
| style | string | Select models (style/preset strings). |
| effectType, effect, cameraMovement, motionMode, soundEffectSwitch, soundEffectPrompt | varies | Pixverse v4.5/v5. |
| mode | string | Select models (for example animate, replace). |
| prompt_optimizer | boolean | Select models. |

Model Discovery

Video model IDs and supported fields change over time. Use GET /api/v1/models?detailed=true to discover the current list and select a model intended for video generation. Notes:
  • Different models accept different media inputs (for example imageUrl vs a source videoUrl) and may support different duration / resolution options.
  • If you see validation errors, first retry with only the minimal required fields for your chosen model.
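A discovery step might filter the models list down to video models, as in this sketch. The response shape assumed here ({"models": [{"id": ..., "type": ...}]}) is illustrative only; adapt the keys to the payload GET /api/v1/models?detailed=true actually returns.

```python
def pick_video_models(models_payload):
    """Extract video-capable model IDs from an assumed models-list shape."""
    return [
        m["id"]
        for m in models_payload.get("models", [])
        if m.get("type") == "video"
    ]

# Usage with a fabricated payload in the assumed shape:
sample = {"models": [{"id": "veo2-video", "type": "video"},
                     {"id": "some-image-model", "type": "image"}]}
video_ids = pick_video_models(sample)
```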

Async Processing & Status Polling

  • The submission response includes { runId, id, model, status: "pending" } where id and runId are identical.
  • Poll /api/video/status?requestId=<runId> (the runId query parameter is accepted as an alias) until the job reaches status: "COMPLETED" or status: "FAILED". The legacy /api/generate-video/status endpoint is deprecated.
  • Many jobs emit intermediate states (queued, processing, generating, delivering). Persist them if you need audit trails.
  • Failed jobs include an error object. Surface the message and adjust prompts or inputs before retrying.
  • Duration and resolution determine credit usage.
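The polling rules above (persist intermediate states, surface the error message on failure) can be captured in one handler, sketched here:

```python
TERMINAL = {"COMPLETED", "FAILED"}

def handle_status(body, audit_log):
    """Record intermediate states; return True once the job is finished.

    Raises RuntimeError carrying the provider's error message on failure,
    so callers can surface it and adjust prompts or inputs before retrying.
    """
    status = body.get("status")
    if status not in TERMINAL:
        audit_log.append(status)  # e.g. queued / processing / generating / delivering
        return False
    if status == "FAILED":
        err = body.get("error") or {}
        raise RuntimeError(err.get("message", "video generation failed"))
    return True
```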

Response example

{
  "runId": "vid_m1abc123def456",
  "id": "vid_m1abc123def456",
  "status": "pending",
  "model": "veo2-video",
  "cost": 0.35,
  "paymentSource": "XNO",
  "remainingBalance": 12.5,
  "prechargeLabel": "string"
}

Content & Safety Notes

Some models may block prompts that violate content policies. Non-200 responses describe the violation reason; relay these messages verbatim to users or implement automated prompt adjustments.

Next Steps

  • Poll the Video Status endpoint after every submission to retrieve final assets.
  • Keep customer-facing pricing tables in sync with the API behavior you observe in production.

Authorizations

x-api-key
string
header
required

Body

application/json

Parameters for video generation across different models and providers

model
string
required

The video model to use for generation. See the docs for the current model list and required inputs.

prompt
string

Text prompt describing the video to generate

Example:

"A serene lake at sunset with gentle ripples on the water"

script
string

Fully-written script for LongStories models (takes precedence over prompt)

conversationUUID
string

UUID for conversation tracking

projectId
string

Project identifier for LongStories models

framework
enum<string>
default:default

Story framework for LongStories models

Available options:
default,
emotional_story,
product_showcase,
tutorial
shortRequestEnhancer
boolean
default:false

Smart Enhancement: if true, automatically choose better framework and add Director Notes if necessary

targetLengthInWords
integer
default:70

Target length in words for LongStories models (legacy parameter)

targetLengthInSeconds
integer

Target length in seconds (alternative to words)

directorNotes
string

Prompt for the image generation engine (LongStories). Example: 'Warm lighting' or 'Make the first image very impactful'

Example:

"Warm, cozy lighting with focus on people interacting"

aspectRatio
enum<string>
default:9:16

Video aspect ratio for LongStories

Available options:
9:16,
16:9
scriptConfig
object

Script generation configuration for LongStories

imageConfig
object

Image generation configuration for LongStories

videoConfig
object

Video generation configuration for LongStories

voiceoverConfig
object

Voiceover configuration for LongStories

captionsConfig
object

Captions configuration for LongStories

effectsConfig
object

Effects configuration for LongStories

musicConfig
object

Music configuration for LongStories

voice
string

Legacy: Voice ID for narration (use voiceoverConfig.voiceId instead)

Example:

"pNInz6obpgDQGcFmaJgB"

captionsShow
boolean
default:true

Legacy: Whether to show captions (use captionsConfig.captionsEnabled instead)

captionsStyle
enum<string>
default:default

Legacy: Style for captions (use captionsConfig.captionsStyle instead)

Available options:
default,
minimal,
neon,
cinematic,
fancy,
tiktok,
highlight,
gradient,
instagram,
vida,
manuscripts
effects
object

Legacy: Video effects configuration (use effectsConfig instead)

quality
enum<string>
default:medium

Legacy: Video quality (handled by videoConfig now)

Available options:
low,
medium,
high
motion
object

Legacy: Motion configuration (handled by videoConfig now)

music
string

Legacy: Music track (use musicConfig instead)

Example:

"video-creation/music/dramatic_cinematic_score.mp3"

duration
string

Video duration (format varies by model - '5s' for Veo2, '5' for Kling, etc.)

Example:

"5s"

aspect_ratio
enum<string>
default:16:9

Aspect ratio (supported by select models)

Available options:
16:9,
9:16,
1:1,
4:3,
3:4
negative_prompt
string

Negative prompt to avoid certain elements

Example:

"blur, distort, and low quality"

cfg_scale
number
default:0.5

Classifier-free guidance scale

Required range: 0 <= x <= 20
imageDataUrl
string

Base64 data URL of input image for image-to-video models. Aliases image_data_url and image are also accepted and normalized.

Example:

"data:image/jpeg;base64,/9j/4AAQ..."

imageUrl
string

Public HTTPS URL of the input image (interchangeable with imageDataUrl). The service will prioritize whichever field you supply before falling back to library attachments.

Example:

"https://images.unsplash.com/photo-1504196606672-aef5c9cefc92?w=1024"

imageAttachmentId
string

Library attachment ID for input image

videoUrl
string

Public HTTPS URL of the input video (extend/edit/upscale). Preferred field name for source videos.

videoDataUrl
string

Base64 data URL of the input video.

video
string

Alternate video field accepted by select providers.

videoAttachmentId
string

Library attachment ID for input video.

prompt_optimizer
boolean
default:true

Whether to optimize the prompt (MiniMax model)

num_inference_steps
integer
default:30

Number of inference steps

Required range: 1 <= x <= 50
pro_mode
boolean
default:false

Enable pro mode for Hunyuan Video

resolution
enum<string>
default:720p

Video resolution

Available options:
720p,
1080p,
540p
num_frames
integer
default:81

Number of frames to generate

frames_per_second
integer
default:16

Frames per second

Required range: 5 <= x <= 24
seed
integer

Random seed for reproducible results

enable_safety_checker
boolean
default:true

Enable safety content filtering

showExplicitContent
boolean
default:false

Allow explicit content (inverse of safety checker)

enable_prompt_expansion
boolean

Enable automatic prompt expansion

acceleration
boolean

Enable acceleration for faster processing

shift
number

Shift parameter for certain models

age_slider
integer
default:18

Age setting for PromptChan model

Required range: 18 <= x <= 60
audioEnabled
boolean
default:false

Enable audio for PromptChan model

video_quality
enum<string>
default:Standard

Video quality for PromptChan model

Available options:
Standard,
High
aspect
enum<string>
default:Portrait

Aspect setting for PromptChan model

Available options:
Portrait,
Landscape,
Square

Response

Video generation request submitted successfully (asynchronous processing)

runId
string
required

Unique identifier for the video generation request

status
enum<string>
default:pending
required

Current status of the generation

Available options:
pending,
processing,
completed,
failed
model
string
required

The model used for generation

projectId
string

Project identifier (for LongStories models)

cost
number

Cost of the video generation

paymentSource
string

Payment source used (USD or XNO)

remainingBalance
number

Remaining balance after the generation

prechargeLabel
string

Provider label for the precharge