Extend the duration of an existing video using supported extend models.
Supported extend models: wan-wavespeed-25-extend, wan-wavespeed-22-spicy-extend, veo3-1-extend.

The endpoint accepts the same media inputs as /api/generate-video (video or videoUrl) and returns the standard asynchronous response (runId, status, model, cost). The extension request payload has the same shape as /generate-video for the chosen model.
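A minimal sketch of submitting an extension request. The base URL, API key, and the "model" and "prompt" key names are assumptions for illustration (only videoUrl and the response fields runId, status, model, and cost are named by this doc); substitute your deployment's host and auth scheme.

```python
import json
import urllib.request

# Hypothetical base URL and API key — the real host and auth scheme are
# deployment-specific and are NOT taken from this document.
API_BASE = "https://api.example.com"
API_KEY = "sk-..."

def build_extend_request(model, video_url, prompt, **options):
    """Assemble an extension payload (same shape as /generate-video)."""
    # "videoUrl" appears in the doc; "model" and "prompt" as key names
    # are assumptions for illustration.
    payload = {"model": model, "videoUrl": video_url, "prompt": prompt}
    payload.update(options)  # e.g. duration, seed, aspect ratio
    return payload

payload = build_extend_request(
    "wan-wavespeed-25-extend",
    "https://example.com/clip.mp4",
    "Continue the scene as the camera pans across the water",
    duration="5",
)

req = urllib.request.Request(
    f"{API_BASE}/api/generate-video",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json",
             "Authorization": f"Bearer {API_KEY}"},
    method="POST",
)
# urllib.request.urlopen(req) would return the asynchronous response
# (runId, status, model, cost) once the request is accepted.
```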
The video model to use for generation. Options: longstories, longstories-kids, kling-video, kling-video-v2, veo2-video, minimax-video, hunyuan-video, hunyuan-video-image-to-video, wan-video-image-to-video, kling-v21-standard, kling-v21-pro, kling-v21-master, promptchan-video

Text prompt describing the video to generate. Example: "A serene lake at sunset with gentle ripples on the water"
Fully written script for LongStories models (takes precedence over the prompt)

UUID for conversation tracking

Project identifier for LongStories models

Story framework for LongStories models. Options: default, emotional_story, product_showcase, tutorial

Smart Enhancement: if true, automatically choose a better framework and add Director Notes if necessary

Target length in words for LongStories models (legacy parameter)

Target length in seconds (alternative to words)

Prompt for the image generation engine (LongStories). Examples: "Warm lighting", "Make the first image very impactful", "Warm, cozy lighting with focus on people interacting"

Video aspect ratio for LongStories. Options: 9:16, 16:9

Script generation configuration for LongStories

Image generation configuration for LongStories

Video generation configuration for LongStories

Voiceover configuration for LongStories

Captions configuration for LongStories

Effects configuration for LongStories

Music configuration for LongStories
Legacy: Voice ID for narration (use voiceoverConfig.voiceId instead). Example: "pNInz6obpgDQGcFmaJgB"

Legacy: Whether to show captions (use captionsConfig.captionsEnabled instead)

Legacy: Style for captions (use captionsConfig.captionsStyle instead). Options: default, minimal, neon, cinematic, fancy, tiktok, highlight, gradient, instagram, vida, manuscripts

Legacy: Video effects configuration (use effectsConfig instead)

Legacy: Video quality (now handled by videoConfig). Options: low, medium, high

Legacy: Motion configuration (now handled by videoConfig)

Legacy: Music track (use musicConfig instead). Example: "video-creation/music/dramatic_cinematic_score.mp3"
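A sketch of migrating the legacy flat parameters above into the config objects the doc points to (voiceoverConfig.voiceId, captionsConfig.captionsEnabled, captionsConfig.captionsStyle, musicConfig). The legacy key names and the musicConfig "track" key are assumptions, since this doc does not spell them out.

```python
# Legacy key names ("voiceId", "captionsEnabled", "captionsStyle",
# "musicTrack") and musicConfig's "track" key are assumptions; the target
# config-object names come from the doc.

def migrate_legacy_params(params: dict) -> dict:
    """Return a copy with legacy keys folded into their config objects."""
    out = {k: v for k, v in params.items()}
    if "voiceId" in out:
        out.setdefault("voiceoverConfig", {})["voiceId"] = out.pop("voiceId")
    if "captionsEnabled" in out:
        out.setdefault("captionsConfig", {})["captionsEnabled"] = out.pop("captionsEnabled")
    if "captionsStyle" in out:
        out.setdefault("captionsConfig", {})["captionsStyle"] = out.pop("captionsStyle")
    if "musicTrack" in out:
        out.setdefault("musicConfig", {})["track"] = out.pop("musicTrack")
    return out

migrated = migrate_legacy_params({
    "voiceId": "pNInz6obpgDQGcFmaJgB",
    "captionsEnabled": True,
    "captionsStyle": "tiktok",
})
```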
Video duration (format varies by model: "5s" for Veo2, "5" for Kling, etc.). Example: "5s"
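Because the duration format varies by model ("5s" for Veo2, "5" for Kling), a small helper can keep call sites uniform. The model-name matching below is an assumption for illustration, not a rule stated by this doc.

```python
# Assumption: Veo-family model IDs start with "veo" and take a unit
# suffix; Kling and others take a bare number, per the doc's examples.

def format_duration(model: str, seconds: int) -> str:
    """Format a duration in seconds for the given model's convention."""
    if model.startswith("veo"):
        return f"{seconds}s"   # e.g. Veo2 expects "5s"
    return str(seconds)        # e.g. Kling expects "5"
```

For example, `format_duration("veo2-video", 5)` yields "5s" while `format_duration("kling-video", 5)` yields "5".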
Aspect ratio for FAL models. Options: 16:9, 9:16, 1:1, 4:3, 3:4

Negative prompt to avoid certain elements. Example: "blur, distort, and low quality"

Classifier-free guidance scale. Range: 0 <= x <= 20

Base64 data URL of the input image for image-to-video models. The aliases image_data_url and image are also accepted and normalized. Example: "data:image/jpeg;base64,/9j/4AAQ..."

Public HTTPS URL of the input image (interchangeable with imageDataUrl). The service prioritizes whichever field you supply before falling back to library attachments. Example: "https://images.unsplash.com/photo-1504196606672-aef5c9cefc92?w=1024"

Library attachment ID for the input image
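Since image-to-video models accept either a base64 data URL (imageDataUrl, with aliases image_data_url and image) or a public HTTPS URL, a helper can pick the right field. The helper name and the "imageUrl" key are assumptions; only imageDataUrl is named by this doc.

```python
import base64

# Assumption: the public-URL field is called "imageUrl"; the doc names
# only imageDataUrl explicitly.

def image_field(source, mime="image/jpeg"):
    """Return the image input field for a request payload.

    `source` may be a public https:// URL or raw image bytes.
    """
    if isinstance(source, str) and source.startswith("https://"):
        return {"imageUrl": source}
    encoded = base64.b64encode(source).decode("ascii")
    return {"imageDataUrl": f"data:{mime};base64,{encoded}"}

image_field("https://images.unsplash.com/photo-1504196606672-aef5c9cefc92?w=1024")
image_field(b"\xff\xd8\xff\xe0")  # raw JPEG bytes -> base64 data URL
```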
Whether to optimize the prompt (MiniMax model)

Number of inference steps. Range: 1 <= x <= 50

Enable pro mode for Hunyuan Video

Video resolution. Options: 720p, 1080p, 540p

Number of frames to generate

Frames per second. Range: 5 <= x <= 24

Random seed for reproducible results

Enable safety content filtering

Allow explicit content (inverse of the safety checker)

Enable automatic prompt expansion

Enable acceleration for faster processing

Shift parameter for certain models

Age setting for the PromptChan model. Range: 18 <= x <= 60

Enable audio for the PromptChan model

Video quality for the PromptChan model. Options: Standard, High

Aspect setting for the PromptChan model. Options: Portrait, Landscape, Square

Video extension request submitted successfully (asynchronous processing).
Unique identifier for the video generation request (runId)

Current status of the generation (status). Options: pending, processing, completed, failed

The model used for generation (model)

Project identifier (for LongStories models)

Cost of the video generation (cost)

Payment source used (USD or XNO)

Remaining balance after the generation

Provider label for the precharge
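Because processing is asynchronous, the submit call returns a runId whose status moves through pending/processing to completed or failed. A polling sketch follows; the fetch_status callable (and whatever status endpoint sits behind it) is an assumption, not an API described by this doc.

```python
import time

# Assumption: fetch_status(run_id) performs your own HTTP call to a
# status endpoint and returns a dict with at least a "status" key.

def wait_for_completion(run_id, fetch_status, interval=5.0, timeout=600.0):
    """Poll until the run reaches a terminal status or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = fetch_status(run_id)
        if result["status"] in ("completed", "failed"):
            return result
        time.sleep(interval)  # still pending/processing — wait and retry
    raise TimeoutError(f"run {run_id} still pending after {timeout}s")
```

Polling with a modest interval avoids hammering the status endpoint while the generation is still in the pending or processing state.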