The main client class for connecting to Odyssey’s audio-visual intelligence platform.
Constructor
constructor(config: ClientConfig)
Creates a new Odyssey client instance with the provided API key.
| Parameter | Type | Description |
| --- | --- | --- |
| config | ClientConfig | Configuration with API key |
import { Odyssey } from '@odysseyml/odyssey';
const client = new Odyssey({ apiKey: 'ody_your_api_key_here' });
Methods
connect()
Connect to a streaming session. The Odyssey API automatically assigns an available session.
async connect(handlers?: OdysseyEventHandlers): Promise<MediaStream>
| Parameter | Type | Description |
| --- | --- | --- |
| handlers | OdysseyEventHandlers | Optional event handlers for callback style |
Returns: Promise<MediaStream> - Resolves with the MediaStream when the connection is fully ready (including data channel). You can call startStream() immediately after this resolves.
The connect() method supports two usage patterns: Await Style for sequential code, and Callback Style for event-driven code. Both patterns wait for the data channel to be ready before proceeding.
Await Style
// Await style - use when you want sequential, Promise-based code
const mediaStream = await client.connect();
videoElement.srcObject = mediaStream;

// Connection is fully ready - no delay needed!
await client.startStream({ prompt: 'A cat' });
await client.interact({ prompt: 'Pet the cat' });
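Callback Style
The callback pattern passes an OdysseyEventHandlers object to connect() and reacts to connection events as they arrive. The handler names in the sketch below (onStatusChange, onError) are illustrative assumptions, not confirmed members of the interface; check the OdysseyEventHandlers type in your SDK version for the actual callback names.
// Callback style - sketch only; handler names are assumptions
const mediaStream = await client.connect({
  onStatusChange: (status) => console.log('Connection status:', status),
  onError: (err) => console.error('Connection error:', err)
});
videoElement.srcObject = mediaStream;
await client.startStream({ prompt: 'A cat' });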
When to use each style
| Style | Best for |
| --- | --- |
| Await | Sequential operations, simpler code flow, when you control the timing |
| Callback | UI-driven interactions, reactive patterns, when you need to respond to events |
Both styles properly wait for the data channel to be ready. You do not need to add any artificial delays between connect() and startStream().
disconnect()
Disconnect from the session and clean up resources.
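For example, a typical teardown ends the active stream before disconnecting. Whether disconnect() returns a Promise is not shown in this reference, so the call is not awaited in this sketch:
// End the active stream, then tear down the connection and release resources
await client.endStream();
client.disconnect();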
startStream()
Start an interactive stream session.
startStream(options?: StartStreamOptions): Promise<string>
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| prompt | string | '' | Initial prompt to generate video content |
| portrait | boolean | true | true for portrait (704x1280), false for landscape (1280x704). Resolution may vary by model. |
| image | File \| Blob | — | Optional image for image-to-video generation |
Returns: Promise<string> - Resolves with the stream ID when the stream is ready. Use this ID to retrieve recordings.
const streamId = await client.startStream({ prompt: 'A cat', portrait: true });
console.log('Stream started:', streamId);
Image-to-video requirements:
SDK version 1.0.0+
Max size: 25MB
Supported formats: JPEG, PNG, WebP, GIF, BMP, HEIC, HEIF, AVIF
Images are resized to 1280x704 (landscape) or 704x1280 (portrait)
// Image-to-video example
const mediaStream = await client.connect();
const imageFile = fileInput.files[0];
const streamId = await client.startStream({
  prompt: 'A cat',
  portrait: false,
  image: imageFile
});
interact()
Send an interaction prompt to update the video content.
interact(options: InteractOptions): Promise<string>
| Option | Type | Description |
| --- | --- | --- |
| prompt | string | The interaction prompt |
Returns: Promise<string> - Resolves with the acknowledged prompt when processed.
const ackPrompt = await client.interact({ prompt: 'Pet the cat' });
console.log('Interaction acknowledged:', ackPrompt);
endStream()
End the current interactive stream session.
endStream(): Promise<void>
Returns: Promise<void> - Resolves when the stream has ended.
await client.endStream();
attachToVideo()
Attach the media stream to a video element.
attachToVideo(videoElement: HTMLVideoElement | null): HTMLVideoElement | null
| Parameter | Type | Description |
| --- | --- | --- |
| videoElement | HTMLVideoElement \| null | The video element to attach the stream to |
Returns: The video element for chaining, or null if no element provided.
const videoEl = document.querySelector('video');
client.attachToVideo(videoEl);
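Because attachToVideo() accepts null and returns the element, it also has the shape of a framework ref callback. A minimal sketch, assuming a React/TSX component (React is not part of this SDK; the component below is purely illustrative):
// Hypothetical React usage: the ref callback receives the element on mount and null on unmount
import type { Odyssey } from '@odysseyml/odyssey';

function OdysseyVideo({ client }: { client: Odyssey }) {
  return <video ref={(el) => { client.attachToVideo(el); }} autoPlay playsInline />;
}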
getRecording()
Get recording URLs for a completed stream.
getRecording(streamId: string): Promise<Recording>
| Parameter | Type | Description |
| --- | --- | --- |
| streamId | string | The stream ID to retrieve the recording for |
Returns: Promise<Recording> - Recording data with presigned URLs.
const recording = await client.getRecording('abc-123-def');
console.log('Video URL:', recording.video_url);
console.log('Duration:', recording.duration_seconds, 'seconds');
listStreamRecordings()
List the user’s stream recordings. Only returns streams that have recordings.
listStreamRecordings(options?: ListStreamRecordingsOptions): Promise<StreamRecordingsListResponse>
| Parameter | Type | Description |
| --- | --- | --- |
| options | ListStreamRecordingsOptions | Optional pagination options |
Returns: Promise<StreamRecordingsListResponse> - Paginated list of stream recordings.
// Get recent recordings
const { recordings, total } = await client.listStreamRecordings({ limit: 20 });

// Paginate
const page2 = await client.listStreamRecordings({ limit: 20, offset: 20 });
Simulate API Methods
Simulate API methods were added in v1.0.0
The Simulate API allows you to run scripted interactions asynchronously. Unlike the Interactive API, simulations execute in the background and produce recordings you can retrieve when complete.
simulate()
Create a new simulation job.
simulate(options: SimulateOptions): Promise<SimulationJob>
| Parameter | Type | Description |
| --- | --- | --- |
| options | SimulateOptions | Simulation options with script |
Returns: Promise<SimulationJob> - The created simulation job with ID and initial status.
const job = await client.simulate({
  script: [
    { timestamp_ms: 0, start: { prompt: 'A cat sitting on a windowsill' } },
    { timestamp_ms: 3000, interact: { prompt: 'The cat stretches' } },
    { timestamp_ms: 6000, interact: { prompt: 'The cat yawns' } },
    { timestamp_ms: 9000, end: {} }
  ],
  portrait: true
});
console.log('Simulation started:', job.job_id);
getSimulateStatus()
Get the current status of a simulation job.
getSimulateStatus(simulationId: string): Promise<SimulationJobDetail>
| Parameter | Type | Description |
| --- | --- | --- |
| simulationId | string | The simulation ID to check |
Returns: Promise<SimulationJobDetail> - Detailed status including streams created.
const status = await client.getSimulateStatus(job.job_id);
console.log('Status:', status.status);
if (status.status === 'completed') {
  for (const stream of status.streams) {
    console.log('Stream:', stream.stream_id);
  }
}
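To wait for a simulation to finish and then collect its output, you can poll getSimulateStatus() and pass each resulting stream ID to getRecording(). The sketch below is a minimal example; the 'failed' status value and the 5-second polling interval are assumptions rather than documented behavior.
// Poll until the simulation finishes, then fetch a recording for each stream it produced
async function waitForSimulation(simulationId: string) {
  while (true) {
    const detail = await client.getSimulateStatus(simulationId);
    if (detail.status === 'completed') {
      return Promise.all(detail.streams.map((s) => client.getRecording(s.stream_id)));
    }
    if (detail.status === 'failed') {
      // 'failed' is an assumed terminal status - adjust to the statuses your SDK version reports
      throw new Error('Simulation failed');
    }
    await new Promise((resolve) => setTimeout(resolve, 5000)); // arbitrary 5s polling interval
  }
}

const recordings = await waitForSimulation(job.job_id);
console.log('Video URLs:', recordings.map((r) => r.video_url));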
listSimulations()
List simulation jobs for the authenticated user.
listSimulations(options?: ListSimulationsOptions): Promise<SimulationJobsList>
| Parameter | Type | Description |
| --- | --- | --- |
| options | ListSimulationsOptions | Optional pagination options |
Returns: Promise<SimulationJobsList> - Paginated list of simulation jobs.
const { jobs, total } = await client.listSimulations({ limit: 10 });
for (const sim of jobs) {
  console.log(`${sim.job_id}: ${sim.status}`);
}
cancelSimulation()
Cancel a pending or running simulation job.
cancelSimulation(simulationId: string): Promise<void>
| Parameter | Type | Description |
| --- | --- | --- |
| simulationId | string | The simulation ID to cancel |
Returns: Promise<void> - Resolves when cancelled.
await client.cancelSimulation(job.job_id);
console.log('Simulation cancelled');
Simulation methods can be called without an active connection. They only require a valid API key.
Properties
isConnected
get isConnected(): boolean
Whether the client is currently connected and ready.
currentStatus
get currentStatus(): ConnectionStatus
Current connection status.
Possible values: 'authenticating' | 'connecting' | 'reconnecting' | 'connected' | 'disconnected' | 'failed'
currentSessionId
get currentSessionId(): string | null
Current session ID, or null if not connected.
mediaStream
get mediaStream(): MediaStream | null
Current media stream containing the video track from the streamer.
connectionState
get connectionState(): RTCPeerConnectionState | null
Current WebRTC peer connection state.
Possible values: 'new' | 'connecting' | 'connected' | 'disconnected' | 'failed' | 'closed' | null
iceConnectionState
get iceConnectionState(): RTCIceConnectionState | null
Current ICE connection state.
Possible values: 'new' | 'checking' | 'connected' | 'completed' | 'failed' | 'disconnected' | 'closed' | null
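The properties above are all read-only getters. A small helper that logs a snapshot of the current client state:
// Log a snapshot of the client's connection state using the read-only getters
function logClientState() {
  console.log('Connected:', client.isConnected);
  console.log('Status:', client.currentStatus);
  console.log('Session ID:', client.currentSessionId);
  console.log('Has media stream:', client.mediaStream !== null);
  console.log('Peer connection state:', client.connectionState);
  console.log('ICE connection state:', client.iceConnectionState);
}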