The main client class for connecting to Odyssey’s audio-visual intelligence platform.

Constructor

constructor(config: ClientConfig)
Creates a new Odyssey client instance with the provided API key.
Parameters:
config (ClientConfig): Configuration with API key
import { Odyssey } from '@odysseyml/odyssey';

const client = new Odyssey({ apiKey: 'ody_your_api_key_here' });

Methods

connect()

Connect to a streaming session. The Odyssey API automatically assigns an available session.
async connect(handlers?: OdysseyEventHandlers): Promise<MediaStream>
Parameters:
handlers (OdysseyEventHandlers): Optional event handlers for callback style
Returns: Promise<MediaStream> - Resolves with the MediaStream when the connection is fully ready (including data channel). You can call startStream() immediately after this resolves.
The connect() method supports two usage patterns: Await Style for sequential code, and Callback Style for event-driven code. Both patterns wait for the data channel to be ready before proceeding.
// Await style - use when you want sequential, Promise-based code
const videoElement = document.querySelector('video');
const mediaStream = await client.connect();
videoElement.srcObject = mediaStream;

// Connection is fully ready - no delay needed!
await client.startStream({ prompt: 'A cat' });
await client.interact({ prompt: 'Pet the cat' });
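For the callback pattern, pass an OdysseyEventHandlers object to connect(). A minimal sketch follows; the handler names and signatures (onConnected, onError) are illustrative assumptions, so check the OdysseyEventHandlers type for the callbacks the SDK actually exposes.
// Callback style - a hedged sketch: onConnected/onError are assumed handler
// names; see the OdysseyEventHandlers type for the real ones.
const videoElement = document.querySelector('video');
await client.connect({
  onConnected: (mediaStream) => {
    // Assumes the handler receives the MediaStream once the connection is ready
    videoElement.srcObject = mediaStream;
  },
  onError: (error) => {
    console.error('Connection error:', error);
  },
});

// connect() still resolves once the data channel is ready
await client.startStream({ prompt: 'A cat' });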

When to use each style

Await: best for sequential operations, simpler code flow, and when you control the timing
Callback: best for UI-driven interactions, reactive patterns, and when you need to respond to events
Both styles properly wait for the data channel to be ready. You do not need to add any artificial delays between connect() and startStream().

disconnect()

Disconnect from the session and clean up resources.
disconnect(): void
client.disconnect();

startStream()

Start an interactive stream session.
startStream(options?: StartStreamOptions): Promise<string>
Options:
prompt (string, default ''): Initial prompt to generate video content
portrait (boolean, default true): true for portrait (704x1280), false for landscape (1280x704). Resolution may vary by model.
image (File | Blob, optional): Optional image for image-to-video generation
Returns: Promise<string> - Resolves with the stream ID when the stream is ready. Use this ID to retrieve recordings.
const streamId = await client.startStream({ prompt: 'A cat', portrait: true });
console.log('Stream started:', streamId);
Image-to-video requirements:
  • SDK version 1.0.0+
  • Max size: 25MB
  • Supported formats: JPEG, PNG, WebP, GIF, BMP, HEIC, HEIF, AVIF
  • Images are resized to 1280x704 (landscape) or 704x1280 (portrait)
// Image-to-video example
const mediaStream = await client.connect();

// Grab the selected image from a file input on the page
const fileInput = document.querySelector('input[type="file"]');
const imageFile = fileInput.files[0];

const streamId = await client.startStream({
  prompt: 'A cat',
  portrait: false,
  image: imageFile
});

interact()

Send an interaction prompt to update the video content.
interact(options: InteractOptions): Promise<string>
Options:
prompt (string): The interaction prompt
Returns: Promise<string> - Resolves with the acknowledged prompt when processed.
const ackPrompt = await client.interact({ prompt: 'Pet the cat' });
console.log('Interaction acknowledged:', ackPrompt);

endStream()

End the current interactive stream session.
endStream(): Promise<void>
Returns: Promise<void> - Resolves when the stream has ended.
await client.endStream();

attachToVideo()

Attach the media stream to a video element.
attachToVideo(videoElement: HTMLVideoElement | null): HTMLVideoElement | null
Parameters:
videoElement (HTMLVideoElement | null): The video element to attach the stream to
Returns: The video element for chaining, or null if no element provided.
const videoEl = document.querySelector('video');
client.attachToVideo(videoEl);

getRecording()

Added in v1.0.0
Get recording URLs for a completed stream.
getRecording(streamId: string): Promise<Recording>
Parameters:
streamId (string): The stream ID to get the recording for
Returns: Promise<Recording> - Recording data with presigned URLs.
const recording = await client.getRecording('abc-123-def');
console.log('Video URL:', recording.video_url);
console.log('Duration:', recording.duration_seconds, 'seconds');

listStreamRecordings()

Added in v1.0.0
List the user’s stream recordings. Only returns streams that have recordings.
listStreamRecordings(options?: ListStreamRecordingsOptions): Promise<StreamRecordingsListResponse>
Parameters:
options (ListStreamRecordingsOptions): Optional pagination options
Returns: Promise<StreamRecordingsListResponse> - Paginated list of stream recordings.
// Get recent recordings
const { recordings, total } = await client.listStreamRecordings({ limit: 20 });

// Paginate
const page2 = await client.listStreamRecordings({ limit: 20, offset: 20 });

Simulate API Methods

Simulate API methods were added in v1.0.0
The Simulate API allows you to run scripted interactions asynchronously. Unlike the Interactive API, simulations execute in the background and produce recordings you can retrieve when complete.

simulate()

Create a new simulation job.
simulate(options: SimulateOptions): Promise<SimulationJob>
Parameters:
options (SimulateOptions): Simulation options with script
Returns: Promise<SimulationJob> - The created simulation job with ID and initial status.
const job = await client.simulate({
  script: [
    { timestamp_ms: 0, start: { prompt: 'A cat sitting on a windowsill' } },
    { timestamp_ms: 3000, interact: { prompt: 'The cat stretches' } },
    { timestamp_ms: 6000, interact: { prompt: 'The cat yawns' } },
    { timestamp_ms: 9000, end: {} }
  ],
  portrait: true
});
console.log('Simulation started:', job.job_id);

getSimulateStatus()

Get the current status of a simulation job.
getSimulateStatus(simulationId: string): Promise<SimulationJobDetail>
Parameters:
simulationId (string): The simulation ID to check
Returns: Promise<SimulationJobDetail> - Detailed status including streams created.
const status = await client.getSimulateStatus(job.job_id);
console.log('Status:', status.status);
if (status.status === 'completed') {
  for (const stream of status.streams) {
    console.log('Stream:', stream.stream_id);
  }
}
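Because a simulation runs in the background, a common pattern is to poll getSimulateStatus() until it reports 'completed' and then fetch the recording for each stream it produced. The sketch below uses only the documented methods; the 5-second interval and attempt cap are arbitrary choices, not SDK requirements.
// Poll until the simulation completes, then fetch each stream's recording
async function waitForSimulation(simulationId) {
  for (let attempt = 0; attempt < 60; attempt++) {
    const detail = await client.getSimulateStatus(simulationId);
    if (detail.status === 'completed') {
      return Promise.all(
        detail.streams.map((s) => client.getRecording(s.stream_id))
      );
    }
    // Wait 5 seconds between checks
    await new Promise((resolve) => setTimeout(resolve, 5000));
  }
  throw new Error('Simulation did not complete in time');
}

const recordings = await waitForSimulation(job.job_id);
console.log('First video URL:', recordings[0]?.video_url);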

listSimulations()

List simulation jobs for the authenticated user.
listSimulations(options?: ListSimulationsOptions): Promise<SimulationJobsList>
Parameters:
options (ListSimulationsOptions): Optional pagination options
Returns: Promise<SimulationJobsList> - Paginated list of simulation jobs.
const { jobs, total } = await client.listSimulations({ limit: 10 });
for (const sim of jobs) {
  console.log(`${sim.job_id}: ${sim.status}`);
}

cancelSimulation()

Cancel a pending or running simulation job.
cancelSimulation(simulationId: string): Promise<void>
Parameters:
simulationId (string): The simulation ID to cancel
Returns: Promise<void> - Resolves when cancelled.
await client.cancelSimulation(job.job_id);
console.log('Simulation cancelled');
Simulation methods can be called without an active connection. They only require a valid API key.
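For example, you can create a client and kick off a simulation without ever calling connect():
// No connect() needed - simulations only require a valid API key
const batchClient = new Odyssey({ apiKey: 'ody_your_api_key_here' });
const job = await batchClient.simulate({
  script: [
    { timestamp_ms: 0, start: { prompt: 'A cat sitting on a windowsill' } },
    { timestamp_ms: 5000, end: {} }
  ]
});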

Properties

isConnected

get isConnected(): boolean
Whether the client is currently connected and ready.

currentStatus

get currentStatus(): ConnectionStatus
Current connection status. Possible values: 'authenticating' | 'connecting' | 'reconnecting' | 'connected' | 'disconnected' | 'failed'
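For example, these properties can be used to guard interactions until the client reports a ready connection:
// Only send prompts while the client is connected and ready
if (client.isConnected) {
  await client.interact({ prompt: 'Pet the cat' });
} else {
  console.log('Not ready yet, status:', client.currentStatus);
}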

currentSessionId

get currentSessionId(): string | null
Current session ID, or null if not connected.

mediaStream

get mediaStream(): MediaStream | null
Current media stream containing video track from the streamer.

connectionState

get connectionState(): RTCPeerConnectionState | null
Current WebRTC peer connection state. Possible values: 'new' | 'connecting' | 'connected' | 'disconnected' | 'failed' | 'closed' | null

iceConnectionState

get iceConnectionState(): RTCIceConnectionState | null
Current ICE connection state. Possible values: 'new' | 'checking' | 'connected' | 'completed' | 'failed' | 'disconnected' | 'closed' | null
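These getters are handy when debugging connectivity, for example by logging a snapshot of the connection state:
// Log a snapshot of the connection for debugging
console.log({
  sessionId: client.currentSessionId,
  status: client.currentStatus,
  peerConnection: client.connectionState,
  ice: client.iceConnectionState,
});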