By the end of this guide, you will have a working web application that uses the Cognigy Click To Call SDK to establish a SIP-based voice call from the browser. You will install the SDK, verify browser compatibility, configure a client, handle audio streams, and make your first call.

Prerequisites

  • Working knowledge of JavaScript or TypeScript. The SDK is framework-agnostic and works with React, Angular, Vue, or vanilla JS.
  • Node.js version 16 or later for build tooling and package management.
  • npm as your package manager.
  • A modern browser with Click To Call support — Chrome 93+, Firefox 92+, Safari 15.4+, or Edge 93+. For more information, refer to Supported Browsers.
  • A Cognigy.AI Flow with a configured Voice Gateway Endpoint. You will need the Endpoint URL.
  • An HTTPS (or localhost) context. Your firewall must allow WebSocket and UDP traffic.

Installation

  1. Install the SDK via npm:
    npm install @cognigy/click-to-call-sdk
    
  2. Verify the installation:
    npm list @cognigy/click-to-call-sdk
    
    You will see the installed version in the output, for example:
    └── @cognigy/click-to-call-sdk@0.0.5
    

Configuration

Before creating a client, verify that the user’s browser supports Click To Call. Then create a client instance by providing your Endpoint URL and an optional user identifier.
import { createWebRTCClient, checkWebRTCSupport } from '@cognigy/click-to-call-sdk';

// Step 1: Verify browser support
const support = checkWebRTCSupport();
if (!support.supported) {
  console.error('WebRTC not supported. Missing features:', support.missing);
  throw new Error('Browser does not support WebRTC');
}

// Step 2: Create the client
const client = await createWebRTCClient({
  endpointUrl: 'https://your-cognigy-environment.com/token',
  userId: 'user-123',
});
The endpointUrl must point to a valid Cognigy Voice Gateway Endpoint:
  1. In the Configuration section of the Voice Gateway Endpoint, copy the URL from the Endpoint URL field.
  2. Replace wss:// with https:// and remove /voiceGateway from the path. For example, if your Endpoint URL is:
    wss://endpoint-trial.cognigy.ai/d17711ad79bc73f9da03865066201d744386d78cf11651a3377ae35085902268/voiceGateway
    
    Then your endpointUrl should be:
    https://endpoint-trial.cognigy.ai/d17711ad79bc73f9da03865066201d744386d78cf11651a3377ae35085902268
    
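The two-step conversion above can be sketched as a small helper. The function name toConfigUrl is illustrative and not part of the SDK; it simply applies the protocol swap and path trim described in steps 1 and 2:

```typescript
// Convert a Voice Gateway Endpoint URL (wss://...) into the HTTPS
// configuration URL the SDK expects. Illustrative helper, not an SDK API.
function toConfigUrl(endpointUrl: string): string {
  const url = new URL(endpointUrl);
  if (url.protocol !== 'wss:') {
    throw new Error(`Expected a wss:// Endpoint URL, got: ${url.protocol}`);
  }
  // Replace wss:// with https://
  url.protocol = 'https:';
  // Remove the trailing /voiceGateway path segment.
  url.pathname = url.pathname.replace(/\/voiceGateway\/?$/, '');
  return url.toString();
}
```

Throwing on a non-wss:// input catches the common mistake of pasting the already-converted HTTPS URL back through the helper.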
The SDK fetches SIP credentials and server details from this URL automatically. For advanced configuration, such as specifying custom STUN/TURN servers, pass a pcConfig object:
const client = await createWebRTCClient({
  endpointUrl: 'https://your-cognigy-environment.com/token',
  userId: 'user-123',
  pcConfig: {
    iceServers: [
      { urls: 'stun:stun.l.google.com:19302' },
      { urls: 'turn:your-turn-server.com:3478', username: 'user', credential: 'password' }
    ]
  }
});

Initialize the Application

Register event listeners to respond to connection and call state changes. This step prepares the client before any calls are made.
// Connection events
client.on('connecting', () => console.log('Connecting to SIP server...'));
client.on('registered', () => console.log('SIP client registered'));
client.on('disconnected', () => console.log('Disconnected from SIP server'));

// Call events
client.on('answered', (session) => console.log('Call answered. Session ID:', session.id));
client.on('ended', (session, endInfo) => console.log('Call ended. Cause:', endInfo.cause));
client.on('failed', (session, endInfo) => console.error('Call failed. Cause:', endInfo.cause));

// Error handling
client.on('error', (error) => console.error('WebRTC SDK error:', error.message));
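If you prefer async/await flow over callbacks, you can wrap an event in a promise. This is a sketch assuming only the Node-style client.on(event, handler) interface shown above; waitForEvent is a hypothetical helper, not an SDK API:

```typescript
// Minimal emitter shape matching the client.on(...) calls above.
interface Emitter {
  on(event: string, handler: (...args: any[]) => void): void;
}

// Resolve when `event` fires, reject on an 'error' event or after a timeout.
// Illustrative helper; the SDK does not provide it.
function waitForEvent(client: Emitter, event: string, timeoutMs = 10_000): Promise<void> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`Timed out waiting for '${event}'`)),
      timeoutMs,
    );
    client.on(event, () => { clearTimeout(timer); resolve(); });
    client.on('error', (err) => { clearTimeout(timer); reject(err); });
  });
}

// Usage sketch: await waitForEvent(client, 'registered');
```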
Set up audio handling. The SDK plays remote audio automatically using an internal Audio element — no manual audio setup is required. If you need access to the raw audio stream for advanced processing such as visualization or recording, set captureAudio: true in the client configuration and listen for the captureAudio event:
const client = await createWebRTCClient({
  endpointUrl: 'https://your-cognigy-environment.com/click-to-call-config',
  userId: 'user-123',
  captureAudio: true,
});

client.on('captureAudio', (stream) => {
  // Optional: process the raw audio stream for visualization, recording, etc.
  console.log('Received audio stream for processing');
});

client.on('audioEnded', () => {
  console.log('Remote audio ended');
});
The SDK manages audio playback internally. You don’t need to add an <audio> element to your HTML.
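As an example of processing the captured stream, you could route it through the Web Audio API and drive a level meter. The AudioContext wiring below is browser-only and shown as comments; rmsLevel is an illustrative pure calculation on the sample buffer that AnalyserNode.getFloatTimeDomainData fills, not an SDK function:

```typescript
// Compute the RMS (root mean square) level of PCM samples in the [-1, 1]
// range, such as the buffer filled by AnalyserNode.getFloatTimeDomainData()
// when the captured stream is routed through an AudioContext.
// Returns a value in [0, 1]; illustrative helper, not an SDK API.
function rmsLevel(samples: Float32Array): number {
  let sumSquares = 0;
  for (const s of samples) sumSquares += s * s;
  return Math.sqrt(sumSquares / samples.length);
}

// Browser-side wiring (sketch), inside the 'captureAudio' handler:
// const ctx = new AudioContext();
// const analyser = ctx.createAnalyser();
// ctx.createMediaStreamSource(stream).connect(analyser);
// const buf = new Float32Array(analyser.fftSize);
// analyser.getFloatTimeDomainData(buf);
// console.log('level:', rmsLevel(buf));
```

Because the SDK plays the remote audio itself, the analyser branch is read-only: you do not connect it back to ctx.destination.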

Your First Feature

With the client configured and events registered, connect to the SIP server and start a voice call. The simplest way is the combined connectAndCall() method:
await client.connectAndCall();
Once the call is answered, you will hear the remote audio automatically (the SDK manages playback internally), and the answered event will fire in your console. To end the call and clean up:
await client.endCall();
await client.disconnect();
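In a UI you typically track the call lifecycle to enable or disable buttons, and to ensure the call starts from a user gesture so autoplay policies permit playback. One way is a small reducer mapping the SDK events above to a UI state; the state names and nextState function are illustrative, not part of the SDK:

```typescript
// Illustrative UI state machine driven by the SDK events shown earlier.
type CallState = 'idle' | 'connecting' | 'in-call';
type CallEvent = 'connecting' | 'answered' | 'ended' | 'failed' | 'disconnected';

// Map an SDK event to the next UI state; unknown transitions keep the state.
function nextState(state: CallState, event: CallEvent): CallState {
  switch (event) {
    case 'connecting':   return 'connecting';
    case 'answered':     return 'in-call';
    case 'ended':
    case 'failed':
    case 'disconnected': return 'idle';
    default:             return state;
  }
}

// Browser wiring (sketch): start the call from a click handler so the
// user gesture satisfies autoplay policies.
// let state: CallState = 'idle';
// const update = (e: CallEvent) => { state = nextState(state, e); button.disabled = state !== 'idle'; };
// (['connecting', 'answered', 'ended', 'failed', 'disconnected'] as CallEvent[])
//   .forEach((e) => client.on(e, () => update(e)));
// button.addEventListener('click', () => void client.connectAndCall());
```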

Run and Verify

  1. Start your application:
    npm run dev
    
  2. Open the application in a supported browser, for example, http://localhost:3000.
  3. Verify the following in your browser developer console:
    • Connecting to SIP server... — the SDK is establishing a connection.
    • SIP client registered — registration succeeded.
    • Call answered. Session ID: ... — the call is active and audio should be playing.
  4. To end the call, trigger client.endCall() from your UI or console.
If you don’t hear audio, check Troubleshooting — No Audio for common causes such as browser autoplay policies or missing TURN server configuration. Audio is managed automatically by the SDK, but the call must be initiated by a user gesture for autoplay policies to allow playback.

Complete Example

import { createWebRTCClient, checkWebRTCSupport } from '@cognigy/click-to-call-sdk';

async function main() {
  const support = checkWebRTCSupport();
  if (!support.supported) {
    console.error('WebRTC not supported:', support.missing);
    return;
  }

  const client = await createWebRTCClient({
    endpointUrl: 'https://your-cognigy-environment.com/click-to-call-config',
    userId: 'user-123',
    captureAudio: true,
  });

  client.on('captureAudio', (stream) => {
    console.log('Audio stream available for processing');
  });
  client.on('audioEnded', () => {
    console.log('Remote audio ended');
  });

  client.on('answered', (session) => console.log('Call answered:', session.id));
  client.on('ended', (session, endInfo) => console.log('Call ended:', endInfo.cause));
  client.on('error', (error) => console.error('Error:', error.message));

  await client.connectAndCall();

  window.addEventListener('beforeunload', () => {
    void client.destroy().catch(() => {});
  });
}

main();

Next Steps