Webinar API

Overview

The Webinar API provides comprehensive functionality for creating and managing webinar sessions. It covers two main interaction methods:

  • REST APIs: For managing data and long-running operations (create sessions, list recordings, request transcripts)
  • Realtime APIs (WebSocket): For live events during webinars (viewer joins, chat messages, live captions)

Base URL: https://prod.altegon.com

Product Packages

Essential: Everything you need to run standard webinars and events - create/join webinars, audio/video streaming, text chat, co-host & viewer controls, screen sharing, basic recording

Professional: Builds on Essential with AI features - noise suppression, enhanced chat, real-time captions, transcripts, summaries, interactive whiteboard, virtual backgrounds, speaker tagging, advanced analytics

Key Concepts

  • Webinar: A session container in which the host and co-hosts present and viewers watch
  • Viewer: A user connected to the webinar who watches the host and co-host discussion
  • Track: An audio or video stream published by a host or co-host
  • SDK/Client: The JavaScript helper library that simplifies calls to the REST and real-time APIs

Quick Rules: Use REST for one-off or administrative tasks (create a session, fetch recordings). Use WebSocket for real-time UI updates (participant presence, chat, live captions).
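
The split above can be sketched with plain fetch and WebSocket calls. The REST endpoint and Bearer header follow this doc (see REST API Endpoints and Security Features); the socket URL and message shape are illustrative assumptions, not documented endpoints:

```javascript
const BASE_URL = "https://prod.altegon.com";

// REST: one-off administrative task, e.g. create a meeting (endpoint from this doc).
async function createMeeting(token, title, hostId) {
  const res = await fetch(`${BASE_URL}/meetings/create`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify({ title, hostId, transcriptEnabled: false }),
  });
  return res.json(); // shape: { meetingId: "..." }
}

// WebSocket: continuous live events (presence, chat, captions).
// The wss URL and payload format here are assumptions for illustration.
function subscribeToLiveEvents(roomId, onEvent) {
  const ws = new WebSocket(`wss://prod.altegon.com/?roomId=${roomId}`);
  ws.onmessage = (msg) => onEvent(JSON.parse(msg.data));
  return ws;
}
```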

Platform Features

The Webinar platform includes comprehensive functionality:

Core Features: Component initialization, room parameter retrieval, socket connection setup, room creation and joining, device and transport initialization

Media Features: Media production for local audio/video streams, remote media consumption, screen sharing with conflict prevention, individual and host-initiated recording

Interactive Features: Real-time chat with text and file messaging, co-host management with request/approval system, active speaker detection using hark library, live transcription with real-time captions

Advanced Features: Collaborative screen annotation with drawing tools, virtual background & effects using MediaPipe, dynamic video layout management with opentok-layout-js, proper cleanup and exit procedures

REST API Endpoints

Authentication & User Management

Register User - POST /auth/register

Creates a new user account.

Field Type Required Description
email String Yes A valid email address that is not already registered
fName String Yes User's first name
lName String Yes User's last name
phoneNumber String Yes User's contact number
country String Yes User's country of residence
password String Yes Password used for login

Example Request:

{
    "email": "haseeb@altegon.com",
    "fName": "Haseeb",
    "lName": "Asif",
    "phoneNumber": "923085000453",
    "country": "Pakistan",
    "password": "AB1239001"
}

Response (201, 409, 401):

{
    "success": true,
    "message": "User created successfully",
    "data": {
        "user": {
            "fName": "Haseeb",
            "lName": "Asif",
            "email": "haseeb@altegon.com",
            "_id": "68afea5f8ed95fae8f7f6d86"
        }
    }
}

User Login - POST /auth/login

Authenticates a user and returns an access token.

Field Type Required Description
userName String Yes The user's registered email address
password String Yes Password set at registration

Example Request:

{
    "userName": "haseeb@altegon.com",
    "password": "AB1239001"
}

Response (200, 401):

{
    "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..."
}
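
A minimal login sketch using the field names from the table above; the Bearer-header convention for later calls comes from the Security Features section of this doc:

```javascript
const BASE_URL = "https://prod.altegon.com";

async function login(userName, password) {
  const res = await fetch(`${BASE_URL}/auth/login`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ userName, password }),
  });
  if (!res.ok) throw new Error(`Login failed: ${res.status}`); // 401 on bad credentials
  const { token } = await res.json();
  return token;
}

// Reusable headers for subsequent authenticated calls:
function authHeaders(token) {
  return { "Content-Type": "application/json", Authorization: `Bearer ${token}` };
}
```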

Room Management

Create Room - POST /room/create

Creates a new virtual room for hosting meetings or conferences.

Field Type Required Description
name String Yes Room name
type String Yes Room type (e.g., "meeting", "conference")
p2p Boolean No Whether the room uses peer-to-peer media (see the example below)
maxParticipants Number No Maximum number of participants
isPrivate Boolean No Whether the room is private

Example Request:

{
  "name": "Haseeb Asif",
  "p2p": true,
  "type": "conference"
}

Response

{
  "success": true,
  "data": {
    "id": "68aff60d0d7bbb02c917906d",
    "name": "Haseeb Asif",
    "type": "conference",
    "p2p": true
  },
  "msg": "Room created Successfully!"
}

Meeting Management

Create Meeting - POST /meetings/create

Creates a new meeting with a title, host, and transcript settings.

Field Type Required Description
title String Yes Meeting title
hostId String Yes User ID
transcriptEnabled Boolean Yes Transcript enabled or not

Example Request:

{ 
    "title": "My Meeting", 
    "hostId": "67da6b256a7ab89c58666496", 
    "transcriptEnabled": false 
} 

Response (201, 500):

{ 
"meetingId": "68aff60da51d6c83a24245a2" 
} 

Update Meeting - POST /meetings/update/:meetingId

Updates meeting details such as title, transcript, summary generation, and recording information.

Field Type Required Description
userId String Yes User's unique ID
roomId String No Room ID of meeting room
recordingId String No Recording ID
title String No Title of meeting
summaryGenerated Boolean No Summary generation enabled
transcriptEnabled Boolean No Transcription enabled

Example Request:

{ 
    "title": "Test Meeting", 
    "transcriptEnabled": false, 
    "summaryGenerated": false, 
    "roomId": "68aff60da51d6c83a24245a2", 
    "recordingId": "1737978989992_59591" 
}

Response (204, 404, 500)

Get Meeting - GET /meetings/list/:userId

Retrieves the list of meetings created by a specific user.

Field Type Required Description
userId String Yes User's unique ID

Response (200, 500):

{
    "data" : {
        "createdAt" : "2025-10-14T13:08:09.559Z",
        "hostId" : "68de7a9d8f40ff41c940c7fb",
        "recordingIds" : [],
        "summaryGenerated" : false,
        "title" : "dsa",
        "transcriptEnabled" : false,
        "_id": "68ee4b39b22d6134386fc5dc"
    }
}

Recording Controls

Start Recording - POST /recording/start

Initiates recording for a specific room and stream with defined recording details.

Field Name Type Mandatory Description
roomId String Yes Room ID where the recording takes place.
userName String Yes Name of the user who requested the recording.
streamId String Yes Stream ID being recorded.
recordingId String Yes Unique recording ID.
startTime Date Yes Recording start time (epoch milliseconds).
videoType String Yes Video type.
status String Yes Recording status (e.g., "pending").

Example Request:

{ 
    "roomId": "68affe24fabc0702c5d30c29", 
    "userName": "Haseeb Asif", 
    "streamId": "755961800257348400", 
    "recordingId": "1756364334628_738386", 
    "startTime": 1756364334634, 
    "videoType": "video", 
    "status": "pending" 
}

Response (201, 500)

Stop Recording - POST /recording/stop

Stops the ongoing recording and saves the recording end time.

Field Name Type Mandatory Description
recordingId String Yes Recording ID to be stopped.
streamId String Yes Stream ID.
endTime Date Yes Recording end time (epoch milliseconds).

Example Request:

{ 
    "recordingId": "1756364334628_738386", 
    "streamId": "755961800257348400", 
    "endTime": 1756364334634 
}

Response (204, 404, 500)
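
The recordingId values in the examples look like <epoch-ms>_<random>; the helper below mirrors that convention (an assumption, not a documented rule), and the two wrappers sketch the start/stop calls:

```javascript
const BASE_URL = "https://prod.altegon.com";

// Mirrors the "1756364334628_738386"-style IDs seen in the examples (assumed format).
function makeRecordingId(now = Date.now()) {
  return `${now}_${Math.floor(Math.random() * 1_000_000)}`;
}

async function startRecording(headers, { roomId, userName, streamId }) {
  const recordingId = makeRecordingId();
  await fetch(`${BASE_URL}/recording/start`, {
    method: "POST",
    headers,
    body: JSON.stringify({
      roomId,
      userName,
      streamId,
      recordingId,
      startTime: Date.now(),
      videoType: "video",
      status: "pending",
    }),
  });
  return recordingId; // keep it: /recording/stop needs the same ID
}

async function stopRecording(headers, recordingId, streamId) {
  await fetch(`${BASE_URL}/recording/stop`, {
    method: "POST",
    headers,
    body: JSON.stringify({ recordingId, streamId, endTime: Date.now() }),
  });
}
```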

Get Recordings - GET /recordings/list

Retrieves a list of all recorded sessions with their details.

Example Request:

GET /recordings/list

Response (200, 500):

    [
        {
            "_id": "670bc2ab12f5b0e6f7a4e9d4",
            "roomId": "68affe24fabc0702c5d30c29",
            "recordingId": "1756364334628_738386",
            "hostName": "Haseeb Asif",
            "status": true,
            "createdAt": "2025-10-13T10:15:30.000Z",
            "updatedAt": "2025-10-13T10:20:00.000Z"
        }
    ]

File Upload

Upload File - POST /room/:roomId/upload

Uploads a file to a specific meeting room.

Field Name Type Mandatory Description
roomId String Yes Room ID for file upload.

Example Request:

File Data

Response (200, 400):

{ 
    "success": true, 
    "data": { 
        "originalName": "Document_72835505.docx", 
        "filename": "68b01f530d7bbb02c917906f_Document_72835505.docx", 
        "size": 27685, 
        "roomId": "68b01f530d7bbb02c917906f", 
        "path": "documents/68b01f530d7bbb02c917906f/68b01f530d7bbb02c917906f_Document_72835505.docx" 
    }, 
    "msg": "File uploaded successfully!" 
}
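
A browser-side upload sketch; the multipart field name "file" is an assumption, since the doc only says "File Data":

```javascript
async function uploadFile(token, roomId, file) {
  const form = new FormData();
  form.append("file", file); // field name "file" is assumed

  const res = await fetch(`https://prod.altegon.com/room/${roomId}/upload`, {
    method: "POST",
    // Do not set Content-Type manually; fetch adds the multipart boundary itself.
    headers: { Authorization: `Bearer ${token}` },
    body: form,
  });
  return res.json(); // { success, data: { originalName, filename, size, ... }, msg }
}
```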

Get File - GET /room/:roomId/files/:filename

Retrieves the details of an uploaded file in a specific meeting room.

Field Name Type Mandatory Description
roomId String Yes Room ID of the room containing the file.
filename String Yes Name of the uploaded file.

Response (200, 404, 500):

{ 
    "success": true, 
    "data": { 
        "filename": "68b01f530d7bbb02c917906f_ZekliOpenAPI-for Customers - Instant.docx", 
        "originalName": "ZekliOpenAPI-for Customers - Instant.docx", 
        "size": 27685, 
        "uploadDate": "2025-08-28T09:32:48.745Z", 
        "path": "documents/68b01f530d7bbb02c917906f/68b01f530d7bbb02c917906f_ZekliOpenAPI for Customers - Instant.docx" 
    }, 
    "msg": "File retrieved successfully!" 
}

Delete Files - DELETE /room/:roomId/files

Deletes all files uploaded in a specific meeting room.

Field Name Type Mandatory Description
roomId String Yes Room ID whose files should be deleted.

Response (200, 400, 404, 500):

{ 
    "success": true, 
    "msg": "All files for room 68b01f530d7bbb02c917906f deleted successfully" 
}

Transcription

Get Speech Token - GET /api/get-speech-token

Generates a speech token for transcription using the Azure AI Foundry model.

Response:

{ 
    "token": "eyJhbGciOiJFUzI1NiIsImtpZCI6ImtleTEiLCJ0eXAiOiJKV1QifQ...", 
    "region": "eastus2" 
}
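
The token is short-lived, so clients typically fetch it right before starting recognition and hand it to the Speech SDK (SpeechConfig.fromAuthorizationToken(token, region) in microsoft-cognitiveservices-speech-sdk):

```javascript
async function getSpeechToken() {
  const res = await fetch("https://prod.altegon.com/api/get-speech-token");
  if (!res.ok) throw new Error(`Speech token request failed: ${res.status}`);
  // Response shape from the example above: { token, region }
  return res.json();
}
```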

Send Transcript - POST /send-transcript

Stores a transcript message for a specific meeting.

Field Type Required Description
meetingId String Yes ID of the meeting
userName String Yes Name of the user sending the message
message String Yes Transcript message

Example Request:

{
    "meetingId": "68cd413466eab302c9f078cd",
    "userName": "Haseeb Stats",
    "message": "Hello."
}

Response (201, 500)

View Transcript - GET /view-transcript/:meetingId

Retrieves the entire transcript history for a specific meeting.

Parameter Type Required Description
meetingId String Yes ID of the meeting

Response:

{
    "data": [
        {
            "_id": "670b88775cba2f7de6f739d0",
            "meetingId": "68cd413466eab302c9f078cd",
            "userName": "Haseeb Stats",
            "message": "Hello.",
            "createdAt": "2025-10-13T12:20:30.000Z"
        }
    ]
}

Generate Transcript Summary - POST /transcription/generate-summary/:meetingId

Generates a summary of the transcript for a specific meeting.

Parameter Type Required Description
meetingId String Yes ID of the meeting

Response (200, 404, 500):

{
    "msg": "Summary Generated Successfully."
}

Get Transcript Summary - GET /meeting/:meetingId/summary

Retrieves the generated summary of a meeting.

Parameter Type Required Description
meetingId String Yes ID of the meeting

Response (200, 404, 500):

{
    "summary": "The participants discussed the upcoming product release, assigned tasks, and confirmed the timeline for testing."
}

Dashboard Analytics

Get Total Sessions - GET /reporting/total-sessions-for-user

Returns the total number of completed sessions based on filters.

Query Parameter Type Required Description
domain String No Domain name to filter sessions
sessionType String No Type of session
countryCode String No Country code
countryName String No Country name
cityName String No City name
startDateRange Date No Start date for filtering sessions
endDateRange Date No End date for filtering sessions
userID String No User ID of the host

Example Request:

GET /reporting/total-sessions-for-user?startDateRange=2025-10-01T00:00:00.000Z&endDateRange=2025-10-31T23:59:59.999Z&sessionType=&userID=68de7a9d8f40ff41c940c7fb

Response:

{
  "success": true,
  "data": 35
}
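
All of the reporting endpoints in this section share the same optional filters, so a small helper that builds the query string (dropping unset values) keeps call sites tidy; the helper name is ours:

```javascript
function buildReportingQuery(filters) {
  const params = new URLSearchParams();
  for (const [key, value] of Object.entries(filters)) {
    if (value !== undefined && value !== null) params.set(key, String(value));
  }
  return params.toString(); // URL-encodes values, e.g. ":" becomes "%3A"
}

const qs = buildReportingQuery({
  startDateRange: "2025-10-01T00:00:00.000Z",
  endDateRange: "2025-10-31T23:59:59.999Z",
  sessionType: "",
  userID: "68de7a9d8f40ff41c940c7fb",
});
// fetch(`https://prod.altegon.com/reporting/total-sessions-for-user?${qs}`)
```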

Get Total Sessions Aggregation - GET /reporting/total-sessions-aggregation-for-user

Returns total session counts grouped by domain and date.

Query Parameter Type Required Description
domain String No Domain name to filter sessions
sessionType String No Type of session
countryCode String No Country code
countryName String No Country name
cityName String No City name
startDateRange Date No Start date for filtering sessions
endDateRange Date No End date for filtering sessions
userID String No User ID of the host

Example Request:

GET /reporting/total-sessions-aggregation-for-user?sessionType=&userID=68de7a9d8f40ff41c940c7fb&startDateRange=2025-10-12T00%3A00%3A00.000Z&endDateRange=2025-10-18T23%3A59%3A59.999Z

Response:

{
  "success": true,
  "data": [
    {
      "domainCounts": [
        {
          "domain": "altegon.com",
          "count": 5
        }
      ],
      "date": "2025-10-12T00:00:00.000Z"
    }
  ]
}

Get Total Sessions Statistics - GET /reporting/total-sessions-statistics-for-user

Provides total number of sessions and sessions grouped by session type.

Query Parameter Type Required Description
domain String No Domain name to filter sessions
sessionType String No Type of session
countryCode String No Country code
countryName String No Country name
cityName String No City name
startDateRange Date No Start date for filtering sessions
endDateRange Date No End date for filtering sessions
userID String No User ID of the host

Example Request:

GET /reporting/total-sessions-statistics-for-user?startDateRange=2025-10-01T00:00:00.000Z&endDateRange=2025-10-31T23:59:59.999Z&sessionType=&userID=68de7a9d8f40ff41c940c7fb

Response:

{
  "success": true,
  "data": {
    "totalSessions": 35,
    "totalSessionsByType": [
      {
        "sessionType": "live",
        "totalSessions": 15
      },
      {
        "sessionType": "webinar",
        "totalSessions": 20
      },
      {
        "sessionType": "conference",
        "totalSessions": 15
      }
    ]
  }
}

Get Total Sessions By Location - GET /reporting/total-sessions-by-location-for-user

Returns total number of sessions grouped by city, region, and country.

Query Parameter Type Required Description
domain String No Domain name to filter sessions
sessionType String No Type of session
countryCode String No Country code
countryName String No Country name
cityName String No City name
startDateRange Date No Start date for filtering sessions
endDateRange Date No End date for filtering sessions
userID String No User ID of the host

Example Request:

GET /reporting/total-sessions-by-location-for-user?startDateRange=2025-10-01T00:00:00.000Z&endDateRange=2025-10-31T23:59:59.999Z&sessionType=&userID=68de7a9d8f40ff41c940c7fb

Response:

{
  "success": true,
  "data": [
    {
      "_id": {
        "city": "Lahore",
        "region": "Punjab",
        "country": "PK",
        "country_name": "Pakistan"
      },
      "count": 12
    },
    {
      "_id": {
        "city": "Karachi",
        "region": "Sindh",
        "country": "PK",
        "country_name": "Pakistan"
      },
      "count": 8
    }
  ]
}

Get Total Sessions and Participants Count By Location - GET /reporting/total-sessions-and-participants-count-by-location-for-user

Returns total sessions and total participants count grouped by city.

Query Parameter Type Required Description
domain String No Domain name to filter sessions
sessionType String No Type of session
countryCode String No Country code
countryName String No Country name
cityName String No City name
startDateRange Date No Start date for filtering sessions
endDateRange Date No End date for filtering sessions
userID String No User ID of the host

Example Request:

GET /reporting/total-sessions-and-participants-count-by-location-for-user?startDateRange=2025-10-01T00:00:00.000Z&endDateRange=2025-10-31T23:59:59.999Z&sessionType=&userID=68de7a9d8f40ff41c940c7fb

Response:

{
  "success": true,
  "data": [
    {
      "_id": "Lahore",
      "totalSessions": 10,
      "totalParticipants": 125
    },
    {
      "_id": "Karachi",
      "totalSessions": 6,
      "totalParticipants": 89
    }
  ]
}

Azure Blob Storage

Get User Storage Info - GET /recordings/user/{userId}/storage-info

Retrieves storage usage, limit, available space, and recording details for a specific user.

Path Parameter Type Required Description
userId String Yes User ID to fetch storage info for

Example Request:

GET /recordings/user/68de7a9d8f40ff41c940c7fb/storage-info

Response:

{
  "success": true,
  "data": {
    "totalUsage": 104857600,
    "totalUsageGB": "0.10",
    "availableStorage": 5263852800,
    "availableStorageGB": "4.90",
    "storageLimit": 5368709120,
    "storageLimitGB": "5.00",
    "isLimitExceeded": false,
    "exceedAmount": 0,
    "exceedAmountGB": "0.00",
    "usagePercentage": 2,
    "recordingsCount": 2,
    "recordings": [
      {
        "recordingId": "1756364334628_738386",
        "roomId": "68affe24fabc0702c5d30c29",
        "fileName": "meeting_2025-10-14.mp4",
        "fileSize": 52428800,
        "uploadedAt": "2025-10-14T13:08:09.559Z",
        "blobUrl": "https://blob.core.windows.net/user/68de7a9d8f40ff41c940c7fb/meeting_2025-10-14.mp4"
      }
    ],
    "warnings": [
      {
        "type": "usage",
        "message": "You have used 75% of your storage limit."
      }
    ]
  }
}
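
The derived fields in this response can be reproduced from the raw byte counts. The sketch below is our reconstruction of the arithmetic (GiB-based, which matches the "0.10"/"5.00" strings above), not the service's actual code:

```javascript
const GB = 1024 ** 3; // the example values divide evenly as GiB

function storageSummary(totalUsage, storageLimit) {
  const availableStorage = Math.max(storageLimit - totalUsage, 0);
  return {
    totalUsageGB: (totalUsage / GB).toFixed(2),
    availableStorageGB: (availableStorage / GB).toFixed(2),
    storageLimitGB: (storageLimit / GB).toFixed(2),
    usagePercentage: Math.round((totalUsage / storageLimit) * 100),
    isLimitExceeded: totalUsage > storageLimit,
    exceedAmount: Math.max(totalUsage - storageLimit, 0),
  };
}
```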

Update User Storage Limit - PUT /recordings/user/{userId}/storage-limit

Updates the storage limit for a user (admin only).

Path Parameter Type Required Description
userId String Yes User ID whose limit is updated
Body Parameter Type Required Description
newLimit Number Yes New storage limit in bytes

Example Request:

PUT /recordings/user/68de7a9d8f40ff41c940c7fb/storage-limit
Body:
{
  "newLimit": 10737418240
}

Response:

{
  "success": true,
  "data": {
    "userId": "68de7a9d8f40ff41c940c7fb",
    "storageLimit": 10737418240,
    "storageLimitGB": "10.00"
  }
}

Delete Recording - DELETE /recordings/user/{userId}/room/{roomId}/recording/{recordingId}/{fileName}

Deletes a specific recording for a user from Azure Blob Storage and updates storage usage.

Path Parameter Type Required Description
userId String Yes User ID
roomId String Yes Room ID
recordingId String Yes Recording ID
fileName String Yes File name to delete

Example Request:

DELETE /recordings/user/68de7a9d8f40ff41c940c7fb/room/68affe24fabc0702c5d30c29/recording/1756364334628_738386/meeting_2025-10-14.mp4

Response:

{
  "success": true,
  "msg": "Recording deleted successfully.",
  "data": {
    "totalUsage": 52428800,
    "availableStorage": 5316280320,
    "recordingsCount": 1
  }
}

Get Exceeded Storage Users - GET /recordings/admin/exceeded-storage-users

Retrieves a list of users who have exceeded their storage limits (admin only).

Example Request:

GET /recordings/admin/exceeded-storage-users

Response:

{
  "success": true,
  "data": [
    {
      "userId": "68de7a9d8f40ff41c940c7fb",
      "totalUsage": 5468709120,
      "storageLimit": 5368709120,
      "exceedAmount": 100000000,
      "recordingsCount": 12
    }
  ]
}

Features

Security Features: Storage & Recordings

This section provides a comprehensive overview of the security features implemented for Azure Blob Storage and recording management in Altegon Meet APIs. Security is enforced at every layer, from API endpoints to storage access, ensuring data privacy, integrity, and compliance.

Overview

All storage and recording operations are protected by a multi-layered security model, including:

  • Authentication (JWT)

  • Authorization (role-based and resource-based)

  • Input validation

  • Rate limiting

  • Audit logging

  • Secure access tokens (SAS)

  • Principle of least privilege

Security Mechanisms

1. Middleware

Middleware components are used to enforce security policies on every API request:

  • Authentication (validateToken): Ensures that only authenticated users and admins can access protected endpoints. JWT tokens are validated for integrity and expiration.

  • Authorization (secureBlobAccess): Restricts access to user-specific resources. Only the resource owner or an admin can perform actions on a user's storage or recordings.

  • Input Validation: Middleware such as validateGetUserRecordings, validateDeleteRecording, validateGenerateSAS, and others check all incoming parameters and payloads for correctness, type safety, and malicious content. This prevents injection attacks and malformed requests.

  • Rate Limiting: Custom rate limiters (e.g., blobStorageRateLimit, blobUploadRateLimit, blobDeleteRateLimit, sasTokenRateLimit) are applied to sensitive endpoints to prevent brute-force, abuse, and denial-of-service (DoS) attacks. Limits are configurable per endpoint and user.

  • Audit Logging (logBlobAccess, logBlobModification): All sensitive operations (access, modification, deletion, SAS generation) are logged with user, timestamp, action, and resource details. Audit logs are used for compliance, monitoring, and incident response.

2. JWT Authentication
  • All protected endpoints require a valid JWT token in the Authorization: Bearer <token> header.

  • JWT tokens are signed and include claims for user identity, roles, and permissions.

  • Admin-only endpoints require the admin role, validated by JWT claims.

  • Tokens are checked for expiration and tampering on every request.

3. SAS Tokens (Shared Access Signatures)

Purpose:

  • SAS tokens provide secure, time-limited, and permission-scoped access to Azure Blob Storage resources without exposing storage account keys.

Generation:

  • SAS tokens are generated server-side for specific blobs, with defined permissions (read, write, delete) and expiry times (default: 60 minutes).

  • Only authenticated and authorized users can request SAS tokens for their own recordings.

Usage:

  • SAS URLs are returned by endpoints such as GET /recordings/user/{userId}/room/{roomId}/recording/{recordingId}/file/{fileName}/sas-url.

  • The client can use the SAS URL to download or share the recording securely within the allowed time window.

Security:

  • SAS tokens are never stored long-term and expire automatically.

  • Permissions are restricted to the minimum required for the operation.

4. Audit Logging

Every access, modification, deletion, and SAS generation event is logged with:

  • User ID

  • Action type (GET, DELETE, UPLOAD, SAS-GENERATE, etc.)

  • Resource details (userId, roomId, recordingId, fileName)

  • Timestamp

  • Outcome (success/failure)

Logs are stored securely and can be reviewed for compliance, troubleshooting, and security investigations.

5. Rate Limiting
  • Each sensitive endpoint is protected by a dedicated rate limiter.

  • Limits are set per user, per endpoint, and per time window (e.g., max N requests per minute).

  • Exceeding the limit results in a 429 Too Many Requests error.

  • Rate limiting helps prevent abuse, brute-force, and DoS attacks.
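
A toy fixed-window limiter showing the 429 behavior described above; production deployments typically use a library such as express-rate-limit, and the names here are ours:

```javascript
function makeRateLimiter({ windowMs, max }) {
  const hits = new Map(); // key (e.g. userId or IP) -> { count, windowStart }
  return function isAllowed(key, now = Date.now()) {
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: now }); // new window
      return true;
    }
    entry.count += 1;
    return entry.count <= max; // false -> respond 429 Too Many Requests
  };
}
```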

6. Best Practices & Additional Protections
  • Principle of Least Privilege: Users can only access and modify their own resources unless they have admin privileges.
  • Error Handling: All errors are logged and generic error messages are returned to avoid information leakage.
  • HTTPS Only: All API endpoints and blob access URLs require HTTPS to prevent eavesdropping and man-in-the-middle attacks.
  • Configurable Security: Audit logs, rate limits, and SAS token expiry are configurable to meet compliance and operational needs.

Notes

  • All security features are implemented in dedicated middleware files (see middleware/ folder).
  • Audit logs and rate limits are configurable for compliance and performance.
  • SAS tokens provide secure, temporary access to recordings for download or sharing.
  • Security is regularly reviewed and updated to address new threats and compliance requirements.

Recording Upload to Azure Blob Storage

Overview

  • Recordings are uploaded to Azure Blob Storage for secure, scalable storage and easy access.

  • Each recording is associated with a user, room, and recording ID for organization.

Upload Flow

Recording Completion:

  • When a meeting recording is completed, the backend processes and finalizes the recording file.

Blob Upload API:

  • Endpoint: POST /recordings/user/{userId}/room/{roomId}/recording/{recordingId}/upload-to-blob

  • Description: Uploads a finalized recording file to the user's blob directory in Azure Storage.

  • Request Body:

    • File data (multipart/form-data or server file path)
  • Response:

    • Blob URL of the uploaded recording

    • Updated storage info

Storage Tracking:

  • After upload, the user's storage usage is updated and warnings are checked.

  • Recording metadata is stored in the database (recordingId, roomId, fileName, fileSize, blobUrl, uploadedAt).

Accessing Recordings:

  • Recordings can be listed, deleted, or accessed via SAS URLs for secure download.

Transcription

Overview

The transcription feature enables real-time speech-to-text conversion during meetings, allowing participants' spoken words to be transcribed, encrypted, and stored securely. The system uses Microsoft Azure Cognitive Services for speech recognition and supports secure transmission and storage of transcript data.

Frontend (Angular)

Service: SpeechRecognitionService

Location: meet/src/app/shared/services/speech-recognition/speech-recognition.service.ts

Speech SDK: Uses microsoft-cognitiveservices-speech-sdk for speech recognition.

Token Handling: Fetches Azure speech token and region using a utility service.

Recognition Flow:

  1. Checks user permissions and feature flags before starting.

  2. Initializes the recognizer with the correct language and audio input.

  3. Handles partial and final recognition events.

  4. On final result, triggers a callback and sends the transcript to the backend.

  5. Handles errors, session stops, and cancellation events gracefully.

Encryption:

  • Before sending a transcript to the backend, the message is encrypted using the EncryptionService (AES-256 compatible with backend).

  • Encryption is performed asynchronously to avoid blocking the UI or video stream.

API Call:

  • Sends the encrypted transcript, along with userName and meetingId, to the backend using AllApiService.sendTranscript().

Key Methods

startTranscription(userName, meetingId, onTranscriptCallback)

  • Starts continuous speech recognition and handles transcript events.

stopTranscription()

  • Stops the recognizer and cleans up resources.

sendTranscript(body)

  • Encrypts and sends the transcript to the backend.

Backend (Node.js)

Controller: transcript.js

Location: altegon-meet-apis/controller/transcript.js

Transcript Storage:

  • Receives encrypted transcript messages from the frontend and stores them in the database.
  • Marks messages as isEncrypted: true.

Summary Generation:

  • Decrypts all transcript messages for a meeting.
  • Formats the decrypted messages for OpenAI summary generation.
  • Encrypts the generated summary before storing it.

Summary Retrieval:

  • Decrypts the summary before sending it to the frontend.

Security

  • All transcript messages and summaries are encrypted using AES-256 before storage.
  • Encryption is compatible between frontend (CryptoJS) and backend (Node.js crypto).
  • The encryption secret is managed securely in environment/config files.

Speaker Functionality

Overview

The speaker functionality in the webinar application allows users to control audio output devices (speakers), mute/unmute speakers, and select preferred output devices for optimal listening experience during a session.

Key Features

  • Speaker Mute/Unmute: Users can toggle speaker output to mute or unmute audio from the webinar.
  • Speaker Device Selection: Users can choose from available audio output devices (e.g., built-in speakers, headphones) for playback.
  • Active Device Highlighting: The currently selected speaker device is visually indicated in the UI.
  • Device Change Handling: The application responds to device changes (e.g., plugging in headphones) and updates the available device list.

How It Works

  • The footer component provides a control (usually an icon or button) to toggle speaker mute/unmute.
  • A device menu lists all detected audio output devices, allowing users to select their preferred speaker.
  • When a new device is selected, audio output is routed to that device.
  • The application listens for system device changes and refreshes the device list automatically.

User Actions

  • Mute/Unmute Speaker: Click the speaker icon/button in the footer to toggle sound output.
  • Change Speaker Device: Open the speaker device menu and select a different output device.
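
In a browser, the behavior above maps onto two standard Web APIs: MediaDevices.enumerateDevices() for the device list and HTMLMediaElement.setSinkId() for output routing (setSinkId requires a secure context and, in some browsers, user permission):

```javascript
// List available audio output devices (speakers, headphones, ...).
async function listSpeakers() {
  const devices = await navigator.mediaDevices.enumerateDevices();
  return devices.filter((d) => d.kind === "audiooutput");
}

// Route an element's audio to the chosen output device.
async function selectSpeaker(audioElement, deviceId) {
  await audioElement.setSinkId(deviceId);
}

// Toggle speaker mute; returns the new muted state.
function toggleSpeakerMute(audioElement) {
  audioElement.muted = !audioElement.muted;
  return audioElement.muted;
}
```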

Summary and Key Points Generation

This document describes the logic and requirements for generating chat summaries and structured meeting notes using the OpenAI API in the Altegon Meet APIs backend.

Overview

The summary-generation utility provides two main functions:

  • Chat Summary Generation: Produces a concise paragraph summarizing a chat conversation.
  • Meeting Notes Generation: Produces a detailed, structured JSON summary of a meeting transcript.

Both functions use the Azure OpenAI API via the openai library, with custom system prompts to guide the output format and style.

Chat Summary Generation

  • Purpose: Summarizes a chat conversation in a clear, concise, and professional manner.
  • Prompt Instructions:
    • Identify key participants, intentions, and main topics.
    • Remove filler words, typos, and irrelevant small talk.
    • Clarify confusing statements where possible.
    • Return a short, easy-to-read paragraph capturing the overall context and purpose.
    • Mention ambiguities clearly rather than guessing.
  • API Call: Uses the OpenAI Chat Completion API with a system prompt and the chat string as user input.
  • Output: A single-paragraph summary (JSON object).

Meeting Notes Generation

  • Purpose: Analyzes a meeting transcript and produces a structured JSON summary.
  • Prompt Instructions:
    • Output ONLY valid JSON (no extra text or markdown).
    • Required fields:
      • title: Brief meeting title
      • summary: One-paragraph summary of key outcomes
      • highlights: 3-6 important points
      • chapters: Array of chapters with timestamps, titles, and bullet points
      • decisions: Decisions made, with agreement info
      • actions: Tasks, assignees, and due dates
      • qna: Questions and answers
      • participants: Names and roles
      • topics: List of topics
      • keywords: List of keywords
      • ambiguities: Items needing clarification
    • Use exact field names and structure as shown in the example.
    • Preserve timestamps in MM:SS format.
    • Use empty arrays or 'Unassigned'/'Not specified' if data is missing.
    • Do NOT hallucinate; only use information from the transcript.
    • Infer participant roles from context when possible.
  • API Call: Uses the OpenAI Chat Completion API with a detailed system prompt and the transcript as user input.
  • Output: Structured JSON object with all required fields.
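
Since the model is instructed to emit only valid JSON with exact field names, the backend typically validates the output before storing it. A minimal validator sketch (the field list is copied from the requirements above; the function name is ours):

```javascript
const REQUIRED_FIELDS = [
  "title", "summary", "highlights", "chapters", "decisions",
  "actions", "qna", "participants", "topics", "keywords", "ambiguities",
];

function parseMeetingNotes(raw) {
  const notes = JSON.parse(raw); // throws if the model emitted non-JSON text
  const missing = REQUIRED_FIELDS.filter((f) => !(f in notes));
  if (missing.length) throw new Error(`Missing fields: ${missing.join(", ")}`);
  return notes;
}
```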

Example Output (Meeting Notes)

{
  "title": "Improving Transcription Accuracy and Chunking Approach",
  "summary": "Discussion focused on challenges and potential solutions for improving sentence chunking and transcription accuracy in multilingual speech-to-text systems.",
  "highlights": [
    "Current transcription model struggles with chunking and sentence accuracy.",
    "Shah prefers sentence-based chunking triggered by pauses, not fixed durations.",
    "Team will develop a test script to iterate solutions."
  ],
  "chapters": [
    {
      "timestamp": "00:00",
      "title": "Initial Model and Title Discussion",
      "bullets": [
        "Faheem and Shah briefly discuss project titles.",
        "No concrete decision made on title."
      ]
    },
    {
      "timestamp": "03:00",
      "title": "Problems with Chunking",
      "bullets": [
        "Chunking by fixed word count causes translation issues.",
        "Team discusses increasing chunk size."
      ]
    }
  ],
  "decisions": [
    "Team will develop test script for evaluating chunking methods (Agreed by Faheem)."
  ],
  "actions": [
    {
      "action": "Create testing script for evaluating chunking and transcription accuracy",
      "assignee": "Faheem",
      "due_date": "No due date"
    }
  ],
  "qna": [
    {
      "question": "Which model was used in Phase One?",
      "answer": "Not specified; Faheem to consult team lead."
    }
  ],
  "participants": [
    {
      "name": "Faheem",
      "role": "Technical Lead"
    },
    {
      "name": "Shah",
      "role": "Product Owner"
    }
  ],
  "topics": ["Transcription models", "Chunking strategies", "Speech-to-text accuracy"],
  "keywords": ["transcription", "chunking", "pause detection", "model selection"],
  "ambiguities": []
}

Implementation Notes

  • Both functions use environment variables for Azure OpenAI API configuration.
  • Meeting notes output must strictly follow the required JSON structure.
  • Chat summary output is a single concise paragraph, returned as a JSON object.
  • Error handling and logging are implemented for debugging and reliability.

Encrypted Transcription

This document describes the encryption and decryption logic for meeting transcripts and summaries in the Altegon Meet APIs backend.

Overview

Transcription data (messages and summaries) is encrypted before being stored in the database and decrypted when retrieved. This ensures that sensitive meeting content remains secure at rest and in transit.

Encryption is implemented using AES-256-CBC (compatible with CryptoJS on the frontend) and supports legacy AES-256-GCM for backward compatibility.


Encryption Logic

  • AES-256-CBC encryption with PBKDF2 key derivation (10,000 iterations, 64-byte salt)
  • IV: 16 bytes, randomly generated per encryption
  • Salt: 64 bytes, randomly generated per encryption
  • Backward compatibility: Supports legacy GCM format for decryption
  • Base64 encoding: Encrypted data is stored as base64 (salt + iv + encryptedData)

Functions

encrypt(text, encryptionSecret)

  • Encrypts a string using AES-256-CBC.
  • Returns a base64 string containing salt, IV, and encrypted data.
  • Throws if no encryption secret is provided.

decrypt(encryptedText, encryptionSecret)

  • Decrypts a base64 string using AES-256-CBC (or GCM if detected).
  • Returns the original plain text.
  • Throws if decryption fails or no secret is provided.

encryptObject(obj, fields, encryptionSecret)

  • Encrypts specified fields in an object.
  • Returns a new object with encrypted fields.

decryptObject(obj, fields, encryptionSecret)

  • Decrypts specified fields in an object.
  • Returns a new object with decrypted fields.
  • If decryption fails, the field is left as-is (for legacy or unencrypted data).

Transcription Flow

Adding a Transcript
  • Incoming transcript messages from the frontend are already encrypted.
  • The backend stores the message as-is and marks it as isEncrypted: true.
Viewing a Transcript
  • When retrieving transcripts, the backend decrypts each message (if marked as encrypted) before further processing or sending to OpenAI for summary generation.
Generating a Summary
  • All transcript messages are decrypted and formatted for OpenAI.
  • The generated summary is encrypted before being saved to the database.
  • The summary is marked as isEncrypted: true.
Retrieving a Summary
  • The summary is decrypted before being sent to the frontend.
  • If decryption fails, the original (possibly unencrypted) summary is returned.
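Before summary generation, the decrypted messages must be flattened into a single prompt string for OpenAI. A minimal sketch of that formatting step follows; the field names (`name`, `text`, `timestamp`) and the "[MM:SS] Name: text" layout are assumptions for illustration, not the backend's actual format.

```javascript
// Minimal sketch: format decrypted transcript messages into a single
// string for the summarization prompt. Field names and the
// "[MM:SS] Name: text" layout are illustrative assumptions.
function formatTranscript(messages) {
  return messages
    .map((m) => `[${m.timestamp}] ${m.name}: ${m.text}`)
    .join("\n");
}

const prompt = formatTranscript([
  { timestamp: "00:12", name: "Faheem", text: "Let's review the chunking approach." },
  { timestamp: "00:30", name: "Shah", text: "Pause-based chunking seems better." },
]);
console.log(prompt);
```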

Security Notes

  • The encryption secret is provided via environment/config and must be kept secure.
  • PBKDF2 with 10,000 iterations and a 64-byte salt provides strong key derivation.
  • AES-256-CBC is compatible with CryptoJS for cross-platform encryption/decryption.
  • Legacy GCM support ensures backward compatibility with older data.

Noise Cancelation

Overview

Noise cancelation in a webinar dynamically enables or disables audio processing features (noise suppression, echo cancellation, auto gain control, and stereo/mono) for the local audio stream in a Mediasoup-based webinar. It updates the user's audio track and synchronizes the change with the Mediasoup producer, ensuring all participants receive the updated audio.

Parameters

Name       Type     Description
isEnabled  boolean  If true, enables noise cancelation features; if false, disables them.

Functionality

  • Updates the internal state to reflect the new noise cancelation setting.
  • Checks for the existence of a local audio stream and audio track.
  • Requests a new audio track from the user's device with updated constraints:
    • Stereo (2 channels) with all processing features enabled if isEnabled is true.
    • Mono (1 channel) with all processing features disabled if isEnabled is false.
  • Replaces the old audio track in the local stream with the new one.
  • Updates the Mediasoup producer to use the new audio track, so remote participants hear the change.
  • Stops the old audio track to free resources.
  • Notifies the user of success or failure via toast messages and alerts.
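The updated constraints described above map directly onto the standard MediaTrackConstraints API. The sketch below builds the constraint object; the wiring into getUserMedia and the Mediasoup producer (shown in comments) is a rough outline, not the exact implementation.

```javascript
// Sketch of the getUserMedia audio constraints implied by the steps above.
// Constraint names follow the standard MediaTrackConstraints API; swapping
// the track into the Mediasoup producer is only outlined in comments.
function buildAudioConstraints(isEnabled) {
  return {
    audio: {
      channelCount: isEnabled ? 2 : 1,  // stereo when enabled, mono otherwise
      noiseSuppression: isEnabled,
      echoCancellation: isEnabled,
      autoGainControl: isEnabled,
    },
    video: false,
  };
}

// In the browser, the new track would be requested and swapped in roughly as:
//   const stream = await navigator.mediaDevices.getUserMedia(buildAudioConstraints(true));
//   await producer.replaceTrack({ track: stream.getAudioTracks()[0] });
console.log(buildAudioConstraints(true).audio.channelCount); // 2
```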

Error Handling

  • Logs errors if the local stream or audio track is missing.
  • Alerts the user if updating the audio track fails.
  • Handles missing Mediasoup producer or producer label gracefully.

User Feedback

  • Shows a success toast when noise cancelation is enabled.
  • Shows an info toast when noise cancelation is disabled.
  • Alerts the user if the update fails.

Authentication & Headers

All API endpoints require authentication using the x-api-key header:

Content-Type: application/json
x-api-key: <your-api-key>

Contact our support team to obtain your API key.

Error Handling

The API uses standard HTTP status codes:

  • 200 - Success
  • 201 - Created
  • 400 - Bad Request
  • 401 - Unauthorized
  • 403 - Forbidden
  • 404 - Not Found
  • 409 - Conflict
  • 500 - Internal Server Error
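A client can translate these codes into readable errors and fail fast on non-success responses. The helper below is a hypothetical illustration, not part of the API or SDK.

```javascript
// Illustrative sketch: map the status codes above to messages and throw
// on error responses. (Helper names are hypothetical, not part of the API.)
const STATUS_MESSAGES = {
  200: "Success", 201: "Created", 400: "Bad Request", 401: "Unauthorized",
  403: "Forbidden", 404: "Not Found", 409: "Conflict", 500: "Internal Server Error",
};

function checkStatus(code) {
  const message = STATUS_MESSAGES[code] || "Unknown Status";
  if (code >= 400) throw new Error(`${code} ${message}`);
  return message;
}

console.log(checkStatus(201)); // "Created"
```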

WebSocket Events

Overview

This section describes all the WebSocket events exchanged between Client ↔ Server, and their expected payloads.

SFU (mediasoup) socket:

Base URL: wss://sfu.altegon.com/socket.io

Events

createRoom

Description: Creates a new room with a unique room_id.

Payload:

{
  "room_id": "12345"
}

Response Payload (Callback):

If the room already exists, the callback receives the string:

"already exists"

Otherwise, the room is created and the callback receives:

{
  "room_id": "12345"
}
join

Description: Joins an existing room with the given room_id.

Request Payload:

{
  "name": "John Doe",
  "room_id": "12345",
  "type": "host",
  "userId": "user-123"
}

Response Payload (Callback):

{
  "room": { /* Room state */ },
  "chatHistory": [
    {
      "room_id": "12345",
      "name": "Jane",
      "message": "Hello",
      "timestamp": 1692401234567
    }
  ]
}
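A hypothetical helper for assembling the join payload shown above; the set of allowed participant types ("host", "cohost", "viewer") is inferred from this documentation, not an official enum.

```javascript
// Hypothetical helper: builds the `join` payload shown above and
// validates the participant type. The allowed types are inferred
// from the docs, not an official enum.
function buildJoinPayload({ name, roomId, type, userId }) {
  const allowed = ["host", "cohost", "viewer"];
  if (!allowed.includes(type)) throw new Error(`invalid type: ${type}`);
  return { name, room_id: roomId, type, userId };
}

const payload = buildJoinPayload({
  name: "John Doe", roomId: "12345", type: "host", userId: "user-123",
});
// With socket.io-client this would be emitted with an acknowledgement:
//   socket.emit("join", payload, (response) => { /* room + chatHistory */ });
console.log(payload.room_id); // "12345"
```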
getProducers

Description: Fetches list of active media producers in the room.

Response:

[
  {
    "producer_id": "abcd1234",
    "kind": "video",
    "peerId": "socket123"
  }
]
getRouterRtpCapabilities

Description: Returns mediasoup router RTP capabilities for the room.

Response Payload (Callback):

{
  "codecs": [...],
  "headerExtensions": [...]
}
createWebRtcTransport

Description: Creates a new WebRTC transport for the peer.

Response Payload (Callback):

{
  "id": "transport-id",
  "iceParameters": {},
  "iceCandidates": [],
  "dtlsParameters": {}
}
connectTransport

Description: Connects peer transport with provided DTLS parameters.

Payload:

{
  "transport_id": "transport-id",
  "dtlsParameters": { /* WebRTC DTLS params */ }
}

Response Payload (Callback):

"success"
produce

Description: Starts producing audio/video track.

Request Payload:

{
  "kind": "video",
  "rtpParameters": { /* WebRTC RTP params */ },
  "producerTransportId": "transport-id",
  "appData": { "type": "camera" }
}

Response Payload (Callback):

{
  "producer_id": "producer-uuid"
}
consume

Description: Consumes a producer stream from another peer.

Payload:

{
  "consumerTransportId": "transport-id",
  "producerId": "producer-uuid",
  "rtpCapabilities": { /* local RTP caps */ }
}

Response Payload (Callback):

{
  "id": "consumer-uuid",
  "producerId": "producer-uuid",
  "kind": "video",
  "rtpParameters": {}
}
resume

Description: Resumes a paused consumer stream.

Payload:

{}
getMyRoomInfo

Description: Returns the current room information.

Response Payload (Callback):

{
  "room_id": "12345",
  "peers": [...],
  "transports": [...]
}
sendMessage

Description: Sends a chat message to the room and saves it in Redis.

Payload:

{
  "room_id": "12345",
  "name": "John",
  "message": "Hello World"
}
videoPaused & videoResumed

Description: Notifies participants about video pause/resume state of a peer.

Payload:

{
  "socketId": "socket123",
  "name": "John",
  "producer_id": "producer-uuid",
  "room_id": "12345"
}
screen:shared

Description: Broadcasts that a peer has started sharing their screen.

Payload:

{
  "room_id": "12345",
  "socketId": "socket123",
  "name": "John"
}

Broadcast Payload:

{
  "socketId": "socket123",
  "name": "John"
}
screen:stopped

Description: Broadcasts that a peer has stopped screen sharing.

Payload:

{
  "room_id": "12345"
}
producerClosed

Description: Closes a specific media producer.

Payload:

{
  "producer_id": "producer-uuid"
}
exitRoom

Description: Leaves the room and cleans up transports.

Response Payload (Callback):

"successfully exited room"
cohost:request / respond

Payload: cohost:request (Client → Server → Broadcasters)

{
  "name": "John",
  "room_id": "12345",
  "socketId": "socket123",
  "userId": "user-123"
}

cohost:respond (Client → Server)

{
  "targetSocketId": "socket123",
  "targetUserId": "user-123",
  "status": "accepted" // or "revoked"
}

Broadcast Payload (Server → Target):

{
  "status": "accepted"
}

ART package socket:

Base URL: wss://prod.altegon.com/erizocontroller/socket.io

whiteBoardToggle

Description:
Toggles the whiteboard on or off for a specific room. This is typically emitted when a host enables or disables the whiteboard feature.

Payload:

{
  "type": "whiteBoardToggle",
  "msg": {
    "room_id": "12345",
    "flag": true,
    "socketId": "abc123"
  }
}
whiteBoardStateRequest

Description: A newly joined participant emits this to request the current whiteboard state from the active whiteboard owner.

Payload:

{
  "type": "whiteBoardStateRequest",
  "msg": {
    "room_id": "12345",
    "socketId": "newPeerSocketId"
  }
}
white-board-stage

Description: Sends the entire current canvas state as JSON. Used when a new peer joins and needs the current state, or when the host explicitly syncs the canvas to everyone.

Payload:

{
  "type": "white-board-stage",
  "msg": {
    "stage": { "attrs": { ... }, "children": [ ... ] },
    "socketId": "abc123",
    "room_id": "12345"
  }
}
sendDrawingData

Description: Handles real-time drawing actions (brush strokes, erasing, cursor movements, etc.) to sync between participants.

Payload:

{
  "type": "sendDrawingData",
  "msg": {
    "drawingType": "mousedown",
    "payload": {
      "pos": { "x": 200, "y": 150 },
      "eraser": false,
      "brushSize": 3,
      "inkColor": "#000000",
      "brushOpacity": 1
    },
    "socketId": "abc123",
    "room_id": "12345"
  }
}
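Every whiteboard event above shares the same `{ type, msg }` envelope. A hypothetical wrapper that assembles a sendDrawingData message in that format (the helper itself is not part of the SDK):

```javascript
// Hypothetical wrapper that assembles a sendDrawingData message in the
// { type, msg } envelope used by the whiteboard events above.
function buildDrawingMessage(roomId, socketId, drawingType, payload) {
  return {
    type: "sendDrawingData",
    msg: { drawingType, payload, socketId, room_id: roomId },
  };
}

const msg = buildDrawingMessage("12345", "abc123", "mousedown", {
  pos: { x: 200, y: 150 },
  eraser: false, brushSize: 3, inkColor: "#000000", brushOpacity: 1,
});
console.log(msg.msg.drawingType); // "mousedown"
```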
Un-do

Description: Triggered when a user performs an undo action on their whiteboard. All peers remove the most recent stroke.

Payload:

{
  "type": "Un-do",
  "msg": {
    "socketId": "abc123",
    "room_id": "12345"
  }
}
clear-whiteBoard

Description: Triggered when the whiteboard is cleared by the host or a user. All peers remove all strokes.

Payload:

{
  "type": "clear-whiteBoard",
  "msg": {
    "socketId": "abc123",
    "room_id": "12345"
  }
}

Altegon Meet APIs socket:

Base URL: wss://prod.altegon.com/socket.io

Events

room.join

Description: Joins a user to a session room (v1.0 logic).

Payload

{
  "roomId": "abc123",
  "sessionType": "viewer",
  "ip": "192.168.0.100"
}

Recording Server Socket:

Base URL: wss://prod.altegon.com:3304/socket.io

Events

create-file

Description: Starts creating a recording file on the server for a specific video track.

Payload

{
  "roomId": "room-1",
  "recordingId": "rec-101",
  "videoId": "cam-1"
}
stream

Description: Streams binary video chunks to the server.

Payload:

{
  "data": { /* binary video chunk */ },
  "videoId": "cam-1"
}
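Clients typically send recorded data as a series of these `stream` events. The sketch below splits a buffer into fixed-size chunks for successive emits; the chunk size is an arbitrary illustration, and real clients usually stream MediaRecorder blobs as they arrive.

```javascript
// Sketch: split a recorded buffer into fixed-size chunks suitable for
// emitting as successive `stream` events. The 4 KiB chunk size is an
// arbitrary illustration, not a server requirement.
function chunkBuffer(buffer, chunkSize) {
  const chunks = [];
  for (let offset = 0; offset < buffer.length; offset += chunkSize) {
    chunks.push(buffer.subarray(offset, offset + chunkSize));
  }
  return chunks;
}

const chunks = chunkBuffer(Buffer.alloc(10 * 1024), 4 * 1024); // 10 KiB in 4 KiB chunks
// Each chunk would then be sent as:
//   socket.emit("stream", { data: chunk, videoId: "cam-1" });
console.log(chunks.length); // 3
```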
process-recording

Description: Triggers final recording processing (scaling and concatenating the video).

Payload

{
  "roomId": "room-1",
  "recordingId": "rec-101"
}
stop-writing

Description: Stops the write stream for a specific video ID and finalizes the file.

Payload

"cam-1" // the videoId of the stream to finalize

Support & Contact

For technical implementation assistance, troubleshooting, or additional architecture details, please contact our support team.