Developer Hub

Get quick access to our vast documentation library.

Easily build features like semantic search, anomaly detection, content recommenders, and capabilities tailored to you. Our API unlocks your video’s full potential.

Try our API.

Python

import requests

url = "https://api.twelvelabs.io/v1.3/indexes"
headers = {
    "accept": "application/json",
    "Content-Type": "application/json",
}
# Note: creating an index also requires authentication (an API key header)
# and a JSON body describing the index; see the API reference for details.
response = requests.post(url, headers=headers)
print(response.text)

Follow these steps to get a running start with TwelveLabs’ multimodal API.
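Once the call above returns, it helps to check the status code before using the body. Below is a minimal sketch of that check, written against the `(status_code, text)` pair a `requests.Response` exposes so it can be tried without a network call; the stubbed body and its `_id` field are placeholders, not the documented response shape — consult the API reference for the actual fields.

```python
import json

def summarize_response(status_code: int, body: str) -> str:
    """Return a short human-readable summary of an HTTP response.

    Operates on plain (status_code, text) values, so it works the same
    whether the input comes from requests or from a stub in a test.
    """
    if 200 <= status_code < 300:
        try:
            data = json.loads(body)
        except json.JSONDecodeError:
            return f"OK ({status_code}), but body was not JSON"
        return f"OK ({status_code}): {len(data)} top-level field(s)"
    # Truncate long error bodies so logs stay readable.
    return f"Request failed ({status_code}): {body[:200]}"

# Example with a stubbed successful response (placeholder fields):
print(summarize_response(201, '{"_id": "example-index-id"}'))
```

In a real script you would call it as `summarize_response(response.status_code, response.text)` right after the `requests.post` above.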

Our stable of models.

Learn more about TwelveLabs’ world-leading video foundation models.

At TwelveLabs, we’re developing video-native AI systems that can solve problems with human-level reasoning. Helping machines learn about the world — and enabling humans to retrieve, capture, and tell their visual stories better.

Marengo

Our breakthrough video foundation model analyzes frames and their temporal relationships, along with speech and sound — a huge leap forward for search and any-to-any retrieval tasks.

Pegasus

Our powerful video-first language model integrates visual, audio, and speech information — and employs this deep video understanding to reach new heights in text generation.
