
Developer Hub
Get quick access to our vast documentation library.
Easily build features like semantic search, anomaly detection, content recommenders, and capabilities tailored to you. Our API unlocks your video’s full potential.
Try our API.
Python
import requests

url = "https://api.twelvelabs.io/v1.3/indexes"

headers = {
    "accept": "application/json",
    "Content-Type": "application/json"
}

response = requests.post(url, headers=headers)

print(response.text)
Follow these steps for a running start using TwelveLabs’ multimodal API.
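The snippet above sends an unauthenticated request with no body. A fuller index-creation call typically also passes an API key and names the index and the models to enable; the header and field names below are assumptions based on the API reference, so confirm the exact schema there before relying on them.

import requests

url = "https://api.twelvelabs.io/v1.3/indexes"

headers = {
    "accept": "application/json",
    "Content-Type": "application/json",
    "x-api-key": "<YOUR_API_KEY>",  # assumed authentication header
}

payload = {
    "index_name": "my-first-index",  # assumed field name for the index
    "models": [
        # assumed structure: the video foundation models to enable for this index
        {"model_name": "marengo2.7", "model_options": ["visual", "audio"]},
    ],
}

response = requests.post(url, headers=headers, json=payload)
print(response.text)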
Jump right in with a Sample App...
Discover what TwelveLabs can do by experimenting with our fully functional sample applications.
Python
Who talked about us
Use the platform’s semantic search capabilities to identify the most suitable influencers to reach out to (see the sketch after this card).
Tutorial
Try this sample app
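To make the idea concrete, here is a minimal sketch of a semantic search query using the twelvelabs Python SDK. The parameter names (for example, options) and result fields are assumptions that may differ between SDK versions, so check the SDK reference before reusing it.

from twelvelabs import TwelveLabs

# Connect with your API key (assumed constructor signature).
client = TwelveLabs(api_key="<YOUR_API_KEY>")

# Search an existing index for moments where people talk about your brand.
result = client.search.query(
    index_id="<YOUR_INDEX_ID>",
    query_text="people talking about our product",
    options=["visual", "audio"],  # assumed option names; see the SDK docs
)

# Each match identifies a clip: the video, its time range, and a relevance score.
for clip in result.data:
    print(clip.video_id, clip.start, clip.end, clip.score)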
Node
Generate social media posts for your videos
Simplify the cross-platform video promotion workflow by generating unique posts for each social media platform.
Tutorial
Code
Try this sample app
Python
Shade finder
This application uses the image-to-video search feature to find color shades in videos.
Tutorial
Code
Try this sample app
...Or give one of our recipes a whirl.
Not sure where to start? Try out one of our recipes, complete with ready-made instructions.
Integrations
Expand the functionality and efficiency of TwelveLabs’ Video Understanding Platform with our partner integrations.

Semantic video search plugin
The plugin allows you to accurately identify movements, actions, objects, people, sounds, on-screen text, and speech.

The Twelve Labs handler
This guide outlines how to use the handler and how it interfaces with the Twelve Labs Video Understanding Platform.

Semantic video search
This integration combines Twelve Labs' Embed API with MongoDB Atlas Vector Search to create an efficient semantic video search solution.
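As a rough illustration of the Atlas side of this integration, the sketch below runs a $vectorSearch aggregation over video-segment embeddings that are assumed to have already been produced by the Embed API and stored in a collection; the collection, field, and index names are placeholders, not taken from the integration guide.

from pymongo import MongoClient

client = MongoClient("<YOUR_ATLAS_CONNECTION_STRING>")
segments = client["video_db"]["segments"]

# Placeholder query vector; in practice, embed the text query with the Embed API.
query_embedding = [0.0] * 1024

pipeline = [
    {
        "$vectorSearch": {
            "index": "video_vector_index",   # assumed Atlas vector index name
            "path": "embedding",             # assumed field holding the embeddings
            "queryVector": query_embedding,
            "numCandidates": 100,
            "limit": 5,
        }
    },
    # Return the clip reference and the similarity score for each match.
    {"$project": {"video_id": 1, "start": 1, "end": 1,
                  "score": {"$meta": "vectorSearchScore"}}},
]

for hit in segments.aggregate(pipeline):
    print(hit)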
Browse by product
Explore documentation specific to each of our product offerings.
Our stable of models.
Learn more about TwelveLabs’ world-leading video foundation models.
At TwelveLabs, we’re developing video-native AI systems that can solve problems with human-level reasoning. Helping machines learn about the world — and enabling humans to retrieve, capture, and tell their visual stories better.

Marengo
Our breakthrough video foundation model analyzes frames and their temporal relationships, along with speech and sound — a huge leap forward for search and any-to-any retrieval tasks.

Pegasus
Our powerful video-first language model integrates visual, audio, and speech information — and employs this deep video understanding to reach new heights in text generation.
