Multimodal shouldn’t mean multi-model

Our multimodal Embed API makes it easier to build features like semantic search, hybrid search, recommender systems, anomaly detection, classification, and more.

This feature is currently in limited beta and accessible exclusively to a select group of users. Please register on the waitlist to request access.

Find any moment, when you need it

Unlock the wealth of data stored in any video. Twelve Labs can index petabytes of video content and make it semantically searchable with everyday language.

OUR STRENGTH

Embeddings for developers, by developers

High-performance motion understanding

Traditional approaches fail to capture the nuances of motion; our video-first approach fixes that.

Multimodal API

No more duct-taping siloed image, text, audio, and video solutions together. One API supports all modalities and turns rich video data into vectors in the same space.

Domain specificity

Your data is unique; your models should be too. Our base model is state of the art, and fine-tuned for your domain it delivers unparalleled performance.

Fast and reliable

With native video support, the Embed API reduces processing time and increases throughput, saving you time and money.

USE CASES

Generate visual embeddings

Create flexible, domain-specific solutions by feeding embeddings into your custom models.

RAG

Pair our models with your RAG pipeline to retrieve relevant information and ground generated output in your video data.
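
As an illustrative sketch of that pairing (the clip data, retriever, and prompt assembly below are placeholders, not Twelve Labs APIs): rank clips by cosine similarity between their embeddings and a query embedding, then hand the top matches to a generator as context.

```python
import numpy as np

def retrieve_context(query_vec, clips, k=2):
    """clips: list of (metadata, embedding) pairs.
    Returns the top-k metadata strings by cosine similarity to the query."""
    q = np.asarray(query_vec, dtype=float)
    scored = []
    for meta, vec in clips:
        v = np.asarray(vec, dtype=float)
        score = float(v @ q / (np.linalg.norm(v) * np.linalg.norm(q)))
        scored.append((score, meta))
    scored.sort(reverse=True)
    return [meta for _, meta in scored[:k]]

def build_prompt(question, contexts):
    # Placeholder prompt assembly; a real pipeline would send this to an LLM.
    return f"Context: {'; '.join(contexts)}\nQuestion: {question}"
```

In a real pipeline the embeddings would come from the Embed API and the prompt would be passed to your LLM of choice.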

Training models

Use embeddings to improve data quality when training large language models.

Creating high-quality training data

Transform workflows with embeddings to create training data, improve data quality, and reduce manual labeling needs.

Anomaly detection

You can use the platform to identify unusual patterns or anomalies in diverse data types. For example, you can detect and remove corrupt videos that only display a black background, thereby enhancing the quality of data set curation.
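
One way to sketch this (the embedding vectors here are invented placeholders, not real API output): flag clips whose embedding sits unusually far from the centroid of the collection.

```python
import numpy as np

def flag_outliers(embeddings, z_threshold=3.0):
    """Return indices of embeddings whose distance from the centroid
    is an outlier by z-score."""
    X = np.asarray(embeddings, dtype=float)
    centroid = X.mean(axis=0)
    dists = np.linalg.norm(X - centroid, axis=1)
    z = (dists - dists.mean()) / dists.std()
    return np.flatnonzero(z > z_threshold)
```

A corrupt all-black video tends to produce an embedding far from the rest of a healthy corpus, so it surfaces as a high-z-score index.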

FLEXIBILITY

Deploy with flexibility

Whether you prefer native cloud, private cloud, or on-premises, our APIs can run in any environment.
SDK

We're developers who build for developers

Hit the ground running on any project with our dev-friendly SDK. Available now in Python and JS, with Golang and Java coming soon.
Try now

Python

from twelvelabs import TwelveLabs

# Initialize the Twelve Labs client
tl_client = TwelveLabs(api_key="<your-twelvelabs-api-key>")

# Create a video embedding task for your video
task = tl_client.embed.task.create(
    engine_name="Marengo-retrieval-2.6",
    video_url="<your-video-url>",
)

# Retrieve the video embeddings
response = tl_client.embed.task.retrieve(task.id)

# Print the embeddings
print(f"Task ID: {response.id}")
print(f"Engine Name: {response.engine_name}")
print(f"Status: {response.status}")

if response.video_embeddings:
    for i, embedding in enumerate(response.video_embeddings):
        print(f"\nEmbedding {i + 1}:")
        print(f"  Start Offset: {embedding.start_offset_sec} seconds")
        print(f"  End Offset: {embedding.end_offset_sec} seconds")
        print(f"  Embedding Scope: {embedding.embedding_scope}")
        print(f"  Embedding Values: {embedding.embedding.float[:5]} (truncated)")
else:
    print("No video embeddings available")


Task ID: <task_id>
Engine Name: Marengo-retrieval-2.6
Status: ready

Embedding 1:
Start Offset: 0.0 seconds
End Offset: 9.968292 seconds
Embedding Scope: clip
Embedding Values: [0.0007091692, 0.012394896, 0.015160556, 0.032788984, 0.013209934, ...] (truncated)

Embedding 2:
Start Offset: 0.0 seconds
End Offset: 9.968292 seconds
Embedding Scope: video
Embedding Values: [-0.0032605825, 0.0006030849, 0.005895467,0.036644306, 0.020424947,...] (truncated)
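
Once retrieved, clip embeddings can be used directly for similarity search. A minimal sketch (the vectors are shortened stand-ins, not real API output): rank clips against a query embedding by cosine similarity.

```python
import numpy as np

def top_k_similar(query_vec, clip_vecs, k=3):
    """Return (index, cosine similarity) for the k clips closest to the query."""
    q = np.asarray(query_vec, dtype=float)
    M = np.asarray(clip_vecs, dtype=float)
    sims = (M @ q) / (np.linalg.norm(M, axis=1) * np.linalg.norm(q))
    order = np.argsort(-sims)[:k]
    return [(int(i), float(sims[i])) for i in order]
```

Because text and video embeddings live in the same space, the same routine works whether the query vector came from a text prompt or another clip.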

JavaScript


import { TwelveLabs } from 'twelvelabs';

(async () => {
  // Initialize the Twelve Labs client
  const client = new TwelveLabs({ apiKey: '<your-twelvelabs-api-key>' });

  // Create a video embedding task for your video
  const videoUrl = '<your-video-url>';
  const task = await client.embed.task.create('Marengo-retrieval-2.6', {
    url: videoUrl,
  });

  // Retrieve the video embeddings
  const response = await client.embed.task.retrieve(task.id);

  // Print the embeddings
  console.log(`Task ID: ${response.id}`);
  console.log(`Engine Name: ${response.engineName}`);
  console.log(`Status: ${response.status}`);

  if (response.videoEmbeddings && response.videoEmbeddings.length > 0) {
    response.videoEmbeddings.forEach((embedding, i) => {
      console.log(`\nEmbedding ${i + 1}:`);
      console.log(`  Start Offset: ${embedding.startOffsetSec} seconds`);
      console.log(`  End Offset: ${embedding.endOffsetSec} seconds`);
      console.log(`  Embedding Scope: ${embedding.embeddingScope}`);
      console.log(`  Embedding Values: ${embedding.embedding.float.slice(0, 5)} (truncated)`);
    });
  } else {
    console.log('No video embeddings available');
  }
})();


Task ID: <task_id>
Engine Name: Marengo-retrieval-2.6
Status: ready

Embedding 1:
Start Offset: 0.0 seconds
End Offset: 9.968292 seconds
Embedding Scope: clip
Embedding Values: [0.0007091692, 0.012394896, 0.015160556, 0.032788984, 0.013209934, ...] (truncated)

Embedding 2:
Start Offset: 0.0 seconds
End Offset: 9.968292 seconds
Embedding Scope: video
Embedding Values: [-0.0032605825, 0.0006030849, 0.005895467,0.036644306, 0.020424947,...] (truncated)

Python


from twelvelabs import TwelveLabs
import os

client = TwelveLabs("<YOUR_API_KEY>")

# Create new Index
index = client.index.create(
    name="My First Index",
    engines=[
        {
            "name": "marengo2.6",
            "options": ["visual", "conversation", "text_in_video"],
        },
    ],
)

# Create new Task on Index (Upload the video)
video_path = os.path.join(os.path.dirname(__file__), "<YOUR_FILE_PATH>")
task = client.task.create(index_id=index.id, file=video_path, language="en")

# Wait for indexing to finish
task.wait_for_done()

# Search from your index
query = "An artist climbing up the ladder that he painted."
result = client.search.query(index.id, query, ["visual", "conversation"])
print(result)


SearchResult(
  pool=SearchPool(
    total_count=16, total_duration=1754.0, index_id="<INDEX_ID>"
  ),
  data=[
     SearchData(
       score=83.07,
       start=30.09375,
       end=45.671875,
video_id="<VIDEO_ID>",
       metadata=[{"type":"visual"}], 
       confidence="high",
       thumbnail_url=None,
       module_confidence=None,
     ),
...
  ],
  page_info=SearchPageInfo(
  limit_per_page=10,
  total_results=73,
  page_expired_at="2024-03-04T01:23:08Z",
  next_page_token="<NEXT_PAGE_TOKEN>",
  prev_page_token=None,
  ),
)
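
The page_info block above shows token-based pagination: each page carries a next_page_token until results are exhausted. A generic helper for walking such pages (fetch_page here is a stand-in callable, not a Twelve Labs API) might look like:

```python
def iter_pages(fetch_page, first_token=None):
    """Yield each page's data, following next-page tokens until exhausted.

    fetch_page(token) must return a dict with "data" and, while more pages
    remain, a "next_page_token" key."""
    token = first_token
    while True:
        page = fetch_page(token)
        yield page["data"]
        token = page.get("next_page_token")
        if not token:
            break
```

With 73 total results and 10 per page, as in the output above, such a loop would fetch eight pages.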

JavaScript


import { TwelveLabs } from 'twelvelabs';
import fs from 'fs';
import path from 'path';

const client = new TwelveLabs({ apiKey: '<YOUR_API_KEY>' });

// Create new Index
const index = await client.index.create({
    name: 'My First Index',
    engines: [
      {
        name: 'marengo2.6',
        options: ['visual', 'conversation', 'text_in_video'],
      },
    ]
  });

// Create new Task on Index (Upload the video)
const videoPath = path.join(__dirname, '<YOUR_FILE_PATH>');
const task = await client.task.create({
  indexId: index.id,
  file: fs.createReadStream(videoPath),
  language: 'en',
});

// Wait for indexing to finish
await task.waitForDone();

// Search from your index
const query = 'An artist climbing up the ladder that he painted';
const result = await client.search.query({
    indexId: index.id,
    query,
    options: ['visual', 'conversation'],
  });
console.log(result);


SearchResult {
  pool: {
    totalCount: 16,
    totalDuration: 1754,
    indexId: '<INDEX_ID>'
  },
  data: [
    {
      score: 83.07,
      start: 30.09375,
      end: 45.671875,
      metadata: [Array],
      videoId: '<VIDEO_ID>',
      confidence: 'high',
      modules: [Array]
    },
...
  ],
  pageInfo: {
    limitPerPage: 10,
    totalResults: 73,
    pageExpiredAt: '2024-03-04T01:29:55Z',
    nextPageToken: '45e12410-5e5b-4a45-9a55-12b8a838fd99-1'
  }
}