Author
Simran Butalia
Date Published
November 08, 2024
Tags
Video Understanding
Video Editing
Media and Entertainment

In the fast-paced world of film production, time is of the essence. One of the most labor-intensive tasks is managing daily rushes, or "dailies"—the raw, unedited footage filmed each day. Organizing these dailies can be both time-consuming and tedious. Traditionally, production teams manually sift through hours of footage, categorizing scenes and organizing them into bins based on criteria like location, characters, and actions. This manual process can slow down the production pipeline and delay crucial decisions, such as selecting the best takes for the final cut.


A typical editorial workflow

At Twelve Labs, we develop advanced AI models that understand video content at a deep, contextual level. These models can be integrated into your workflow to automate the organization and preliminary editing of your footage, helping you move from raw dailies to an editable timeline more efficiently.

Marengo, our multimodal video understanding model, serves as a powerful tool for automatically indexing and analyzing incoming dailies. By examining visual, auditory, and contextual elements, Marengo enables developers to create solutions that categorize footage with remarkable accuracy.
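To make this concrete, here is a minimal sketch of indexing incoming dailies, assuming the twelvelabs Python SDK. The engine names, method signatures, environment variable, and file names below are illustrative assumptions that may differ across SDK and API versions:

```python
import os
from twelvelabs import TwelveLabs

# Assumes the official `twelvelabs` Python SDK and an API key in the
# environment; engine names and options vary across API versions.
client = TwelveLabs(api_key=os.environ["TWELVE_LABS_API_KEY"])

# One index per shoot day, with Marengo for search and Pegasus for
# the generative review steps described below.
index = client.index.create(
    name="dailies-day-14",
    engines=[
        {"name": "marengo2.6", "options": ["visual", "conversation"]},
        {"name": "pegasus1.1", "options": ["visual", "conversation"]},
    ],
)

# Each daily becomes an indexing task; Marengo analyzes the footage
# once the upload finishes.
for path in ["A001_C001.mp4", "A001_C002.mp4"]:  # illustrative file names
    task = client.task.create(index_id=index.id, file=path)
    task.wait_for_done()
    print(f"Indexed {path} as video {task.video_id}")
```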

With Marengo, you can build applications that sort scenes into predefined bins based on criteria like location, characters, and actions. For example, in a fictional medieval epic like Game of Thrones, scenes could be automatically categorized into bins like “Battle Sequences,” “Dialogue in the Throne Room,” or “Outdoor Scenes.”
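One simple way to build those bins, continuing the sketch above, is to pair each bin with a natural-language query and let Marengo's search surface the matching clips. The bin definitions, queries, and result fields here are assumptions for illustration:

```python
# Hypothetical bins: each maps to a query Marengo can match against
# the indexed footage.
BINS = {
    "Battle Sequences": "large-scale battle with swords and armies",
    "Dialogue in the Throne Room": "two characters speaking in a throne room",
    "Outdoor Scenes": "exterior scene shot in daylight",
}

for bin_name, query in BINS.items():
    # Semantic search over both what is seen and what is said.
    result = client.search.query(
        index_id=index.id,
        query_text=query,
        options=["visual", "conversation"],
    )
    for clip in result.data:
        print(
            f"{bin_name}: video {clip.video_id} "
            f"{clip.start:.1f}s-{clip.end:.1f}s (score {clip.score:.2f})"
        )
```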

This automation allows editors and directors to quickly access the relevant footage, reducing the time spent manually organizing and searching for specific takes.

Pegasus, our generative model, further enhances the review process by generating summaries, chapters, and key highlights for each video in the categorized bin. For instance, Pegasus can generate a brief overview of all the takes within the “Battle Sequences” bin, highlighting key moments like specific sword fights or emotional close-ups.
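As a sketch of that review pass, each take in a bin could be run through Pegasus's summarize endpoint; the `battle_take_ids` list, the exact method name, and the response fields are assumptions that may vary by SDK version:

```python
# Hypothetical list of video IDs collected into the "Battle Sequences" bin.
battle_take_ids = ["video-id-1", "video-id-2"]

for video_id in battle_take_ids:
    # Brief overview of the take.
    overview = client.generate.summarize(video_id=video_id, type="summary")
    print(overview.summary)

    # Key moments, e.g. a specific sword fight or an emotional close-up.
    reel = client.generate.summarize(video_id=video_id, type="highlight")
    for h in reel.highlights:
        print(f"  {h.start:.0f}s-{h.end:.0f}s: {h.highlight}")
```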


This allows directors to quickly review the dailies, focus on critical scenes, and provide feedback without sifting through hours of footage. Editors can also start working on rough cuts sooner, as the AI has already organized and prioritized the footage.

Jockey is where things get really fun. Jockey is an open-source conversational video agent that demonstrates how our models can be harnessed to create innovative solutions. Built with Twelve Labs' powerful video understanding APIs and LangGraph's flexible framework, Jockey showcases the potential of our technology for streamlining the video editing process.


Jockey allows users to interact with their video content through natural language prompts. Simply describe the scenes or sequences you want, and Jockey assembles a rough cut that can be imported as an EDL directly into your timeline. 
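To give a feel for that hand-off, here is a stand-alone sketch of writing clip ranges out as a minimal CMX3600-style EDL (edit decision list). This is an illustrative writer, not Jockey's actual implementation, and the reel names and timings are made up:

```python
def seconds_to_tc(seconds: float, fps: int = 24) -> str:
    """Convert seconds to an HH:MM:SS:FF timecode string."""
    frames = int(round(seconds * fps))
    ff = frames % fps
    ss = (frames // fps) % 60
    mm = (frames // (fps * 60)) % 60
    hh = frames // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"


def write_edl(clips, path, fps=24):
    """Write a minimal CMX3600-style EDL.

    `clips` is a list of (reel, source_in, source_out) tuples in seconds,
    in the order they should appear on the timeline.
    """
    record_in = 0.0
    with open(path, "w") as f:
        f.write("TITLE: JOCKEY ROUGH CUT\nFCM: NON-DROP FRAME\n\n")
        for i, (reel, src_in, src_out) in enumerate(clips, start=1):
            duration = src_out - src_in
            f.write(
                f"{i:03d}  {reel:<8} V     C        "
                f"{seconds_to_tc(src_in, fps)} {seconds_to_tc(src_out, fps)} "
                f"{seconds_to_tc(record_in, fps)} "
                f"{seconds_to_tc(record_in + duration, fps)}\n"
            )
            record_in += duration


# Example: two clips pulled from a search result become a rough cut.
write_edl([("A001", 12.0, 18.5), ("A002", 40.0, 44.0)], "rough_cut.edl")
```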

As an open-source project, Jockey can be customized and extended to meet the specific needs of your production, highlighting the flexibility of solutions built with Twelve Labs' models.

At Twelve Labs, we understand that creativity is at the heart of filmmaking. Our AI models are designed to assist, not replace, the creative process. By automating the tedious aspects of dailies management, Twelve Labs frees up more time for directors, editors, and other creatives to focus on what they do best: crafting a compelling story.

The AI doesn’t make creative decisions; instead, it enhances the creative process by providing a more organized and efficient workflow. The result is a production pipeline that is faster, more efficient, and still deeply rooted in the vision and artistry of the filmmakers.


Your editorial workflow with Twelve Labs

Imagine you are working on a large-scale fantasy film, akin to Lord of the Rings. Every day, your team generates hours of dailies from various locations, ranging from vast battlefields to intimate dialogue scenes.

  • Scene Classification: Marengo automatically categorizes these scenes into bins such as “Orc Battle,” “Forest Ambush,” and “Council Meeting.”
  • Folder Organization: Within the “Orc Battle” bin, the footage is further organized based on specific action sequences, such as “Sword Fights” or “Archer Shots.”
  • Review Process: Pegasus generates a highlight reel summarizing the key moments in each video, allowing the director to quickly review and approve the best takes for the next stage of editing.
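Tying those three steps together, a per-daily pipeline might look like the sketch below, under the same SDK assumptions as the earlier examples; the function and its return shape are hypothetical:

```python
def process_daily(client, index_id, path, bins):
    """Index one daily, route it into matching bins, and generate a
    highlight reel for review; reuses the SDK assumptions above."""
    task = client.task.create(index_id=index_id, file=path)
    task.wait_for_done()  # block until Marengo finishes indexing

    # Scene classification: a daily lands in every bin whose query it matches.
    matched = [
        bin_name
        for bin_name, query in bins.items()
        if any(
            clip.video_id == task.video_id
            for clip in client.search.query(
                index_id=index_id, query_text=query, options=["visual"]
            ).data
        )
    ]

    # Review process: Pegasus highlight reel for the director's pass.
    reel = client.generate.summarize(video_id=task.video_id, type="highlight")
    return matched, reel
```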

By using Twelve Labs, the production team can cut down on hours of manual labor, allowing them to focus on refining the narrative and enhancing the visual storytelling. The end result is a faster production timeline without compromising on creative quality.

Twelve Labs is not just an AI modeling company but a partner in your creative journey. By automating the mundane and enhancing the essential, our AI technology helps filmmakers get to their edits faster, protect their creative vision, and produce high-quality content sooner. Whether it's a massive fantasy epic or a small indie drama, Twelve Labs makes sure the magic happens efficiently.

Experience the future of filmmaking with Twelve Labs—where AI empowers artistry.
